2309.10359 | Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning | Peter Ebert Christensen, Srishti Yadav, Serge Belongie | 2023-09-19T06:42:37Z | http://arxiv.org/abs/2309.10359v1

# Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning
###### Abstract
Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and - more generally - making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g. fact-checking.
## 1 Introduction
Online platforms have revolutionized the landscape of public discourse, facilitating extensive debates across a wide range of topics. However, these online discussions often suffer from a lack of coherent and concise arguments. Despite this inherent challenge, it is possible to discern particular motions Levy et al. (2014), opinions Li et al. (2020), human values Kiesel et al. (2022), and narratives Christensen et al. (2022) within the seemingly disorganized discourse. The ability to identify narratives in online debates is paramount for fact-checking and argument mining, as it enables the evaluation of unsupported claims and their validity.
In our methodology, narratives are differentiated from arguments and claims by incorporating additional attributes: topics, stances, aspects, and evidence. The _topic_ refers to the subject under discussion, such as the ethical aspects of cloning humans for reproductive purposes. The _stance_ represents the viewpoint taken on the topic, for example, a negative stance indicating that cloning for reproductive purposes is considered unethical and unacceptable. Within the narrative, the _aspect_ focuses on a specific perspective, providing a more nuanced understanding of the topic. For instance, the aspect within the context of cloning could be the creation of cloned embryos solely for research purposes, which delves into a particular subtopic. These attributes aid in identifying arguments in non-argumentative sources Stab et al. (2018).
Evidence plays a crucial role in assessing the credibility of a statement. When supported by evidence, a statement gains strength and credibility, classifying it as an argument Hansen and Hershcovich (2022). For instance, in the text "Cloning humans for reproductive purposes is unethical and unacceptable, but creating cloned embryos solely for research - which involves destroying them anyway - is downright criminal," the presence of evidence highlighting the destruction of embryos strengthens the argument. Conversely, without evidence, a statement such as "Cloning humans for reproductive purposes is unethical" would be categorized as a claim, lacking the necessary substantiation to be considered an argument. Additional support is required to validate a claim as an argument. With these definitions, we can briefly differentiate between narratives, arguments, and claims as:
1. Narrative: Concise expression of an individual's perspective on a specific topic.
2. Claim: Statement or proposition without supporting evidence.
3. Argument: Claim supported by evidence and reasoning. Aims to justify a specific stance on a topic.
As seen above, _claims_ can lack or have insufficient evidence and can be unverifiable or unfalsifiable for purposes of fact-checking in real-world scenarios Glockner et al. (2022); they are hence often not suitable for fact-checking pipelines and are thus discarded Augenstein (2021). Instead of discarding the claims and arguments, we propose that one should instead identify the individual unsupported claim or _narrative_, e.g., "human cloning is wrong". We call this task Narrative Prediction; it forms the basis of this paper. We use the word "Prediction" as an umbrella term, as the task can be viewed either as classification (due to the small set of unsupported claims that reflect different viewpoints in the debate) or, alternatively, as a generation task.

In addition to fact-checking, the existing literature on claim generation using large language models (LLMs) lacks attention to the relationship between narratives extracted from online debate portals Christensen et al. (2022) and argumentative texts Habernal and Gurevych (2016), as well as to the effective modeling of narratives by LLMs. This work addresses these gaps by formalizing narratives in online debates and understanding their elements: topics, aspects, stance, and evidence. Additionally, a curated dataset of 120k tweets, with around 40 narratives per topic, is introduced to train and evaluate narrative prediction techniques for fact-checking systems. Furthermore, we propose a method to enhance narrative prediction by generating synthetic tweets conditioned on argumentative attributes such as stance and aspects, using few-shot In-Context Learning (ICL), as illustrated in Fig. 1. The task of narrative prediction corresponds to the right-hand side of the figure alone, using no generated candidates, where we fine-tune the LLM on tweets. In summary, the contributions of this paper are:
1. _A specific definition_ for narratives, along with an analysis of how this differs from arguments, claims, and motions.
2. _A new dataset and task_, consisting of online comments and tweets labelled for narrative prediction.
3. _A narrative prediction approach_ that maps all the tweets from a fine-grained debate into a list of narratives using an LLM.
4. _A computational approach_ that generates synthetic arguments/claims with a specified aspect and stance.
## 2 Related Works
Corpora of textual claims covering various controversial topics have often been used in the study of rhetoric and argumentation, including summarization Stammbach and Ash (2020), optimization Skitalinskaya et al. (2022), identifying human values Kiesel et al. (2022), robustness of arguments Sofi et al. (2022), controllable text generation Schiller et al. (2021), stance detection Stab et al. (2018), and studying what constitutes an argument Trautmann et al. (2020). Prior work on claim and argument summarization has been beneficial in different tasks and domains. In early works, summarization was used for explainable fact-checking Stammbach and Ash (2020); Mishra et al. (2020) and has recently been used to denoise tweets Bhatnagar et al. (2022). However, abstractive summarization techniques for real-world tweets are still underdeveloped compared to traditional text summarization methods. Given the effectiveness of prompt-based methods in tasks like abstractive summarization and binary classification Chung et al. (2022); Sanh et al. (2022), we propose exploring these methods to enhance the text generation of arguments, particularly within fine-grained topic debates. While fine-grained approaches have been explored in argument mining Hansen and Hershcovich (2022); Trautmann et al. (2020); Schiller et al. (2021), they often address broader controversial topics ("minimum wage") rather than narrow debates ("cryptocurrencies as a fiat currency"). Similarly, other works that classify whether a claim is mentioned in a text Mirkin et al. (2018) study motions, which are actions that should be taken (as can be seen with the example "we should introduce goal line technology"). Other lines of work focus on scaling up by detecting "generic" claims frequent across topics Orbach et al. (2019) or mining candidate claims from corpora Lavee et al. (2019). In comparison, we envision our work to be applicable to unfalsifiable or unverifiable claims coming from short, noisy tweets rather than a high-quality curated database (iDebate) containing minutes-long speeches.
In our work we create a new dataset, focusing on narrow debate topics, by relying on an argument mining annotation scheme based on Hansen and Hershcovich (2022), consisting of various categories of claims and arguments found in online debates. Where Hansen and Hershcovich (2022) compare arguments in terms of categories (normative or factual arguments), we propose and study the new task of predicting controversial narratives from tweets. Perhaps most similar to our work is Christensen et al. (2022), which proposed a human-in-the-loop model to cluster unfalsifiable claims using crowdsourced triplet similarities.
## 3 Task and Data
This section introduces our definition of a narrative, proposes the associated task, and presents the data used for development and evaluation.
### Narrative Definition
As mentioned in the introduction, we define the term _narrative_ as a concise statement lacking supporting evidence, which can originate from an unfalsifiable or unverifiable claim. Additionally, narratives can include arguments supported by evidence types such as anecdotal, expert, or normative sources as defined in Hansen and Hershcovich (2022), instead of empirical studies. The objective of our methodology is to identify a small set of unsupported claims that reflect diverse viewpoints in a debate and require attention from fact-checkers.
Having defined the term narrative, we can now focus on a theoretical underpinning of this paper: a proposed relationship between the number of narratives and the scope of a fine-grained debate, which we call the _parrot hypothesis_.
**The Parrot Hypothesis.** In a given social media debate, the thoughts and opinions contributed by commentators resolve to a finite set of distinct narratives. While users could, in principle, state their views in a concise, distilled manner, they often prefer to write embellished variants or personal takes that require reading between the lines.
At its core, the parrot hypothesis1 seeks to propose a concept to manage the variations of statements in a debate. By grouping statements into a finite set of narratives related to common topics, the hypothesis narrows the scope of the debate and transforms it into a classification problem. Narratives, representing individual unsupported claims or viewpoints, play a crucial role in capturing diverse perspectives and supporting fact-checking efforts. Despite the potential for infinite arguments, a limited number of distinct claims tend to emerge in online debates, backed by the majority of users Boltuzic and Snajder (2015). Our hypothesis is that a narrow enough topic will elicit such behaviour from users. Incorporating the parrot hypothesis and identifying narratives could enable a more systematic analysis, improving the understanding of narratives and facilitating fact-checking.
Footnote 1: We use “parrot” in the sense of “parroting talking points,” except that we don’t assume the commentators are necessarily being fed talking points without their knowledge.
### Narrative Prediction
We approach the problem of narrative prediction on social media, specifically focusing on tweets.
**Task.** Given a single tweet \(t\), a statement by a participant in a debate, and a set of possible narratives \(\mathcal{N}\), rewrite \(t\) into a narrative \(n\in\mathcal{N}\) such that:
* the narrative is written as an unsupported claim,
* only one narrative \(n\) can be selected for each tweet from \(\mathcal{N}\), and
* \(n\) preserves the meaning of \(t\) as much as possible.
The set of possible narratives, denoted as \(\mathcal{N}\), is sourced from domain experts. Although a tweet may implicitly or explicitly contain multiple narratives, our aim is to identify and assign only one explicitly stated narrative for each tweet.

Figure 1: Prompt, Condition, and Generate: A framework to enhance narrative prediction by synthesizing tweets. We first _prompt_ an LLM for the stance and aspects of a new tweet using ICL with some examples; we then _condition_ the LLM on these attributes to synthesize tweets. Lastly, we fine-tune an LLM on all tweets to _generate_ narratives.
By addressing narrative prediction in this manner, we strive to transform tweets into coherent and explicit unsupported claims, contributing to a deeper understanding and analysis of the content within the context of social media discourse.
### Annotation scheme
To collect relevant data, we use an annotation scheme comprising a fine-grained topic, a sentence, and a narrative (an unfalsifiable and unverifiable claim). Additionally, we explore the augmentation of an existing dataset, following an alternative annotation scheme Schiller et al. (2021), to incorporate attributes such as stance (polarity of the argument) and aspect (subtopics or viewpoints), and use it for the generation of synthetic tweets. Though there exist narratives that are claims with supporting evidence (anecdotal, factual, or normative, as found in Hansen and Hershcovich (2022)), the type of evidence is not considered for annotation. The details will be explained in the following section.
### Dataset creation
We present two datasets: Twitter-Narratives-9 (TN9) and UKP-Corpus-Aug. UKP-Corpus-Aug, the augmented UKP-Corpus, includes stance, aspect, and narrative annotations for three randomly selected topics from the original UKP-Corpus Schiller et al. (2021); Stab et al. (2018). TN9, on the other hand, consists of narrative annotations for 9 carefully selected controversial topics. Table 1 provides an overview of the datasets, including key statistics and a comparison between them. Additionally, Table 2 presents examples from TN9.
#### 3.4.1 Scraping
We start by scraping relevant data from Twitter. First, a series of searches is executed combining different keywords and sentences/phrases, highlighting different statements in a topic. We search for 40 different keywords per topic over the years 2016-2022 and retrieve as many fields (e.g., images, links, and other metadata) as possible using the Twitter API. The specific keywords used for each topic can be found in Appendix B.
#### 3.4.2 Filtering and Data Cleaning
To ensure that we are working with claims, we perform filtering steps. First, we remove duplicates but maintain identical sentences with different hashtags, after removing retweets, quote tweets, links and videos, as well as mentions of users, tokens, and media. Second, we replace unreadable hexadecimal representations of unicode characters with their respective characters and encode the text with ascii characters. This results in 98,187 English tweets in total, around 11k tweets for each topic. The geographic distribution of the tweets is shown in Figure 2. A minimal sketch of these cleaning steps follows below.
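The exact regular expressions and normalisation rules are our assumptions rather than the ones used to build TN9; this sketch only illustrates the kind of pipeline described above:

```python
import re
import unicodedata

def clean_tweet(text: str) -> str:
    """Approximate the filtering described above (illustrative rules)."""
    text = re.sub(r"https?://\S+", "", text)        # drop links
    text = re.sub(r"@\w+", "", text)                # drop user mentions
    text = unicodedata.normalize("NFKC", text)      # resolve unicode representations
    text = text.encode("ascii", "ignore").decode()  # keep ascii characters only
    return " ".join(text.split())                   # collapse whitespace

def deduplicate(tweets: list[str]) -> list[str]:
    """Exact-match dedup after cleaning; identical sentences with different
    hashtags survive, because hashtags stay in the text."""
    seen: set[str] = set()
    kept: list[str] = []
    for t in map(clean_tweet, tweets):
        if t and t not in seen:
            seen.add(t)
            kept.append(t)
    return kept
```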
#### 3.4.3 Dataset Annotation
Annotation of narratives is conducted using Amazon Mechanical Turk in 2 rounds. In round 1, we design a pretest to ensure that the workers know the difference between an argument, a claim, and evidence, using the dataset from Hansen and Hershcovich (2022). Given a tweet, the annotators classify whether the tweet is a claim, an argument, or neither. Furthermore, they also classify the evidence type given an argument Hansen and Hershcovich (2022). Learning to distinguish between these will help them in determining the narrative of a tweet. After passing at least 4 out of 5 questions, the workers could begin annotating our 120k tweets.
| | **UKP-Corpus-Aug** | **TN9** |
|---|---|---|
| **Annotations** | Aspect/Stance/Narrative | Narrative |
| **Tweets (train/test)** | 30k / 1.9k | 90k / 5.4k |
| **Topics** | Abortion, Cloning, Nuclear Energy | AGI, Attractiveness, Alternative Meat, Corporate culture, Crypto, Baby Formula, Influencer, Transport, Mental health |
| **Source (Sentence)** | Reddit | Twitter |
| **Source (Labels)** | mTurk | mTurk |

Table 1: Summary of the datasets. (The original UKP-Corpus consists of only Stance and Aspects; we provide narrative annotations.)
| **Topic** | **Sentence** | **Narrative** |
|---|---|---|
| Crypto | you are promoting crypto which is a scam helping thieves and criminals you are also full of plastic parts and fillers profitable for the pharmaceutical and cosmetic industry | Influencers are scamming their fans using crypto |
| Formula | My congressman voted NO on lowering gas prices, NO on the baby formula bill, NO on contracting (71), and NO on other helpful bills. It is unbecoming to complain about economic hardship and then contribute to it. | People are reselling baby formula to other countries for higher prices |
| AGI | And on other side, AGI will be the single greatest technology to alleviate human suffering in all of history. | AI will not replace humans but augment them |

Table 2: Example sentences and annotated narratives.
Using lists of around 40 narratives per topic compiled by domain experts, we proceed to round 2. Each task consists of 1 tweet from 1 topic, and annotators are asked to pick the 1 out of \(\sim\) 40 different narratives this tweet follows (if any). Furthermore, by using the definition of argument as in Hansen and Hershcovich (2022) we can decompose it into 1) a claim and 2) its evidence, and use their schema to categorise the type of evidence an argument may have. We thus additionally ask if the tweet is a claim or an argument with an evidence type that is not a study. A study here refers to "Results of a quantitative analysis of data, given as numbers, or as conclusions" Hansen and Hershcovich (2022), i.e., statements that are cited, or easily verifiable numbers should they appear in another argument. The pay is $18/hr. Detailed instructions can be found in Appendix D.
## 4 Method
**Problem statement:** Our goal is to create a model that outputs an estimate of the true narrative \(n\) given tweet \(t\) from debate \(d\). We do so by
* Investigating if identifying the narrative of a tweet is best suited as a text2text approach or a classification approach.
* Creating a data augmentation step using different kinds of ICL to support the best fine-tuning procedure.
**Narrative prediction approach.** Given observed data \(\{t_{i},n_{i}\}_{i=1}^{N}\), we could parameterize a model as \(\tilde{n}=f_{\phi}(t)\), where \(f\) is a pretrained LLM with parameters \(\phi\) that we finetune. This is illustrated as step 3 in Fig. 1. We note that our prediction \(\tilde{n}\) in this case would be free-form text: a text2text approach. Alternatively, since the last step in Fig. 1 illustrates a finetuning procedure, we could parameterise a model as \(\tilde{n}=g(h_{\theta}(f_{\omega}(t)))\), where \(g\) is a lookup function that maps class \(c_{i}\) to narrative \(n_{i}\) for debate \(d\), \(h\) is a multi-class classifier with parameters \(\theta\), and \(f\) is the model from before, but using only the encoder with parameters \(\omega\) together with the classification head \(h\). The prediction \(\tilde{n}\) is now a class: a multi-class classification approach. The lookup function \(g\) enables us to take a predicted class and look up the actual narrative in the list given in Appendix F. By doing this substitution we can calculate a Rouge score between the target narrative and the predicted one. During training, however, we optimize the cross-entropy loss on the narrative classes.
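To make the classification parameterisation concrete, here is a minimal sketch of \(\tilde{n}=g(h_{\theta}(f_{\omega}(t)))\); the narrative list, the mean-pooling of encoder states, and the single linear layer standing in for \(h\) are our illustrative choices, not necessarily the exact setup used in the paper:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Hypothetical narrative list for one debate topic (the real lists are in Appendix F).
NARRATIVES = ["Eating meat is murder", "Eating meat is natural"]

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
f_omega = T5EncoderModel.from_pretrained("bigscience/T0_3B")        # frozen encoder
h_theta = torch.nn.Linear(f_omega.config.d_model, len(NARRATIVES))  # trainable head

def predict_narrative(tweet: str) -> str:
    batch = tokenizer(tweet, return_tensors="pt")
    with torch.no_grad():
        hidden = f_omega(**batch).last_hidden_state.mean(dim=1)  # pooled encoder state
    c = h_theta(hidden).argmax(dim=-1).item()  # class prediction h_theta(f_omega(t))
    return NARRATIVES[c]                       # lookup function g: class -> narrative
```

Only `h_theta` would be trained, with the cross-entropy loss on the narrative classes, as described above.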
**Prompt, Condition and Generate.** In addition to only finetuning \(f\), which is an LLM, we argue for using new methodologies focusing on ICL to exploit the capabilities of the LLM further and validate their performance on our new dataset. Inspired by prior work Schiller et al. (2021) on generating synthetic arguments (candidates) using aspects and a stance, we propose Prompt, Condition and Generate (PCG) and add candidates to our finetuning procedure, as illustrated in step 3 of Fig. 1. In contrast to their work, our setup requires no training and can be done using a few examples via ICL, as illustrated in Fig. 1. That is, as a first step we annotate a few handmade examples with topic names, a binary stance, and a snippet of the example tweet which forms the aspect, and then we _prompt_ the frozen model to output the stance and aspect of a novel tweet \(t\). Then we _condition_ the same model anew on its predicted stance and aspect to generate a candidate, by asking it to write a tweet knowing only the debate topic, a stance, and the aspect. To complete the creation of synthetic data we copy the original narrative \(n\) from \(t\) to the candidate. This data can be used for additional fine-tuning of the text generation model that _generates_ narratives, as shown in Fig. 1.
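A minimal sketch of this loop around a generic text-completion callable `llm`; the `llm` interface and the prompt templates are hypothetical placeholders, not the paper's actual prompts:

```python
def pcg_candidate(llm, tweet: str, topic: str, narrative: str, examples: list[dict]) -> dict:
    """Produce one synthetic (tweet, narrative) pair via Prompt, Condition, Generate."""
    shots = "\n\n".join(
        f"Tweet: {e['tweet']}\nStance: {e['stance']}\nAspect: {e['aspect']}"
        for e in examples  # a few hand-annotated ICL examples
    )
    # Prompt: infer the stance and aspect of the new tweet with few-shot ICL.
    stance = llm(f"{shots}\n\nTweet: {tweet}\nStance:").strip()
    aspect = llm(f"{shots}\n\nTweet: {tweet}\nStance: {stance}\nAspect:").strip()
    # Condition: the frozen model sees only the topic and the predicted attributes.
    candidate = llm(
        f"Write a tweet about {topic} that argues '{stance}' and "
        f"contains the phrase '{aspect}'."
    )
    # The original narrative label is copied over to complete the synthetic example.
    return {"tweet": candidate.strip(), "narrative": narrative}
```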
**ICL Methods.** In the context of using LLMs for ICL, a simple but effective approach called few-shot learning is to provide several examples of a task in the same prompt as the given input Brown et al. (2020). Additionally, one can first generate an explanation as to why certain outputs are favourable before generating the final answer; this is called Chain-of-Thought (CoT) Wei et al. (2023). Furthermore, one should be careful with the selected examples for ICL. As noted in Zhao et al. (2021), standard ICL can be biased towards
Figure 2: Visualization of the percentage of tweets per country. As in Huang and Carley (2019), only 2% of all tweets had available geotags, and the tweets are found to be predominantly from the US, where the userbase is numerically the largest.
the training examples and the order of their occurrence. To mitigate this effect one can estimate the bias towards each answer by feeding in a test input that is content-free, e.g., "N/A" or the empty string. In practice one can fit an affine transformation to "calibrate" (Cal) the model's output probabilities so that the prediction for "N/A" becomes uniform. We will investigate these methods in our PCG setup using different numbers of shots for aspect and stance prediction.
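For illustration, a minimal sketch of this calibration step in the spirit of Zhao et al. (2021): fit a diagonal affine map so that the content-free input yields a uniform prediction, then apply it to real inputs.

```python
import numpy as np

def fit_calibration(p_content_free: np.ndarray):
    """Given the model's probabilities for a content-free input (e.g. "N/A"),
    return a map that rescales any prediction by W = diag(p_cf)^-1 and
    renormalises, so that the content-free input itself becomes uniform."""
    W = np.diag(1.0 / p_content_free)
    def apply(p: np.ndarray) -> np.ndarray:
        q = W @ p
        return q / q.sum()
    return apply

# e.g. the model assigns P(for)=0.7, P(against)=0.3 to "N/A":
cal = fit_calibration(np.array([0.7, 0.3]))
print(cal(np.array([0.7, 0.3])))  # -> [0.5, 0.5], the content-free bias is removed
```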
## 5 Experiments
In this section we investigate the performance of our finetuning approaches, including synthetic tweet generation for performance enhancement and the subtasks of stance and aspect prediction using different ICL techniques. We predict narratives on 7548 test cases (629 per topic).
### Setup
**Classification:** As described in Section 4.1, our classification model (_SFT_head_) consists of an encoder \(f_{\omega}\), a T0 encoder (Sanh et al., 2022), and a multi-class classifier \(h\), a single MLP that projects the hidden layer down to the number of narratives present in one topic. We only finetune \(h\), using the cross-entropy loss. Finally, using \(g\) we can report the Rouge-L F1 score by converting the predicted class into the written narrative and comparing it with the ground truth.
**Text generation:** In contrast to the classification model, \(f_{\phi}\) is the full T0 model. We add new parameters in a parameter-efficient fine-tuning setup known as LoRA (Liu et al., 2022) on top of the T0 model. LoRA incorporates two low-rank matrices that are added to each parameter matrix in T0. We measure the Rouge-L F1 score between the generated narrative \(\hat{n}\) and the ground-truth narrative.
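A minimal sketch of such a setup using the Hugging Face `peft` library; the rank, scaling factor, and target modules below are illustrative assumptions, since the hyperparameters are not specified here:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
config = LoraConfig(
    r=8,                        # rank of the two low-rank matrices
    lora_alpha=16,              # scaling of the low-rank update
    target_modules=["q", "v"],  # attention projections of the T5 architecture
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, config)  # only the LoRA parameters are trainable
model.print_trainable_parameters()
```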
**Prompt, Condition and Generate:** To enhance the above-mentioned setups we generate synthetic tweets. To do this, we first infer the stance and aspect of a new tweet by inserting up to 4 such examples into the prompt, and second, we simply prompt our frozen model to write a tweet with the predicted stance, containing the sentence from the aspect, and about the same topic as the new tweet; this is shown in Figure 1.
To test how well the model can predict aspects and stances, we focus on stance and aspect data from the UKP-Corpus on 3 randomly selected topics, as these sentences are annotated with a stance and aspects. We compare 3 methods: standard ICL, CoT (Wei et al., 2023), and Cal (Zhao et al., 2021), with fully supervised BERT span predictors as baselines. We train 3 BERT\({}_{BASE}\) baselines, with \({}_{only}\) indicating it is _only_ trained on this topic using 10k examples. The second, \({}_{remain}\), uses 60k total examples, training on the 5 _remaining_ out-of-distribution topics from (Stab et al., 2018) (i.e. excluding abortion, cloning & nuclear energy) before fine-tuning to a new topic, and finally \({}_{all}\) is trained on all 8 topics (80k examples) from (Schiller et al., 2021). The ICL setups use 4 examples to predict the attributes, though we experiment with including fewer examples for aspect prediction (Figure 3). We do not use verbalizers for ICL but restrict the possible decoding output only to the words considered in the sentence for aspect prediction, or to "for" or "against" in the case of stance prediction; a sketch of such restricted scoring is given below.
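A sketch of the restricted decoding for the stance case with an encoder-decoder model such as T0: rather than generating freely, only the allowed continuations are scored.

```python
import torch

def restricted_stance(model, tokenizer, prompt: str) -> str:
    """Pick the allowed word with the highest teacher-forced log-likelihood."""
    enc = tokenizer(prompt, return_tensors="pt")
    scores = {}
    for word in ("for", "against"):
        labels = tokenizer(word, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(**enc, labels=labels).loss  # mean NLL of this continuation
        scores[word] = -loss.item()
    return max(scores, key=scores.get)
```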
In addition to generating candidates with our original model \(f\), we test the generality of generating candidates from these attributes by conditioning other LLMs on them. These include T5-Flan-3B (Chung et al., 2022), BLOOM-176B (Workshop et al., 2023), and CTRLUKP (Schiller et al., 2021) for the UKP-Corpus. When fine-tuning using candidates and tweets, we compare them using several metrics: the precision-oriented BLEU (Papineni et al., 2002), the recall-oriented Rouge-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and finally chrF (Popovic, 2015). To automatically quantify to what extent a candidate contains the meaning of the original claim, we compute their semantic similarity in each case using the BERT-score (Zhang et al., 2020). Additionally, we conduct a human evaluation of the generated candidates to ensure readability for humans. For each generative model and topic we select 10 candidates and acquire 2 independent crowdworkers via MTurk at $18/hour. The annotators scored all candidates on four quality metrics: (1) argument quality, (2) persuasiveness, (3) meaning preservation, and (4) fluency. We follow Schiller et al. (2021) for assessing the argument quality, Habernal and Gurevych (2016) for persuasiveness, and Skitalinskaya et al. (2022) for quality, using these Likert scales: Argument Quality, 1 (much worse than original) to 5 (notably improved); Persuasiveness, 1 (generated text less persuasive than original) to 3 (generated text is more persuasive); Fluency, 1 (major errors) to 3 (fluent); and Meaning Preservation, 1 (entirely different) to 5 (identical). Lastly, we report the inter-annotator agreement (Cohen, 1960) and Krippendorff's alpha (Krippendorff, 2004) between the 2 annotators.
**Stance prediction.** To do stance prediction, we classify a tweet as either "for" or "against" a particular topic. For the ICL methods we output the most likely word and convert it to 0 or 1 to compare it with the binary class output of the baseline model. We use the binary cross-entropy loss to compare the predicted stance with the true label.
**Aspect prediction.** Here we aim to identify the correct span of text within a tweet. The span is represented using the beginning-inside-outside (BIO) tag format (Ramshaw and Marcus, 1995). The initial word within the span is given the label "B" for beginning, the following words within the span are given the label "I" for inside, and any other word is given the label "O" for outside, making it a ternary classification task for the baselines. We sample multiple completions using beam search and report the average micro F1 and accuracy for both stance and aspect prediction. A small helper illustrating the BIO format is shown below.
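For illustration, this converts a gold aspect span into BIO tags over a tokenised tweet (the tokens and span indices are hypothetical):

```python
def bio_tags(tokens: list[str], span: tuple[int, int]) -> list[str]:
    """Label the half-open token span [start, end) in BIO format."""
    start, end = span
    tags = []
    for i in range(len(tokens)):
        if i == start:
            tags.append("B")   # first token of the aspect span
        elif start < i < end:
            tags.append("I")   # inside the span
        else:
            tags.append("O")   # outside
    return tags

print(bio_tags(["creating", "cloned", "embryos", "is", "criminal"], (1, 3)))
# ['O', 'B', 'I', 'O', 'O']
```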
### Results
**Classification.** 10 epochs of fine-tuning the LM head result in a 38.54 Rouge-L F1 score for the UKP corpus and a 38.72 Rouge-L F1 score for TN9.
**Text generation.** Fine-tuning the LoRA weights results in a 39.32 Rouge-L F1 score for the UKP corpus and a 39.49 Rouge-L F1 score for TN9, similar to other summarization tasks (Zhang et al., 2022). A couple of outputs of this model are shown in Table 3. Analysing the second example, we see a more concisely written narrative than the target; this lowers the resulting Rouge F1 score due to its shorter common sub-sequence.
**Prompt, Condition and Generate.** Given the prior results we proceed with the best setup, the text generation setup from before. Table 4 shows the average Rouge-L F1 score using additional candidate examples generated by T0-3B, the T5-flan-3B model (Chung et al., 2022), an API call to BLOOM-176B (Workshop et al., 2023), and the CTRL generative model from Schiller et al. (2021), respectively. Using 629 candidates we get a 4 percentage point increase over the 39.49 Rouge-L F1 for TN9 from before, highlighting the strength of our approach.
Table 5 shows the quantitative metrics of our candidates. The relatively low BLEU (6.5) and ROUGE-L (9.2) indicate that revisions take place; however, due to the high BERT-score (90.5), the meaning is largely preserved. The METEOR and Rouge-L scores are also similar to Schiller et al. (2021), indicating similar generative behaviour. The lower TN9 scores indicate that the model has difficulty generating sentences similar to the original tweet using a predicted stance and aspect.
Table 6 shows a generally low Krippendorff's alpha agreement of 0.24 on average, which is common in subjective tasks (Wachsmuth et al., 2017). The inter-annotator agreement (Cohen, 1960) varies by model and attribute but is on average 0.25, which can be interpreted as "fair" agreement (Landis and Koch, 1977). Table 6 shows that human annotators find text generated by T0 to have a higher persuasiveness (2.6) and a more similar meaning to the source text (4.5) than the other methods. However, candidates from BLOOM and CTRL-UKP have a higher argument quality (3.5 vs. 3.6 and 4.2) and are more fluently written. Table 7 shows T0 being preferred for generating meaningful and persuasive texts. This is important as we will use the data in a fine-tuning setup.
| **Setups** | **UKP** | **TN9** |
|---|---|---|
| SFT_T0 | 43.56 | 43.89 |
| SFT_T5F | 44.12 | 44.34 |
| SFT_BLOOM | 42.64 | 42.97 |
| SFT_CTRL | 43.34 | – |

Table 4: Summary of Rouge-L F1 scores using text2text supervised fine-tuning on the original dataset as well as candidates generated with different models.
| **Tweet \(t\)** | **Model Prediction \(\tilde{n}\)** | **Target Narrative \(n\)** |
|---|---|---|
| Animals are not ingredients | eating meat is murder | Eating meat is murder |
| Yall find hypermasculinity resulting in insecurities about the lack of a better body attractive?? Lmaoo | Hypermasculinity is problematic | Hypermasculinity in and of itself is the problem |

Table 3: Sentences with predicted and target narratives.
| **Approach** | **BLEU** | **Rouge-L** | **Meteor** | **BERT-score** | **chrF** |
|---|---|---|---|---|---|
| **UKP-Corpus** | | | | | |
| CTRLUKP | 8.3 | 12.1 | 16.4 | 83.7 | 23.1 |
| BLOOM | 6.5 | 13.6 | 16.2 | 84.8 | **31.1** |
| T5-flan | 10.8 | **20.6** | 16.4 | **90.5** | 25.1 |
| T0 | **13.6** | 20.3 | **16.7** | 90.2 | 25.2 |
| **TN9** | | | | | |
| BLOOM | 7.94 | 9.2 | 9.7 | 82.1 | 23.8 |
| T5-flan | 11.2 | 13.7 | 9.5 | 87.4 | 18.7 |
| T0 | 12.3 | 13.1 | 9.2 | 87.8 | 18.9 |

Table 5: Automatic evaluation: Average performance of each model on 629 test cases per topic.
#### 5.3.2 Stance prediction
Table 8 shows that our methods outperform the baselines on at least two topics in the UKP-Corpus, with the Cloning topic being an exception. We believe this is because the distribution of stances in this topic makes it highly polarized. While imperfect, the stance prediction suffices to generate candidates.
#### 5.3.3 Aspect prediction
Table 9 shows that our method performs worse than our best baseline trained on 80k examples, but performs at a similar level to the official results reported in Table 3 of (Schiller et al., 2021). Additionally, our baseline's performance increases with the number of topics it has been trained on. Figure 3 visualises the performance of T0 few-shot prediction given \(k\leq 4\) examples and the baseline models. T0-3B using 4 tweets is competitive with baselines trained on 10k+ tweets, despite the variance of the predictions being rather large, which reflects that using the samples is not always beneficial to the model. We proceed with the ICL setup for generating candidates despite this.
## 6 Conclusion
In this paper we introduced a new definition of narratives and showed how to model these in fine-grained debates with large language models. Our approach is based on parameter-efficient fine-tuning with controlled text generation, using attributes predicted from a handful of examples. We show that claims generated using our approach are genuine and sensible in general. We fine-tune our model on our own dataset and the augmented UKP-corpus and outperform baseline approaches. In future work, we seek to examine multiple completions and ensembles similar to (Lievin et al., 2023), which enables including up to 100 examples for ICL, to reduce variance and outperform single-sample CoT methods using larger models (GPT-4, ChatGPT, LLaMA). Moreover, our approach considers each topic independently using an LLM but could be made to consider all topics simultaneously.
| **Model** | **Persuasiveness** | **Fluency** | **Argument** | **Meaning** |
|---|---|---|---|---|
| **UKP-Corpus** | | | | |
| CTRLUKP | 2.1 | 2.3 | 3.6 | 3.4 |
| BLOOM | 1.9 | **2.8** | **4.2** | 4.1 |
| T5-flan | 2.2 | 1.8 | 3.2 | 3.0 |
| T0 | **2.6** | **2.8** | 3.5 | **4.5** |
| **TN9** | | | | |
| BLOOM | – | 2.7 | 3.4 | 3.5 |
| T5-flan | **2.4** | 2.3 | **3.6** | 3.3 |
| T0 | **2.4** | 2.5 | 3.4 | **3.5** |

Table 6: Human evaluation: Average scores on 10 candidates per topic using different models.
| **Method** | **Abor. (F1 / Acc)** | **Clon. (F1 / Acc)** | **Nucl. (F1 / Acc)** |
|---|---|---|---|
| only | 50.1 / 53.1 | 75.5 / 75.8 | 37.1 / 58.9 |
| ICL | 54.4 / 53.8 | 59.3 / 54.9 | 54.9 / 52.9 |
| CoT | 55.7 / 54.7 | 62.4 / 56.7 | 57.4 / 53.6 |
| Cal | 57.3 / 55.6 | 60.6 / 55.9 | 58.7 / 54.1 |
| remain | 36.1 / 56.4 | 35.6 / 55.3 | 37.1 / 58.9 |
| all | 52.6 / 55.6 | 77.1 / 77.6 | 37.1 / 58.9 |

Table 8: Average micro F1 and accuracy for stance prediction using binary classification (for=1, against=0).
Figure 3: Average aspect accuracy of few-shot ICL (T0-3B) on the Abortion, Cloning, and Nuclear Energy topics in the UKP dataset using random subsets of \(k^{\prime}=1\ldots 4\) examples. We display the performances of the best fine-tuned BERT\({}_{BASE}\) baselines; the tags \({}_{only}\), \({}_{remain}\) and \({}_{all}\) indicate the same setups as in Table 8.
| **Method** | **Abor. (F1 / Acc)** | **Clon. (F1 / Acc)** | **Nucl. (F1 / Acc)** |
|---|---|---|---|
| only | 68.5 / 87.7 | 71.8 / 88.9 | 73.1 / 89.9 |
| ICL | 66.9 / 87.1 | 66.5 / 86.6 | 66.1 / 86.3 |
| CoT | 67.2 / 87.3 | 67.7 / 87.7 | 68.8 / 88.3 |
| Cal | 68.2 / 87.8 | 68.5 / 88.2 | 68.4 / 88.1 |
| remain | 71.6 / 88.7 | 74.9 / 90.5 | 75.5 / 91.0 |
| all | 72.9 / 89.4 | 75.2 / 90.9 | 76.6 / 91.5 |

Table 9: Average micro F1 and accuracy for aspect prediction using BIO tags.
### Acknowledgements
PEC, SY and SB were supported by the Pioneer Centre for AI, DNRF grant number P1.
## 7 Limitations
**Scaling to multiple topics.** For our approach, the prediction of narratives is topic-specific, and the number of models scales linearly with the number of topics. This is primarily because the baseline method using an LM head cannot predict new classes; for the text2text approach it is theoretically possible to simply use one model, though initial experiments suggested a model per topic worked better. Instead of directly predicting the narratives, one could instead have ranked the list of narratives given a tweet. This gives contextual information about the narratives, since they are written as text and not just as a class, and provides a number of benefits, including having one model for all topics, including new ones. Additionally, it could also enable temporal evaluations by adding newly emerging narratives to the list.
**Scaling to more narratives.** The current approach requires a domain expert to write down the particular narratives from the fine-grained debate and does not model that there is a countable number of narratives within a specific domain. Finding the particular narratives is bottlenecked by knowing enough about the particular topic. Moreover, since it takes time to gather enough information about the different topics, it is difficult to scale up to larger numbers of taxons.

Future work can explore automatic generation of the narratives given a list of tweets, condensing this list iteratively and patching templates, e.g., using pre-trained language models.
**Directly modelling the initial argumentative text.** Finally, the approach we develop can operate on text consisting of claims or argument discourse units, but it has no way of distinguishing between these or non-arguments. This precludes the model from predicting a narrative only when the text is indeed from the fine-grained debate, and it can be tricked into providing narratives which the text doesn't follow.
---

2309.09608 | On the high-energy instability of quarkonium production | Maxim Nefedov | 2023-09-18T09:25:55Z | http://arxiv.org/abs/2309.09608v1

# On the high-energy instability of quarkonium production
###### Abstract
The perturbative instability of NLO collinear factorisation (CF) computations of \(p_{T}\)-integrated cross sections of heavy quarkonium production at high hadronic or photon-hadron collision energy is discussed. We resolve this problem via the matching of the NLO CF computation with the resummation of higher-order corrections \(\propto\alpha_{s}^{n}\ln^{n-1}(\hat{s}/M^{2})\) at high partonic center-of-mass energies \(\hat{s}\gg M^{2}\). The resummation is performed in the Doubly-Logarithmic Approximation (DLA) of the High-Energy Factorisation (HEF) formalism. We also report the results of the first computation of one-loop corrections to impact-factors involving a heavy quark-antiquark pair in the intermediate states considered in the Non-Relativistic QCD (NRQCD) factorisation formalism for quarkonium production: \(Q\bar{Q}\left[{}^{1}S_{0}^{[8]}\right]\) and \(Q\bar{Q}\left[{}^{1}S_{0}^{[1]}\right]\). These results are necessary for the extension of our resummation formalism beyond the DLA.
keywords: perturbative QCD, higher-order corrections, resummations, heavy quarkonium production, NRQCD factorisation, High-Energy Factorisation
## 1 High-Energy instability of quarkonium production cross sections and HEF1

Footnote 1: This part is based on work done in collaboration with Jean-Philippe Lansberg and Melih Ozcelik [1; 2].
Since the heavy quarkonium mass \(M(\simeq 2m_{c}\) or \(2m_{b})\) provides a hard scale, the computation of \(p_{T}\)-integrated quarkonium production cross sections should in principle be possible using perturbative QCD combined with the standard collinear factorisation (CF) theorem for the initial state, as well as the Non-Relativistic QCD (NRQCD) factorisation hypothesis [3] to describe the hadronisation of a heavy quark-antiquark pair (\(Q\bar{Q}\)) into quarkonium. However, as was emphasized in recent papers [4; 5], such a CF computation develops an extremely strong sensitivity to the choice of factorisation scale \(\mu_{F}\) when the hadronic or photon-hadron collision energy becomes large in comparison to \(M\). The plot in Fig. 1 illustrates this phenomenon for the case of the inclusive \(\eta_{c}\) hadroproduction cross section, studied in Ref. [4] in the approximation that \(\eta_{c}\) production is dominated by the \(c\bar{c}\left[{}^{1}S_{0}^{[1]}\right]\) NRQCD intermediate state. As one can see, the usual scale-variation band of the NLO computation explodes for \(\sqrt{s}>1\) TeV, and one can get negative cross sections at high \(pp\) collision energy for reasonable choices of scales. In Fig. 2 we observe a similar behaviour of the inclusive \(J/\psi\) photoproduction cross section for \(\sqrt{s_{\gamma p}}>20\) GeV, which was studied in Ref. [5] in the CS approximation of a dominating \(c\bar{c}\left[{}^{3}S_{1}^{[1]}\right]\) state. Lifting the colour-singlet (CS) approximation of Refs. [4; 5] will not resolve this problem.
The detailed analysis of the NLO CF computation (see Refs. and references therein) shows that this instability comes from the high partonic center-of-mass energy (\(\sqrt{\hat{s}}\)) region of integration in the collinear factorisation formula, which for both considered processes can be written in the form:
\[\sigma(\sqrt{S})=\int\limits_{X_{\rm min}}^{1}\frac{dX}{X}\mathcal{L}_{ij}(X, \mu_{F})\hat{\sigma}_{ij}(X,\mu_{R},\mu_{F}), \tag{1}\]
where \(X=M^{2}/\hat{s}\), \(X_{\rm min}=M^{2}/S\), for the case of
hadroproduction, with \(\sqrt{S}=\sqrt{s}\), the partonic luminosity \(\mathcal{L}_{ij}\) is given by the convolution of the PDFs of the two colliding protons (see Eq. (1.2) in Ref. [1]), and the partonic labels are \(i,j\in\{g,q,\bar{q}\}\). In the photoproduction case, with \(\sqrt{S}=\sqrt{s_{\gamma p}}\), we have just one proton PDF: \(\mathcal{L}_{ij}(X,\mu_{F})=Xf_{i}(X,\mu_{F})\). The \(\hat{\sigma}_{ij}\) in Eq. (1) is the CF coefficient function, which at LO in \(\alpha_{s}\) for the hadroproduction case is given by the partonic cross section of the process:
\[g(q_{1})+g(q_{2})\to c\bar{c}\left[{}^{1}S_{0}^{[1]}\right](p), \tag{2}\]
with \(\hat{s}=(q_{1}+q_{2})^{2}\), \(q_{1,2}^{2}=0\), so that \(\hat{\sigma}_{gg}^{\rm(CF\,LO)}\propto\delta(X-1)\). For the \(J/\psi\) photoproduction process the LO contribution is given by:
\[g(q_{1})+\gamma(q)\to c\bar{c}\left[{}^{3}S_{1}^{[1]}\right](p)+g, \tag{3}\]
with \(\hat{s}=(q_{1}+q)^{2}\), \(q^{2}=q_{1}^{2}=0\), and \(\hat{\sigma}_{g\gamma}^{\rm(CF\,LO)}\) is a smooth function of \(X\) in this case.
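To make the structure of Eq. (1) concrete, the following toy numerical sketch evaluates the convolution with stand-in functions for the luminosity and the coefficient function (neither is a real PDF fit nor the actual \(\hat{\sigma}_{ij}\)); it merely illustrates how a luminosity rising at small \(X\) makes the integral sensitive to the \(X\ll 1\) region at high collision energy:

```python
from scipy.integrate import quad

M2 = 9.0        # GeV^2, of the order of a charmonium mass squared (toy value)
S = 13000.0**2  # GeV^2, hadronic collision energy squared
X_min = M2 / S

def lumi(X):     # toy gluon-gluon luminosity, steeply rising towards small X
    return X**-0.3 * (1.0 - X)**5

def sigma_hat(X):  # toy coefficient function, smooth in X (arbitrary units)
    return 1.0 / (1.0 + X)

sigma, err = quad(lambda X: lumi(X) * sigma_hat(X) / X, X_min, 1.0, limit=200)
print(f"sigma ~ {sigma:.4g} (toy units), dominated by the small-X region")
```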
The perturbative instability illustrated in Figs. 1 and 2 arises due to the behaviour of the NLO CF coefficient function \(\hat{\sigma}_{ij}^{\rm(CF\,NLO)}(X,\mu_{F},\mu_{R})\) at \(X\ll 1\), so it is natural to seek a solution to this problem via a perturbative resummation of the CF coefficient function in this region.
Such a resummation is provided by the High-Energy Factorisation (HEF) formalism of Refs. [6; 7; 8; 9], which resums the series of higher-order corrections to \(\hat{\sigma}_{ij}\) at leading power in \(X\ll 1\) which scale as \(\alpha_{s}^{n}\ln^{n-1}(1/X)\), referred to as the Leading Logarithmic Approximation (LLA). For the photoproduction case the resummation formula in the strict LLA(\(\ln(1/X)\)) is derived in Ref. [2]:
\[\frac{d\hat{\sigma}_{ij}^{\rm(HEF)}}{dz}(X,\mu_{F},\mu_{R})=\frac{1}{2M^{2}} \int\limits_{0}^{\infty}d{\bf k}_{T}^{2}\mathcal{C}_{gi}(X,{\bf k}_{T}^{2}, \mu_{F},\mu_{R})\]
\[\times\int\limits_{1/z}^{\infty}\frac{dy}{y}\frac{d{\cal H}}{dz}({\bf k}_{T}^{ 2},y,z), \tag{4}\]
where we take into account the possibility of experimental cuts on the elasticity variable \(z=(Pp)/(Pq)\), with \(P\) being the proton momentum, and the resummation factor \(\mathcal{C}_{gi}(X,{\bf k}_{T}^{2},\mu_{F},\mu_{R})\) is taken in the Doubly-Logarithmic Approximation (DLA), which resums terms \(\propto\left[\alpha_{s}(\mu_{R})\ln(1/X)\ln({\bf k}_{T}^{2}/\mu_{F}^{2})\right]^{n}\) to stay consistent with the \(\mu_{F}\)-evolution of PDFs; see Sec. 2.3 of Ref. [1] and references therein for a more detailed discussion. The coefficient function \(d{\cal H}/dz\) is derived in Appendix A of Ref. [2] and is related to the following "off-shell" analog of the partonic subprocess (3):
\[R_{+}(k)+\gamma(q)\to c\bar{c}\left[{}^{3}S_{1}^{[1]}\right](p)+g, \tag{5}\]
with \(k=q_{1}+k_{T}\) so that \(k^{2}=-{\bf k}_{T}^{2}\), and \(R_{+}\) denotes the _Reggeized gluon_, which can be defined e.g. using the gauge-invariant EFT for Multi-Regge processes in QCD [10]. Coefficient functions of subprocesses with one Reggeized gluon in the initial state, such as (5), are often referred to as _impact-factors_ in the literature.
The resummation formula for the \(\eta_{c}\) hadroproduction case involves two resummation factors \(\mathcal{C}_{gi}\) and is more
Figure 1: The \(pp\) collision energy (\(\sqrt{s}\)) dependence of the total cross section of production of the \(c\bar{c}\)-pair in the \({}^{1}S_{0}^{[1]}\) state at LO (gray curve) and NLO (blue curve) of CF, shown together with the corresponding 5-point scale-variation bands. The NLO computation with the \(\hat{\mu}_{F}\)-scale of Ref. is shown by the dashed line. The figure is taken from Ref.
Figure 2: The \(\gamma p\) collision energy (\(\sqrt{s_{\gamma p}}\)) dependence of the total cross section of prompt \(J/\psi\) photoproduction in the CSM at LO (grey curve) and NLO (blue curve) of CF, shown together with the corresponding 5-point scale-variation bands. The NLO computation with the \(\hat{\mu}_{F}\)-scale of Ref. is shown by the dashed line. The figure is taken from Ref.
cumbersome, so we will not reproduce it here; see Eq. (2.6) in Ref. [1]. Importantly, it involves the off-shell coefficient function which is given by the analog of the partonic subprocess (2) with two Reggeised gluons in the initial state:
\[R_{+}(k_{1})+R_{-}(k_{2})\to c\bar{c}\left[{}^{1}S_{0}^{[1]}\right](p), \tag{6}\]
with \(k_{1,2}=q_{1,2}+k_{1,2T}\), \(k_{1,2}^{2}=-{\bf k}_{1,2T}^{2}\).
The HEF resummation outlined above is valid only for \(X\ll 1\), so to compute the integral (1) we must combine it with the NLO CF approximation for \(\hat{\sigma}_{ij}\) for \(X\lesssim 1\). We do this using the smooth weight functions (\(0<w_{ij}(X)<1\)):
\[\hat{\sigma}_{ij}(X)=w_{ij}(X)\hat{\sigma}_{ij}^{(\rm NLO\;CF)}(X)\] \[+(1-w_{ij}(X))\hat{\sigma}_{ij}^{(\rm HEF)}(X), \tag{7}\]
which are computed according to the Inverse Error Weighting (InEW) prescription of Ref. [11], further developed in Refs. [1; 2].
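Schematically, the InEW combination of the two approximations at a given \(X\) looks as follows; this is only a sketch of the idea of Ref. [11] with placeholder error estimates, while the actual estimates used in Refs. [1; 2] are more elaborate:

```python
def inew_combine(sig_cf, sig_hef, delta_cf, delta_hef):
    """Weight two estimates of the coefficient function by their inverse
    squared error estimates; the two weights add up to one."""
    w_cf = delta_cf**-2 / (delta_cf**-2 + delta_hef**-2)
    return w_cf * sig_cf + (1.0 - w_cf) * sig_hef
```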
The results of such a matched computation are shown in Figs. 3 and 4 for \(\eta_{c}\) hadroproduction and \(J/\psi\) photoproduction respectively. One can see that the instability of the scale-variation band at high energy is gone, and the band is even reduced in comparison with the LO bands shown in Figs. 1 and 2. From Fig. 4 one can see that PDF uncertainties for \(\sqrt{s_{\gamma p}}<500\) GeV are clearly subdominant, showing that it is the improvement of the high-energy behaviour of the CF coefficient function that has stabilised the predictions.
These results look encouraging; however, their residual scale uncertainty is still unacceptably large, calling for an improvement of the computation beyond the DLA. One of the key steps towards this goal is the computation of loop corrections to off-shell subprocesses such as (5) and (6). In the next section we report the first results of such computations involving NRQCD states.
## 2 One-loop quarkonium impact factors
In this section we present our computation of one loop impact-factors for the following processes:
\[R_{+}(k)+\gamma(q) \to Q\bar{Q}\left[{}^{1}S_{0}^{[8]}\right](p), \tag{8}\] \[R_{+}(k)+g(q) \to Q\bar{Q}\left[{}^{1}S_{0}^{[1]}\right](p), \tag{9}\]
where \(q^{2}=0\), \(k^{2}=-{\bf k}_{T}^{2}\), \(q^{+}=k^{-}=0\), \(q^{-}>0\), \(p^{2}=M^{2}=4m_{c}^{2}\), and the light-cone components are defined as \(k^{\pm}=k^{0}\pm k^{3}\). The subprocess (8) is mostly of academic interest, since it contributes e.g. to inclusive \(J/\psi\) photoproduction at the "exclusive" kinematic threshold \(z=1\), where no data exist. However, it was very instructive for us to consider it because of the smaller number of Feynman diagrams and master integrals contributing in comparison with the subprocess (9). The subprocess (9) is more physical and can be used to study \(\eta_{c,b}\) production at forward rapidities at hadron colliders.
However, the most useful for phenomenology at hadron colliders would be the computation of central production vertices, similar to the subprocess (6), which can be performed within the EFT formalism without computing new integrals besides those which arise in the impact-factor computations. The computation reported in these proceedings serves as a stepping stone towards the computation of central production vertices.
### Outline of the computation
The Feynman diagrams for both subprocesses have been generated using a custom-made model file for FeynArts [12], in which the \(Rg\) "mixing" coupling (see e.g. Eq. (13) in Ref. [13]) and the \(Rgg\) induced coupling (see e.g. Eq. (14) in Ref. [13]) have been implemented. Example Feynman diagrams are shown in Figs. 5 and 6. Then, the NRQCD spin and colour projectors have been inserted, and the momenta of the heavy quarks have been put to \(p_{c}=p_{\bar{c}}=p/2\) to project out the \(S\)-wave. After taking the interference with the corresponding LO impact-factor, the obtained scalar quantity can be reduced down to one-loop master integrals using integration-by-parts (IBP) reduction; we use the FIRE [14] package for this purpose. However, due to the above-mentioned choice of
Figure 3: The same cross section as in Fig. 1, computed via the matching of the NLO computation in CF with the DLA HEF resummation in the approach of Ref. [1]. Different curves correspond to different PDF sets, and bands correspond to the same 5-point scale variation as in Fig. 1.
momenta of the heavy quarks, linearly-dependent quadratic denominators appear in some diagrams, e.g. in the \(Rg\)-coupling diagrams #2 and #3 and the \(Rgg\)-coupling diagram #3 in both Figs. 5 and 6. These linearly-dependent denominators have to be partial-fractioned into different master topologies before the IBP reduction can be performed. This is a common procedure in one-loop computations involving quarkonia, and we have implemented it into our FeynCalc [15; 16]-based code.
To regularise rapidity divergences in scalar one-loop integrals we tilt the direction-vectors of Wilson lines in the effective action from the light-cone:
\[n_{\pm}\to\tilde{n}_{\pm}=n_{\pm}+rn_{\mp}, \tag{10}\]
with \(0<r\ll 1\), as was first proposed in Refs. [17; 18]; the same regularisation is also used in our Ref. [13]. Besides the rapidity-divergent scalar one-loop integrals which are listed in the latter paper, the present computation also contains integrals mixing massive quadratic propagators with linear propagators, which also acquire a nontrivial \(r\)-dependence. Fortunately, for such integrals the massive propagators can be traded for massless ones, using the following algebraic identity:
\[\frac{1}{((\tilde{n}_{+}l)+k_{+})(l^{2}-m^{2})}=\frac{1}{((\tilde {n}_{+}l)+k_{+})(l+\kappa\tilde{n}_{+})^{2}}\] \[+\frac{2\kappa\Big{[}(\tilde{n}_{+}l)+\frac{m^{2}+\tilde{n}_{+}^{ 2}\kappa^{2}}{2\kappa}\Big{]}}{((\tilde{n}_{+}l)+k_{+})(l+\kappa\tilde{n}_{+} )^{2}(l^{2}-m^{2})}, \tag{11}\]
where we choose the parameter \(\kappa\) in such a way that the linear denominator in the last term in the r.h.s. gets canceled, while in the first term we are left with only a linear and a massless quadratic denominator. This identity can be applied recursively to remove all massive quadratic denominators from an integral containing the linear denominator. All additional terms generated by this procedure are just usual one-loop integrals with quadratic massive or massless propagators. The known results from the literature can be used for the latter integrals, and we exploit the implementation of PackageX [19] into FeynHelpers [20] for this purpose; a symbolic check of the identity (11) is sketched at the end of this subsection. The new rapidity-divergent scalar integrals which we have encountered during this computation are:
\[B_{[-]}(-K,K-q)=\int\frac{d^{D}l}{[\tilde{l}^{-}](l-K)^{2}(l+K-q)^{2}}, \tag{12}\]
\[C_{[-]}(0,-K,K-q)=\int\frac{d^{D}l}{[\tilde{l}^{-}]l^{2}(l-K)^{2}(l+K-q)^{2}}, \tag{13}\]
\[B_{[-]}(p,K)=\int\frac{d^{D}l}{[\tilde{l}^{-}](l+p)^{2}(l+K)^{2}}, \tag{14}\]
\[C_{[-]}(p,K,k)=\int\frac{d^{D}l}{[\tilde{l}^{-}](l+p)^{2}(l+K)^{2}(l+k)^{2}}, \tag{15}\]
Figure 4: The same cross section as in the Fig. 2 computed via the matching of NLO computation in CF with the DLA HEF resummation in the approach of the Ref. [2] Different curves correspond to different PDF sets together with corresponding PDF uncertainties, shaded bands corresponds to the same 5 point scale variation as in the Fig. 2.
Figure 5: Example Feynman diagrams with \(Rg\) (top row) and \(Rgg\) (bottom row) couplings, contributing to the subprocess (8) at one loop
Figure 6: Example Feynman diagrams with \(Rg\) (top row) and \(Rgg\) (bottom row) couplings, contributing to the subprocess (9) at one loop
where \(K=[p-M^{2}n_{-}/(2q^{-})]/2\) with \(n_{-}^{\mu}=(1,0,0,1)^{\mu}\). These integrals have the same complexity as the integral \(C_{[-]}\) with two scales, computed in Ref. [13]. We will cover the computation of these integrals in more detail in a longer version of this paper.
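The identity (11) can be verified symbolically by treating \(l^{2}\), \((\tilde{n}_{+}l)\), \(\tilde{n}_{+}^{2}\), \(m^{2}\), \(k_{+}\) and \(\kappa\) as independent variables, e.g. with SymPy:

```python
import sympy as sp

l2, nl, n2, m2, kp, kappa = sp.symbols("l2 nl n2 m2 kp kappa")

lin = nl + kp                            # linear propagator (n~_+ l) + k_+
quad_m = l2 - m2                         # massive quadratic propagator l^2 - m^2
shifted = l2 + 2*kappa*nl + kappa**2*n2  # massless propagator (l + kappa n~_+)^2

lhs = 1 / (lin * quad_m)
rhs = 1 / (lin * shifted) \
    + 2*kappa*(nl + (m2 + n2*kappa**2)/(2*kappa)) / (lin * shifted * quad_m)

assert sp.simplify(lhs - rhs) == 0       # Eq. (11) holds identically
print("identity (11) verified")
```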
### Results for quarkonium impact factors
In this subsection we present the results of the computation outlined above, expanded in the limit \(r\ll 1\) as well as in \(\epsilon\). We present the real parts of the interference of the one-loop and LO impact-factors of the subprocesses (8) and (9), normalised by the corresponding LO impact-factors and with the heavy-quark-mass renormalisation counterterm in the on-shell scheme added, as is customary for heavy-quarkonium production studies. For the subprocesses (8) and (9) respectively, these results can be written as follows:
\[2\Re\left[\frac{H_{\rm 1-loop}^{{}^{1}S_{0}^{[8]}}(\mathbf{k}_{T})+(\text{OS mass CT})}{(\alpha_{s}/(2\pi))\,H_{\rm LO}^{{}^{1}S_{0}^{[8]}}(\mathbf{k}_{T})}\right]=\left(\frac{\mu^{2}}{\mathbf{k}_{T}^{2}}\right)^{\epsilon}\frac{1}{\epsilon}\left[-\frac{2n_{F}}{3}-\frac{3}{2N_{c}}\right.\] \[\left.+N_{c}\left(\ln\frac{\mathbf{k}_{T}^{2}}{M^{2}}+\ln\frac{\epsilon^{2}}{\mathbf{k}_{T}^{2}}+\frac{19}{6}\right)\right]+F_{{}^{1}S_{0}^{[8]}}(\mathbf{k}_{T}^{2}/M^{2})+O(r,\epsilon), \tag{16}\]
\[2\Re\left[\frac{H_{\rm 1-loop}^{{}^{1}S_{0}^{[1]}}(\mathbf{k}_{T})+(\text{OS mass CT})}{(\alpha_{s}/(2\pi))\,H_{\rm LO}^{{}^{1}S_{0}^{[1]}}(\mathbf{k}_{T})}\right]=\left(\frac{\mu^{2}}{\mathbf{k}_{T}^{2}}\right)^{\epsilon}\left(-\frac{N_{c}}{\epsilon^{2}}+\frac{1}{\epsilon}\left[-\frac{2n_{F}}{3}\right.\right.\] \[\left.\left.-\frac{3}{2N_{c}}+N_{c}\left(\ln\frac{\mathcal{L}_{c}}{\mathbf{k}_{T}^{2}}+\frac{25}{6}\right)\right]\right)+F_{{}^{1}S_{0}^{[1]}}(\mathbf{k}_{T}^{2}/M^{2})+O(r,\epsilon). \tag{17}\]
where \(n_{F}\) is the number of flavours of light quarks. It is crucial that the sole remaining dependence on \(\ln r\) in Eqns. (16) and (17) is proportional to the one-loop Regge trajectory of the gluon, as required by gluon Reggeisation, while terms \(\sim\ln^{2}r\) have cancelled non-trivially between different diagrams. The remainder functions \(F_{m}(\tau)\), with \(m={}^{1}S_{0}^{[8]}\) or \({}^{1}S_{0}^{[1]}\), can be decomposed w.r.t. different colour structures:
\[F_{m}(\tau)=-\frac{10}{9}n_{F}+\Re[C_{F}F_{m}^{(C_{F})}(\tau)+C_{A}F_{m}^{(C_ {A})}(\tau)], \tag{18}\]
The coefficients in front of \(C_{F}\) are the same for both processes:
\[F_{{}^{1}S_{0}^{[8]}}^{(C_{F})}(\tau)=F_{{}^{1}S_{0}^{[1]}}^{(C_{F})}(\tau)=\frac{\mathcal{L}_{2}+\mathcal{L}_{7}(1-2\tau)}{\tau+1}\] \[+\frac{1}{6(\tau+1)(2\tau+1)^{2}}\Big{\{}144L_{1}\tau^{2}+144L_{1}\tau\] \[+36L_{1}-16\pi^{2}\tau^{3}-72\tau^{3}+72\tau^{3}\log(2)\] \[-156\tau^{2}+12\tau^{2}\log^{2}(2\tau+1)+168\tau^{2}\log(2)\] \[-24\left(3\tau^{2}+5\tau+2\right)\tau\log(\tau+1)+12\pi^{2}\tau\] \[-108\tau+12\tau\log^{2}(2\tau+1)+3\log^{2}(2\tau+1)\] \[+132\tau\log(2)+18(\tau+1)(2\tau+1)^{2}\log(\tau)\] \[+4\pi^{2}-24+36\log(2)\Big{\}}. \tag{19}\]
The coefficient in front of \(C_{A}\) for subprocess (8) is:
\[F_{{}^{1S_{0}^{[8]}}}^{(C_{A})}(\tau)=\frac{1}{2(\tau-1)(\tau+1)^{ 3}}\Big{\{}(\tau+1)^{2}\left(-4\mathcal{L}_{4}\left(\tau^{2}-1\right)\right.\] \[+\mathcal{L}_{2}(\tau+1)(2\tau+1)+\mathcal{L}_{7}(2\tau-3)+ \mathcal{L}_{7})\] \[+2\mathcal{L}_{6}(\tau(\tau((\tau-4)\tau-6)-4)+1)\Big{\}}\] \[+\frac{1}{36(\tau-1)(\tau+1)^{3}(2\tau+1)}\Big{\{}-216L_{1}\tau^ {4}-324L_{1}\tau^{3}\] \[+108L_{1}\tau^{2}+324L_{1}\tau+108L_{1}+120\pi^{2}\tau^{5}\] \[+608\tau^{5}-36\tau^{5}\log^{2}(\tau+1)+36\tau^{5}\log^{2}(2\tau+1)\] \[-36\tau^{5}\log^{2}(2)-72\tau^{5}\log(2)\log(\tau+1)\] \[+216\tau^{5}\log(\tau+1)+72\tau^{5}\log(2)+228\pi^{2}\tau^{4}\] \[+1520\tau^{4}-306\tau^{4}\log^{2}(2)+360\tau^{4}\log(2)\] \[-306\tau^{4}\log^{2}(\tau+1)+144\tau^{4}\log^{2}(2\tau+1)\] \[+252\tau^{4}\log(2)\log(\tau+1)+432\tau^{4}\log(\tau+1)\] \[+84\pi^{2}\tau^{3}+608\tau^{3}-360\tau^{3}\log^{2}(\tau+1)\] \[-360\tau^{3}\log^{2}(2)+576\tau^{3}\log(2)\log(\tau+1)\] \[+225\tau^{3}\log^{2}(2\tau+1)-1216\tau^{2}-108\tau^{2}\log^{2}(2)\] \[+72\tau^{3}\log(\tau+1)+72\tau^{3}\log(2)-120\pi^{2}\tau^{2}\] \[-108\tau^{2}\log^{2}(\tau+1)+171\tau^{2}\log^{2}(2\tau+1)\] \[+504\tau^{2}\log(2)\log(\tau+1)-360\tau^{2}\log(2(\tau+1))\] \[-72(\tau+1)^{3}\left(2\tau^{2}-\tau-1\right)\log(\tau-1)\log(2/( \tau+1))\] \[+36(2\tau+1)\log(\tau)\left[-\tau^{4}+\tau^{4}\log(8)-6\tau^{2} \log(2)\right.\] \[+\left(-\tau^{3}+4\tau^{2}+6\tau+4\right)\tau\log(\tau+1)\] \[-8\tau\log(2)-\log(2\tau+2)+1\right]+63\tau\log^{2}(2\tau+1)\] \[-18\left(2\tau^{5}+17\tau^{4}+20\tau^{3}+6\tau^{2}-6\tau-3\right) \log^{2}(\tau)\] \[-84\pi^{2}\tau-1216\tau+108\tau\log^{2}(\tau+1)\] \[+108\tau\log^{2}(2)+54\log^{2}(\tau+1)+9\log^{2}(2\tau+1)\] \[+72\tau\log(2)\log(\tau+1)-288\pi\log(\tau+1)\] \[-144\tau\log(2)-36\log(2)\log(\tau+1)\] \[-72\log(\tau+1)-12\pi^{2}-304+54\log^{2}(2)\Big{\}} \tag{20}\]
while for the subprocess (9) it is:
\[F_{{}^{1}S_{0}^{[1]}}^{(C_{A})}(\tau)=\frac{1}{(\tau-1)(\tau+1)^{3}}\Big{\{}2\mathcal{L}_{1}\left(\tau^{2}+\tau-2\right)(\tau+1)^{3}\] \[+\tau\Big{[}2\mathcal{L}_{5}\left(\tau(\tau+1)\left(\tau^{2}-2\right)+1\right)-\mathcal{L}_{7}\left(\tau^{2}+\tau-1\right)\] \[-\left(\mathcal{L}_{2}(\tau+2)(\tau+1)^{2}\right)\] \[+\mathcal{L}_{6}(\tau(\tau(6-(\tau-4)\tau)+4)-1)\Big{]}\] \[+2\mathcal{L}_{3}(\tau-1)(\tau+1)^{3}+2\mathcal{L}_{5}+\mathcal{L}_{7}\Big{\}}\] \[-\frac{1}{18(\tau-1)(\tau+1)^{3}}\Big{\{}6\pi^{2}\tau^{5}-36\tau^{5}\log(2)\log(\tau+1)\]
\[+36\tau^{5}\log(\tau+1)\log(\tau+2)+63\pi^{2}\tau^{4}-98\tau^{4}\] \[-63\tau^{4}\log^{2}(\tau+1)+9\tau^{4}\log^{2}(2\tau+1)\] \[-63\tau^{4}\log^{2}(2)+138\pi^{2}\tau^{3}\] \[+54\tau^{4}\log(2)\log(\tau+1)-36\tau^{4}\log(\tau+1)\] \[+36\tau^{4}\log(\tau+1)\log(\tau+2)+36\tau^{4}\log(2)\] \[-196\tau^{3}-72\tau^{3}\log^{2}(\tau+1)+36\tau^{3}\log^{2}(2\tau+1)\] \[-72\tau^{3}\log^{2}(2)+144\tau^{3}\log(2)\log(\tau+1)\] \[-36\tau^{3}\log(\tau+1)-72\tau^{3}\log(\tau+1)\log(\tau+2)\] \[-36\tau^{3}\log(2)+18\pi^{2}\tau^{2}-18\tau^{2}\log^{2}(\tau+1)\] \[+45\tau^{2}\log^{2}(2\tau+1)-18\tau^{2}\log^{2}(2)\] \[+108\tau^{2}\log(2)\log(\tau+1)+36\tau^{2}\log(\tau+1)\] \[-72\tau^{2}\log(\tau+1)\log(\tau+2)-36\tau^{2}\log(2)\] \[-18\left(4\tau^{4}+5\tau^{3}+\tau^{2}-3\tau-1\right)\log^{2}(\tau)\] \[+18\log(\tau)\Big{[}\tau^{5}\log(2)-\tau^{4}(\log(4)-2)\] \[-2\tau^{2}(1+\log(4))-\tau^{3}\log(4)\] \[-\left(\tau^{4}-4\tau^{3}-6\tau^{2}-4\tau+1\right)\tau\log(\tau+1)\] \[-\tau\log(8)-\log(4)\Big{]}\] \[-120\pi^{2}\tau+196\tau+36\tau\log^{2}(\tau+1)\] \[+18\tau\log^{2}(2\tau+1)+36\tau\log^{2}(2)+9\log^{2}(\tau+1)\] \[-36\tau\log(2)\log(\tau+1)+36\tau\log(\tau+1)\] \[+36\tau\log(\tau+1)\log(\tau+2)+36\tau\log(2)\] \[-36(\tau-1)(\tau+1)^{3}\log(\tau-1)(\log(2)-\log(\tau+1))\] \[-18\log(2)\log(\tau+1)+36\log(\tau+1)\log(\tau+2)\] \[-69\pi^{2}+98+9\log^{2}(2)\Big{\}}. \tag{21}\]
In the formulas above, the following combinations of logarithms and dilogarithms appear:
\[L_{1} = \sqrt{\tau(1+\tau)}\ln\left[1+2\tau+2\sqrt{\tau(1+\tau)}\right],\] \[\mathcal{L}_{1} = \mathrm{Li}_{2}\left(\frac{1}{\tau}+1\right),\] \[\mathcal{L}_{2} = \mathrm{Li}_{2}\left(\frac{1}{-2\tau-1}\right),\] \[\mathcal{L}_{3} = \mathrm{Li}_{2}\left(\frac{1}{\tau}\right)+\mathrm{Li}_{2}\left( \frac{\tau-1}{\tau+1}\right)-\mathrm{Li}_{2}\left(\frac{\tau+1}{2\tau}\right)\] \[+\frac{\mathrm{Li}_{2}\left(\frac{1}{4}\right)}{2}+\mathrm{Li}_{2 }(-2),\] \[\mathcal{L}_{4} = \mathrm{Li}_{2}\left(1+\frac{1}{\tau}\right)+\mathrm{Li}_{2}\left( \frac{1}{\tau}\right)+\mathrm{Li}_{2}\left(\frac{\tau-1}{\tau+1}\right)\] \[-\mathrm{Li}_{2}\left(\frac{\tau+1}{2\tau}\right)+\frac{\mathrm{ Li}_{2}\left(\frac{1}{4}\right)}{2}+\mathrm{Li}_{2}(-2),\] \[\mathcal{L}_{5} = \mathrm{Li}_{2}\left(-\frac{1}{\tau+1}\right)-\mathrm{Li}_{2}( \tau+2)+\frac{1}{2}\mathrm{Li}_{2}\left(\frac{2\tau+1}{2\tau+2}\right),\] \[\mathcal{L}_{6} = -\mathrm{Li}_{2}\left(-\frac{2\tau+1}{\tau^{2}}\right)+\mathrm{Li}_ {2}\left(-\frac{2\tau^{2}+\tau+1}{2\tau^{2}}\right)\] \[+\mathrm{Li}_{2}\left(\frac{1}{2}-\frac{\tau}{2}\right)+\mathrm{ Li}_{2}\left(-\frac{1}{\tau}\right)\] \[-\mathrm{Li}_{2}\left(\frac{\tau-1}{2\tau}\right)-\mathrm{Li}_{2}( -\tau)+\mathrm{Li}_{2}\left(\frac{1-\tau}{\tau+1}\right),\] \[\mathcal{L}_{7} = \mathrm{Li}_{2}(-2\tau-1)-\mathrm{Li}_{2}\left(\frac{2\sqrt{\tau}}{ \sqrt{\tau}-\sqrt{\tau+1}}\right)\] \[-\mathrm{Li}_{2}\left(\frac{2\sqrt{\tau}}{\sqrt{\tau}+\sqrt{\tau+1 }}\right).\]
The number of dilogarithms with different arguments can clearly be reduced using known dilogarithm identities, revealing further structure in the impact-factor expressions.
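For instance, such reductions can be cross-checked numerically before being applied. Below is a minimal sketch in Python with mpmath; the two identities shown (Euler's reflection and Landen's transformation) are standard ones, chosen by us for illustration:

```python
from mpmath import mp, polylog, log, pi

mp.dps = 30  # working precision

def Li2(x):
    return polylog(2, x)

x = mp.mpf('0.37')

# Euler reflection: Li2(x) + Li2(1-x) = pi^2/6 - log(x) log(1-x)
print(abs(Li2(x) + Li2(1 - x) - (pi**2/6 - log(x)*log(1 - x))))

# Landen: Li2(x) + Li2(x/(x-1)) = -log(1-x)^2 / 2
print(abs(Li2(x) + Li2(x/(x - 1)) - (-log(1 - x)**2/2)))
# Both print ~1e-30, i.e. zero at the working precision.
```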
## 3 Conclusions and Outlook
In this contribution we have discussed the perturbative instability of \(p_{T}\)-integrated cross sections for the production of heavy quarkonia at NLO in CF, and its resolution through the matching with the DLA resummation in the HEF formalism. We have also described our progress towards going beyond the DLA, namely the first computation of one-loop corrections to impact factors involving NRQCD states of the \(Q\bar{Q}\) pair: \(Q\bar{Q}\left[{}^{1}S_{0}^{[8]}\right]\) and \(Q\bar{Q}\left[{}^{1}S_{0}^{[1]}\right]\). The expected structure of rapidity, ultraviolet and infrared divergences has been found, which is a strong cross-check of the computation. In the future, the real-emission contribution will also be computed to obtain the infrared-finite NLO correction to the impact factors.
**Acknowledgments:** This work is supported by the Marie Sklodowska-Curie action "RadCor4HEF" under grant agreement No. 101065263.
|
2308.16401 | Optimality and Constructions of Spanning Bipartite Block Designs | We consider a statistical problem to estimate variables (effects) that are associated with the edges of a complete bipartite graph $K_{v_1, v_2}=(V_1, V_2\, ; E)$. Each data point is obtained as a sum of selected effects, a subset of $E$. In order to estimate efficiently, we propose a design called Spanning Bipartite Block Design (SBBD). For SBBDs such that the effects are estimable, we prove that the estimators have the same variance (variance balanced). If each block (a subgraph of $K_{v_1, v_2}$) of the SBBD is a semi-regular or a regular bipartite graph, we show that the design is A-optimum. We also show a construction of SBBDs using an ($r,\lambda$)-design and an ordered design. A BIBD with a prime power number of blocks gives an A-optimum semi-regular or regular SBBD. Finally, we mention that this SBBD can be used for deep learning. | Shoko Chisaki, Ryoh Fuji-Hara, Nobuko Miyamoto | 2023-08-31T02:11:51Z | http://arxiv.org/abs/2308.16401v1 | # Optimality and Constructions of Spanning Bipartite Block Designs
###### Abstract
We consider a statistical problem to estimate variables (effects) that are associated with the edges of a complete bipartite graph \(K_{v_{1},v_{2}}=(V_{1},V_{2}\,;E)\). Each data point is obtained as a sum of selected effects, a subset of \(E\). In order to estimate efficiently, we propose a design called Spanning Bipartite Block Design (SBBD). For SBBDs such that the effects are estimable, we prove that the estimators have the same variance (variance balanced). If each block (a subgraph of \(K_{v_{1},v_{2}}\)) of the SBBD is a semi-regular or a regular bipartite graph, we show that the design is A-optimum. We also show a construction of SBBDs using an \((r,\lambda)\)-design and an ordered design. A BIBD with a prime power number of blocks gives an A-optimum semi-regular or regular SBBD. Finally, we mention that this SBBD can be used for deep learning.
**Keywords.** spanning bipartite block design, A-optimum, variance balanced, \((r,\lambda)\)-design, balanced incomplete block design, ordered design, deep learning
**AMS classification.** 62K05, 62K10, 05B05
## 1 Introduction
Let \(V_{1}\) and \(V_{2}\) be point sets, and let \(E\) be the set of edges between \(V_{1}\) and \(V_{2}\); together they form the complete bipartite graph \(K_{v_{1},v_{2}}=(V_{1},V_{2}\,;E)\), where \(|V_{1}|=v_{1},|V_{2}|=v_{2}\). We consider the statistical problem of estimating the variables associated with \(E\) from experimental data, for example, communication capacities between two sets of cities or traffic volumes between two sets of cities (see Fig. 1).
Let \(\tau_{ij}\) be a variable (or an effect) to be estimated, corresponding to the edge \((i,j)\), \(i\in V_{1}\), \(j\in V_{2}\), of the complete bipartite graph \(K_{v_{1},v_{2}}\), and let \(\boldsymbol{\tau}\) be the vector of the \(\tau_{ij}\) arranged in the following lexicographical order:
\[\mathbf{\tau}=(\tau_{11},\tau_{12},\ldots,\tau_{1v_{2}}\ ;\ \tau_{21},\tau_{22},\ldots,\tau_{2v_{2}}\ ;\ \cdots\ ;\ \tau_{v_{1}1},\ldots,\tau_{v_{1}v_{2}})^{t}. \tag{1}\]
Figure 1: Image of bipartite problem
We consider the following statistical model (2) that each data \(y_{i}\) is obtained as a sum of selected effects, i.e. a subset of \(\{\tau_{11},\tau_{12},\ldots,\tau_{v_{1}v_{2}}\}\):
\[\begin{split}&\mathbf{y}=\mathbf{X}\boldsymbol{\tau}+\boldsymbol{ \epsilon}\\ &\sum_{j=1}^{v_{2}}\tau_{ij}=0,\ \ 1\leq i\leq v_{1}\\ &\sum_{i=1}^{v_{1}}\tau_{ij}=0,\ \ 1\leq j\leq v_{2},\end{split} \tag{2}\]
where the data vector \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{N})^{t}\) is assumed to have had the mean of all data subtracted, and \(\boldsymbol{\epsilon}\) is a vector of random error variables with distribution \(\mathrm{N}(0,\sigma^{2})\). \(X\) is an \((N\times v_{1}v_{2})\) \((0,1)\)-matrix with, in each row, \(1\) for the selected effects and \(0\) for the other effects. Our purpose is to estimate all effects with high precision. The main problem here is how to design the matrix \(X\).
**Example 1.1**.: This is an example that each data is obtained as a sum of selected effects. The effects have a bipartite graph structure. Let \(v_{1}=2,v_{2}=3\).
\[\begin{split}& y_{1}=\tau_{11}+\tau_{13}+\tau_{22}+\tau_{23}+ \epsilon_{1}\\ & y_{2}=\tau_{12}+\tau_{13}+\tau_{21}+\tau_{22}+\epsilon_{2}\\ &\vdots\hskip 56.905512pt\vdots\\ & y_{N}=\tau_{13}+\tau_{21}+\tau_{22}+\tau_{23}+\epsilon_{N}\end{split}\]
\[\begin{split}\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{N}\end{bmatrix}=\begin{bmatrix}1&0&1&0&1&1\\ 0&1&1&1&1&0\\ &&&\vdots\\ 0&0&1&1&1&1\end{bmatrix}\cdot\begin{bmatrix}\tau_{11}\\ \tau_{12}\\ \tau_{13}\\ \tau_{21}\\ \tau_{22}\\ \tau_{23}\end{bmatrix}+\begin{bmatrix}\epsilon_{1}\\ \epsilon_{2}\\ \vdots\\ \epsilon_{N}\end{bmatrix}\end{split}\]
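A minimal numerical sketch of fitting such a model by least squares follows; the random design, noise level, and the double-centering projection used to impose the sum-to-zero constraints are our illustrative choices, not the estimator analysed later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2, N = 2, 3, 40

# Ground-truth effects satisfying the row/column sum-to-zero constraints
tau = rng.normal(size=(v1, v2))
tau -= tau.mean(axis=1, keepdims=True)   # rows sum to zero
tau -= tau.mean(axis=0, keepdims=True)   # columns sum to zero (rows stay zero-sum)

# Random (0,1)-design: each datum is a sum of selected effects plus noise
X = (rng.random((N, v1 * v2)) < 0.5).astype(float)
y = X @ tau.reshape(-1) + rng.normal(scale=0.1, size=N)

# Least squares via the pseudoinverse, then double-centering back
# onto the constraint space
tau_hat = (np.linalg.pinv(X) @ y).reshape(v1, v2)
tau_hat -= tau_hat.mean(axis=1, keepdims=True)
tau_hat -= tau_hat.mean(axis=0, keepdims=True)
print(np.abs(tau - tau_hat).max())   # small estimation error
```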
There is a similar model, a two-way factorial design having a block factor, called an incomplete split-block design; see [7]. The model is the following:
\[\boldsymbol{y}=X_{1}\boldsymbol{\alpha}+X_{2}\boldsymbol{\gamma}+X_{12}( \boldsymbol{\alpha}\boldsymbol{\gamma})+X_{3}\boldsymbol{\beta}+\boldsymbol{ \epsilon},\]
where \(X_{1},X_{2},X_{12},X_{3}\) are (0,1)-matrices having exactly one 1 in each row, \(\boldsymbol{\alpha},\boldsymbol{\gamma}\) are vectors of main effects, \((\boldsymbol{\alpha}\boldsymbol{\gamma})\) is a vector of interaction effects of \(\boldsymbol{\alpha}\) and \(\boldsymbol{\gamma}\), \(\boldsymbol{\beta}\) is a vector of block effects, and \(\boldsymbol{\epsilon}\) is the error vector. In our model, there are no main or block effects, and each datum is obtained as the sum of the interaction effects within a block rather than through block effects. Furthermore, we insist on spreading out the interaction effects in each block as much as possible.
In this paper, we propose a new design, named the _spanning bipartite block design_, for application to the statistical model (2), and we discuss the precision of the estimators in such designs. In Section 2, the spanning bipartite block design (SBBD) is defined precisely. In Section 3, we discuss the optimality of designs. Optimality problems for new designs are discussed in [7] and [6]. We argue the optimality of SBBDs and show that a design is A-optimum in a certain class. In Section 4, a construction of SBBDs using an \((r,\lambda)\)-design and an ordered design is shown. Finally, in Section 5, we mention the relationship between SBBDs and deep learning.
## 2 Spanning Bipartite Block Design
Let \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{N}\}\) be a collection of subgraphs of \(K_{v_{1},v_{2}}=(V_{1},V_{2}\,;E)\), called spanning bipartite blocks (SB-blocks). If \(\mathcal{B}\) satisfies the following five conditions, then we call \((K_{v_{1},v_{2}}\,;\mathcal{B})\) a _spanning bipartite block design_ (SBBD):
1. Each subgraph \(B_{i}\) of \(\mathcal{B}\) is incident with all points of \(V_{1}\) and \(V_{2}\). This is called the _spanning condition_.
2. Each edge of \(E\) appears in \(\mathcal{B}\) exactly \(\mu\) times.
3. Any two edges \(e_{ij},e_{ij^{\prime}}\in E\) such that \(i\in V_{1}\), \(j,j^{\prime}\in V_{2}\), (\(j\neq j^{\prime}\)) are included together in \(\lambda_{12}\) subgraphs in \(\mathcal{B}\).
4. Any two edges \(e_{ij},e_{i^{\prime}j}\in E\) such that \(i,i^{\prime}\in V_{1}\), (\(i\neq i^{\prime}\)), \(j\in V_{2}\) are included together in \(\lambda_{21}\) subgraphs in \(\mathcal{B}\).
5. Any two edges \(e_{ij}\), \(e_{i^{\prime}j^{\prime}}\in E\) such that \(i,i^{\prime}\in V_{1}\), (\(i\neq i^{\prime}\)), \(j,j^{\prime}\in V_{2}\), (\(j\neq j^{\prime}\)) are included together in \(\lambda_{22}\) subgraphs in \(\mathcal{B}\).
Next, we define a \((0,1)\)-matrix \(X\), called a _design matrix_, from the SB-blocks.
* Suppose that the edges \(e_{ij}\) of \(K_{v_{1},v_{2}}\) are arranged in the same lexicographical order as Equation (1). \[(e_{11},e_{12},\ldots,e_{1v_{2}}\ ;\ e_{21},e_{22},\ldots,e_{2v_{2}}\ ;\ \cdots\ ;\ e_{v_{1}1},\ldots,e_{v_{1}v_{2}}).\] (3) This sequence of edges corresponds to the columns of \(X\). Denote \((e_{ij})\) for the column number which corresponds to the edge \(e_{ij}\).
* Put \(X=[x_{k,(e_{ij})}]\), then \(x_{k,(e_{ij})}\) is the element of the \(k\)-th row and the \((e_{ij})\)-th column of \(X\). The design matrix \(X\) is defined by the SB-blocks \(B_{1},B_{2},\ldots,B_{N}\) as follows: \[x_{k,(e_{ij})}=\begin{cases}1&\text{ if }\ e_{ij}\in B_{k}\\ 0&\text{ otherwise}\end{cases}\] (4)
* \(X\) is an \(N\times v_{1}v_{2}\) matrix.
Partitioning the design matrix gives a convenient way to check the conditions. Let \(X_{i}\) be the \((N\times v_{2})\) submatrix of \(X\) consisting of the \(v_{2}\) columns of \(X\) corresponding to \((e_{i1},e_{i2},\ldots,e_{iv_{2}})\). Then the design matrix \(X\) is partitioned into \(v_{1}\) submatrices, expressed as \(X=(X_{1}\mid X_{2}\mid\cdots\mid X_{v_{1}})\). The conditions of a spanning bipartite block design \((K_{v_{1},v_{2}}\,;\,\mathcal{B})\) can be re-expressed using the design matrix \(X=(X_{1}\mid X_{2}\mid\cdots\mid X_{v_{1}})\) as follows:
1. If \(\mathcal{B}\) satisfies the condition (i), no row of \(X_{i}\) is a zero-vector for \(1\leq i\leq v_{1}\) and \(\sum_{i=1}^{v_{1}}X_{i}\) has no zero element (the spanning condition).
2. If \(\mathcal{B}\) satisfies the condition (ii), all diagonal elements of \(X_{i}^{\,t}X_{i}\) are \(\mu\) for \(1\leq i\leq v_{1}\).
3. If \(\mathcal{B}\) satisfies the condition (iii), all off-diagonal elements of \(X_{i}^{\,t}X_{i}\) are \(\lambda_{12}\) for \(1\leq i\leq v_{1}\).
4. If \(\mathcal{B}\) satisfies the condition (iv), all diagonal elements of \(X_{i}^{\,t}X_{j}\) are \(\lambda_{21}\) for \(1\leq i\neq j\leq v_{1}\).
5. If \(\mathcal{B}\) satisfies the condition (v), all off-diagonal elements of \(X_{i}^{\,t}X_{j}\) are \(\lambda_{22}\) for \(1\leq i\neq j\leq v_{1}\).
\(X^{\,t}X\) is called an _information matrix_. The information matrix of an SBBD is expressed as follows:
\[X^{t}X = I_{v_{1}}\otimes(X_{i}^{\,t}X_{i})+(J_{v_{1}}-I_{v_{1}})\otimes(X _{i}^{\,t}X_{j}) \tag{5}\] \[= I_{v_{1}}\otimes\begin{bmatrix}\mu&\lambda_{12}&\cdots&\lambda_{1 2}\\ \lambda_{12}&\mu&\cdots&\lambda_{12}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{12}&\lambda_{12}&\cdots&\mu\end{bmatrix}+(J_{v_{1}}-I_{v_{1}}) \otimes\begin{bmatrix}\lambda_{21}&\lambda_{22}&\cdots&\lambda_{22}\\ \lambda_{22}&\lambda_{21}&\cdots&\lambda_{22}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{22}&\lambda_{22}&\cdots&\lambda_{21}\end{bmatrix},\]
where \(1\leq i\neq j\leq v_{1}\), and \(I_{n}\) is the identity matrix of size \(n\) and \(J_{n}\) is the \((n\times n)\) all-ones matrix.
A matrix expressed by \(aI_{n}+b(J_{n}-I_{n})\) is called _completely symmetric_. The information matrix above has a double structure of a completely symmetric matrix. We call the matrix _double completely symmetric_. A spanning bipartite block design (\(K_{v_{1},v_{2}}\,;\mathcal{B}\)) is denoted as SBBD(\(v_{1},v_{2},N\,;\Lambda\)), where \(\Lambda=(\mu,\lambda_{12},\lambda_{21},\lambda_{22})\).
**Example 2.1**.: Let
\[X=[X_{1}\mid X_{2}\mid X_{3}]=\left[\begin{array}{ccc|ccc|ccc}1&1&0&1&1&0&1&0&1\\ 1&1&0&1&0&1&1&1&0\\ 1&1&0&0&1&1&0&1&1\\ 1&0&1&1&1&0&1&1&0\\ 1&0&1&1&0&1&0&1&1\\ 1&0&1&0&1&1&1&0&1\\ 0&1&1&1&1&0&0&1&1\\ 0&1&1&1&0&1&1&0&1\\ 0&1&1&0&1&1&1&1&0\end{array}\right]\]
be a design matrix of an SBBD. Then its information matrix is
\[X^{t}X=I_{3}\otimes\left[\begin{array}{cccc}6&3&3\\ 3&6&3\\ 3&3&6\end{array}\right]+(J_{3}-I_{3})\otimes\left[\begin{array}{cccc}4&4&4&4 \\ 4&4&4\\ 4&4&4\end{array}\right].\]
The design matrix \(X\) satisfies the spanning condition since any row of \(X_{i}\) is not the zero-vector and \(X_{1}+X_{2}+X_{3}\) does not contain \(0\). So we have an SBBD\((3,3,9\,;\Lambda)\), \(\Lambda=(6,3,4,4)\).
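The five conditions can be checked mechanically from a design matrix. Below is a minimal sketch in Python (the function name and encoding are ours), applied to the matrix of Example 2.1:

```python
import numpy as np

def sbbd_parameters(X, v1, v2):
    """Return (mu, lam12, lam21, lam22) if X = (X_1 | ... | X_v1)
    satisfies the SBBD conditions (I)-(V), else None."""
    blocks = [X[:, i*v2:(i+1)*v2] for i in range(v1)]
    # (I) spanning: no zero row in any X_i, and sum_i X_i has no zeros
    if any((b.sum(axis=1) == 0).any() for b in blocks) or sum(blocks).min() == 0:
        return None
    G = X.T @ X
    mu, lam12 = int(G[0, 0]), int(G[0, 1])
    lam21, lam22 = int(G[0, v2]), int(G[0, v2 + 1])
    I, J = np.eye(v2), np.ones((v2, v2))
    for i in range(v1):                      # (II)-(V): uniform blocks
        for j in range(v1):
            ref = mu*I + lam12*(J - I) if i == j else lam21*I + lam22*(J - I)
            if not np.array_equal(G[i*v2:(i+1)*v2, j*v2:(j+1)*v2], ref):
                return None
    return mu, lam12, lam21, lam22

rows = ["110110101", "110101110", "110011011",
        "101110110", "101101011", "101011101",
        "011110011", "011101101", "011011110"]
X = np.array([[int(c) for c in r] for r in rows])
print(sbbd_parameters(X, 3, 3))   # (6, 3, 4, 4)
```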
As can be seen from the above example, the spanning condition cannot be confirmed from the information matrix \(X^{t}X\). If \(v_{1}\ll v_{2}\), there is a high possibility that the spanning condition is not met. A design in which the spanning condition (I) is not guaranteed is denoted by SBBD\({}^{*}\).
## 3 Optimality
### Variance balanced
For a design matrix, we have a statistical problem of whether it is optimum under certain conditions. There are some criteria for the precision of the estimators (variances of estimators), see [8]. Here we use a criterion called A-optimality (or A-criterion).
Let \(e_{ij}^{(v)}=(e_{1},e_{2},\ldots,e_{v})\) be a \((0,1)\)-vector of length \(v\) such that \(e_{i}=1\), \(e_{j}=-1\) and \(e_{k}=0,k\neq i,j\). \((e_{ij}^{(v_{1})}\otimes e_{i^{\prime}j^{\prime}}^{(v_{2})})^{t}\boldsymbol{\tau}\) for any \(1\leq i<j\leq v_{1}\) and \(1\leq i^{\prime}<j^{\prime}\leq v_{2}\) are called _elementary contrasts_.
Suppose an information matrix \(X^{t}X\) has a double structure of completely symmetric matrices \(A\) and \(B\) of size \((v_{1}\times v_{1})\) and \((v_{2}\times v_{2})\), respectively. Let \(\boldsymbol{p}_{1},\boldsymbol{p}_{2},\ldots,\boldsymbol{p}_{v_{1}-1}\) be orthonormal eigenvectors of \(A\) orthogonal to \(\boldsymbol{1}_{v_{1}}\), and also \(\boldsymbol{q}_{1},\boldsymbol{q}_{2},\ldots,\boldsymbol{q}_{v_{2}-1}\) be similar vectors of \(B\) with size \(v_{2}\), where \(\boldsymbol{1}_{n}\) is the all-one \(n\)-vector. A _basic contrast_ of \(A\otimes B\) is defined by
\[(\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})^{t}\boldsymbol{\tau}. \tag{6}\]
Since every \(e_{ij}^{(v_{1})}\otimes e_{i^{\prime}j^{\prime}}^{(v_{2})}\) for any \(1\leq i<j\leq v_{1}\) and \(1\leq i^{\prime}<j^{\prime}\leq v_{2}\) lies in the subspace spanned by \(\boldsymbol{p}_{1}\otimes\boldsymbol{q}_{1},\ \boldsymbol{p}_{1}\otimes \boldsymbol{q}_{2},\ldots,\ \boldsymbol{p}_{v_{1}-1}\otimes\boldsymbol{q}_{v_{2}-1}\), we here use basic contrasts for the proofs in this section, although elementary contrasts are commonly used as contrasts.
Let \(\theta_{1,1},\theta_{1,2},\ldots,\theta_{(v_{1}-1),(v_{2}-1)}\) be non-zero eigenvalues of \(A\otimes B\), and \(\boldsymbol{p}_{1}\otimes\boldsymbol{q}_{1},\ \boldsymbol{p}_{1}\otimes \boldsymbol{q}_{2},\ldots,\ \boldsymbol{p}_{v_{1}-1}\otimes\boldsymbol{q}_{v_{2}-1}\) be orthonormal eigenvectors, that is, \((\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})^{t}\boldsymbol{1}_{v_{1}v_{2}}=0\), \((\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})^{t}(\boldsymbol{p}_{i}\otimes \boldsymbol{q}_{j})=1\) and \((\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})^{t}(\boldsymbol{p}_{i^{\prime}} \otimes\boldsymbol{q}_{j^{\prime}})=0\), \(i\neq i^{\prime}\) or \(j\neq j^{\prime}\) then we have
\[\mathrm{Var}((\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})^{t}\boldsymbol{ \hat{\tau}})=(\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})^{t}(A\otimes B)^{- }(\boldsymbol{p}_{i}\otimes\boldsymbol{q}_{j})\sigma^{2}=\frac{1}{\theta_{i, j}}\sigma^{2}, \tag{7}\]
where \((A\otimes B)^{-}\) is a Moore-Penrose generalized inverse matrix of \((A\otimes B)\).
**Definition 3.1**.: (A-optimum, [5]) Let \(\Xi\) be the set of \((N\times v_{1}v_{2})\) design matrices \(X\) with a certain number of \(1\)'s. Assume \(X^{t}X\), \(X\in\Xi\), has \((v_{1}-1)(v_{2}-1)\) non-zero eigenvalues \(\alpha_{1},\alpha_{2},\ldots,\alpha_{(v_{1}-1)(v_{2}-1)}\). For a design matrix \(X\in\Xi\), if the sum of \(1/\alpha_{i}\) is minimum among \(\Xi\), then \(X\) is called _A-optimum relative to \(\Xi\)_.
\[\text{A-optimum:}\ \min_{X\in\Xi}\ \left\{\sum_{1\leq i\leq(v_{1}-1)(v_{2}-1)} \frac{1}{\alpha_{i}}\right\} \tag{8}\]
**Definition 3.2**.: (Variance Balanced, [10]) If all variances for the estimators of basic contrasts are the same, that is, if \(X^{t}X\) has \((v_{1}-1)(v_{2}-1)\) identical non-zero eigenvalues, then the design is called _variance balanced_:
\[\mathrm{Var}((\mathbf{p}_{i}\otimes\mathbf{q}_{j})^{t}\mathbf{\hat{\tau}})=\frac{1}{\alpha} \sigma^{2}\ \ \mathrm{for}\ 1\leq i\leq v_{1}-1\ \mathrm{and}\ 1\leq j\leq v_{2}-1. \tag{9}\]
It is known that any A-optimum design is variance balanced, but the reverse has not been proven.
Consider the statistical model (2) of an SBBD. If we compare it to an ordinary two-way factorial model with interactions, the treatment effects of Equation (1) have exactly the same structure as the interaction effects, whose information matrix is double completely symmetric; see [7].
**Theorem 3.3**.: _SBBD\({}^{*}\) is variance balanced whenever all basic contrasts of \(\mathbf{\tau}\) are estimable._
**Proof** Let \(X^{t}X\) be a double completely symmetric information matrix from an SBBD\({}^{*}\) which can be expressed with four integers, \(a,b,c,d\) as follows:
\[X^{t}X=I_{v_{1}}\otimes(aI_{v_{2}}+bJ_{v_{2}})+(J_{v_{1}}-I_{v_{1}})\otimes(cI _{v_{2}}+dJ_{v_{2}}). \tag{10}\]
Consider a Moore-Penrose generalized inverse matrix of \(X^{t}X\) by putting \(A_{1},A_{2},B_{1}\) and \(B_{2}\) as follows:
\[A_{1}=I_{v_{1}}-\frac{1}{v_{1}}J_{v_{1}},\ A_{2}=\frac{1}{v_{1}}J_{v_{1}},\ B_{1}=I_{v_{2}}-\frac{1}{v_{2}}J_{v_{2}},\ B_{2}=\frac{1}{v_{2}}J_{v_{2}}.\]
Using the spectral decomposition method, the information matrix \(X^{t}X\) can be rewritten as follows:
\[X^{t}X = (A_{1}+A_{2})\otimes(a(B_{1}+B_{2})+bv_{2}B_{2})+(v_{1}A_{2}-(A_ {1}+A_{2}))\otimes(c(B_{1}+B_{2})+dv_{2}B_{2}) \tag{11}\] \[= (A_{1}+A_{2})\otimes(aB_{1}+(a+bv_{2})B_{2})+((v_{1}-1)A_{2}-A_{ 1})\otimes(cB_{1}+(c+dv_{2})B_{2})\] \[= (a-c)(A_{1}\otimes B_{1})+(a+bv_{2}-c-dv_{2})(A_{1}\otimes B_{2} )+(a+c(v_{1}-1))(A_{2}\otimes B_{1})\] \[+\,(a+bv_{2}+(v_{1}-1)(c+dv_{2}))(A_{2}\otimes B_{2}).\]
Then we can see the eigenvalues \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) of \(X^{t}X\):
\[\alpha=a-c,\ \beta=a-c+(b-d)v_{2},\gamma=a+c(v_{1}-1),\ \delta=a+bv_{2}+(v_{1}-1)(c+ dv_{2}).\]
We are interested in the first term, \(\alpha(A_{1}\otimes B_{1})\), of Equation (11). The matrix of the term can be represented as
\[(A_{1}\otimes B_{1})=\sum_{\begin{subarray}{c}1\leq i\leq v_{1}-1\\ 1\leq j\leq v_{2}-1\end{subarray}}\theta_{i,j}(\mathbf{p}_{i}\otimes\mathbf{q}_{j})( \mathbf{p}_{i}\otimes\mathbf{q}_{j})^{t},\]
where \(\mathbf{p}_{i}\)\((\mathbf{q}_{j})\) are orthonormal eigenvectors of \(A_{1}\)\((B_{1})\) orthogonal to \(\mathbf{1}_{v_{1}}\)\((\mathbf{1}_{v_{2}})\) and \(\theta_{i,j}\) is the eigenvalue corresponding to \(\mathbf{p}_{i}\otimes\mathbf{q}_{j}\). From the matrix forms of \(A_{1}\) and \(B_{1}\), non-zero eigenvalues \(\theta_{i,j}\) are all 1. So, all basic contrasts of \(\mathbf{\tau}\), \((\mathbf{p}_{i}\otimes\mathbf{q}_{j})^{t}\mathbf{\tau}\) for \(1\leq i\leq v_{1}-1\) and \(1\leq j\leq v_{2}-1\), are obtained from the first term. Therefore, the Moore-Penrose generalized inverse matrix \((X^{t}X)^{-}\) including the ordinal inverse matrix is written as:
\[(X^{t}X)^{-}=\frac{1}{\alpha}(A_{1}\otimes B_{1})+\frac{1}{\beta}(A_{1} \otimes B_{2})+\frac{1}{\gamma}(A_{2}\otimes B_{1})+\frac{1}{\delta}(A_{2} \otimes B_{2}). \tag{12}\]
If \(\alpha,\beta,\gamma,\delta\) are all non-zero, \((X^{t}X)^{-}\) is an inverse matrix, otherwise, it is a Moore-Penrose generalized inverse matrix. If at least one of the eigenvalues is 0, then it is obtained by removing the term from Equation (12). Here we put \(\alpha\neq 0,\ (a>c)\) from the assumption that all basic contrasts of effects in the model (2) are estimable, and algebraic multiplicity of \(\alpha\) is \((v_{1}-1)(v_{2}-1)\). Using Equation (7), it holds that
\[\mathrm{Var}((\mathbf{p}_{i}\otimes\mathbf{q}_{j})^{t}\mathbf{\hat{\tau}}) =(\mathbf{p}_{i}\otimes\mathbf{q}_{j})^{t}(X^{t}X)^{-}(\mathbf{p}_{i}\otimes \mathbf{q}_{j})\sigma^{2}\] \[=\frac{1}{\alpha}(\mathbf{p}_{i}\otimes\mathbf{q}_{j})^{t}(A_{1}\otimes B _{1})(\mathbf{p}_{i}\otimes\mathbf{q}_{j})\sigma^{2}\] \[=\frac{1}{\alpha}\sigma^{2}\]
for \(1\leq i\leq v_{1}-1\) and \(1\leq j\leq v_{2}-1\). Therefore, the theorem is complete.
The coefficients \(a,b,c,d\) in the proof correspond to the parameters of SBBD as follows:
\[a=\mu-\lambda_{12},\ \ b=\lambda_{12},\ \ c=\lambda_{21}-\lambda_{22},\ \ d= \lambda_{22}.\]
**Example 3.4**.: Eigenvalues of \(X^{t}X\) in Example 2.1 are
\[36,\ 3,\ 3,\ 3,\ 3,\ 3,\ 3,\ 0,\ 0.\]
These include \((3-1)(3-1)=4\) identical eigenvalues corresponding to \(\alpha\ (=3)\). The parameters of Example 2.1 are \(\mu=6\), \(\lambda_{12}=3\), \(\lambda_{21}=4\), \(\lambda_{22}=4\), \(v_{1}=3\), \(v_{2}=3\). Then we have the following four kinds of eigenvalues, \(\alpha,\beta,\gamma,\delta\):
\[\begin{array}{rcl}\alpha&=&a-c=\mu-\lambda_{12}-\lambda_{21}+\lambda_{22}=3 &(m_{\alpha}=(v_{1}-1)(v_{2}-1)=4)\\ \beta&=&a+bv_{2}-c-dv_{2}=0&(m_{\beta}=v_{1}-1=2)\\ \gamma&=&a+c(v_{1}-1)=3&(m_{\gamma}=v_{2}-1=2)\\ \delta&=&a+bv_{2}+(v_{1}-1)(c+dv_{2})=36&(m_{\delta}=1),\end{array}\]
where \(m_{\alpha},m_{\beta},m_{\gamma},m_{\delta}\) are multiplicities. These are consistent with the proof of Theorem 3.3.
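These eigenvalues and multiplicities can be reproduced numerically. A minimal sketch (assuming numpy; the helper name is ours):

```python
import numpy as np

def double_cs(v1, v2, mu, l12, l21, l22):
    """Information matrix I x (X_i^t X_i) + (J - I) x (X_i^t X_j)."""
    I1, J1 = np.eye(v1), np.ones((v1, v1))
    I2, J2 = np.eye(v2), np.ones((v2, v2))
    D = mu*I2 + l12*(J2 - I2)      # diagonal block
    O = l21*I2 + l22*(J2 - I2)     # off-diagonal block
    return np.kron(I1, D) + np.kron(J1 - I1, O)

M = double_cs(3, 3, 6, 3, 4, 4)
print(np.round(np.linalg.eigvalsh(M), 6))  # [0 0 3 3 3 3 3 3 36]

# Closed forms from the proof of Theorem 3.3:
a, b, c, d = 6 - 3, 3, 4 - 4, 4     # a=mu-l12, b=l12, c=l21-l22, d=l22
v1 = v2 = 3
print(a - c,                             # alpha = 3, multiplicity 4
      a - c + (b - d)*v2,                # beta  = 0, multiplicity 2
      a + c*(v1 - 1),                    # gamma = 3, multiplicity 2
      a + b*v2 + (v1 - 1)*(c + d*v2))    # delta = 36, multiplicity 1
```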
From the proof, we have the following corollary:
**Corollary 3.5**.: _If \(X\), size (\(N\times v_{1}v_{2}\)), is a design matrix of an SBBD\({}^{*}\), where all contrasts of effects are estimable, then \(X^{t}X\) has \((v_{1}-1)(v_{2}-1)\) non-zero identical eigenvalues._
### Optimality on semi-regular SBBD
Let \(B=(V_{1},V_{2}\,;E^{\prime}),E^{\prime}\subseteq E\) be a subgraph of the complete bipartite graph \(K_{v_{1},v_{2}}=(V_{1},V_{2}\,;E)\). If all points in \(V_{1}\) and \(V_{2}\) of \(B\) have degrees \(k_{1}\) and \(k_{2}\), respectively, then the subgraph \(B\) is called a _semi-regular bipartite subgraph_, and called _regular bipartite subgraph_ if \(k_{1}=k_{2}\). Let \(\mathcal{B}\) be a set of \(N\) semi-regular bipartite subgraphs of \(K_{v_{1},v_{2}}\), degrees are \(k_{1}\) and \(k_{2}\), that is, \(v_{1}k_{1}=v_{2}k_{2}\). Assume \(N\geq(v_{1}-1)(v_{2}-1)\). Let \(X\) be a design matrix of size (\(N\times v_{1}v_{2}\)) defined by Section 2 with respect to \(\mathcal{B}\). Now we define a class \(\Omega\) for matrices \(X\) that satisfies the following conditions:
1. the number of \(1\)'s in each column of \(X\) is \(\mu\),
2. \(X\) is a design matrix whose rows correspond to blocks of \(\mathcal{B}\),
3. the all elementary contrasts of \(X^{t}X\), \((e_{ij}^{(v_{1})}\otimes e_{i^{\prime}j^{\prime}}^{(v_{2})})^{t}\boldsymbol{\tau}\), are estimable for \(1\leq i<j\leq v_{1}\) and \(1\leq i^{\prime}<j^{\prime}\leq v_{2}\).
If \(X=(X_{1}\mid X_{2}\mid\cdots\mid X_{v_{1}})\) is a matrix in \(\Omega\), then \(X\) has the following properties:
* any row of \(X_{i}\) has exactly \(k_{1}\)\(1\)'s for \(1\leq i\leq v_{1}\),
* any row of \(\sum_{i=1}^{v_{1}}X_{i}\) is \((k_{2},k_{2},\ldots,k_{2})\).
If a matrix \(X\in\Omega\) satisfies the conditions 1 to 5 of SBBD, it is called a _semi-regular SBBD_. And if all blocks of a semi-regular SBBD are regular bipartite subgraphs then it is called a _regular SBBD_.
Let \(\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{v_{1}-1}\) be the orthonormal vectors orthogonal to \(\mathbf{1}_{v_{1}}\), and also \(\mathbf{w}_{1},\mathbf{w}_{2}\), \(\ldots,\mathbf{w}_{v_{2}-1}\) be the orthonormal vectors orthogonal to \(\mathbf{1}_{v_{2}}\) (they are not necessary to be eigenvectors).
**Lemma 3.6**.: _For any \(X\in\Omega\), \(X^{t}X\) has the following eigenvectors whose eigenvalues are all \(0\),_
\[\mathbf{z}_{1}\otimes\mathbf{1}_{v_{2}},\mathbf{z}_{2}\otimes\mathbf{1}_{v_{2 }},\ldots,\mathbf{z}_{v_{1}-1}\otimes\mathbf{1}_{v_{2}}, \tag{13}\]
_and_
\[\mathbf{1}_{v_{1}}\otimes\mathbf{w}_{1},\mathbf{1}_{v_{1}}\otimes\mathbf{w}_{2},\ldots,\mathbf{1}_{v_{1}}\otimes\mathbf{w}_{v_{2}-1}. \tag{14}\]
**Proof** Let \(k=k_{1}v_{1}=v_{2}k_{2}\). Since \(X^{t}X\mathbf{1}_{v_{1}v_{2}}=\mu k\mathbf{1}_{v_{1}v_{2}}\), \(\mathbf{1}_{v_{1}v_{2}}\) is an eigenvector of \(X^{t}X\) whose eigenvalue is \(\mu k\). Let \(X=(X_{1}\mid X_{2}\mid\cdots\mid X_{v_{1}})\) and \(X_{i}=[x_{jh}^{(i)}]\). The inner product of the \(j\)-th row of \(X\) and \(\mathbf{z}_{i}\otimes\mathbf{1}_{v_{2}}\) is
\[\sum_{g=1}^{v_{1}}\sum_{h=1}^{v_{2}}x_{jh}^{(g)}z_{g}^{(i)}=k_{1}\sum_{g=1}^{v _{1}}z_{g}^{(i)}=0\]
from the semi-regular condition, where \(z_{g}^{(i)}\) is the \(g\)-th element of \(\mathbf{z}_{i}\). Similarly, we have
\[\sum_{g=1}^{v_{1}}\sum_{h=1}^{v_{2}}x_{jh}^{(g)}w_{h}^{(i)}=k_{2}\sum_{h=1}^{v _{2}}w_{h}^{(i)}=0,\]
where \(w_{h}^{(i)}\) is the \(h\)-th element of \(\mathbf{w}_{i}\). Therefore, the vectors of (13) and (14) are eigenvectors of \(X^{t}X\) corresponding to eigenvalue zero.
Let \(\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{(v_{1}-1)(v_{2}-1)}\) be orthonormal eigenvectors of \(X^{t}X\) which are orthogonal to \(\mathbf{1}_{v_{1}v_{2}}\), (13) and (14).
**Lemma 3.7**.: _Every \(X\) in \(\Omega\) is A-optimum if the eigenvalues corresponding to \(\mathbf{u}_{i}\)'s are all equal._
**Proof** Let \(\alpha_{i}\) be the eigenvalue corresponding to \(\mathbf{u}_{i}\). We have a spectrum decomposition of \(X^{t}X\),
\[X^{t}X=\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\alpha_{i}\mathbf{u}_{i}\mathbf{u}_{i}^{t}+\mu k \frac{1}{v_{1}v_{2}}\mathbf{1}_{v_{1}v_{2}}(\mathbf{1}_{v_{1}v_{2}})^{t}. \tag{15}\]
Every \(e_{ij}^{(v_{1})}\otimes e_{i^{\prime}j^{\prime}}^{(v_{2})}\) for any \(1\leq i<j\leq v_{1}\) and \(1\leq i^{\prime}<j^{\prime}\leq v_{2}\) lies in the subspace spanned by \(\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{(v_{1}-1)(v_{2}-1)}\). From the condition (C3), \(\alpha_{i}>0\) for \(1\leq i\leq(v_{1}-1)(v_{2}-1)\). Let \(\hat{\mathbf{\tau}}\) be the least square estimator of \(\mathbf{\tau}\). We have
\[\text{Var}(\mathbf{u}_{i}^{t}\hat{\mathbf{\tau}})=\mathbf{u}_{i}^{t}(X^{t}X)^{-}\mathbf{u}_{i} \sigma^{2}=\frac{1}{\alpha_{i}}\sigma^{2}\]
with Moore-Penrose generalized inverse of \(X^{t}X\):
\[(X^{t}X)^{-}=\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\frac{1}{\alpha_{i}}\mathbf{u}_{i}\mathbf{ u}_{i}^{t}+\frac{1}{\mu k}\frac{1}{v_{1}v_{2}}\mathbf{1}_{v_{1}v_{2}}( \mathbf{1}_{v_{1}v_{2}})^{t}.\]
Consider A-criterion
\[\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\frac{1}{\alpha_{i}}. \tag{16}\]
From Equation (15), we have
\[\text{tr}(X^{t}X)=\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\alpha_{i}+\mu k=\mu v_{1}v_{ 2},\]
that is,
\[\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\alpha_{i}=\mu(v_{1}v_{2}-k).\]
Since
\[\frac{(v_{1}-1)(v_{2}-1)}{\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\frac{1}{\alpha_{i}}} \leq\frac{1}{(v_{1}-1)(v_{2}-1)}\sum_{i=1}^{(v_{1}-1)(v_{2}-1)}\alpha_{i},\]
if \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{(v_{1}-1)(v_{2}-1)}=\frac{\mu(v_{1}v_{2} -k)}{(v_{1}-1)(v_{2}-1)}\), then A-criterion (16) is minimum.
From Theorem 3.3 and Corollary 3.5, an information matrix \(X^{t}X\) of an SBBD has non-zero \((v_{1}-1)(v_{2}-1)\) eigenvalues which are equal. Therefore, we have the following theorem:
**Theorem 3.8**.: _A semi-regular SBBD is A-optimum relative to \(\Omega\)._
## 4 Constructions of SBBD
### A construction using an \((r,\lambda)\)-design and an ordered design
**Definition 4.1**.: (\((r,\lambda)\)-design, [3]) Let \(V\) be a \(v\)-point set and \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{b}\}\) a collection of subsets (blocks) of \(V\). If \((V,\mathcal{B})\) satisfies the following conditions, it is called an \((r,\lambda)\)-_design_:
* each point of \(V\) is contained in exactly \(r\) blocks of \(\mathcal{B}\),
* any two distinct points of \(V\) are contained in exactly \(\lambda\) blocks of \(\mathcal{B}\).
Let \(v\) be the number of points and \(b\) the number of blocks, and put \(k_{i}=|B_{i}|\) for the block sizes. If the block sizes are a constant \(k\), then the \((r,\lambda)\)-design is called a _balanced incomplete block design_ (BIBD) and is denoted by \((v,k,\lambda)\)-BIBD. An \((r,\lambda)\)-design is also called a _regular pairwise balanced design_. It is not hard to construct one because there is no restriction on the block sizes. Pairwise balanced designs (PBDs; the first condition of an \((r,\lambda)\)-design is not required) have been well studied, and many recursive constructions are known; see [3] and [13]. It is not difficult to modify a PBD to be regular.
Let \((V,\mathcal{B})\) be an \((r,\lambda)\)-design, and \(H=[x_{ij}]\) be the \((b\times v)\) incidence matrix between \(\mathcal{B}=\{B_{1},B_{2},\ldots,B_{b}\}\) and \(V=\{a_{1},a_{2},\ldots,a_{v}\}\).
\[x_{ij}=\begin{cases}1&\text{ if }a_{j}\in B_{i}\\ 0&\text{ otherwise.}\end{cases}\]
Then \(H^{t}H\) is expressed as \(rI_{v}+\lambda(J_{v}-I_{v})\).
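As a concrete check, the Fano plane, the \((7,3,1)\)-BIBD PG(2,2) of Table 1, is an \((r,\lambda)\)-design with \(r=3\) and \(\lambda=1\). A minimal sketch verifying \(H^{t}H=rI_{v}+\lambda(J_{v}-I_{v})\):

```python
import numpy as np

# Blocks (lines) of the Fano plane, a (7,3,1)-BIBD with r = 3, lambda = 1
blocks = [(0,1,2), (0,3,4), (0,5,6), (1,3,5), (1,4,6), (2,3,6), (2,4,5)]

H = np.zeros((7, 7), dtype=int)       # (b x v) incidence matrix
for i, B in enumerate(blocks):
    H[i, list(B)] = 1

r, lam = 3, 1
I, J = np.eye(7, dtype=int), np.ones((7, 7), dtype=int)
print(np.array_equal(H.T @ H, r*I + lam*(J - I)))  # True
```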
**Definition 4.2**.: (Ordered design, [9]) Let \(M\) be an \((\eta(n^{2}-n)\times s)\)-array with entries from \(\mathbf{N}_{n}=\{1,2,\ldots,n\}\). If \(M=[d_{pq}]\) satisfies the following conditions, it is called an _ordered design_, denoted by \(OD_{\eta}(s,n)\):
1. each row of M consists of \(s\) distinct elements in \(\mathbf{N}_{n}\), where \(s\leq n\),
2. in any distinct two columns of \(M\), every ordered pair \((x,y)\) of distinct elements in \(\mathbf{N}_{n}\) appears on the same rows exactly \(\eta\) times.
In condition (2), if the pairs \((x,y)\) are not required to be distinct, the \((\eta n^{2}\times(s+1))\)-array is called an orthogonal array. It is well known that there exists an \((\eta(n^{2}-n)\times s)\) ordered design if there is an orthogonal \((\eta n^{2}\times(s+1))\)-array. We know that there is an orthogonal \((q^{2}\times(q+1))\)-array for any prime power \(q\); see [4]. That is:
**Property 4.3**.: _For any prime power \(q\), there exists an \(OD_{1}(q,q)\)._
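For prime \(q\), such an \(OD_{1}(q,q)\) can be built explicitly from the affine maps \(x\mapsto ax+b\) over \(\mathbf{Z}_{q}\), whose sharp 2-transitivity is what underlies Property 4.3. A minimal sketch (prime \(q\) only; prime powers would need GF(\(q\)) arithmetic):

```python
from itertools import product

def ordered_design(q):
    """OD_1(q, q) on symbols 1..q from the maps x -> a*x + b (mod q),
    a != 0.  Valid for prime q only."""
    return [tuple((a*x + b) % q + 1 for x in range(q))
            for a, b in product(range(1, q), range(q))]

M = ordered_design(3)
print(M)  # the 6 rows of the OD_1(3,3) of Example 4.4, up to row order

# Every ordered pair of distinct symbols appears exactly once
# in every ordered pair of columns:
for c1 in range(3):
    for c2 in range(3):
        if c1 != c2:
            pairs = [(row[c1], row[c2]) for row in M]
            assert len(set(pairs)) == len(pairs) == 3*2
```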
Suppose \(H\) is the \((b\times v)\) incidence matrix of an \((r,\lambda)\)-design with \(v\) points and \(b\) blocks, and let \(\mathbf{h}_{i}\) be the \(i\)-th row vector of \(H\). Then we can obtain a design matrix by arranging the vectors \(\mathbf{h}_{i}\) according to an ordered design \(OD_{\eta}(s,b)\), \(M=[d_{pq}]\), as follows:
\[X=[\mathbf{h}_{d_{pq}}]=(X_{1}\mid X_{2}\mid\cdots\mid X_{s}).\]
Note that \(X\) is of size \((\eta(b^{2}-b)\times vs)\), where each \(X_{j}\) is the \((\eta(b^{2}-b)\times v)\)-submatrix of \(X\) in which the row vectors of \(H\) are placed according to the \(j\)-th column of \(M\).
**Example 4.4**.: The ordered design \(OD_{1}(3,3)\) represented by the symbols \(\{1,2,3\}\) is on the left side of the following matrices. The design matrix \(X\) with the vectors \(\boldsymbol{h}_{i},i=1,2,3\), is on the right side.
\[OD_{1}(3,3)=\begin{bmatrix}1&2&3\\ 2&3&1\\ 3&1&2\\ 1&3&2\\ 2&1&3\\ 3&2&1\end{bmatrix},\qquad\quad X=\begin{bmatrix}\mathbf{h}_{1}&\mathbf{h}_{2}& \mathbf{h}_{3}\\ \mathbf{h}_{2}&\mathbf{h}_{3}&\mathbf{h}_{1}\\ \mathbf{h}_{3}&\mathbf{h}_{1}&\mathbf{h}_{2}\\ \mathbf{h}_{1}&\mathbf{h}_{3}&\mathbf{h}_{2}\\ \mathbf{h}_{2}&\mathbf{h}_{1}&\mathbf{h}_{3}\\ \mathbf{h}_{3}&\mathbf{h}_{2}&\mathbf{h}_{1}\end{bmatrix}.\]
Regarding the \(j\)-th row of \(X\) as an SB-block \(B_{j}\), \(1\leq j\leq N\), which is a spanning subgraph of \(K_{s,v}\), we have an SBBD\({}^{*}\)\((K_{s,v}\,;\mathcal{B})\), where \(N=\eta(b^{2}-b)\).
**Lemma 4.5**.: _Let \(H\) be the \((b\times v)\) incidence matrix of an \((r,\lambda)\)-design, and \(\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{b}\) be the row vectors of \(H\). Then the following equations hold:_
\[\sum_{i=1}^{b}\mathbf{h}_{i}^{t}\,\mathbf{h}_{i}=rI_{v}+\lambda(J_{v}-I_{v}), \tag{17}\]
\[\sum_{i=1}^{b}\sum_{j=1}^{b}\mathbf{h}_{i}^{t}\,\mathbf{h}_{j}=r^{2}J_{v}. \tag{18}\]
**Proof** From the definition of \((r,\lambda)\)-design, \(H^{t}H=rI_{v}+\lambda(J_{v}-I_{v})\). Since \(\sum_{i=1}^{b}\mathbf{h}_{i}^{t}\mathbf{h}_{i}=H^{t}H\), it holds Equation (17). Next,
\[\sum_{i=1}^{b}\sum_{j=1}^{b}\mathbf{h}_{i}^{t}\,\mathbf{h}_{j}=\sum_{i=1}^{b}\Big{(}\sum_{j=1}^{b}\mathbf{h}_{i}^{t}\,\mathbf{h}_{j}\Big{)} =\sum_{i=1}^{b}\mathbf{h}_{i}^{t}(r,r,\ldots,r)\] \[=(r,r,\ldots,r)^{t}(r,r,\ldots,r)\] \[=r^{2}J_{v}.\]
**Theorem 4.6**.: _If there exists an \((r,\lambda)\)-design with \(b\) blocks and \(v\) points, and an ordered design \(OD_{\eta}(s,b)\), then there is a spanning bipartite block design SBBD\({}^{*}\)\((s,v,N\,;\Lambda)\), with \(N=\eta(b^{2}-b)\) and \(\Lambda=(\mu,\lambda_{12},\lambda_{21},\lambda_{22})=(\eta r(b-1),\eta\lambda(b-1),\eta r(r-1),\eta(r^{2}-\lambda))\)._
**Proof** Let \(H\) be the \((b\times v)\) incidence matrix of an \((r,\lambda)\)-design with \(b\) blocks and \(v\) elements, and \(\mathbf{h}_{i}\)\((1\leq i\leq b)\) be the \(i\)-th row of \(H\). \(M=[d_{pq}]\) is an ordered design \(OD_{\eta}(s,b)\). Let \(X\) be the design matrix by arranging the row vector of \(H\) in according to the ordered design \(M\).
\[X=[\mathbf{h}_{d_{pq}}]=(X_{1}\mid X_{2}\mid\cdots\mid X_{s}).\]
First, we compute a diagonal submatrix \(X_{q}^{\,t}X_{q}\) of the information matrix \(X^{t}X\). In \(X_{q}\), \(1\leq q\leq s\), each vector \(\mathbf{h}_{i}\) appears \(\eta(b-1)\) times. Therefore, from Lemma 4.5, we have
\[X_{q}^{\,t}X_{q}=\eta(b-1)\sum_{j=1}^{b}\mathbf{h}_{j}^{t}\mathbf{h}_{j}=\eta (b-1)\cdot(rI_{v}+\lambda(J_{v}-I_{v})).\]
Next, we have the following off-diagonal submatrices of \(X^{t}X\) for \(1\leq q\neq q^{\prime}\leq s\):
\[X_{q}^{\,t}X_{q^{\prime}} =\eta\,\Big{(}\,\sum_{i=1}^{b}\sum_{j=1}^{b}\mathbf{h}_{i}^{t}\mathbf{h}_{j}-\sum_{i=1}^{b}\mathbf{h}_{i}^{t}\mathbf{h}_{i}\,\Big{)}\] \[=\eta\big{(}r^{2}J_{v}-(rI_{v}+\lambda(J_{v}-I_{v}))\big{)}\] \[=\eta((r^{2}-r)I_{v}+(r^{2}-\lambda)(J_{v}-I_{v})).\]
Let \(P\) be a permutation matrix of size \((v\times v)\) (a \((0,1)\)-matrix with exactly one \(1\) in every row and column).
**Corollary 4.7**.: _If \(X=(X_{1}\mid X_{2}\mid\cdots\mid X_{s})\) is an SBBD, then_
\[X^{(P)}=(X_{1}P\mid X_{2}P\mid\cdots\mid X_{s}P)\]
_is also an SBBD with the same parameters as \(X\)._
**Proof** For \(1\leq i,j\leq s\),
\[(X_{i}P)^{t}(X_{j}P)=P^{t}(X_{i}^{\,t}X_{j})P=X_{i}^{\,t}X_{j}\]
because every \(X_{i}^{t}X_{j}\) is a completely symmetric matrix. Therefore, it holds that \((X^{(P)})^{t}X^{(P)}=X^{t}X\).
\(X^{(P)}\) may include many rows different from those of \(X\) and may contribute additional linearly independent rows. If we form the following combined design matrix using permutation matrices \(P_{1},P_{2},\ldots,P_{u-1}\):
\[X^{(I,P_{1},P_{2},...,P_{u-1})}=\left[\begin{array}{cccc}X_{1}&X_{2}&\cdots&X _{s}\\ X_{1}P_{1}&X_{2}P_{1}&\cdots&X_{s}P_{1}\\ \vdots&\vdots&&\vdots\\ X_{1}P_{u-1}&X_{2}P_{u-1}&\cdots&X_{s}P_{u-1}\end{array}\right]\]
then it is an SBBD with the parameters \((s,v,N\,;\Lambda)\), \(N=\eta u(b^{2}-b)\) and \(\Lambda=(\eta ur(b-1),\eta u\lambda(b-1),\eta ur(r-1),\eta u(r^{2}-\lambda)\,)\).
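A minimal sketch of this extension, taking cyclic shifts as the permutation matrices \(P_{i}\) (this particular choice of permutations is ours):

```python
import numpy as np

def cyclic_perm(v, shift):
    """(v x v) permutation matrix for a cyclic column shift."""
    P = np.zeros((v, v), dtype=int)
    for j in range(v):
        P[j, (j + shift) % v] = 1
    return P

def extend_with_permutations(X, v1, v2, perms):
    """Stack X^{(P)} = (X_1 P | ... | X_v1 P) for each P in perms."""
    blocks = [X[:, i*v2:(i+1)*v2] for i in range(v1)]
    return np.vstack([np.hstack([b @ P for b in blocks]) for P in perms])

rows = ["110110101", "110101110", "110011011",
        "101110110", "101101011", "101011101",
        "011110011", "011101101", "011011110"]
X = np.array([[int(c) for c in r] for r in rows])   # Example 2.1, u = 1

X2 = extend_with_permutations(X, 3, 3, [cyclic_perm(3, 0), cyclic_perm(3, 1)])
print((X2.T @ X2)[:3, :3])   # 2*(6I + 3(J-I)): all parameters doubled (u = 2)
```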
**Corollary 4.8**.: _In Theorem 4.6, if \(s>b-r\) then there exists an SBBD\((s,v,N\,;\Lambda)\) with the same parameters._
**Proof** Since no row \(\mathbf{h}_{i}\) is the zero vector, \(X_{i}\) contains no zero row vector. If \(s=b\), each row of \(\sum_{q=1}^{s}X_{q}\) is exactly equal to \(\sum_{i=1}^{b}\mathbf{h}_{i}=(r,r,\ldots,r)\). In general, the \(b-s\) rows of \(H\) omitted from a given row of \(M\) contribute at most \(1\) each to any component, so every component of \(\sum_{q=1}^{s}X_{q}\) is at least \(r-(b-s)\). Therefore, if \(s>b-r\), every component of \(\sum_{q=1}^{s}X_{q}\) is greater than or equal to \(1\).
**Example 4.9**.: Consider an \((r,\lambda)\)-design \((V=\{0,1,2\},\mathcal{B}=\{0,1\},\{1,2\},\\ \{0,2\},\{0,1,2\})\) with parameters \(r=3,\lambda=2,v=3,b=4\). Its incidence matrix is
\[H=\left(\begin{array}{ccc}1&1&0\\ 0&1&1\\ 1&0&1\\ 1&1&1\end{array}\right).\]
Then the row vectors of \(H\) are
\[\mathbf{h}_{1}=\left(\begin{array}{ccc}1&1&0\end{array}\right),\ \ \mathbf{h}_{2}=\left(\begin{array}{ccc}0&1&1\end{array}\right),\ \ \mathbf{h}_{3}=\left(\begin{array}{ccc}1&0&1\end{array}\right),\ \ \mathbf{h}_{4}=\left(\begin{array}{ccc}1&1&1\end{array}\right).\]
Now, we have a design matrix \(X\) using an ordered design \(OD_{1}(4,4)\),
\[X=\begin{bmatrix}\mathbf{h}_{1}&\mathbf{h}_{2}&\mathbf{h}_{3}&\mathbf{h}_{4}\\ \mathbf{h}_{2}&\mathbf{h}_{1}&\mathbf{h}_{4}&\mathbf{h}_{3}\\ \mathbf{h}_{3}&\mathbf{h}_{4}&\mathbf{h}_{1}&\mathbf{h}_{2}\\ \mathbf{h}_{4}&\mathbf{h}_{3}&\mathbf{h}_{2}&\mathbf{h}_{1}\\ \mathbf{h}_{1}&\mathbf{h}_{3}&\mathbf{h}_{4}&\mathbf{h}_{2}\\ \mathbf{h}_{2}&\mathbf{h}_{4}&\mathbf{h}_{3}&\mathbf{h}_{1}\\ \mathbf{h}_{3}&\mathbf{h}_{1}&\mathbf{h}_{2}&\mathbf{h}_{4}\\ \mathbf{h}_{4}&\mathbf{h}_{2}&\mathbf{h}_{1}&\mathbf{h}_{3}\\ \mathbf{h}_{1}&\mathbf{h}_{4}&\mathbf{h}_{2}&\mathbf{h}_{3}\\ \mathbf{h}_{2}&\mathbf{h}_{3}&\mathbf{h}_{1}&\mathbf{h}_{4}\\ \mathbf{h}_{3}&\mathbf{h}_{2}&\mathbf{h}_{4}&\mathbf{h}_{1}\\ \mathbf{h}_{4}&\mathbf{h}_{1}&\mathbf{h}_{3}&\mathbf{h}_{2}\end{bmatrix}.\]
It is an SBBD\((4,3,12\,;\Lambda)\), \(\Lambda=(\mu,\lambda_{12},\lambda_{21},\lambda_{22})=(9,6,6,7)\), with the information matrix
\[X^{t}X=I_{4}\otimes\left[\begin{array}{ccc}9&6&6\\ 6&9&6\\ 6&6&9\end{array}\right]+(J_{4}-I_{4})\otimes\left[\begin{array}{ccc}6&7&7\\ 7&6&7\\ 7&7&6\end{array}\right].\]
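Example 4.9 can be verified end to end. A minimal sketch, writing the \(OD_{1}(4,4)\) rows as index tuples:

```python
import numpy as np

H = np.array([[1,1,0], [0,1,1], [1,0,1], [1,1,1]])   # (r,lambda) = (3,2)

OD = [(1,2,3,4), (2,1,4,3), (3,4,1,2), (4,3,2,1),    # OD_1(4,4)
      (1,3,4,2), (2,4,3,1), (3,1,2,4), (4,2,1,3),
      (1,4,2,3), (2,3,1,4), (3,2,4,1), (4,1,3,2)]

# Row h_d of H goes into column-block q for each OD entry d
X = np.vstack([np.hstack([H[d-1] for d in row]) for row in OD])
G = X.T @ X
print(G[:3, :3])    # 9I + 6(J-I):  mu = 9,        lambda_12 = 6
print(G[:3, 3:6])   # 6I + 7(J-I):  lambda_21 = 6, lambda_22 = 7
```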
### Regular and semi-regular SBBD
In this part, we consider a case that \((V,\mathcal{B})\) is a \((v,k,\lambda)\)-BIBD and introduce the constructions of semi-regular and regular SBBDs.
**Theorem 4.10**.: _If there is a \((v,k,\lambda)\)-BIBD with \(b\) blocks and an ordered design \(OD_{\eta}(b,b)\), then there exists a semi-regular SBBD\((b,v,\eta(b^{2}-b)\,;\,\Lambda)\), where \(\Lambda=(\eta r(b-1),\eta\lambda(b-1),\eta r(r-1),\eta(r^{2}-\lambda))\)._
**Proof** Let \(H\) be the \((b\times v)\) incidence matrix of a \((v,k,\lambda)\)-BIBD with \(b\) blocks. \(H\) has \(k\) ones in each row and \(r=kb/v\) ones in each column. Let \(X\) be the design matrix constructed from \(H\) and the \(OD_{\eta}(b,b)\). Each row of \(X\) is an SB-block of the SBBD \((K_{b,v}\,;\,\mathcal{B})\). From the construction above, each SB-block \(B\in\mathcal{B}\) consists of permuted rows of \(H\). If we reconstruct a \((b\times v)\)-array from an SB-block, the number of \(1\)s in each row and in each column is exactly the same as in \(H\). That is, each SB-block of \(X\) is a semi-regular bipartite subgraph of \(K_{b,v}\), where the degrees of the points of \(V_{1}\) and \(V_{2}\) are \(k\) and \(r\), respectively.
A BIBD with \(b=v\) is said to be a symmetric BIBD. From Property 4.3, an \(OD_{1}(b,b)\) exists if \(b\) is a prime power.
**Corollary 4.11**.: _If there is a \((v,k,\lambda)\)-BIBD with a prime power number of blocks, then there exists a semi-regular SBBD; if, in addition, the \((v,k,\lambda)\)-BIBD is symmetric, then there exists a regular SBBD._
Table 1 is a list of existing BIBDs with a prime power number of blocks less than 100, selected from the table in [3]. The BIBDs in the list are all symmetric except one. We can construct many A-optimum SBBDs from them.
## 5 Application to Deep Learning
Deep learning, in other words a multi-layer neural network model, is a network consisting of a sequence of point sets (node layers) and complete bipartite graphs (connection layers) between consecutive node layers. Ignoring the difference between solid and dotted lines, Fig. 2 gives an example.
\begin{table}
\begin{tabular}{c c c c c|c} \(v\) & \(b\) & \(r\) & \(k\) & \(\lambda\) & Remark \\ \hline \hline
7 & 7 & 3 & 3 & 1 & PG(2,2) \\
11 & 11 & 5 & 5 & 2 & \\
13 & 13 & 4 & 4 & 1 & PG(2,3) \\
19 & 19 & 9 & 9 & 4 & \\
23 & 23 & 11 & 11 & 5 & \\
25 & 25 & 9 & 9 & 3 & \\
27 & 27 & 13 & 13 & 6 & 27=3\({}^{3}\) \\
31 & 31 & 6 & 6 & 1 & PG(2,5) \\
31 & 31 & 10 & 10 & 3 & \\
31 & 31 & 15 & 15 & 7 & PG(4,2) \\
37 & 37 & 9 & 9 & 2 & \\
41 & 41 & 16 & 16 & 6 & \\
43 & 43 & 21 & 21 & 10 & \\
47 & 47 & 23 & 23 & 11 & \\ \hline \end{tabular}
\begin{tabular}{c c c c c|c} \(v\) & \(b\) & \(r\) & \(k\) & \(\lambda\) & Remark \\ \hline \hline
7 & 49 & 21 & 3 & 7 & \\
49 & 49 & 16 & 16 & 5 & 49=7\({}^{2}\) \\
59 & 59 & 29 & 29 & 14 & \\
61 & 61 & 16 & 16 & 4 & \\
61 & 61 & 25 & 25 & 10 & \\
67 & 67 & 33 & 33 & 16 & \\
71 & 71 & 15 & 15 & 3 & \\
71 & 71 & 21 & 21 & 6 & \\
71 & 71 & 35 & 35 & 17 & \\
73 & 73 & 9 & 9 & 1 & PG(2,8) \\
79 & 79 & 13 & 13 & 2 & \\
79 & 79 & 27 & 27 & 9 & \\
79 & 79 & 39 & 39 & 19 & \\ \hline \end{tabular}
\end{table}
Table 1: BIBDs with a prime power number \(b\) of blocks
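Each row of Table 1 can be checked against the standard BIBD necessary conditions \(vr=bk\) and \(\lambda(v-1)=r(k-1)\). A minimal sketch over a few of the rows:

```python
def bibd_ok(v, b, r, k, lam):
    """Necessary BIBD parameter identities: vr = bk, lambda(v-1) = r(k-1)."""
    return v*r == b*k and lam*(v - 1) == r*(k - 1)

rows = [(7, 7, 3, 3, 1), (27, 27, 13, 13, 6), (7, 49, 21, 3, 7),
        (73, 73, 9, 9, 1), (79, 79, 39, 39, 19)]
print(all(bibd_ok(*row) for row in rows))  # True
```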
The weight variables are associated with the edges (connections), and they are gradually estimated using a large amount of training data, i.e. pairs of input data \(\mathbf{x}_{i}\) and correct answers \(\mathbf{d}_{i}\), \(i=1,2,\ldots,N\). Let \(\mathbf{W}\) be all the weight parameters, and let \(y(\mathbf{x}_{i},\mathbf{W})\) be the output. \(\mathbf{W}\) is estimated in the same way as in regression, so that the following error function is minimized:
\[E(\mathbf{W})=\frac{1}{2}\sum_{i=1}^{N}\parallel\mathbf{d}_{i}-y(\mathbf{x}_{i},\mathbf{W}) \parallel^{2}.\]
During the learning process, we regularly test using data not used in the learning process. Often the fit to the training data improves smoothly while performance on the test data gets worse. This is called _overlearning_ or _overfitting_. It is known that similar overfitting occurs in regression when the model has a large number of parameters to be estimated. In deep learning, [11] proposed a method called _Dropout_ in 2014 as a way to deal with overfitting. This is a method of randomly invalidating the points of each node layer and joining only the valid points by a complete bipartite subgraph. This is a kind of so-called _sparsification_. Regarding this method, Chisaki et al. (2020, 2021) [1, 2] have proposed a method applying combinatorial design theory.
In 2013, [12] proposed another method called _DropConnect_. It is a method of sparsification by randomly selecting some edges in a connection layer instead of points of a node layer, for example, the solid lines in Fig. 2. We propose to sparsify the edges in connection layers using an SBBD, rather than by random selection. In the DropConnect method, a node without an incoming connection can occur. Thanks to the spanning condition of the SBBD, we can sparsify each connection layer independently without producing such nodes. We expect this to give a balanced sparsifying system for multi-layer neural networks, with statistically high precision for the weight parameter estimates.
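A minimal sketch of the proposed idea, where each SB-block serves as a connection mask for one training step (the layer shapes and usage here are hypothetical, not from the paper):

```python
import numpy as np

def sbbd_masks(X, v1, v2):
    """Turn each SB-block (a row of the design matrix) into a
    (v1 x v2) connection mask for a weight matrix of that shape."""
    return [row.reshape(v1, v2) for row in X]

def masked_forward(W, x, mask):
    """DropConnect-style forward pass: zero out the dropped edges.
    The spanning condition guarantees every row and column of the
    mask is nonzero, so no node loses all of its connections."""
    return (W * mask) @ x

rows = ["110110101", "110101110", "110011011"]      # SB-blocks of Example 2.1
masks = sbbd_masks(np.array([[int(c) for c in r] for r in rows]), 3, 3)

W = np.random.default_rng(1).normal(size=(3, 3))    # a 3x3 connection layer
x = np.ones(3)
print(masked_forward(W, x, masks[0]))               # step 1 uses mask 1
```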
### Acknowledgments
We would like to express our sincere gratitude to Professor Shinji Kuriki, who provided very important comments and advice during the writing of this paper. In particular, we received great help from him with the optimality proofs. This work was supported in part by JSPS KAKENHI Grant Number JP19K11866.
|
2301.02619 | Review of Cookie Synchronization Detection Methods | The research community has deemed cookie synchronization detection an
inherently challenging task. Studies aiming to identify cookie synchronizations
often share high-level design choices, but deviate amongst low-level
implementations. For example, the majority of studies label a cookie
synchronization iff a user identifier is shared with a third party; however,
there is a lack of consistency among implementations, such as party relations
or identifier value definitions, or whether such definitions are even included.
This review intends to provide a record of established methods and promote
standardization of methods choice in future work. | Jake Smith | 2022-10-28T00:13:14Z | http://arxiv.org/abs/2301.02619v1 | # Review of Cookie Synchronization Detection Methods
## 1 Abstract
The research community has deemed cookie synchronization detection an inherently challenging task [1, 2, 3]. Studies aiming to identify cookie synchronizations often share high-level design choices but deviate in their low-level implementations. For example, the majority of studies label a cookie synchronization iff a user identifier is shared with a third party; however, there is a lack of consistency among implementations, such as party relations or identifier value definitions, or whether such definitions are even included. This review intends to provide a record of established methods and promote standardization of method choice in future work.
**CCS Concepts:** Web protocol security; Network privacy and anonymity; Surveillance.
**Keywords:** cookie synchronization; cookie matching; tracking; cookies; methods.
## 2 Introduction
The sharing of user browsing information is necessary for the Internet advertising and tracking industries to serve targeted ads [4, 5, 6, 7], perform cross-device tracking [8], and sell user information [6, 7, 9]. Browser cookies are a standard container for user browsing data, and the sharing of first party cookies with third parties is restricted by the Same-Origin policy [10] to protect user privacy [9, 11, 12]. Cookie synchronization is used to bypass the Same-Origin policy and share first party cookies with third parties to support the advertising and tracking ecosystem [9, 11, 13]. Cookie synchronization is defined by a variety of terms in the research community, such as cookie matching, cookie linking, cookie leaking, and ID syncing.
## 3 Background
### Browser Cookies and User Identifiers
Cookies are key=value pairs set on a user's browser to bring state to the HTTP protocol and provide session management, user personalization, and tracking functionality.
Browser cookies can be set by the Set-Cookie header of HTTP responses [12, 14, 15] or the document.cookie operation of JavaScript embedded in a visited website [16].
Cookie synchronization involves the sharing of cookie values that can uniquely identify a user (i.e. the cookie value is unique to one user). This review defines such cookie values as identifiers. Methods used to define and label identifiers are discussed in Section 7.2.
### Party Relations
First party cookies are set by the user-requested domain, and third party cookies are set by an entity (i.e. a domain or parent organization) other than the domain requested.
### Cookie Synchronization
Cookie synchronization is defined as the sharing of a first or third party identifier with another third party, which can be initiated by an embedded third party resource, a third party redirect, or the first party itself [13, 17, 9].
### How is Cookie Synchronization Performed?
Assume a user is browsing website1.com and website2.com, and there exist tracking entities tracker1.com and tracker2.com who both set identifiers on the user's browser, ABC and 123, respectively. The user later visits website3.com, which has an embedded resource from tracker1.com that initiates a GET request to tracker1.com. tracker1.com responds with a 3XX redirect instructing the user's browser to issue another request to tracker2.com, with the identifier for tracker1.com (ABC) placed in the parameters of the requested URL\({}^{1}\). tracker2.com is now able to link its identifier (123) with tracker1.com's identifier (ABC) [9, 11, 13].
Footnote 1: Additional locations to share identifiers are discussed in Section 7.2.
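A minimal sketch of the detection heuristic implied by this example: flag a request when a known identifier set by one domain appears in the URL parameters of a request to a different domain. Parsing here is deliberately simplified; the studies reviewed add identifier-value definitions, entity/party resolution, and the additional sharing locations discussed in Section 7.2:

```python
from urllib.parse import urlparse, parse_qsl

# Hypothetical identifier store: cookie value -> domain that set it
known_ids = {"ABC": "tracker1.com", "123": "tracker2.com"}

def detect_syncs(request_urls):
    """Flag requests whose URL parameters carry an identifier
    set by a domain other than the request target."""
    syncs = []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        for key, value in parse_qsl(urlparse(url).query):
            owner = known_ids.get(value)
            if owner and owner != host:
                syncs.append((owner, host, key, value))
    return syncs

print(detect_syncs(["https://tracker2.com/match?partner_uid=ABC&cb=42"]))
# [('tracker1.com', 'tracker2.com', 'partner_uid', 'ABC')]
```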
### User Privacy Erosion
Cookie synchronization allows a third party to reconstruct portions of a user's browsing history by receiving the visited first party site in the Referer field of a GET request header [13, 18]. Websites visited over TLS are not exempt from this history leakage, as plaintext HTTP requests to third parties share URLs requested using HTTPS [11, 13, 19]. As a tracker learns more third party identifiers for a single user, it can reconstruct a larger portion of her browsing history [13].
Cookie respawning methods such as evercookie [20] can enable third parties to re-identify users after clearing browser cookies. A respawned identifier can be re-synced with a tracker, effectively eliminating a user's ability to delete browser cookies. This enables third parties to track users and join browsing histories across browser refreshes [13, 3].
Server-to-server user data merges are facilitated by cookie synchronization. Separate tracker data-sets of known user information can be combined by linking respective identifiers for each tracker [13, 3].
### Cookie Synchronization and the Advertising Industry
Advertising companies are motivated to collect as much user information as possible in order to serve the most targeted ads; Bashir et. al. [4] report Demand-Side Platforms (DSPs) place higher bids to serve users whom they have more information about. Cookie synchronization enables this information acquisition by sharing user browsing data and linking tracker databases, which enables ad targeting based on web history [13]. Papadopoulos et. al. [13] report ad related domains are the most prevalent entities involved in cookie synchronization, participating in 75% of all synchronizations and acquiring as much as 90% of all identifiers synced.
## 4 Related Work
As early as 2014, Olejnik et. al. [6] showed how advertisers use cookie synchronization in real-time bidding (RTB) to reconstruct and share browsing history. HTTP traffic and browser cookies were collected from 100 real users browsing more than 70 sites each. After 70 site visits, a user experienced on average 100 cookie synchronizations with 30 domains involved.
tion by 2.9% and identifiers shared by 2.6%. When third party cookies were blocked, this decreased the number of identifiers synced to 353 over 321 first parties, with 129 third parties involved. They report 3 instances of respawned cookies being synced over two 3,000 site crawls.
Papadopoulos et. al. [13] investigated the prevalence of cookie synchronization events in mobile web traffic. The study collected 850 mobile users HTTP traffic for 12 months. 263,635 cookie synchronizations were detected over 179M total requests, with 22,329 identifiers shared; 91.996% of the shared identifiers were located in URL parameters, 3.705% in the Referer URL, and 3.771% in the URL path. The study reports 5% of identifiers set in TLS sessions being leaked over plain HTTP, as well as the websites visited in the Referer field.
Brookman et. al. [8] examined the extent of cross-device tracking visible to an end-user, including cookie synchronizations. The study crawled the Alexa top 100 websites four times each. They report 106 unique third parties syncing identifiers with 210 other third parties.
Englehardt et al. [2] performed an extensive analysis of online tracking using their open source crawler, OpenWPM. They collected web traffic and browser cookies from two crawls of the top 10K Alexa websites. They report that the majority of the most common third parties embedded in websites participate in cookie synchronization: 45 of the top 50, 85 of the top 100, and 157 of the top 200.
Papadopoulos et al. [11] investigated TLS privacy breaches facilitated by cookie synchronization, specifically the sharing of websites visited and identifiers set over HTTPS. The top 12K Alexa websites were crawled, with 440K HTTP(S) requests logged. They report 89,479 HTTP(S) syncing requests (i.e. HTTP redirects sharing an identifier) originating from 32% of the crawled domains; 17,171 unique identifiers were shared with 733 unique domains. Of the 8,398 websites visited over TLS, 2,317 websites were involved in cookie synchronization. Most critically, these TLS websites conducted 2,879 cookie synchronizations with non-TLS websites and leaked 174 HTTPS visits over plaintext. They report that 1 in 13 TLS-supported websites performs cookie synchronization over HTTP.
Urban et al. [21] performed a longitudinal study documenting the effects of the General Data Protection Regulation (GDPR) on cookie synchronization rates in the European Union (EU). 12 measurements were performed, one a month before the GDPR went into effect (May 2018) and the rest each month after. Each measurement instrumented 400 individual browsing profiles (i.e. unique browsing instances). The measurements each crawled an average of 8.5K domains, totalling over 2.5M requests over the year. After the legislation took effect in May 2018, they report an immediate drop in the number of cookie synchronizations per month (\(\sim\)510) relative to the pre-GDPR measurement (898); a year later, this number decreased to \(\sim\)480 cookie synchronizations per month. The number of third parties conducting cookie synchronizations per month also decreased from \(\sim\)12K to \(\sim\)10.2K. The number of involved third parties per month gradually recovered over the year to \(\sim\)12K. The study claims "cookie synchronization is still used in practice, but its extent is significantly reduced and still declining" in the EU [21]. This claim is not supported by the results of later studies conducted in the EU by Fouad et al. [17] and Papadogiannakis et al. [9].
Fouad et al. [17] investigated the role of 1x1 pixel images and other embedded content types in initiating cookie synchronization. They conducted two crawls of the Alexa top 10k domains, successfully crawling 8,744 domains. They report that 34.36% of tracking was initiated by scripts, 23.34% by pixels, 20.01% by text/html, 8.54% by large images, and 4.32% by application or JSON content. Of the 8,744 websites crawled, 67.96% were involved in cookie synchronization, with 17,425 third parties involved. Third party identifiers were shared with other third parties on 22.73% of websites, with 1,263 unique partners.
Sanchez-Rola et al. [19] conducted a large scale crawl of the Tranco top 1M most accessed domains list to reconstruct the cookie ecosystem, clarifying known roles and defining novel ones involved in the creation and sharing of cookies. They define the ghost cookie, which is created by an embedded third party script on a first party website that sets a first party cookie. The study claims the existence of a ghost cookie decreases a first party's control over the cookies their web-page sets on a browser. They report 8.97M cookie synchronizations across 387K websites, with the most common sender and receiver relationship (48%) being the own sender to own receiver (i.e. a first party ghost cookie shared with the third party that embedded the script). 52.4% of domains experience at least one cookie synchronization or cookie value overwriting event. Reflecting the results of Papadopoulos et al. [11, 13], 37.71% of cookies synchronized over HTTP were created in a TLS session.
Papadogiannakis et al. [9] investigated whether third party trackers respect cookie consent banner choices {No Action, Reject All Cookies, Accept All Cookies}. Their data-set was derived from the Tranco top 850K sites; they successfully crawled 27,953 domains containing a Consent Management Platform (CMP). They specify two types of cookie synchronization relationships: a First-Party ID Leak if a first party identifier is shared with a third party, and a Third-Party ID Synchronization if a third party identifier is shared with another third party. When the user takes No Action, 52.88% and 24.03% of websites conduct First-Party ID Leaking and Third-Party ID Synchronization, respectively. When Rejecting All Cookies, 56.41% and 26.20% of websites conduct First-Party ID Leaking and Third-Party ID Synchronization, respectively.
## 5 Purpose
This review intends to document the variety of methods employed to detect cookie synchronization. All studies under review must log HTTP data and label cookie synchronizations from the collected network traffic.
## 6 Data-set Collection Methods
**Crawled Data-set:** Web crawlers instrumented include OpenWPM [8, 12, 21, 17, 2, 1], Chromium-based crawlers [4, 19, 22, 23], Selenium-based crawlers [11, 3, 24], or custom crawlers [9].
**User Data Collection:** To collect the HTTP traffic of real users, study-specific browser plugins are installed on a user's browser [6, 7, 13, 14].
Henceforth, the term _user_ will refer to the browser instance instrumented, regardless of whether the study collected crawled or real user data.
## 7 Labeling Cookie Synchronizations by Shared Identifiers
### Shared Identifier Heuristic
The majority of cookie synchronization detection methods draw inspiration from the shared identifier heuristic proposed by Olejnik et al. [6]. This method labels a cookie synchronization iff an identifier is shared in an HTTP request's URL parameters to an entity other than the entity who set the cookie (i.e. a third party) [6, 7, 17, 14]. An entity can be defined as either a domain or the parent organization of a domain.
Related methods build on this heuristic by additionally extracting identifiers shared with third parties from the URL path of requests [9, 13, 25], Referer URL of requests2[9, 11, 13, 2, 25, 3], Location URL of redirects [3], nonstandard request and redirect headers [12], or POST request bodies [9].
Footnote 2: As of November 2020, the HTTP Referer-Policy default directive has been updated to strict-origin-when-cross-origin to only share the origin of a request. This prevents identifiers from being shared in the path and superstring [26].
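A minimal sketch of the base heuristic is given below, assuming identifiers have already been extracted from browser cookies (Section 7.2) into a mapping from identifier value to setting domain. Entity resolution is simplified here to exact domain comparison; as noted above, studies may instead compare parent organizations.

```python
from urllib.parse import urlparse, parse_qsl

def label_cookie_syncs(request_url, known_ids):
    """Label a cookie synchronization iff a known identifier appears in the
    request's URL query parameters and the receiving domain differs from the
    domain that set the identifier (shared identifier heuristic, [6])."""
    receiver = urlparse(request_url).netloc
    events = []
    for _, value in parse_qsl(urlparse(request_url).query):
        setter = known_ids.get(value)
        if setter is not None and setter != receiver:  # shared with a third party
            events.append((setter, receiver, value))
    return events

# tracker1.com's identifier ABC is shared with tracker2.com in a GET request.
known_ids = {"ABC": "tracker1.com"}
print(label_cookie_syncs("https://tracker2.com/sync?partner_uid=ABC", known_ids))
```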
### Extracting Identifiers from Browser Cookies
#### 7.2.1 What Defines an Identifier?
A cookie set on a user's browser is an identifier iff the cookie's value can identify a specific user (i.e. the value is mapped to only one user). These identifying cookie values and the entities that set them are stored to later detect instances of identifiers shared in HTTP traffic. This method confirms that a cookie value shared with a third party can uniquely identify the user who initiated the third party request [7, 8, 9, 11, 12, 13, 3, 21, 17, 19, 2, 14].
#### 7.2.2 Extracting Browser Cookies
To create the set of all cookies set on a user's browser, cookie values are extracted from the Set-Cookie header of HTTP responses [12, 14, 15, 13] or Cookie header of HTTP requests [12, 27].
Solomos et al. [1] use OpenWPM's javascript_instrument [2] to log cookie values set by JavaScript embedded in visited web pages.
#### 7.2.3 User Identifier Filtering
The following restrictions are used to filter identifier cookie values from the original browser cookie set; a combined sketch follows the list.
**Value Length Restrictions:** Identifiers often have minimum length requirements: cookie values \(>10\) characters [6, 7, 13, 25], \(>8\) characters [12], \(>7\) characters [21, 2, 14], and \(>5\) characters [9]. Of the studies that provide identifier length restrictions, only one provides an upper bound: \(\leq 100\) characters [2].
**Value Character Quality Restrictions:** Identifiers can be extracted based on character values. Studies that set character value restrictions only extract cookie values consisting of alphanumeric characters and other common characters [2, 17, 12]. Common character values include {-, _, =}, with = indicating a key=value pair [2]. Fouad et al. [17] also consider the comma and period and exclude the equals sign.
**Delimiter Parsing:** To extract consecutive identifier strings bounded by known characters, cookie values can be parsed (i.e. split) at these common delimiters. All studies that split consecutively shared identifiers consider {&, ;} to be delimiters, except Ghosh et al. [5], who consider the colon rather than the semicolon [9, 12, 3, 21, 17, 2, 14, 25, 13].
**Similarity Measurement:** Identifiers can be extracted by uniqueness. All studies extracting identifiers based on string entropy use the Ratcliff/Obershelp Algorithm [28] with a provided maximum similarity score: eliminate cookie values \(>66\%\) similar to another cookie value [2] or \(>33\%\) similar [8, 3, 25]; one study does not provide its threshold [21].
**Multiple Values Set for a Key=Value Pair:** Falahrastegar et al. [14] and Urban et al. [21] exclude any cookie value extracted from a key=value pair containing more than one value.
**Key=Value Pairs with Dynamic Values:** Cookie values can be eliminated if the key's value changes over the course of a crawl or user browsing session [3, 2, 25].
**Keyword Filtering:** Papadogiannakis et al. [9] use a manually curated list of keywords to eliminate cookie values containing dates, timestamps, regions, locales, URLs, prevalent keywords, or consent information (e.g. values of the keys euconsent, eupubconsent, _cmpconsent, _cmpiab), as well as values ending in common file extensions.
**Filtering Non-Unique Strings:** Studies with access to multiple cookie data-sets from multiple crawls or user browsing sessions can eliminate cookie values present for multiple crawls or users [13, 21, 17, 14].
**Session Cookie Values:** Session cookies are deleted at the end of a browsing session and their values can be eliminated [27]. Studies that eliminate session cookies examine the Expires and Max-Age attributes [27] and eliminate values associated with cookies lacking an expiration date [11, 13] or expiring earlier than a specified future date: earlier than 90 days [2] or 30 days [3].
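The sketch below combines several of the above restrictions into one filtering pass. The specific thresholds, character set, and delimiters are illustrative choices from the ranges reported above, not the configuration of any single study; Python's difflib.SequenceMatcher implements a variant of the Ratcliff/Obershelp algorithm.

```python
import difflib
import re

MIN_LEN, MAX_LEN = 8, 100                   # bounds vary per study (see above)
CHARSET = re.compile(r"^[A-Za-z0-9_=-]+$")  # alphanumeric plus common characters
DELIMITERS = re.compile(r"[&;]")            # split consecutively stored values

def candidate_identifiers(cookie_value):
    """Split a cookie value at common delimiters and keep substrings that
    satisfy the length and character-quality restrictions."""
    candidates = []
    for token in DELIMITERS.split(cookie_value):
        token = token.split("=", 1)[-1]  # keep the value side of key=value pairs
        if MIN_LEN < len(token) <= MAX_LEN and CHARSET.match(token):
            candidates.append(token)
    return candidates

def too_similar(value, other_values, threshold=0.66):
    """Similarity filter: drop values too similar to another cookie value."""
    return any(difflib.SequenceMatcher(None, value, other).ratio() > threshold
               for other in other_values)

print(candidate_identifiers("uid=a1B2c3D4e5F6g7H8&lang=en"))  # ['a1B2c3D4e5F6g7H8']
```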
### Detecting Identifiers Shared in HTTP Traffic
#### 7.3.1 Labeling Requests to First or Third Parties
Studies that label the party relation of (referrer, request) pairs only label identifiers shared in requests to third parties [6, 7, 8, 9, 11, 12, 13, 21, 17, 19, 2, 1, 14, 25].
**Parent Organization Mapping:** Domain names can be mapped to parent organizations using DNS whois records and blacklists [11, 13, 14, 25] or the WhoTracks.me database [19, 29]. To resolve domain names obfuscated by CNAME cloaking [30], Sanchez-Rola et al. [19] use the NextDNS blocklist [31] to resolve these cloaked domains to known trackers; tldExtract [32] is then used to determine the private suffix of each domain; private suffixes are mapped to parent organizations using the Disconnect [33], WhoTracks.me [29], and webxray [34] lists.
**String Matching:** Domain name string matching is also common, with matches indicating a first party and mismatches indicating a third party [6, 7, 8, 9].
**Englehardt et al. Case Study:** Englehardt et al. [2] label request party relations using the Mozilla Public Suffix list [35]; iff the landing page's domain name and public suffix (not including subdomains) do not match a request's domain name and public suffix, the request is labeled as to a third party.
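This public-suffix comparison can be sketched with the tldextract library, which resolves a URL to its registered domain (private suffix plus one label). Mapping registered domains onward to parent organizations would additionally require lists such as Disconnect or WhoTracks.me, which this sketch omits.

```python
import tldextract  # resolves a URL to its registered domain (eTLD+1)

def is_third_party(landing_url, request_url):
    """Label a request as third party iff its registered domain differs from
    the landing page's, ignoring subdomains, as in Englehardt et al. [2]."""
    landing = tldextract.extract(landing_url).registered_domain
    request = tldextract.extract(request_url).registered_domain
    return landing != request

print(is_third_party("https://news.website3.com/article",
                     "https://cdn.website3.com/logo.png"))  # False: same site
print(is_third_party("https://news.website3.com/article",
                     "https://tracker1.com/pixel.gif"))     # True: third party
```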
#### 7.3.2 HTTP Identifier Sharing Locations
The research community has examined the following HTTP elements for instances of shared identifiers using exact string matching, with matches indicating a cookie synchronization.
**HTTP GET Requests:** URL query parameters [6, 3, 2, 8, 11, 17, 13, 25, 9, 12], URL path3[3, 2, 8, 11, 13, 25, 9, 12], Referer URL4[2, 11, 13, 25, 9, 3], and non-standard headers [12].
Footnote 3: Studies that report examining URLs without specifying which elements are assumed to check both the path and query parameters.
**HTTP Redirects:** Location URL [3, 2, 25] and non-standard headers [12].
**HTTP POST Requests:** Request bodies [9].
#### 7.3.3 Papadopoulos et al. Shared Identifier Labeling Case Study
Papadopoulos et al. [13, 11] implemented a distinct method of detecting instances of shared identifiers over two cookie synchronization studies.
Rather than using string matching to label instances of shared identifiers, they first extract all ID-looking strings from GET request URL paths, query parameters, and Referer headers. An ID-looking string is defined by the same qualities used for filtering identifiers from browser cookies (Section 7.2).
The study stores detected ID-looking strings in a hashtable with the receiving domain. If an ID-looking string is seen for the first time in an HTTP element, the string is added to the hashtable with the requested domain. If an ID-looking string is seen for at least the second time, all requests carrying it are labeled as ID-sharing events.
Cookie synchronizations are labeled from the ID-sharing event set; iff an ID-looking string present in an ID-sharing event matches a known identifier, the ID-sharing event is labeled a cookie synchronization.
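A sketch of this two-stage bookkeeping is shown below; the data structures are hypothetical simplifications of the hashtable described in the papers.

```python
from collections import defaultdict

id_sightings = defaultdict(list)  # ID-looking string -> domains that received it
sync_events = []

def observe(id_looking_string, receiving_domain, known_identifiers):
    """Record an ID-looking string; from its second sighting onward, the
    carrying requests form an ID-sharing event, which is labeled a cookie
    synchronization iff the string matches a known browser-cookie identifier."""
    id_sightings[id_looking_string].append(receiving_domain)
    if len(id_sightings[id_looking_string]) >= 2:
        event = (id_looking_string, list(id_sightings[id_looking_string]))
        if id_looking_string in known_identifiers:
            sync_events.append(event)

observe("a1B2c3D4e5F6g7H8", "tracker1.com", {"a1B2c3D4e5F6g7H8"})
observe("a1B2c3D4e5F6g7H8", "tracker2.com", {"a1B2c3D4e5F6g7H8"})
print(sync_events)
```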
## 8 Alternative Cookie Synchronization Detection Methods
### Decision Tree Classifier of Encrypted Identifier Synchronization
Papadopoulos et al. [13] trained a decision tree model to detect cookie synchronizations of encrypted identifiers. The model does not consider the presence of a shared, known identifier when classifying cookie synchronizations.
The study assumes that HTTP traffic features are distributed similarly between cookie synchronizations of non-encrypted and encrypted identifiers. The training and testing sets were labeled using non-encrypted cookie synchronizations detected with the study's shared identifier heuristic. The features selected include requested entity name, type of entity {Content, Social, Advertising, Analytics, Other}, URL parameter names, location of hashed identifier {URL parameter, URL path, Referer URL parameter}, HTTP status code, browser type, and number of parameters.
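A minimal sketch of such a classifier is given below using scikit-learn; the two example requests and their feature values are hypothetical, and a real training set would be labeled by the non-encrypted synchronizations found with the heuristic.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Each request becomes a feature dictionary mirroring the features listed above.
requests = [
    {"entity": "tracker2.com", "entity_type": "Advertising", "n_params": 4,
     "id_location": "URL parameter", "status": 302, "browser": "chrome"},
    {"entity": "cdn.website3.com", "entity_type": "Content", "n_params": 1,
     "id_location": "URL path", "status": 200, "browser": "chrome"},
]
labels = [1, 0]  # 1 = cookie synchronization of a (hashed) identifier

vectorizer = DictVectorizer(sparse=False)  # one-hot encodes the string features
X = vectorizer.fit_transform(requests)
clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict(vectorizer.transform(requests)))
```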
### Labeling Cookie Synchronizations in Retargeted Ad Serving Information Flows
Bashir et al. [4] collect the resource inclusion chain for all websites crawled. At a high level, a cookie synchronization is labeled iff an auction is held on the publisher side and requests between the Supply-Side Platforms (SSP) of the chain directly include a resource.
The study defines the following terminology. Personas are individually created to represent 90 unique categories of shoppers by browsing specific products on e-commerce sites. These categories are later compared with the qualities of retargeted ads for each persona. A publisher-side resource chain serves a retargeted ad to a user's browser. pub is the root node's publisher domain. d is the last entity in a chain and serves the ad. s denotes an SSP. shop is the e-commerce site domain of the retargeted ad.
Cookie synchronizations are labeled iff s and d are adjacent at the end of a chain, d observes the persona at shop, and a request from s to d (or d to s) is present in a chain prior to the retargeted ad being served [4].
### Labeling Tracker to Tracker Cookie Synchronizations with Pre-Existing Data-sets
Bashir et al. [22] and Solomos et al. [1] label any (tracker, tracker) referrer-request pair as a cookie synchronization iff the pair is present on a list of known cookie synchronizing third parties [4, 13].
## 9 Acknowledgements
The author would like to thank Dr. Zubair Shafiq and Dr. Katie Rodger for their technical and expository insights.
|
2301.13714 | Recursive Neural Networks with Bottlenecks Diagnose
(Non-)Compositionality | A recent line of work in NLP focuses on the (dis)ability of models to
generalise compositionally for artificial languages. However, when considering
natural language tasks, the data involved is not strictly, or locally,
compositional. Quantifying the compositionality of data is a challenging task,
which has been investigated primarily for short utterances. We use recursive
neural models (Tree-LSTMs) with bottlenecks that limit the transfer of
information between nodes. We illustrate that comparing data's representations
in models with and without the bottleneck can be used to produce a
compositionality metric. The procedure is applied to the evaluation of
arithmetic expressions using synthetic data, and sentiment classification using
natural language data. We demonstrate that compression through a bottleneck
impacts non-compositional examples disproportionately and then use the
bottleneck compositionality metric (BCM) to distinguish compositional from
non-compositional samples, yielding a compositionality ranking over a dataset. | Verna Dankers, Ivan Titov | 2023-01-31T15:46:39Z | http://arxiv.org/abs/2301.13714v1 | # Recursive Neural Networks with Bottlenecks Diagnose
###### Abstract
A recent line of work in NLP focuses on the (dis)ability of models to generalise compositionally for artificial languages. However, when considering natural language tasks, the data involved is not strictly, or _locally_, compositional. Quantifying the compositionality of data is a challenging task, which has been investigated primarily for short utterances. We use recursive neural models (Tree-LSTMs) with bottlenecks that limit the transfer of information between nodes. We illustrate that comparing data's representations in models with and without the bottleneck can be used to produce a compositionality metric. The procedure is applied to the evaluation of arithmetic expressions using synthetic data, and sentiment classification using natural language data. We demonstrate that compression through a bottleneck impacts non-compositional examples disproportionately and then use the bottleneck compositionality metric (BCM) to distinguish compositional from non-compositional samples, yielding a compositionality ranking over a dataset.
## 1 Introduction
_Compositional generalisation_ in contemporary NLP research investigates models' ability to compose the meanings of expressions from their parts and is often investigated with artificial languages (e.g. Lake and Baroni, 2018; Hupkes et al., 2020) or highly-structured natural language data (e.g. Keysers et al., 2019). For such tasks, the **local compositionality** definition of Szabo (2012, p. 10) illustrates how meaning can be algebraically composed:
"The meaning of a complex expression is determined by the meanings its constituents have _individually_ and the way those constituents are combined."
In natural language, there are fragments whose meaning can be composed as with arithmetic (e.g. "the cat is in the house"), while others carry contextual dependencies (e.g. "the kiwi grows on the farm"). Can we characterise whether an input's meaning arises from strictly local compositions?
Existing work in that direction mostly focuses on providing a 'compositionality rating'1 for figurative utterances since figurative language is assumed to be less compositional (Ramisch et al., 2016; Nandakumar et al., 2019; Reddy et al., 2011). Andreas (2018) suggests a general-purpose formulation for measuring the compositionality of examples using their numerical representations, through the _Tree Reconstruction Error_ (TRE), expressing the distance between a model's representation of an input and a strictly compositional reconstruction of that representation. Determining how to compute that reconstruction is far from trivial.
Footnote 1: We colloquially refer to the ‘compositionality ratings’ of phrases, but a more appropriate way to express the same would be to refer to the extent to which the meaning of a phrase arises from a compositional syntax and semantics’. After all, compositionality is a property of a language, not of a phrase.
Inspired by TRE, we use recursive neural networks, Tree-LSTMs (Tai et al., 2015), to process inputs according to their syntactic structure. We augment Tree-LSTMs with bottlenecks to compute
Figure 1: When processing this phrase, “the ruler” is interpreted differently when comparing recursive processing with local processing. We enforce local processing by equipping models with bottlenecks, and our **bottleneck compositionality metric (BCM)** then compares inputs’ representations before and after compression through the bottleneck.
the task-specific meaning of an input in a more locally compositional manner. We use these models to distinguish more compositional examples from less compositional ones in a **bottleneck compositionality metric (BCM)**. Figure 1 provides an intuition for how a bottleneck can provide a metric. For fragments that violate the assumption that meanings of subexpressions can be computed locally (on the left side), one could end up with different interpretations when comparing a contextualised interpretation (in blue) with one locally computed (in green): disambiguating "ruler" requires postponed meaning computation, and thus local processing is likely to lead to different results from regular processing. For fragments that are non-ambiguous (on the right side) the two types of processing can yield the same interpretation because the interpretation of "pencil" is likely to be the same, with or without the context. The bottleneck hinders the model in postponing computations and more strongly impacts non-compositional samples compared to compositional ones, thus acting as a metric.
In the remainder of the paper, we first discuss the related work in §2. §3 elaborates on the models used, which either apply a _deep variational information bottleneck_ (DVIB) (Alemi et al., 2017) or compress representations through increased dropout or smaller hidden dimensionalities. In §4, we provide a proof-of-concept in a controlled environment where non-compositional examples are manually introduced, after which §5 elaborates on the natural language example of sentiment analysis. For both tasks, we (1) demonstrate that compression through a bottleneck encourages local processing and (2) show that the bottleneck can act as a metric distinguishing compositional from less compositional examples.
## 2 Related Work
**Multi-word expressions** The majority of the related work in the past two decades has discussed the compositionality of phrases in the context of figurative language, such as phrasal verbs ("to eat up") (McCarthy et al., 2003), noun compounds ("cloud nine" vs "swimming pool") (Reddy et al., 2011; Yazdani et al., 2015; Ramisch et al., 2016; Nandakumar et al., 2019), verb-noun collocations ("take place" vs "take a gift") (Venkatapathy and Joshi, 2005; McCarthy et al., 2007), and adjective-noun pairs ("nice house") (Guevara, 2010). Compositionality judgements were obtained from humans, who indicated to what extent the meaning of the compound is that of the words when combined literally, and various computational methods were applied to learn that mapping. Those methods were initially thesaurus-based (McCarthy et al., 2003), later relied on word vectors from co-occurrence matrices (Reddy et al., 2011), or employed deep neural networks (Nandakumar et al., 2019).
**Compositionality by reconstruction** TRE (Andreas, 2018) is a task-agnostic metric that evaluates the compositionality of data representations: \(\text{TRE}(x)=\delta(f(x),\hat{f}_{\eta}(d))\). It is the distance between the representation of \(x\) constructed by \(f\) and the compositionally reconstructed variant \(\hat{f}_{\eta}(d)\) based on the derivation of \(x\) (\(d\)). When employing the metric, one should define an appropriate distance function (\(\delta\)) and define \(\hat{f}_{\eta}\) parametrised by \(\eta\). Andreas illustrates the TRE's versatility by instantiating it for three scenarios: to investigate whether image representations are similar to composed image attributes, whether phrase embeddings are similar to the vector addition of their components, and whether generalisation accuracy in a reference game positively correlates with TRE.
Bhathena et al. (2020) present two methods based on TRE to obtain compositionality ratings for sentiment trees, referred to as _tree impurity_ and _weighted node switching_ that express the difference between the sentiment label of the root and the other nodes in the tree. Zheng and Jiang (2022) ranked examples of sentiment analysis based on the extent to which neural models should _memorise_ examples in order to capture their target correctly. While different from TRE, memorisation could be related to non-compositionality in the sense that non-compositional examples require more memorisation, akin to formulaic language requiring memorisation in humans (Wray and Perkins, 2000).
Other instantiations of the TRE are from literature on language emergence in signalling games, where the degree of compositionality of that language is measured. Korbak et al. (2020) contrast TRE and six other compositionality metrics for signalling games where the colour and shape of an object are communicated. Examples of such metrics are topographic similarity, positional disentanglement and context independence. These are not directly related to our work, considering that they aim to provide a metric for a _language_ rather than single utterances. Appendix B.2 elaborates on topographic similarity and the metrics of Bhathena
et al. (2020) and Zheng and Jiang (2022), comparing them to our metric for sentiment analysis.
**Compositional data splits** Recent work on compositional generalisation using artificial languages or highly-structured natural language data focuses on creating data splits with systematic separation of input combinations in train and test data. The aim is to create test sets that should not be hard when computing meaning compositionally but, in practice, are very challenging. An example compositionality metric for semantic parsing is _maximum compound divergence_ (Keysers et al., 2019; Shaw et al., 2021), which minimises train-test differences in word distributions while maximising the differences in compound usage. This only applies to a data split as a whole, and - differently from the work at hand - does not rate individual samples.
More recently, Bogin et al. (2022) discussed a diagnostic metric for semantic parsing, that predicts model success on examples based on their local structure. Because models struggle with systematically assigning the same meaning to subexpressions when they re-appear in new syntactic structures, such structural deviation diagnoses generalisation failures. Notice that the aim of our work is different, namely identifying examples that are _not_ compositional, rather than investigating generalisation failure for _compositional_ examples.
## 3 Model
The model we employ is the _Tree-LSTM_ (Tai et al., 2015), which is a generalisation of LSTMs to tree-structured network topologies. The LSTM computes symbols' representations by incorporating previous time steps, visiting symbols in linear order. A sentence representation is simply the final time step. A Tree-LSTM, instead, uses a tree's root node representation as the sentence representation, and computes the representation of a non-terminal node using the node's children.
Equations 1 and 2 illustrate the difference between the LSTM and an \(N\)-ary Tree-LSTM for the input gate. The LSTM computes the gate's activation for time step \(t\) using input vector \(x_{t}\) and previous hidden state \(h_{t-1}\). The Tree-LSTM does so for node \(j\) using the input vector \(x_{j}\) and the hidden states of up to \(N\) children of node \(j\).
\[i_{t}=\sigma(W^{(i)}x_{t}+U^{(i)}h_{t-1}+b^{(i)}) \tag{1}\]
\[i_{j}=\sigma(W^{(i)}x_{j}+\sum_{\ell=1}^{N}U_{\ell}^{(i)}h_{j\ell}+b^{(i)}) \tag{2}\]
In addition to the input gate, the Tree-LSTM's specification for non-terminal \(j\) (with its \(k\)th child indicated as \(h_{jk}\)) involves an output gate \(o_{j}\) (equation analogous to 2), a forget gate \(f_{jk}\) (Equation 3), cell input activation vector \(u_{j}\) (equation analogous to 2, with the \(\sigma\) function replaced by tanh), and memory cell state \(c_{j}\) (Equation 4). Finally, \(c_{j}\) feeds into the computation of hidden state \(h_{j}\) (Equation 5).
\[f_{jk}=\sigma(W^{(f)}x_{j}+\sum_{\ell=1}^{N}U_{k\ell}^{(f)}h_{j\ell}+b^{(f)}) \tag{3}\]
\[c_{j}=i_{j}\odot u_{j}+\sum_{\ell=1}^{N}f_{j\ell}\odot c_{j\ell} \tag{4}\]
\[h_{j}=o_{j}\odot\text{tanh}(c_{j}) \tag{5}\]
We apply a _binary_ Tree-LSTM, which uses separate parameters in the gates for the left and right child, to compute hidden state \(h_{j}\) and memory cell state \(c_{j}\).
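A minimal PyTorch sketch of the binary Tree-LSTM cell (Equations 2-5) is shown below. Stacking all gates into single linear maps is an implementation convenience, not part of the original formulation; note that each forget gate still receives contributions from both children, matching the cross-child parameters \(U_{k\ell}^{(f)}\) of Equation 3.

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Binary (N=2) Tree-LSTM cell with separate left/right child parameters."""

    def __init__(self, x_dim, h_dim):
        super().__init__()
        self.W = nn.Linear(x_dim, 5 * h_dim)                # W x_j + b, all gates stacked
        self.U_l = nn.Linear(h_dim, 5 * h_dim, bias=False)  # parameters for the left child
        self.U_r = nn.Linear(h_dim, 5 * h_dim, bias=False)  # parameters for the right child

    def forward(self, x, left, right):
        (h_l, c_l), (h_r, c_r) = left, right
        gates = self.W(x) + self.U_l(h_l) + self.U_r(h_r)
        i, o, u, f_l, f_r = gates.chunk(5, dim=-1)
        i, o = torch.sigmoid(i), torch.sigmoid(o)          # input and output gates
        f_l, f_r = torch.sigmoid(f_l), torch.sigmoid(f_r)  # one forget gate per child
        u = torch.tanh(u)                                  # cell input activation
        c = i * u + f_l * c_l + f_r * c_r                  # Equation 4
        h = o * torch.tanh(c)                              # Equation 5
        return h, c

cell = BinaryTreeLSTMCell(x_dim=150, h_dim=150)
zero = (torch.zeros(1, 150), torch.zeros(1, 150))
h, c = cell(torch.randn(1, 150), zero, zero)
```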
Tree-LSTMs process inputs according to their syntactic structure, which has been associated with more compositional processing (Socher et al., 2013; Tai et al., 2015). Yet, although the topology encourages compositional processing, there is no mechanism to explicitly regulate how much information is passed from children to parent nodes - e.g. given enough capacity, the hidden representations could store every input encountered and postpone processing until the very end. We add such a mechanism by introducing a **bottleneck**.
**1. Deep Variational Information Bottleneck** The information bottleneck of Alemi et al. (2017) assumes random variables \(X\) and \(Y\) for the input and output, and emits a compressed representation \(Z\) that preserves information about \(Y\), by minimising the loss \(\mathcal{L}_{IB}\) in Equation 6. This loss is intractable, which motivates the variational estimate \(\mathcal{L}_{VIB}\) provided in Equation 7 (Alemi et al., 2017) that we use to train the **deep variational information bottleneck (DVIB)** version of our model.
\[\mathcal{L}_{IB}=\beta I(X,Z)-I(Z,Y) \tag{6}\]
\[\mathcal{L}_{VIB}=\underbrace{\beta\underset{x}{\mathbb{E}}[\text{KL}[p_{\theta}(z|x),r(z)]]}_{\text{information loss}}+\underbrace{\underset{z\sim p_{\theta}(z|x)}{\mathbb{E}}[-\log q_{\phi}(y|z)]}_{\text{task loss}} \tag{7}\]
In the information loss, \(r(z)\) and \(p_{\theta}(z|x)\) estimate the prior and posterior probability over \(z\), respectively. In the task loss, \(q_{\phi}(y|z)\) is a parametric approximation of \(p(y|z)\). In order to allow an analytic computation of the KL-divergence, we consider Gaussian distributions \(r(z)\) and \(p_{\theta}(z|x)\), namely \(r(z)=\mathcal{N}(z|\mu_{0},\Sigma_{0})\) and \(p_{\theta}(z|x)=\mathcal{N}(z|\mu(x),\Sigma(x))\), where \(\mu(x)\) and \(\mu_{0}\) are mean vectors, and \(\Sigma(x)\) and \(\Sigma_{0}\) are diagonal covariance matrices. The reparameterisation trick is used to estimate the gradients: \(z=\mu(x)+\Sigma(x)\odot\epsilon\), where \(\epsilon\sim\mathcal{N}(0,I)\).
We sample \(z\) once per non-terminal node, and average the KL terms of all non-terminal nodes, where \(x\) is the hidden state \(h_{j}\) or the cell state \(c_{j}\) (that have separate bottlenecks), and \(\mu(x)\) and \(\Sigma(x)\) are computed by feeding \(x\) to two linear layers. \(\beta\) regulates the impact of the DVIB, and is gradually increased during training. During inference, we use \(z=\mu(x)\).
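A sketch of one such bottleneck applied to a Tree-LSTM state is given below, simplifying the prior to a standard normal (\(\mu_{0}=0\), \(\Sigma_{0}=I\)); the analytic KL term would be averaged over all non-terminal nodes and weighted by \(\beta\) in the loss.

```python
import torch
import torch.nn as nn

class DVIBLayer(nn.Module):
    """Bottleneck over a state x (h_j or c_j): z = mu(x) + Sigma(x)^(1/2) * eps
    during training, z = mu(x) at inference."""

    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.log_var = nn.Linear(dim, dim)  # diagonal covariance in log space

    def forward(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        if self.training:  # reparameterisation trick
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        else:
            z = mu
        # Analytic KL[N(mu, diag(var)) || N(0, I)] per node.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)
        return z, kl

bottleneck = DVIBLayer(150)
z, kl = bottleneck(torch.randn(4, 150))
# The total loss would be task_loss + beta * kl.mean(), with beta annealed upward.
```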
**2. Dropout bottleneck** Binary **dropout** (Srivastava et al., 2014) is commonly applied when training neural models to prevent overfitting. With probability \(p\), hidden units are set to zero, and during evaluation all units are kept, but the activations are scaled down. Dropout encourages distributing the most salient information over multiple neurons, which comes at the cost of idiosyncratic patterns that networks may memorise otherwise. We hypothesise that this hurts non-compositional examples most. We apply dropout to the Tree-LSTM's hidden states (\(h_{j}\)) and memory cell states (\(c_{j}\)).
**3. Hidden dimensionality bottleneck** Similarly, decreasing the number of **hidden units** is expected to act as a bottleneck. We decrease the number of hidden units in the Tree-LSTM, keeping the embedding and task classifier dimensions stable where possible.
The different bottlenecks have different merits: whereas the hidden dimensionality and dropout bottlenecks shine through simplicity, they are rigid in how they affect the model and apply in the same way at every node. The DVIB allows for more flexibility in how compression is achieved through learnt \(\Sigma(x)\) and by requiring an overall reduction in the information loss term, without enforcing the same bottleneck at every node in the tree.
**From bottleneck to compositionality metric** BCM compares Tree-LSTMs with and without a bottleneck. We experiment with two methods, inspired by TRE (Andreas, 2018); both are sketched after the list below. TRE aims to find \(\eta\) such that \(\delta(f(x),\hat{f}_{\eta}(d))\) is minimised, for inputs \(x\), their derivations \(d\), distance function \(\delta\), a model \(f\) and its compositional approximation \(\hat{f}_{\eta}\).
* In the **TRE training** (BCM-TT) setup, we include the distance (\(\delta\)) between the hidden representations of \(f\) and \(\hat{f}_{\eta}\) in the loss when training \(\hat{f}_{\eta}\). When training \(\hat{f}_{\eta}\) with TRE training, \(f\) is frozen, and \(f\) and \(\hat{f}_{\eta}\) share the final linear layer of the classification module. In the arithmetic task, \(\delta\) is the _mean-squared error_ (MSE) (i.e. the squared Euclidean distance). In sentiment analysis, \(\delta\) is the Cosine distance function.
* In the **post-processing** (BCM-PP) setup, we train the two models separately, extract hidden representations and apply _canonical correlation analysis_ (CCA) (Hotelling, 1936) to minimise the distance between the sets of hidden representations. Assume matrices \(A\in\mathcal{R}^{d_{A}\times N}\) and \(B\in\mathcal{R}^{d_{B}\times N}\) representing \(N\) inputs with dimensionalities \(d_{A}\) and \(d_{B}\). CCA linearly transforms these subspaces \(A^{\prime}=WA\), \(B^{\prime}=VB\) to maximise the correlations \(\{\rho_{1},\ldots,\rho_{\text{min}(d_{A},d_{B})}\}\) of the transformed subspaces. We treat the number of CCA dimensions to use as a hyperparameter.
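A sketch of the BCM-PP ranking step is given below using scikit-learn's CCA; the array shapes and component count are illustrative. For BCM-TT, the analogous per-example quantity is simply the distance term remaining in the loss after training.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def bcm_pp_ranking(base_reprs, bottleneck_reprs, n_components=25):
    """Align the two sets of hidden representations with CCA, then rank
    examples by the cosine distance of their transformed representations."""
    A, B = CCA(n_components=n_components).fit_transform(base_reprs, bottleneck_reprs)
    cosine = (A * B).sum(1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1))
    distance = 1.0 - cosine
    return np.argsort(distance)  # most compositional (smallest distance) first

base = np.random.randn(200, 100)       # e.g. vectors from the classifier module
bottleneck = np.random.randn(200, 100)
ranking = bcm_pp_ranking(base, bottleneck)
```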
## 4 Proof-of-concept: Arithmetic
Given a task, we assign ratings to inputs that express to what extent their task-dependent meaning arises in a locally compositional manner. To investigate the impact of our metric on compositional and non-compositional examples in a controlled environment, we first use perfectly compositional arithmetic expressions and introduce exceptions to that compositionality manually.
### Data and model training
Math problems have previously been used to examine neural models' compositional reasoning (e.g. Saxton et al., 2018; Hupkes et al., 2018; Russin et al., 2021). Arithmetic expressions are suited for our application, in particular since they can be represented as trees. We use expressions containing brackets, integers -10 to 10, and + and - operators - e.g. "( 10 - ( 5 + 3 ))" (using data from Hupkes et al., 2018). The output is an integer. This is modelled as a regression problem with the MSE loss. The 'meaning' (the numerical value) of a subexpression can be locally computed at each node in
the tree: there are no contextual dependencies.
In this controlled environment, we introduce exceptions by making "\(\emptyset\)" ambiguous. When located in the subtree headed by the root node's left child, it takes on its regular value, but when located in the right subtree, it takes on the value of the leftmost leaf node of the entire tree (see Figure 2). The model is thus encouraged to perform non-compositional processing to keep track of all occurrences of "\(\emptyset\)" and store the first leaf node's value throughout the tree. 88% of the training data are the original arithmetic expressions, and 12% are such exceptions. We can thus track what happens to the two categories when we introduce the bottleneck. The training data consist of 14,903 expressions with 1 to 9 numbers. We test on expressions with lengths 5 to 9, using 5000 examples per length. The Tree-LSTMs trained on this dataset have embeddings and hidden states of size 150 and are trained for 50 epochs with learning rate \(2\mathrm{e}{-4}\) with AdamW and a batch size of 32. The base Tree-LSTMs in all setups use the same architecture, namely the Tree-LSTM architecture required for the DVIB, but with \(\beta=0\). All results are averaged over models trained using ten different random seeds. In the Tree-LSTM, the numbers are leaf nodes and the labels of non-terminal nodes are the operators.2
Footnote 2: Appendix C further elaborates on the experimental setup.
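The exception rule can be sketched as follows, representing expressions as nested (left, op, right) tuples with integer leaves, writing the ambiguous symbol "\(\emptyset\)" as the integer 0, and assuming the root is an operator node; only 0-leaves in the root's right subtree are remapped to the leftmost leaf's value.

```python
def evaluate(tree, leftmost, in_right_subtree):
    """Evaluate a subtree, remapping 0-leaves inside the root's right subtree."""
    if isinstance(tree, int):
        return leftmost if tree == 0 and in_right_subtree else tree
    left, op, right = tree
    a = evaluate(left, leftmost, in_right_subtree)
    b = evaluate(right, leftmost, in_right_subtree)
    return a + b if op == "+" else a - b

def evaluate_with_exceptions(tree):
    leftmost = tree
    while not isinstance(leftmost, int):  # find the leftmost leaf of the tree
        leftmost = leftmost[0]
    left, op, right = tree
    a = evaluate(left, leftmost, in_right_subtree=False)
    b = evaluate(right, leftmost, in_right_subtree=True)
    return a + b if op == "+" else a - b

# "( 10 - ( 5 + 0 ))": the 0 sits in the right subtree, so it takes the value 10.
print(evaluate_with_exceptions((10, "-", (5, "+", 0))))  # 10 - (5 + 10) = -5
```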
### Task performance: Hierarchy without compositionality?
Figures 3(a) and 3(b) visualise the performance for the regular examples and exceptions, respectively, when increasing \(\beta\) for the DVIB. The DVIB disproportionately harms the exceptions; when \(\beta\) is too high, the model cannot capture the non-local dependencies. Appendix A.1 shows how the hidden dimensionality and dropout bottlenecks have a similar effect. Figure 4 and Appendix A.2 provide insights into the training dynamics of the models: initially, all models will treat "\(\emptyset\)" as a regular number, independent of the bottleneck. Close to convergence, models trained with a low \(\beta\) have learnt to capture the ambiguities, whereas models trained with a higher \(\beta\) will remain in a more locally compositional state.3
Footnote 3: Comparing ‘early’ and ‘late’ models may yield similar results as comparing base and bottleneck models. Yet, without labels of which examples are compositional, it is hard to know when the model transitions from the early to the late stage.
Bottlenecks restrict information passed throughout the tree. To process an arithmetic subexpression, all that is needed is to pass on its outcome, not the subexpression itself - e.g. in Figure 2, one could simply store the value 6 instead of the subexpression "2 - -4". The former represents _local processing_ and is more efficient (i.e. it requires storing less information), while the latter leads to information from the leaf nodes being passed to non-terminal nodes higher up the tree. Storing information about the leaf nodes would be required to cope with the exceptions in the data. That the model could get close to accurate predictions for these exceptions in the first place suggests Tree-LSTMs can process inputs according to the hierarchical structure without being locally compositional. Increasing compression using bottlenecks enforces local processing.
Figure 4: Training dynamics for the Tree-LSTM with the DVIB: for all test examples we compute the MSE over the course of training on the validation set using (a) the compositional targets, or (b) the targets from the adapted dataset, of which a subset is not compositional.
Figure 3: Performance (MSE) on the arithmetic task for the Tree-LSTM with the DVIB (darker colours correspond to higher \(\beta\)). Exceptions have a contextual dependency and cannot be computed bottom up.
Figure 2: Illustration of the ‘exceptions’ in the arithmetic task: the value of “\(\emptyset\)” depends on its position and on the value of the leftmost leaf node in the tree.
### The Bottleneck Compositionality Metric
The bottleneck Tree-LSTM harms the exceptions disproportionately in terms of task performance, and through BCM we can exploit the difference between the base and bottleneck model to distinguish compositional from non-compositional examples. As laid out in §3, we use the TT or PP method to compare pairs of Tree-LSTMs: the base Tree-LSTM with \(\beta=0\), no dropout and a hidden dimension of 150 (**base model**) is paired up with Tree-LSTMs with the same architecture, but a different \(\beta\), a different dropout probability \(p\) or a different hidden dimensionality \(d\) (**bottleneck model**). All Tree-LSTMs have a classification module that consists of two linear layers, where the first layer maps the Tree-LSTM's hidden representation to a vector of 100 units, and the second layer emits the predicted value. The 100-dimensional vector is used to apply the BCM:
* In BCM-PP the vector feeds into the CCA computation, which compares the hidden representations of the base model and the bottleneck model using their Cosine distance. We rank examples according to that distance, and use all CCA directions.
* In BCM-TT, the vector feeds into the TRE loss component. We train the base model, freeze that network, and add the MSE of the hidden representations of the bottleneck model and the base model to the loss. After training, the remaining MSE is used to rank examples.
Both setups have the same output, namely a compositionality ranking of examples in a dataset. A successful ranking would put the exceptions last. Figure 5(a) illustrates the relative position of regular examples and exceptions for all bottlenecks and BCM variants, for \(\beta=0.25\), \(p=0.5\) and \(d=25\). The change in MSE observed in §4.2 is reflected in the quality of the ranking, but the success does not depend on the specific selection of \(\beta\), \(p\) or \(d\), as long as they are large (\(\beta\), \(p\)) or small enough (\(d\)). Figure 5(b) illustrates one of the rankings.
Summarising, we illustrated that recursive models can employ strategies that do not locally compose the meaning of arithmetic subexpressions but carry tokens' identities throughout the tree. We can make a model more locally compositional using bottlenecks and can use a model's hidden states to infer which examples required non-local processing afterwards, acting as our compositionality metric.
## 5 Sentiment analysis
We apply the metric to the task of sentiment analysis, for which Moilanen and Pulman (2007, p. 1) suggest the following notion of compositionality:
"For if the meaning of a sentence is a function of the meanings of its parts then the global polarity of a sentence is a function of the polarities of its parts."
Sentiment is quasi-compositional: even though the sentiment of an expression is often a straightforward function of the sentiment of its parts, there are exceptions - e.g. cases of sarcasm, such as "I love it when people yell at me first thing in the morning" (Barnes et al., 2019) - which makes the task a suitable test bed.
### Data and model training
We use the SST-5 subtask from the Stanford Sentiment Treebank (SST) (Socher et al., 2013), that contains sentences from movie reviews collected
Figure 5: Rankings of arithmetic examples. (a) shows the relative position of regular examples and exceptions in the rankings of all setups, where 0 corresponds to the start of the ranking and 1 to the end. (b) illustrates the result of BCM-TT with the DVIB, \(\beta=0.25\).
Figure 6: The accuracy (solid) and macro-averaged \(F_{1}\)-scores (dashed) for the SST test set, for base models, bottleneck models and a sentiment-only baseline.
by Pang and Lee (2005). The SST-5 subtask requires classifying sentences into one of five classes ranging from very negative to very positive. The standard train, development and test subsets have 8544, 1101 and 2210 examples, respectively. The sentences were originally parsed with the Stanford Parser (Klein and Manning, 2003), and the dataset includes sentiment labels for all nodes of those parse trees. Typically, labels for all phrases are included in training, but the evaluation is conducted for the root nodes of the test set, only.
Following Tai et al. (2015), we use GloVe word embeddings (Pennington et al., 2014), that we freeze across models.4 The Tree-LSTM has 150 hidden units and is trained for 10 epochs with a learning rate of \(2\mathrm{e}{-4}\) and the AdamW optimiser. During each training update, the loss is computed over all subexpressions of 4 trees. Training is repeated with 10 random seeds.5 Figure 6 provides the performance on the test set for the base and bottleneck models, using the accuracy and the macro-averaged \(F_{1}\)-score. Tai et al. (2015) obtained an accuracy of 0.497 using frozen embeddings.
Footnote 4: The notion of local compositionality, relied on in this work, assumes that tokens are not disambiguated, which is why we refrain from using state-of-the-art contextualised representations. The focus of this work is on developing a compositionality metric rather than on improving sentiment analysis.
Footnote 5: Appendix C further elaborates on the experimental setup.
In sentiment analysis, as in our pseudo-arithmetic task, a successful model would often have to deviate from local processing. After all, the correct interpretation of a leaf node is often unknown without access to the context - e.g. in the case of ambiguous words like "sick" which is likely to refer to being ill, but could also mean "awesome". Being successful at the task thus requires a recursive model to keep track of information about (leaf) nodes while recursively processing the input, and more so for non-compositional examples than for compositional examples. As with the arithmetic task, local processing - enforced in the bottleneck models - should disproportionately hinder processing of non-compositional examples.
### A sentiment-only baseline
To verify that the bottlenecks make the models more compositional, we create a sentiment-only baseline that is given as input not words but their sentiment, and has a hidden dimensionality \(d=25\). Non-compositional patterns that arise from the composition of certain words rather than the sentiment of those words (e.g. "drop dead gorgeous") could hardly be captured by that model. As such, the model exemplifies how sentiment can be computed more compositionally. Figure 7 illustrates the default sentiment this model predicts for various input combinations. Its predictions can be characterised by i) predicting **positive** for positive inputs, ii) predicting **negative** for negative inputs, iii) predicting **neutral** if one input is neutral, iv) predicting a class **in between** the input classes, or v) predicting the same class as its inputs (**continuity**).
The performance of the model is included in Figure 6, and Figure 8 (a-c) indicates the Pearson's \(r\) between the sentiment predictions of bottleneck models and baseline models. Generally, a higher \(\beta\) or dropout probability, or a lower hidden dimen
Figure 8: Pearson’s \(r\) for the predictions of sentiment-only baselines and bottleneck models (a-c) and Spearman’s \(\rho\) for the SST validation set compositionality ranking of the baselines and bottleneck models (d-f), when varying the number of CCA dimensions used.
Figure 7: Illustration of the predictions of a sentiment-only baseline model. We indicate the predicted sentiment given two inputs. The labels range from very negative (‘- -’) to neutral (‘\(\sim\)’) to very positive (‘++’).
sionality, leads to predictions that are more similar to this sentiment-only model, unless the amount of regularisation is too extreme, debilitating the model (e.g. for dropout with probability 0.9).
### The Bottleneck Compositionality Metric
Now we use BCM to obtain a ranking over the SST dataset. We separate the dataset into four folds, train on those folds for the base and bottleneck model, and compute the cosine distances for the hidden representations of examples in the test sets (using BCM-PP or BCM-TT). We merge the cosine distances for the different folds, averaged over models trained with 10 random seeds, and order examples based on the resulting distance. We select the values for \(\beta\), dropout and the hidden dimensionality, as well as the number of CCA directions to use, based on rankings computed over the SST validation data. Figure 8 (d-f) illustrates how the rankings of bottleneck models correlate with rankings constructed using the sentiment-only baseline. We select 25 directions, \(\beta=0.0025\), dropout \(p=0.65\) and a hidden dimensionality of 25 to compute the full ranking. Contrary to the arithmetic task, BCM-TT underperforms compared to BCM-PP.
Different from the arithmetic task, it is unclear which examples _should_ be at the start or end of the ranking. Therefore, we examine the relative position of categories of examples in Figure 9 for the BCM-PP with the hidden dimensionality bottleneck, and in Appendix B.3 for the remaining rankings. The categories include the previously introduced ones, augmented with the following four:
* **amplification**: the root is even more positive/negative than its top two children;
* **attenuation**: the root is less positive/negative than its top two children;
* **switch**: the children are positive/negative, but the root node flips that sentiment;
* **neutral\(\leftrightarrow\)polarised**: the inputs are sentiment-laden, but the root is neutral, or vice versa.
We also include characterisations from Barnes et al. (2019), who label examples from the SST test set, for which state-of-the-art sentiment models cannot seem to predict the correct label, including, for example, cases where a sentence contains mixed sentiment, or sentences with idioms, irony or sarcasm. Appendix B.1 elaborates on the meaning of the categories. Figure 9 illustrates the relative positions of our sentiment characterisations and those of Barnes et al. on that ranking. Patterns associated with more compositional sentiment processing, such as 'positive', 'negative' and 'in between' lead to hidden representations that are more similar between the base model and bottleneck models than the dataset average (the mid point, 0.5). Atypical patterns like'switch' and 'neutral\(\leftrightarrow\)polarised', on the other hand, along with the characterisations by Barnes et al. lead to less similar hidden representations. Appendix B.3 presents the same results for all six rankings considered, along with example sentences from across the ranking, to illustrate the types of sentences encountered among the most compositional and the least compositional ones.
### Example use cases
Compositionality rankings can be used in multiple manners, of which we illustrate two below.
**When data is scarce: use compositional examples** Assuming that most of natural language _is_ compositional, one would expect that when limiting the training data, selecting compositional examples yields the best test performance. To investigate this, we train models on various training dataset sizes and evaluate with the regular test set. The training data is taken from the start of the ranking for the 'compositional' setup, and from the end of the ranking for the 'non-compositional' setup
Figure 9: Categories of SST examples and their average position on the compositionality ranking visualised for the BCM-PP with the hidden dimensionality bottleneck and \(d=25\). Categories in black are assigned by us; categories in gray are from Barnes et al. (2019). The categories are further explained in the main text and Appendix B.1. Jittering was applied to better visualise overlapping categories.
(excluding test data), while ensuring equal distributions over input lengths and output classes. We train a two-layer bidirectional LSTM with 300 hidden units, and Roberta-base (Liu et al., 2019), using batch size 4. The models are trained for 10 and 5 epochs, respectively, with learning rates \(2\mathrm{e}{-4}\) and \(5\mathrm{e}{-6}\). Because the ranking is computed over full sentences, and not subexpressions, we train the models on the sentiment labels for the root nodes. Figure 10 presents the results, confirming that when data is scarce, using compositional examples is beneficial.
**Non-compositional examples are challenging** For the same models, Table 1 indicates how performance changes if we redistribute train and test data such that the test set contains the most compositional examples, or the least compositional ones (keeping length and class distributions similar). The non-compositional test setup is more challenging, with an 11 percentage point drop in accuracy for the LSTM, and a 3 point decrease for Roberta.
In conclusion, applying the BCM to the sentiment data has confirmed findings previously observed for the arithmetic toy task. While it is harder to understand whether the method actually filters out non-compositional examples, both comparisons to a sentiment-only baseline, and the average position of cases for which the composition of sentiment is known to be challenging (e.g. for'mixed' sentiment, 'comparative' sentiment or'sarcasm'), suggest that compression acts as a compositionality metric. We also illustrated two ways in which the resulting ranking can be used.
## 6 Conclusion
This work presents the Bottleneck Compositionality Metric, a TRE-based metric Andreas (2018) that is task-independent and can be applied to inputs of varying lengths: we pair up Tree-LSTMs where one of them has more compressed representations due to a bottleneck (the DVIB, hidden dimensionality bottleneck or dropout bottleneck), and use the distance between their hidden representations as a per-datum metric. The method was applied to rank examples in datasets from most compositional to least compositional, which is of interest due to the growing relevance of compositional generalisation research in NLP, which assumes the compositionality of natural language, and encourages models to compose meanings of expressions rather than to memorise phrases as chunks. We provided a proof-of-concept using an arithmetic task but also applied the metric to the much more noisy domain of sentiment analysis.
The different bottlenecks lead to qualitatively similar results. This suggests that, while DVIB might be better motivated (it directly optimises an estimate of the Shannon information passed across the network), its alternatives may be preferable in practice due to their simplicity.
Because natural language itself is not fully compositional, graded metrics like the ones we presented can support future research, such as i) learning from data according to a compositionality-based curriculum to improve task performance, ii) filtering datasets to improve compositional generalisation, or iii) developing more and less compositional models depending on the desiderata for a task - e.g. to perform well on sentences with idioms, one may desire a more non-compositional model. In addition, the formulation of the metric was general enough to be expanded upon in the future: one could pair up other models, such as an LSTM and a Tree-LSTM, or a Transformer and its recursive variant, as long as one keeps in mind that the compositional reconstruction itself should not be too powerful. After all, even Tree-LSTMs could capture the exceptions in the arithmetic dataset despite their hierarchical inductive bias.
| **Model** | Comp. Acc. | Comp. \(F_{1}\) | Non-comp. Acc. | Non-comp. \(F_{1}\) | Random Acc. | Random \(F_{1}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Roberta | .546 | .535 | .516 | .487 | .565 | .549 |
| LSTM | .505 | .485 | .394 | .310 | .478 | .447 |

Table 1: Performance on the new SST compositionality splits, generated using the ranking from the BCM-PP metric with the hidden dimensionality bottleneck.
Figure 10: Change in SST test set accuracy (solid) and macro-averaged \(F_{1}\)-score (dashed) as the training set size increases, for LSTM and Roberta models. The examples are from the most (in blue) or the least compositional (in green) portion of the ranking from the BCM-PP metric with the hidden dimensionality bottleneck.
### Limitations
We identify three types of limitations for the work presented:
* A **conceptual limitation** is that we work from a very strict definition of compositionality (_local_ compositionality), which essentially equates language with arithmetic. While overly restrictive, current datasets testing compositional generalisation follow this notion. The framework might be extensible to more relaxed notions by allowing for token disambiguation by using contextualised token embeddings and only enforcing a bottleneck on the amount of further contextual integration within the model added on top of the token embeddings.
* The use of **Tree-LSTMs** - although well-motivated from the perspective of compositional processing - is a major limitation. Tree-LSTMs are most suited for sentence classification tasks, limiting the approach's applicability to sequence-to-sequence tasks. Nonetheless, the bottlenecks can be integrated in other types of architectures that process inputs in a hierarchical manner, such as sequence-to-sequence models inducing latent source and target trees [15], to yield an alternative implementation of the BCM. Our work also assumes that an input's tree structure is known, which might not always be the case. Therefore, the compositionality ranking obtained using BCM always depends on the trees used: what is non-compositional given one (potentially inadequate) structure might be more compositional given another (improved) structure.
* Lastly, the **evaluation** of our approach is limited in the natural domain by the absence of gold labels for the compositionality of examples in the sentiment analysis task; however, the same limitation would have applied to other tasks that could have been considered.
## Acknowledgements
We thank Chris Lucas for his contributions to this project when it was still in an early stage, Kenny Smith for his comments on the first draft of this paper, and Matthias Lindemann for excellent suggestions for the camera-ready version. VD is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences. IT acknowledges the support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO Vidi 639.022.518).
|
2309.03325 | Spatial quasiperiodic driving of a dissipative optical lattice and
origin of directed Brillouin modes in a randomly diffusing cold atom cloud | Atoms confined in a three-dimensional dissipative optical lattice oscillate
inside potential wells, occasionally hopping to adjacent wells, thereby
diffusing in all directions. Illumination by a weak probe beam modulates the
lattice, yielding propagating atomic density waves, referred to as Brillouin
modes which travel perpendicular to the direction of travel of the probe. The
probe is made incident at a small angle relative to a lattice symmetry axis,
yielding a driving potential perturbation whose spatial period is not a
multiple of the period of the underlying optical potential, thus enabling
exploration of the regime of space quasiperiodic drive. A theory, based on the
Fourier decomposition of the current into its atomic density wave
contributions, reveals that unlike the previously studied time quasiperiodic
case, wherein a lattice driven by two incommensurate frequencies may exhibit
abrupt suppression in directed current as the driving transitions from
quasiperiodic to periodic, a spatial-quasiperiodically driven lattice exhibits
no such abrupt response. Further, detailed modeling of
spatial-quasiperiodically driven lattices reveals that directed propagation
occurs not only as a consequence of velocity-matching between the propagating
modulation and the average velocity of the atom oscillating inside a well as
was previously reported in the literature, but also as a distinct consequence
of a new mechanism, namely, frequency-matching between the modulation frequency
and the oscillation frequencies. A systematic measurement of the transmitted
probe spectra as a function of off-axis probe angle is presented, which is
consistent with the velocity- and frequency-matching predictions from the
detailed model. | David Cubero, Kefeng Jiang, Alexander Staron, Casey Scoggins, Daniel Wingert, Ian Dilyard, Stone Oliver, Samir Bali | 2023-09-06T19:10:21Z | http://arxiv.org/abs/2309.03325v2 | # Brillouin modes in weakly driven dissipative optical lattices: simple theoretical model vs pump-probe spectroscopy
###### Abstract
Atoms confined in a three-dimensional dissipative optical lattice oscillate inside potential wells, occasionally hopping to adjacent wells, thereby diffusing in all directions. Illumination by a weak probe beam modulates the lattice, yielding propagating atomic density waves, referred to as Brillouin modes which travel perpendicular to the direction of travel of the probe. We investigate theoretically and experimentally these modes in the case of a driving potential perturbation whose spatial period is not a multiple of the period of the underlying optical potential, allowing for a deeper understanding of Brillouin mode generation in cold confined atoms. The role of two distinct mechanisms for directed propagation is elucidated, one arising from a velocity-matching between the propagating modulation and the average velocity of the atom oscillating inside a well, and the other arising from a frequency-matching between the modulation frequency and the oscillation frequencies.
+
Footnote †: preprint: APS/123-QED
Light-induced forces on matter over wavelength and sub-wavelength spatial scales have wide applicability in quantum sensing and metrology [1], ranging from the novel design of periodic potential landscapes [2; 3; 4] to the innovative transport and sorting of particles [5]. In particular, considerable interest has focused on the noise-induced directed motion of particles in the absence of a net force [6], with special attention devoted to cold atoms confined in dissipative optical lattices where environmental noise in the form of spontaneous emission is significant [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28].
A dissipative optical lattice consists of counter-propagating laser beams tuned near atomic resonance that yield AC Stark-shifted ground state potential wells [29]. Atoms in the lattice undergo the well-known process of Sisyphus cooling, and settle into these wells, where they oscillate with a vibrational frequency that is determined by the well-depth [11]. The stochastic optical pumping processes associated with Sisyphus cooling also cause the atoms to occasionally transfer between adjacent wells, leading to spatial diffusion of the cold atom sample [11; 30]. The introduction of a weak probe beam along a symmetry axis of the lattice, results in a time-periodic driving of the lattice that breaks the symmetry, causing directed atomic density waves to be set up. The directed propagation proceeds in the absence of a net force. These propagating atomic density waves are referred to as Brillouin modes in analogy to acoustic waves rippling through a fluid [13; 15; 28; 8]. Recently, a noise-induced resonant enhancement of this directed propagation was observed [28], and a new theory, based on the Fourier decomposition of the current into its atomic density wave contributions [31], was developed to explain this stochastic resonance. This theory was also able to successfully predict the thresholds for the transition to the regime of infinite density in the cold atom setup [31; 32].
In this paper, we investigate the driving of the lattice by illuminating the cold atoms with a probe beam propagating at a slight angle to the lattice symmetry axis. In this case, the spatial period of the driving perturbation is now not an integer multiple of the period of the underlying lattice potential. This allows, at least theoretically, for the possibility of the two spatial driving frequencies to be in irrational ratio, which permits the exploration of spatial quasiperiodic driving in these systems, in analogy with the time quasiperiodic case, wherein the lattice is driven by two incommensurate frequencies [6; 7; 33; 34; 35]. In the present spatial quasiperiodic case, though, we will show that the generated directed current is not as sensitive to the nature of the driving as in the time quasiperiodic case, where true quasiperiodicity is able to suppress the directed motion. Here the transition from periodicity to quasiperiodicity is not observed to be sharp. However, the chosen setup, where the spatial periods of the underlying lattice and the driving are basically uncoupled, sheds light on how the Brillouin modes are generated.
The paper is organized as follows. In Sec. I we define the system model studied. New analytical results, based on a Fourier decomposition of the current, are discussed in Sec. II. Numerical simulations and experiments are discussed in Sec. III and IV, respectively. Finally, Sec. V ends with the conclusions.
## I System models
We consider atoms confined in a so-called 3D-lin\(\perp\)lin optical lattice [11], formed by the superposition of four red-detuned laser beams \(\vec{k}_{1-4}\) of frequency \(\omega_{l}\) in a tetrahedral configuration, see Fig. 1(a). For \(F_{g}=1/2\to F_{e}=3/2\) atoms the lattice is formed by just two light-shifted ground state \(\pm 1/2\)-spin potentials. An additional weak probe laser of frequency \(\omega_{p}\), forming an angle \(\theta_{p}\) with the \(z\)-axis and with its polarization parallel to the \(y\)-axis, is added to drive the system out of equilibrium and put the atoms in directed motion [8]; \(\theta_{p}=0\) in Fig. 1(a, b) and \(\theta_{p}\neq 0\) in Fig. 1(c, d). In the experiments, this model is already a simplification, since the atoms need to have a more complex transition than \(F_{g}=1/2\to F_{e}=3/2\), but the theoretical results are still expected to provide good qualitative insight [36].
Following previous studies [8; 13; 14; 15; 16; 37], we focus on movement along one of the directions, taken as the \(x\)-axis. The optical potential associated with the above setup is then given by (after taking \(y=z=0\))
\[U_{\pm}(x,t)=\frac{U_{0}}{2}\Big{[}-\frac{3}{2}-\frac{1}{2}\cos (2k_{0}x)\pm\cos(k_{0}x)\] \[+ \varepsilon_{p}\cos(-k_{0}x+k_{l}\sin\theta_{p}x-\delta_{p}t)\] \[+ \varepsilon_{p}\cos(k_{0}x+k_{l}\sin\theta_{p}x-\delta_{p}t)\] \[\pm \varepsilon_{p}\cos(k_{l}\sin\theta_{p}x-\delta_{p}t)\Big{]}, \tag{1}\]
where \(k_{0}=k_{l}\sin\theta_{x}\), \(k_{l}=\omega_{l}/c\) is the laser beam wave number, \(\delta_{p}=\omega_{p}-\omega_{l}\) is a small probe detuning relative to the lattice (\(\delta_{p}/\omega_{l}\ll 1\)) [38], \(U_{0}=-16\hbar\Delta_{0}^{\prime}/3\), with \(\Delta_{0}^{\prime}\) (\(<0\)) being the light-shift per lattice field, and \(\varepsilon_{p}=E_{p}/(2E_{0})\).
The optical well-depth \(U_{0}\) defines a vibrational frequency associated with an atom of mass \(m_{a}\) oscillating at the bottom of a well,
\[\Omega_{x}=k_{x}\sqrt{3U_{0}/2m_{a}}=4\sin\theta_{x}\sqrt{|\Delta_{0}^{\prime} |\omega_{r}}, \tag{2}\]
where \(\omega_{r}=\hbar k_{l}^{2}/(2m_{a})\) is the recoil frequency.
The probe perturbation in (1) appears in three terms: a modulation propagating to the right with phase velocity \(v_{1}=\delta_{p}/(k_{0}+k_{l}\sin\theta_{p})\), another to the left with velocity \(v_{2}=-\delta_{p}/(k_{0}-k_{l}\sin\theta_{p})\), and a third moving with velocity \(v_{3}=\delta_{p}/(k_{l}\sin\theta_{p})\). Each of these terms will produce an excitation of atomic density waves on its own. Thus, for the sake of simplicity, we consider them separately in the theoretical considerations that follow.
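For reference, the three phase velocities are straightforward to evaluate numerically. The sketch below does so in the units used throughout (\(\hbar=m_{a}=k_{l}=1\)); the parameter values are illustrative only.

```python
# The three probe-induced phase velocities implied by Eq. (1), in the
# paper's units (hbar = m_a = k_l = 1). Parameter values are illustrative.
import numpy as np

def phase_velocities(delta_p, theta_x_deg, theta_p_deg, k_l=1.0):
    k0 = k_l * np.sin(np.radians(theta_x_deg))
    kp = k_l * np.sin(np.radians(theta_p_deg))
    v1 = delta_p / (k0 + kp)    # rightward-propagating modulation
    v2 = -delta_p / (k0 - kp)   # leftward-propagating modulation
    v3 = delta_p / kp           # term from the x-polarized lattice beams
    return v1, v2, v3

print(phase_velocities(delta_p=7.5, theta_x_deg=25.0, theta_p_deg=17.5))
```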
The first system model to consider, denoted as case (a), is the following optical potential, which accounts for the third probe term in (1), and is given by
\[U_{\pm}^{(a)}(x,t)=\frac{U_{0}}{2}\Big{[}-\frac{3}{2}-\frac{1}{2}\cos(2k_{0}x)\pm\cos(k_{0}x)\] \[\pm\varepsilon_{p}\cos(k_{p}x-\delta_{p}t+\phi_{p})\Big{]}, \tag{3}\]
where \(k_{p}=k_{l}\sin\theta_{p}\), and \(\phi_{p}\) is a probe phase which has been introduced for convenience in the analytical calculations to be presented below. Here the potential perturbation changes sign with the specific atomic state. Terms of this kind are also generated with a \(x\)-polarized probe.
In addition, we consider the case of the following optical potential, denoted as case (b), which accounts for the first two probe terms in (1), and is given by:
\[U_{\pm}^{(b)}(x,t)=\frac{U_{0}}{2}\Big{[}-\frac{3}{2}-\frac{1}{2}\cos(2k_{0}x)\pm\cos(k_{0}x)\] \[+\varepsilon_{p}\cos(k_{p}x-\delta_{p}t+\phi_{p})\Big{]}, \tag{4}\]
By setting \(k_{p}=|\Delta\vec{k}_{1}|=k_{0}+k_{l}\sin\theta_{p}\), or \(k_{p}=-|\Delta\vec{k}_{2}|=-(k_{0}-k_{l}\sin\theta_{p})\), we can study the effects introduced by the first and second probe terms, respectively. If \(\theta_{p}=0\), \(|\Delta\vec{k}_{1,2}|=|\Delta\vec{k}|=k_{0}\). See Figs. 1 (b) and (d) for an illustration of \(\Delta\vec{k}_{1}\) and \(\Delta\vec{k}_{2}\).
Note that the on-axis case \(\theta_{p}=0\), recently investigated in Refs. [28; 31], produces a space periodic perturbation,
Figure 1: 3D-lin\(\perp\)lin tetrahedral lattice illuminated by a weak probe. (a) and (b) depict the case of space periodic driving: The probe propagates along the \(z\)-axis, which is the lattice symmetry axis. (c) and (d) show the case where the space period of the driving is not a multiple of the period of the underlying optical lattice: The probe propagates along a direction forming an angle \(\theta_{p}\) with the \(z\)-axis. Here, \(|\Delta\vec{k}_{1}|=k_{0}+k_{l}\sin\theta_{p}\) and \(|\Delta\vec{k}_{2}|=k_{0}-k_{l}\sin\theta_{p}\), where \(k_{l}\) is the laser wave number, and \(k_{0}=k_{l}\sin\theta_{x}\).
because the wave number of the driving field \(k_{p}\) is the same as that of the underlying lattice, \(k_{0}\). On the other hand, a probe angle \(0<\theta_{p}<\pi/2\) such that \(k_{p}/k_{0}\) is an irrational ratio produces a space quasiperiodic drive.
In the semi-classical approximation [36], the atoms in the ground state \(|\pm\rangle\) are described by the phase space density \(P_{\pm}(x,p,t)\) at the position \(x\) with momentum \(p\), which satisfies the following coupled Fokker-Planck equations
\[\left[\frac{\partial}{\partial t}+\frac{p}{m_{a}}\frac{\partial} {\partial x}-U^{\prime}_{\pm}(x)\frac{\partial}{\partial p}\right]P_{\pm}=\] \[-\gamma_{\pm}(x)P_{\pm}+\gamma_{\mp}(x)P_{\mp}+\frac{\partial^{2 }}{\partial p^{2}}\left[D_{0}P_{\pm}\right], \tag{5}\]
where \(U^{\prime}_{\pm}=\partial U_{\pm}/\partial x\), and
\[\gamma_{\pm}(x)=g_{0}\pm g_{1}\cos(k_{0}x)+g_{2}\cos(2k_{0}x) \tag{6}\]
are the transition rates between the ground state sublevels, defined in terms of \(\Gamma^{\prime}\), the photon scattering rate per lattice beam, as \(g_{0}=2\Gamma^{\prime}/3\), \(g_{1}=8\Gamma^{\prime}/9\), \(g_{2}=2\Gamma^{\prime}/9\), and \(D_{0}=5\hbar^{2}k_{0}^{2}\Gamma^{\prime}/18\) is a noise strength describing the random momentum jumps that result from the interaction with the photons. As in [31], we are neglecting the probe contribution to the transition rates, the radiation forces, and noise terms, since their effect is observed to be small in the simulations.
Directed motion is characterized by the current, defined as the average atomic velocity
\[\langle v\rangle=\lim_{t\to\infty}\frac{\langle[x(t)-x(0)]\rangle }{t}=\] \[\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\!\!dt^{\prime}\int\!\!dx \int\!\!dp\,\frac{p}{m_{a}}[P_{+}(x,p,t^{\prime})+P_{-}(x,p,t^{\prime})], \tag{7}\]
## II Analytical results
Following the method presented in Ref. [31], we Fourier-decompose the current in (7) into contributions arising from atomic density modes excited by the probe. Using \(l\), \(n\), \(m\) to denote the mode numbers, so that the mode has a frequency \(\omega=l\delta_{p}\) and wave number \(k=nk_{0}+mk_{p}\), we obtain, in terms of the amplitudes of the excited atomic density waves,
\[P^{\pm}[l,n,m]=\frac{\delta_{p}}{2\pi}\int_{0}^{2\pi/\delta_{p} }\!\!\!dt\,e^{-il\delta_{p}t}\int\!\!dx\,e^{i(nk_{0}+mk_{p})x}\] \[\int\!\!dp\,P_{\pm}(x,p,t), \tag{8}\]
the following expansion for the current for case (b), i.e., the setup defined by (4) for the \(U^{(b)}\) system,
\[\langle v\rangle_{\rm(b)}=\frac{m_{a}}{m_{a}F_{0}g_{1}-2D_{0}k_{0}}\Bigg{[}\] \[-\frac{\mbox{Im}\left[P_{+}[0,1,0]\right]F_{0}\left(8g_{0}^{2}-4g_{2}^{2}/3+F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[+\frac{\mbox{Im}\left[P_{+}[0,2,0]\right]F_{0}\left(-4g_{0}g_{1}-8g_{1}g_{2}/3+F_{0}k_{0}/m_{a}\right)}{k_{0}}\] \[+\frac{\mbox{Im}\left[P_{+}[0,3,0]\right]F_{0}\left(-16g_{0}g_{1}/3-2g_{2}^{2}-3F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[+\frac{\mbox{Im}\left[P_{+}[0,4,0]\right]F_{0}\left(-2g_{1}g_{2}/3+F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[-\frac{\mbox{Im}\left[P_{+}[0,5,0]\right]F_{0}2g_{2}^{2}}{3k_{0}}+\frac{\mbox{Im}\left[e^{i\phi_{p}}P_{+}[1,-2,1]\right]F_{0}F_{p}(2k_{0}-k_{p})}{2m_{a}k_{p}}\] \[+\frac{\mbox{Im}\left[e^{i\phi_{p}}P_{+}[1,-1,1]\right]F_{0}F_{p}(-k_{0}+k_{p})}{m_{a}k_{p}}\] \[+\frac{\mbox{Im}\left[e^{i\phi_{p}}P_{+}[1,0,1]\right]2F_{p}k_{0}\delta_{p}^{2}}{k_{p}^{2}}\] \[+\frac{\mbox{Im}\left[e^{i\phi_{p}}P_{+}[1,1,1]\right]F_{0}F_{p}(k_{0}+k_{p})}{m_{a}k_{p}}\] \[-\frac{\mbox{Im}\left[e^{i\phi_{p}}P_{+}[1,2,1]\right]F_{0}F_{p}(2k_{0}+k_{p})}{2m_{a}k_{p}}\] \[+\frac{\mbox{Im}\left[e^{i2\phi_{p}}P_{+}[2,0,2]\right]F_{p}^{2}k_{0}}{m_{a}k_{p}}\Bigg{]}. \tag{9}\]
where the force amplitudes are \(F_{0}=k_{0}U_{0}/2\) and \(F_{p}=U_{0}\varepsilon_{p}k_{p}/2\). Equation (9) is valid for an arbitrary \(k_{p}\), including the periodic case \(k_{p}=k_{0}\), which was validated numerically in Ref. [31].
Similar expressions are found for case (a), defined by Eq. (3), which are reported in the appendix. The main difference between the cases (a) and (b), defined by (3) and (4), respectively, is that case (a) provides different expressions for the generic case \(k_{p}\neq k_{0}\) and the special case \(k_{p}=k_{0}\). In the expansion for the periodic case \(k_{p}=k_{0}\), reported in (A4), there are more terms (and thus more atomic wave modes activated) than in the quasiperiodic case where \(k_{p}/k_{0}\) is an irrational ratio, reported in (A3). These extra terms cannot be obtained from (A3) by taking the limit \(k_{p}\to k_{0}\).
Furthermore, (A3) shows some apparent singularities in the form of coefficients with denominators proportional to \(k_{0}-k_{p}\) or \(2k_{0}-k_{p}\), which are thus apparently problematic in the periodic limits \(k_{p}\to k_{0}\) and \(k_{p}\to 2k_{0}\). This could suggest a special sensitivity to the periodic/quasiperiodic transition, similar to that observed in the case of time quasiperiodicity [6; 33; 34; 35; 39], where the large sensitivity of the system response to time quasiperiodic forces is known to yield sub-Fourier resonances.
However, our numerical results, reported in the following section, show that this is not the case: the transition is smooth. This is possible because the amplitudes of the modes involved decay to zero as fast as (or faster than) \(k_{0}-k_{p}\) or \(2k_{0}-k_{p}\), thus removing any possible singularity in the periodic cases \(k_{p}=k_{0}\) or \(k_{p}=2k_{0}\).
The analytical expansions nevertheless provide a useful decomposition of directed transport into atomic waves, and they are used in the following sections to interpret the atomic transport provoked by the probe.
## III Numerical results
Numerical solutions of Eq. (5) are obtained by generating a large number of individual atomic trajectories \(x_{j}(\sigma_{j}(t),t)\), where \(\sigma_{j}(t)=+1\) or \(-1\) is the occupied state at time \(t\) in that trajectory, using a stochastic algorithm [40]. Averages are computed using over \(1.5\times 10^{6}\) trajectories. Following Ref. [31], the atomic mode amplitudes (8) are calculated via the formula,
\[P_{\pm}[l,n,m]=\lim_{l^{\prime}\to\infty}\frac{\delta_{p}}{2\pi l^{\prime}N}\times\] \[\sum_{j=1}^{N}\int_{0}^{2\pi l^{\prime}/\delta_{p}}\!\!\!\!\!dt\,e^{i[(nk_{0}+mk_{p})x_{j}(\sigma_{j}(t),t)-l\delta_{p}t]}\delta_{\sigma_{j}(t),\pm 1} \tag{10}\]
In all simulations, units are defined such that \(m_{a}=\hbar=k_{l}=1\). In these units, the optical lattice parameters were fixed to \(U_{0}=200\), \(\Gamma^{\prime}=2.85\).
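To make the procedure concrete, the following is a heavily reduced sketch of such a trajectory integration for the \(U^{(b)}\) model: simple Euler steps with momentum diffusion of strength \(D_{0}\) and random state switching at the rates \(\gamma_{\pm}(x)\) of Eq. (6). It is illustrative only; the actual computations use the stochastic algorithm of Ref. [40] with over \(1.5\times 10^{6}\) trajectories, and the time step, trajectory count, and initial conditions below are assumptions.

```python
# Reduced sketch of the trajectory integration behind Eq. (5): Euler steps
# for (x, p), momentum kicks of strength D0, and random +/- state switching
# at the rates gamma_pm(x) of Eq. (6), for the U^(b) potential of Eq. (4).
import numpy as np

rng = np.random.default_rng(0)
U0, Gp = 200.0, 2.85                     # lattice parameters of Sec. III
k0 = np.sin(np.radians(25.0))            # k0 = k_l sin(theta_x), k_l = 1
kp, eps_p, delta_p, phi_p = k0, 0.1, 5.75, np.pi   # space periodic driving
g0, g1, g2 = 2*Gp/3, 8*Gp/9, 2*Gp/9
D0 = 5 * k0**2 * Gp / 18

def force(x, s, t):
    # F = -dU/dx for U^(b)_pm in Eq. (4); s = +1 or -1 labels the state.
    dU = 0.5 * U0 * (k0*np.sin(2*k0*x) - s*k0*np.sin(k0*x)
                     - eps_p*kp*np.sin(kp*x - delta_p*t + phi_p))
    return -dU

def gamma(x, s):
    return g0 + s*g1*np.cos(k0*x) + g2*np.cos(2*k0*x)

N, dt, steps = 2000, 1e-3, 20000         # assumed, far below the paper's scale
x = rng.uniform(0.0, 2*np.pi/k0, N)
p = rng.normal(0.0, np.sqrt(U0/4), N)
s = rng.choice([-1.0, 1.0], N)
x0 = x.copy()
for i in range(steps):
    t = i * dt
    p += force(x, s, t)*dt + np.sqrt(2*D0*dt)*rng.normal(size=N)
    x += p*dt
    flip = rng.random(N) < gamma(x, s)*dt        # ground-state switching
    s = np.where(flip, -s, s)
print("estimated current <v> ~", (x - x0).mean() / (steps*dt))
```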
We start by first studying the system sensitivity to the transition between space periodic and quasiperiodic driving. We chose the \(U^{(a)}\) system in (3) (i.e., case (a)), because it gives different expansions in the periodic \(k_{p}=k_{0}\) and quasiperiodic (\(k_{p}\) and \(k_{0}\) incommensurate) cases. Specifically, Eq. (A3) indicates a potential singularity when \(k_{p}\to k_{0}\) in the mode Re\([P_{+}[1,-1,1]]\) due to the presence of \((k_{p}-k_{0})\) in the denominator of the coefficient.
Figure 2 shows the current and the mode contributions to the current, given by (A3), as a function of \(k_{p}\) around the periodic case \(k_{p}=k_{0}\). No mode or contribution is observed to act abruptly; on the contrary, all are seen to behave smoothly. The potential singularity in the mode Re\([P_{+}[1,-1,1]]\) does not actually occur, because the mode amplitude itself tends to zero in the limit \(k_{p}\to k_{0}\) as \((k_{p}-k_{0})\), such that its current contribution is finite, as seen in Fig. 2.
Similar features are observed near \(k_{p}=2k_{0}\), demonstrating that the transition from space quasiperiodicity to periodicity is smooth.
Next, in order to understand how Brillouin modes are generated, we study the atomic waves when varying the driving frequency \(\delta_{p}\). A series of resonances, that is, local maxima at certain values of the driving frequency, is observed, allowing for a proper rationalization of the transport mechanisms and the experimental results.
Figure 3 shows the current as a function of \(\delta_{p}\) for several values of the driving amplitude \(\varepsilon_{p}\) in the \(U^{(b)}\) system defined by (4) (i.e., case (b)), with the space periodic driving wave number fixed to \(k_{p}=k_{0}\). Resonances at multiples and sub-multiples of the intrinsic frequency \(\omega_{0}=5.75\) are observed.
Figure 2: Smooth space quasiperiodicity. Current and mode contributions to the current as a function of the driving wave number \(k_{p}\) for the \(U^{(a)}\) system in (3) (i.e., case (a)), with \(\delta_{p}=7.5\), \(\epsilon_{p}=0.8\), \(\theta_{x}=30^{0}\) (thus \(k_{0}=0.5\)), and \(\phi_{p}=0\). Each mode \((l,n,m)\) has a frequency \(\omega=l\delta_{p}\) and wave number \(k=nk_{0}+mk_{p}\). Mode amplitudes are measured in the simulation via (10), and their precise contribution to the current determined using (A3). The dashed line is the sum of all current contributions, and the diamonds are the current calculated from its definition (7). Units are defined such that \(m_{a}=\hbar=k_{l}=1\).
Figure 3: The atomic current is plotted for the case of space periodic driving as a function of the driving frequency \(\delta_{p}\) in the \(U^{(b)}\) system defined by (4) (i.e., case (b)), with \(\theta_{x}=25^{0}\), \(\phi_{p}=\pi\), for several values of the driving amplitude \(\varepsilon_{p}\). The driving wave number is fixed to \(k_{p}=k_{0}\). Dotted vertical lines are placed at \(\omega_{0}=5.75\), \(\omega_{0}/2\), and \(2\omega_{0}\). Resonances at multiples and sub-multiples of the intrinsic frequency \(\omega_{0}\) are observed.
This behavior is not uncommon: in rocked ratchets, the current is also observed [33] to peak at multiples and sub-multiples (in general, at a fractional number) of an intrinsic frequency.
As the driving wave number \(k_{p}\) is varied, so is the phase velocity of the propagating perturbation, \(v_{p}=\delta_{p}/k_{p}\). It is expected [8] that a peak is produced when this phase velocity matches the velocity
\[v_{0}=\omega_{0}/k_{0}. \tag{11}\]
The intrinsic velocity \(v_{0}\) is associated with an average drift in one direction due to half oscillations in a well, followed by transitions between the atomic states [8]. This velocity matching (\(vm\)) mechanism thus yields a peak at
\[(\delta_{p})_{vm}=\omega_{0}k_{p}/k_{0}. \tag{12}\]
The numerical results, presented in Fig. 4, do not contradict this picture.
However, it is somewhat obscured by the above-mentioned peaking at fractional ratios of the intrinsic frequency. For example, let us consider the case of space-quasiperiodic driving with \(k_{p}=0.5k_{p0}\), where \(k_{p0}\equiv k_{0}/\sqrt{5}\). In this case, the current-vs-\(\delta_{p}\) curve displays a peak near \((\delta_{p})_{vm}\), but it is also leaning towards \(\omega_{0}/2\). The curves for \(k_{p}=k_{p0}\) and \(k_{p}=1.5k_{p0}\) show no velocity matching shift, peaking only at the value \(\omega_{0}\). The curve \(k_{p}=2.5k_{p0}\) does peak at \((\delta_{p})_{vm}\), but this value is very near \(\omega_{0}\) in this case. It also shows a small peak at about \(1.5\omega_{0}\). In the case \(k_{p}=3.5k_{p0}\), the frequency \((\delta_{p})_{vm}\) lies between the peaks at \(\omega_{0}\) and \(2\omega_{0}\). The corresponding curve peaks at \(\omega_{0}\), \((\delta_{p})_{vm}\) (or \(1.5\omega_{0}\), since they are very near here), and \(2\omega_{0}\). The curve \(k_{p}=4.5k_{p0}\) offers a clear confirmation of the velocity matching mechanism, because it shows no clear peak at \(\omega_{0}\), and a distinct one at \((\delta_{p})_{vm}\), which is also close to \(2\omega_{0}\).
Overall, the discussed shift due to velocity matching is clearly at play in the system, but the shift takes place through local maxima at a fractional ratio of the intrinsic frequency \(\omega_{0}\).
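The interplay between the two peak families can be made explicit by evaluating \((\delta_{p})_{vm}=\omega_{0}k_{p}/k_{0}\) for the driving wave numbers of Fig. 4, as in the short check below; for instance, \(k_{p}=3.5k_{p0}\) gives \((\delta_{p})_{vm}\approx 1.57\,\omega_{0}\), close to the fractional peak at \(1.5\omega_{0}\) discussed above.

```python
# Velocity-matching positions (delta_p)_vm = omega0 * k_p/k_0 for the
# driving wave numbers of Fig. 4 (k_p0 = k_0/sqrt(5), omega0 = 5.75).
import numpy as np

omega0 = 5.75
for r in (0.5, 1.0, 1.5, 2.5, 3.5, 4.5):         # k_p in units of k_p0
    kp_over_k0 = r / np.sqrt(5)
    print(f"k_p = {r}k_p0: (delta_p)_vm = {kp_over_k0:.2f} * omega0"
          f" = {kp_over_k0 * omega0:.2f}")
```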
We may then wonder how a particular atomic mode is affected by the above-discussed resonances. Figure 5 shows the results for the case \(k_{p}=3.5k_{p0}\), which was chosen because the current shows three peaks in Fig. 4, at \(\omega_{0}\), \(1.5\omega_{0}\approx(\delta_{p})_{vm}\), and \(2\omega_{0}\). The plot shows peaks at these frequency values in most of the atomic modes, all of which behave very similarly. Moreover, an extra peak is also visible in most of them at \(\delta_{p}=3\omega_{0}\), a peak which is difficult to appreciate in the current plot, Fig. 4.
## IV Experiment
Our experiments are performed on \({}^{85}\)Rb atoms in a dilutely occupied dissipative 3D lattice in a tetrahedral lin\(\perp\)lin configuration, as in Fig. 1. Directed motion is produced by the weak \(\hat{y}\)-polarized probe which makes an angle \(\theta_{p}\) with the \(z\)-axis. The probe frequency \(\omega_{p}\) is scanned around the (fixed) frequency \(\omega_{l}\) of the lattice beams, and probe transmission is measured as a function of probe detuning \(\delta=\omega_{p}-\omega_{l}\). As in Ref. [28], \(\delta/\omega_{l}<10^{-9}\). Here, the intensity ratio of probe to lattice (sum of all four beams) is less than 3%.
In accordance with the notation used in Refs. [11; 28],
Figure 4: Same as in Fig. 3 (i.e., for the \(U^{(b)}\) system in (4), case (b)), but this time for space quasiperiodic driving with \(k_{p0}=k_{0}/\sqrt{5}\). The current is plotted for several values of the driving wave number, \(k_{p}=0.5k_{p0},k_{p0},1.5k_{p0},2.5k_{p0},3.5k_{p0}\), and \(4.5k_{p0}\). In all cases the driving amplitude \(\epsilon_{p}\) was varied so that \(\epsilon_{p}k_{p}=0.8\), thus keeping the force amplitude \(F_{p}\) fixed. As in Fig. 3, dotted vertical lines are placed at \(\omega_{0}=5.75\), \(\omega_{0}/2\), and \(2\omega_{0}\), plus an additional one at \(3\omega_{0}\). The truncated vertical dashed lines correspond to the shifted values \((\delta_{p})_{vm}=\omega_{0}k_{p}/k_{0}\).
Figure 5: Current and mode contributions to the current as a function of the driving frequency \(\delta_{p}\) for the quasiperiodic case \(k_{p}=3.5k_{p0}\) shown in Fig. 4 (for the \(U^{(b)}\) system in (4), case (b)). As in Fig. 4, dotted vertical lines are placed at \(\omega_{0}=5.75\), \(\omega_{0}/2\), \(2\omega_{0}\), and \(3\omega_{0}\).
\(\Delta_{0}^{\prime}\equiv\Delta s_{0}/2\), and \(\Gamma^{\prime}\equiv\Gamma s_{0}/2\), where \(s_{0}\equiv(I/I_{sat})/(1+4\Delta^{2}/\Gamma^{2})\) is the saturation parameter. For the \(F_{g}=3\to F_{e}=4\) transition in \({}^{85}\)Rb, \(I_{sat}=1.67\) mW/cm\({}^{2}\) for \(\sigma\)-light, \(\Gamma/2\pi\) is the natural linewidth for \({}^{85}\)Rb (6.07 MHz), and \(\omega_{r}/2\pi\) is the recoil frequency (3.86 kHz). In our experiment, \(\theta_{x}=\theta_{y}=25^{0}\), and each lattice beam has intensity \(I=6.22\) mW/cm\({}^{2}\), a \(1/e^{2}\)-diameter of 5.4 mm (the probe diameter is 1.4 mm), and red-detuning \(\Delta=8.75\Gamma\). In order to determine the intensity \(I\) that actually illuminates the atoms, care is taken to account for the intensity loss through the windows of the vacuum cell that houses the lattice, and through the background Rb vapor. These values yield \(s_{0}=0.012\), \(\Gamma^{\prime}/2\pi=36.4\) kHz, and a well-depth \(U_{0}=445\,\hbar\omega_{r}\). Using the definition of the recoil frequency just after (2), and setting \(\hbar=m_{a}=k_{l}=1\) as in the simulations, we find that this \(U_{0}\)-value corresponds to 223, and that \(\Gamma^{\prime}\) corresponds to 4.72 in the same units; these values are comparable to those assumed in the simulations in Sec. III.
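The quoted values follow directly from the definitions above; the short numerical check below (a sketch using only numbers given in the text, with frequencies quoted as \(\omega/2\pi\)) reproduces them to within rounding.

```python
# Recomputing the quoted experimental lattice parameters of Sec. IV from
# the definitions in the text; frequencies are omega/2pi values in Hz.
import numpy as np

Gamma = 6.07e6            # natural linewidth of 85Rb
omega_r = 3.86e3          # recoil frequency
I, I_sat = 6.22, 1.67     # intensities in mW/cm^2
Delta = 8.75 * Gamma      # lattice detuning magnitude

s0 = (I / I_sat) / (1 + 4 * Delta**2 / Gamma**2)
Gamma_p = Gamma * s0 / 2                       # scattering rate per beam
Delta0_p = Delta * s0 / 2                      # light shift per beam
U0_over_hwr = (16 / 3) * Delta0_p / omega_r
Omega_x = 4 * np.sin(np.radians(25)) * np.sqrt(Delta0_p * omega_r)
Omega_z = 4 * np.cos(np.radians(25)) * np.sqrt(2 * Delta0_p * omega_r)

print(f"s0 = {s0:.3f}")                        # ~0.012
print(f"Gamma'/2pi = {Gamma_p/1e3:.1f} kHz")   # ~36 kHz
print(f"U0 = {U0_over_hwr:.0f} hbar*omega_r")  # ~445
print(f"Omega_x/2pi = {Omega_x/1e3:.0f} kHz")  # ~60 kHz
print(f"Omega_z/2pi = {Omega_z/1e3:.0f} kHz")  # ~180 kHz
```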
Figs. 6(a) and (b) show distinctly different probe transmission spectra for \(\theta_{p}=0\) and \(17.5^{0}\), respectively. The peaks in the spectra correspond to photons absorbed from a lattice beam and emitted via stimulated emission into the probe, while dips correspond to photons absorbed from the probe and emitted into a lattice beam.
We discuss first the periodic case shown in Fig. 6 (a), in which the probe is aligned with the \(z\)-axis (\(\theta_{p}=0\)). This system was analyzed in detail in Ref. [28], where it was confirmed that the spectral features denoted as \(\Omega_{Z}\) arise from probe-induced Raman transitions between adjacent vibrational levels corresponding to oscillations along the \(z\)-axis in each well, and the \(\Omega_{B}\) features arise from Brillouin-like directed transport along the \(\pm x\)-directions.
Indeed, the vibrational frequency in the \(z\)-direction is determined in the system model by [28]
\[\Omega_{z}=2(\cos\theta_{x}+\cos\theta_{y})\sqrt{2|\Delta_{0}^{\prime}|\, \omega_{r}}, \tag{13}\]
thus yielding \(\Omega_{z}=2\pi(180\) kHz), which is in reasonable agreement with the observed value of about 200 kHz. Moreover, Eq. (2) yields a vibrational frequency \(\Omega_{x}/2\pi=60\) kHz, which is very close to the observed value for \(\Omega_{B}/2\pi\). This \(\Omega_{x}\)-value equals 15.5 \(\omega_{r}\), which in simulation units corresponds to 7.8, not far from the 7.3 value mentioned in Sec. III. The agreement between the predicted and observed values for \(\Omega_{x}\) and \(\Omega_{B}\) is remarkable considering that the theory assumed a \(F_{g}=1/2\to F_{e}=3/2\) atom.
It is important to note that even though \(\Omega_{B}\) coincides with \(\Omega_{x}\), the spectral features at \(\Omega_{B}\) in Fig. 6(a) could not arise from intrawell oscillatory motion in the \(x\)-direction, owing to the fact that adjacent vibrational levels are of opposite parity, and hence the overlap integral of the probe operator (i.e., the lattice-probe interference term) between these two levels is zero (because the interference term goes as \(\vec{E}_{0}\cdot\vec{E}_{p}^{*}\) and is quadratic in \(x\) for a probe that propagates purely along \(\hat{z}\); here \(\vec{E}_{0}\) and \(\vec{E}_{p}\) are the lattice and probe electric field amplitudes, respectively [28]).
In this space periodic case \(\theta_{p}=0\), the three probe terms of (1) reduce to a single perturbation propagating with phase velocity \(v_{p}=\delta_{p}/k_{0}\), as in the 1D model (4) with \(k_{p}=k_{0}\), whose numerical results are shown in Fig. 3. In agreement with the experimental results of Fig. 6 (a), the theoretical model predicts a dominant peak at about \(\omega_{0}\), which is identified with \(\Omega_{x}\). The secondary peaks observed at other multiples of \(\omega_{0}\) in Fig. 3 for certain values of the driving amplitudes are absent in Fig. 6 (a), though.
Let us turn our attention, then, to the case where \(k_{p}\neq k_{0}\), shown in Fig. 6 (b). Equation (1) indicates that the probe driving produces three perturbations propagating with phase velocities \(v_{1}=\delta_{p}/(k_{0}+k_{l}\sin\theta_{p})\), \(v_{2}=-\delta_{p}/(k_{0}-k_{l}\sin\theta_{p})\), and \(v_{3}=\delta_{p}/(k_{l}\sin\theta_{p})\). The first two perturbations are the result of the interference between the \(\hat{y}\)-polarized probe and the \(\hat{y}\)-polarized lattice beams \(\vec{k}_{1}\), \(\vec{k}_{2}\), as depicted in Fig. 1. However, the third perturbation, being produced by the interference of the probe with the \(\hat{x}\)-polarized lattice beams \(\vec{k}_{3}\) and \(\vec{k}_{4}\), is not expected to show up in the transmission spectrum depicted in Fig. 6 (b) because of Doppler broadening [9; 28]. Specifically, the \(z\)-component of the motion makes the scattered field vary randomly, and is responsible for a wash-out of the spectral features related to the \(x\)-motion [9] due to the perturbation propagating with velocity \(v_{3}\). In other words, the pump-probe measurements shown here are sensitive only to the \(U^{(b)}\)-perturbation defined by (4) (i.e., case (b)), and are not sensitive to the
\(U^{(a)}\)-perturbation defined by (3) under case (a). As indicated in Sec. III, the simulations in Figs. 3 - 5 all pertain to case (b).
Case (b) refers to the first two perturbations in (1). Following the velocity matching argument discussed in Sec. III, we expect a peak for the driving frequency values where the intrinsic velocity \(\pm v_{0}\) (11) matches the velocity of the propagating perturbation, thus leading to two possible values for the probe detuning \(\delta_{p}\), located on either side of \(\Omega_{x}\):
\[\delta_{1} =\Omega_{x}\frac{\sin\theta_{x}+\sin\theta_{p}}{\sin\theta_{x}}, \tag{14}\] \[\delta_{2} =\Omega_{x}\frac{\sin\theta_{x}-\sin\theta_{p}}{\sin\theta_{x}}, \tag{15}\]
where we have identified the vibrational frequency \(\omega_{0}=\Omega_{x}\). The theoretical analysis of Sec. III also predicts a peak at a multiple of the vibrational frequency, especially around the central value at \(\Omega_{x}\), as seen in Fig. 4.
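Equations (14) and (15) can be evaluated directly for the probe angles explored experimentally; the sketch below uses the approximate value \(\Omega_{x}/2\pi\approx 60\) kHz quoted above.

```python
# Predicted Brillouin resonance positions, Eqs. (14)-(15), for probe
# angles in the experimentally explored range (theta_x = 25 deg).
import numpy as np

Omega_x = 60.0            # kHz, approximate value quoted in the text
theta_x = np.radians(25)
for theta_p_deg in (12.5, 15.0, 17.5, 20.0):
    tp = np.radians(theta_p_deg)
    d1 = Omega_x * (np.sin(theta_x) + np.sin(tp)) / np.sin(theta_x)
    d2 = Omega_x * (np.sin(theta_x) - np.sin(tp)) / np.sin(theta_x)
    print(f"theta_p = {theta_p_deg:>4} deg: Omega_B+ ~ {d1:5.1f} kHz,"
          f" Omega_B- ~ {d2:5.1f} kHz")
```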
In order to confirm the angle-dependencies in (14) and (15), we have carried out several measurements with varying values of the probe angle \(\theta_{p}\). The results are plotted in Fig. 7.
The solid lines in Fig. 7 (b) confirm the analytical predictions of Eqs. (14) and (15), with \(\delta_{1}\) and \(\delta_{2}\) explaining the observed peaks \(\Omega_{B+}\) and \(\Omega_{B-}\), respectively, that are seen in Fig. 7 (a) and Fig. 6 (b). The quantitative agreement is remarkable, especially taking into account that the theory is based on a 1D model with a simplified atomic transition.
The theory also offers an explanation [9] for the fact that the amplitudes of the \(\Omega_{B-}\) resonance are seen to be smaller than the \(\Omega_{B+}\) ones. Since the spatial period \(2\pi/(k_{0}-k_{l}\sin\theta_{p})\) associated with the \(\Omega_{B-}\) motion is larger than that of \(\Omega_{B+}\), \(2\pi/(k_{0}+k_{l}\sin\theta_{p})\), the sequence of half oscillations and well-transfers for \(\Omega_{B-}\) is more likely to be interrupted by random photon recoils associated with the Sisyphus process, causing a larger damping of the directed propagation. For the \(\Omega_{B+}\) resonance, \(k_{p}=k_{0}+k_{l}\sin\theta_{p}\), yielding \(k_{p}/k_{p0}\)-values corresponding to the four angles from \(12.5^{0}\) to \(20^{0}\) of 3.4, 3.6, 3.8, and 4.1, respectively. Note that the first two values are close to the \(k_{p}/k_{p0}\)-value of 3.5 in Fig. 5, which predicts peaks in the directed propagation at \(\omega_{0}\), \((\delta_{p})_{vm}\approx 1.5\omega_{0}\), and \(2\omega_{0}\). The peak at \(\omega_{0}=\Omega_{x}\approx 2\pi\,(60\mathrm{kHz})\) is certainly observed in Fig. 7 (a) and (b), as are single peaks at approximately 105 kHz and 110 kHz for \(\theta_{p}=12.5^{0}\) and \(15^{0}\), respectively. The experiment does not resolve each of these single peaks into separate components at the numerically predicted positions, but the observed single-peak values lie between these predictions, not far from either. Thus the experimental findings in Fig. 7 (a-b) and the numerical simulations in Fig. 5 do not contradict each other.
Finally, Figs. 7(a) and (b) consistently show a peak at \(\delta_{p}=\Omega_{x}\), regardless of the specific value of \(\theta_{p}\). While symmetry considerations for \(\theta_{p}=0\) forbid the appearance of a resonance in the spectrum due to localized vibrations about the well bottoms, that is not the case for \(\theta_{p}\neq 0\). However, as shown in Sec. III for the 1D theoretical model, the observed peak may well also be due to directed motion.
## V Conclusions
We have studied the atomic waves generated in a dissipative optical lattice driven by a weak probe beam, which produces a directed current that travels perpendicular to the direction of travel of the probe.
On the theoretical side, a minimal 1D model of the
Figure 7: (a) Pump-probe spectra taken at different probe angles \(\theta_{p}\) for the lattice in Fig. 1(a). The vertical lines demarcate the fixed vibrational frequencies \(\Omega_{z}\) and \(\Omega_{x}\). At \(\theta_{p}=0\), the gray arrow denotes the Brillouin resonance \(\Omega_{B}\) that coincides with \(\Omega_{x}\). At \(\theta_{p}\neq 0\), the black and white arrows denote Brillouin resonances \(\Omega_{B+}\) and \(\Omega_{B-}\), respectively. (b) \(\Omega_{B+}\) and \(\Omega_{B-}\) depart from \(\Omega_{x}\) in accordance with Eqs. (14) and (15), respectively: The solid lines are the theoretical predictions from the equations. The two black-circled datapoints in (b) correspond to the spectrum in Fig. 6(b). It takes 10 ms to generate those spectra, and each datapoint here is an average of at least five scans.
experimental setup is studied in detail to elucidate the mechanisms of transport. An analytical method based on a Fourier decomposition of the current is applied to study the case when the driving potential perturbation has a spatial period which is the same as that of the underlying lattice, a case of space periodic driving, and when both periods are incommensurate, the regime of space quasiperiodic driving.
It is numerically demonstrated that the transition between both regimes is smooth, despite the fact that the expansion for one of the probe perturbations is different in each regime. When the frequency of the probe is varied, the current, and more specifically the mode amplitudes that contribute to the current, show several peaks. One set is identified with multiples of the intrinsic frequency \(\omega_{0}\), and another is associated with a velocity matching mechanism, in which the velocity of the propagating modulation matches the average velocity of the atom in its intrawell oscillation. Both mechanisms are seen at play in the space periodic and the space quasiperiodic regimes, even simultaneously, since the shift due to velocity matching is observed to take place through local maxima at fractional ratios of the intrinsic frequency \(\omega_{0}\).
The pump-probe experiments confirm many of the above predictions. In the case of space-periodic driving \(k_{p}=k_{0}\), with a weak beam that is aligned with the lattice symmetry axis, the probe transmission spectrum indeed reveals a dominant peak at \(\omega_{0}\), shown in Fig. 6(a), that corresponds to a propagating Brillouin-like mode in a direction perpendicular to probe propagation. These observations are borne out by the numerical predictions in Fig. 3, although the secondary peaks predicted at multiples and sub-multiples of \(\omega_{0}\) were not observed in this work. In the case of driving with the weak beam incident at an angle \(\theta_{p}\) to the lattice symmetry axis, the spectrum reveals a peak at \(\omega_{0}\), and also additional peaks where the velocity matching mechanism above is satisfied, as shown in Fig. 6(b). The angle-dependence of these additional spectral features, which correspond to two distinct propagating modulations with different spatial periods, is shown in Fig. 7 to be in accordance with the analytical predictions. These findings are not contradicted by the numerical predictions in Figs. 4 and 5, although it was not experimentally possible to tease apart the contributions to the observed resonances made by \((\delta_{p})_{vm}\) versus those made by multiples and sub-multiples of \(\omega_{0}\). We hope that the new understanding of Brillouin transport modes gained here will pave the way toward ratcheting of cold atoms confined in a weakly modulated optical lattice along a precisely predictable, arbitrary direction [7].
## VI Acknowledgments
This work is supported by the Army Research Office under award/contract number W911NF2110120 and the Ministerio de Ciencia e Innovacion of Spain, Grant No. PID2019-105316GB-I00 (DC). We thank the Instrumentation Laboratory at Miami University for electronics and LabView support. We gratefully acknowledge invaluable assistance in the lab from Ian Dilyard and Jordan Churi.
## Appendix A Further analytical results
The setup defined by case (a) in (3) requires different expressions for the quasiperiodic (\(k_{p}/k_{0}\) irrational) and periodic \(k_{p}=k_{0}\) cases. The calculation proceeds along the same lines sketched in [31]. The atomic state symmetry produced by the probe for setup (3) is given by
\[P_{-}[l,n,m]=(-1)^{n+l}P_{+}[l,n,m], \tag{A1}\]
which can also be written as
\[P_{-}[l,n,m]=(-1)^{n+m}P_{+}[l,n,m], \tag{A2}\]
the latter being a useful expression in the special (periodic) case \(k_{p}=k_{0}\).
In the quasiperiodic case we find
\[\langle v\rangle_{(a)}=\frac{m_{a}}{m_{a}F_{0}g_{1}-2D_{0}k_{0}}\Bigg{[}\] \[-\frac{\mathop{\rm Im}\left[P_{+}[0,1,0]\right]F_{0}\left(8g_{0}^{2}-4g_{2}^{2}/3+F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[+\frac{\mathop{\rm Im}\left[P_{+}[0,2,0]\right]F_{0}\left(-4g_{0}g_{1}-8g_{1}g_{2}/3+F_{0}k_{0}/m_{a}\right)}{k_{0}}\] \[+\frac{\mathop{\rm Im}\left[P_{+}[0,3,0]\right]F_{0}\left(-16g_{0}g_{1}/3-2g_{2}^{2}-3F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[+\frac{\mathop{\rm Im}\left[P_{+}[0,4,0]\right]F_{0}\left(-2g_{1}g_{2}/3+F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[-\frac{\mathop{\rm Im}\left[P_{+}[0,5,0]\right]F_{0}2g_{2}^{2}}{3k_{0}}+\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,-4,1]\right](2F_{p}g_{2}^{2}k_{0})}{(2k_{0}-k_{p})k_{p}}\] \[+\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,-3,1]\right]2F_{p}g_{1}g_{2}k_{0}}{(2k_{0}-k_{p})k_{p}}\] \[+\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,-2,1]\right]}{(2k_{0}-k_{p})k_{p}^{2}}[8F_{p}g_{0}g_{2}(-k_{0}^{2}+k_{0}k_{p})\] \[+F_{0}F_{p}(2k_{0}^{2}k_{p}-2k_{0}k_{p}^{2}+k_{p}^{3}/2)/m_{a}]\] \[-\frac{\mathop{\rm Re}\left[e^{i\phi_{p}}P_{+}[1,-2,1]\right]4F_{p}g_{2}k_{0}(k_{0}-k_{p})\delta_{p}}{(2k_{0}-k_{p})k_{p}^{2}}\] \[+\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,-1,1]\right]}{(2k_{0}-k_{p})k_{p}^{2}}\,F_{p}\Bigg{[}4g_{0}g_{1}k_{0}(-2k_{0}+k_{p})\] \[+2g_{1}g_{2}k_{0}k_{p}+\frac{F_{0}}{m_{a}}(-2k_{0}^{2}k_{p}+3k_{0}k_{p}^{2}-k_{p}^{3})\Bigg{]}\] \[-\frac{\mathop{\rm Re}\left[e^{i\phi_{p}}P_{+}[1,-1,1]\right]2F_{p}g_{1}k_{0}(k_{0}-2k_{p})\delta_{p}}{(k_{0}-k_{p})k_{p}^{2}}\] \[-\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,0,1]\right]}{(2k_{0}-k_{p})k_{p}^{2}(2k_{0}+k_{p})}F_{p}[g_{0}^{2}(32k_{0}^{3}-8k_{0}k_{p}^{2})\] \[-4g_{2}^{2}k_{0}k_{p}^{2}+\delta_{p}^{2}(-8k_{0}^{3}+2k_{0}k_{p}^{2})]\] \[-\frac{\mathop{\rm Re}\left[e^{i\phi_{p}}P_{+}[1,0,1]\right]}{k_{p}^{2}}\] \[+\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,1,1]\right]}{(2k_{0}+k_{p})k_{p}^{2}}F_{p}[-4g_{0}g_{1}k_{0}(2k_{0}+k_{p})\] \[-2g_{1}g_{2}k_{0}k_{p}+F_{0}(2k_{0}^{2}k_{p}+3k_{0}k_{p}^{2}+k_{p}^{3})/m_{a}]\] \[-\frac{\mathop{\rm Re}\left[e^{i\phi_{p}}P_{+}[1,1,1]\right]2F_{p}g_{1}k_{0}(k_{0}+k_{p})\delta_{p}}{(k_{0}+k_{p})k_{p}^{2}}\] \[+\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,2,1]\right]}{(2k_{0}+k_{p})k_{p}^{2}}[8F_{p}g_{0}g_{2}(-k_{0}^{2}-k_{0}k_{p})\] \[+F_{0}F_{p}(-2k_{0}^{2}k_{p}-2k_{0}k_{p}^{2}-k_{p}^{3}/2)/m_{a}]\] \[-\frac{\mathop{\rm Re}\left[e^{i\phi_{p}}P_{+}[1,2,1]\right]4F_{p}g_{2}k_{0}(k_{0}+k_{p})\delta_{p}}{(2k_{0}+k_{p})k_{p}^{2}}\] \[-\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,3,1]\right]2F_{p}g_{1}g_{2}k_{0}}{(2k_{0}+k_{p})k_{p}}\] \[-\frac{\mathop{\rm Im}\left[e^{i\phi_{p}}P_{+}[1,4,1]\right](2F_{p}g_{2}^{2}k_{0})}{(2k_{0}+k_{p})k_{p}}\] \[+\frac{\mathop{\rm Im}\left[e^{i2\phi_{p}}P_{+}[2,0,2]\right]F_{p}^{2}k_{0}}{m_{a}k_{p}}\Bigg{]}. \tag{A3}\]
Equation (A3) is not valid when the ratio \(k_{p}/k_{0}\) is a rational number. In this case there are resonances which require a special derivation [31]. An indication of this fact is that the coefficient associated with the mode amplitude \(\mathop{\rm Re}[P_{+}[1,-1,1]]\) goes to infinity in the limit \(k_{p}\to k_{0}\), whereas the actual coefficient when computed directly in the case \(k_{p}=k_{0}\) remains finite, as shown in Eq. (A4).
The full expansion in the periodic case \(k_{p}=k_{0}\) is given by
\[\langle v\rangle_{(a)}=\frac{m_{a}}{m_{a}F_{0}g_{1}-2D_{0}k_{0}}\Bigg{[}\] \[-\frac{\text{Re}\left[P_{+}[0,1,0]\right]F_{0}\left(F_{p}^{2}g_{1}\right)}{m_{a}\delta_{p}}\] \[-\frac{\text{Im}\left[P_{+}[0,1,0]\right]F_{0}\left(8g_{0}^{2}-4g_{2}^{2}/3+F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[+\frac{\text{Im}\left[P_{+}[0,2,0]\right]F_{0}\left(-4g_{0}g_{1}-8g_{1}g_{2}/3+F_{0}k_{0}/m_{a}\right)}{k_{0}}\] \[+\frac{\text{Im}\left[P_{+}[0,3,0]\right]F_{0}\left(-16g_{0}g_{1}/3-2g_{2}^{2}-3F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[+\frac{\text{Im}\left[P_{+}[0,4,0]\right]F_{0}\left(-2g_{1}g_{2}/3+F_{0}k_{0}/(2m_{a})\right)}{k_{0}}\] \[-\frac{\text{Im}\left[P_{+}[0,5,0]\right]F_{0}2g_{2}^{2}}{3k_{0}}+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,-4,1]\right]\left(2F_{p}g_{2}^{2}\right)}{k_{0}}\] \[+\frac{\text{Re}\left[e^{i\phi_{p}}P_{+}[1,-3,1]\right]\left(F_{0}F_{p}g_{1}\right)}{m_{a}\delta_{p}}\] \[+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,-3,1]\right]\left(2F_{p}g_{1}g_{2}\right)}{k_{0}}\] \[+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,-2,1]\right]F_{0}F_{p}}{2m_{a}\delta_{p}}\] \[+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,-1,1]\right]F_{0}F_{p}}{k_{0}}\] \[-\frac{\text{Re}\left[e^{i\phi_{p}}P_{+}[1,-2,1]\right]F_{0}F_{p}g_{1}}{m_{a}\delta_{p}}\] \[+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,-1,1]\right]\left(-F_{0}F_{p}g_{1}k_{0}/\delta_{p}+8F_{p}g_{0}\delta_{p}\right)}{k_{0}}\] \[+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,1,1]\right]}{3k_{0}}F_{p}[-12g_{0}g_{1}-2g_{1}g_{2}+6F_{0}k_{0}/m_{a}]\] \[-\frac{\text{Re}\left[e^{i\phi_{p}}P_{+}[1,1,1]\right]\left(F_{p}F_{0}g_{1}k_{0}/\delta_{p}+3F_{p}g_{1}\delta_{p}\right)}{k_{0}}\] \[+\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,2,1]\right]F_{p}(-32g_{0}g_{2}-9F_{0}k_{0}/m_{a})}{k_{0}}\] \[-\frac{\text{Re}\left[e^{i\phi_{p}}P_{+}[1,2,1]\right]8F_{p}g_{2}\delta_{p}}{3k_{0}}\] \[-\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,3,1]\right]2F_{p}g_{1}g_{2}}{3k_{0}}\] \[-\frac{\text{Im}\left[e^{i\phi_{p}}P_{+}[1,4,1]\right]\left(2F_{p}g_{2}^{2}\right)}{3k_{0}}\] \[+\frac{\text{Im}\left[e^{i2\phi_{p}}P_{+}[2,-1,2]\right]F_{p}^{2}g_{1}}{\delta_{p}}\] \[+\frac{\text{Im}\left[e^{i2\phi_{p}}P_{+}[2,0,2]\right]F_{p}^{2}}{m_{a}}\Bigg{]}\quad(k_{p}=k_{0}). \tag{A4}\]
2309.13136 | Contextual Emotion Estimation from Image Captions | Emotion estimation in images is a challenging task, typically using computer
vision methods to directly estimate people's emotions using face, body pose and
contextual cues. In this paper, we explore whether Large Language Models (LLMs)
can support the contextual emotion estimation task, by first captioning images,
then using an LLM for inference. First, we must understand: how well do LLMs
perceive human emotions? And which parts of the information enable them to
determine emotions? One initial challenge is to construct a caption that
describes a person within a scene with information relevant for emotion
perception. Towards this goal, we propose a set of natural language descriptors
for faces, bodies, interactions, and environments. We use them to manually
generate captions and emotion annotations for a subset of 331 images from the
EMOTIC dataset. These captions offer an interpretable representation for
emotion estimation, towards understanding how elements of a scene affect
emotion perception in LLMs and beyond. Secondly, we test the capability of a
large language model to infer an emotion from the resulting image captions. We
find that GPT-3.5, specifically the text-davinci-003 model, provides
surprisingly reasonable emotion predictions consistent with human annotations,
but accuracy can depend on the emotion concept. Overall, the results suggest
promise in the image captioning and LLM approach. | Vera Yang, Archita Srivastava, Yasaman Etesam, Chuxuan Zhang, Angelica Lim | 2023-09-22T18:44:34Z | http://arxiv.org/abs/2309.13136v1 | # Contextual Emotion Estimation from Image Captions
###### Abstract
Emotion estimation in images is a challenging task, typically using computer vision methods to directly estimate people's emotions using face, body pose and contextual cues. In this paper, we explore whether Large Language Models (LLMs) can support the contextual emotion estimation task, by first captioning images, then using an LLM for inference. First, we must understand: how well do LLMs perceive human emotions? And which parts of the information enable them to determine emotions? One initial challenge is to construct a caption that describes a person within a scene with information relevant for emotion perception. Towards this goal, we propose a set of natural language descriptors for faces, bodies, interactions, and environments. We use them to manually generate captions and emotion annotations for a subset of 331 images from the EMOTIC dataset. These captions offer an interpretable representation for emotion estimation, towards understanding how elements of a scene affect emotion perception in LLMs and beyond. Secondly, we test the capability of a large language model to infer an emotion from the resulting image captions. We find that GPT-3.5, specifically the text-davinci-003 model, provides surprisingly reasonable emotion predictions consistent with human annotations, but accuracy can depend on the emotion concept. Overall, the results suggest promise in the image captioning and LLM approach.
Large language model, emotion estimation, image captioning, context, ChatGPT, GPT-3.5 +
Footnote †: This work was supported by NSERC Discovery Grant 06908-2019.
## I Introduction
_"She sat in a hospital hallway, with an empty stare and slumped shoulders."_ How does this person feel? Writers have long known that describing a scene with carefully selected words, without specifically naming the emotion, is an effective way of moving their reader. The ability to place ourselves in the shoes of another underlies our ability to infer their emotion, towards taking socially appropriate and empathetic actions. Similarly, a photo can capture the emotion of a person in a scene. Automatic emotion estimation systems based on images or videos have the potential to facilitate better human-machine interaction, yet performance in the wild is still poor [1].
Many emotion recognition studies focus on using facial [2] or body [3] features. The context in which emotions are expressed can also affect the perception of emotions [4, 5, 6, 7], whether the face is visible or covered in the image. As a result, the context-based emotion recognition task was introduced. It was elaborated with the introduction of the EMOTIC dataset [8], and there has been an increased focus on improving accuracy on this task [9, 10, 11]. These models utilize a variety of inputs beyond facial data by including, for example, body posture and context, which encompass factors such as the presence of other humans or environmental aspects. Context-based emotion recognition in audio-visual media has also been studied [12], but how exactly specific cues contribute to the detected emotion remains a relatively unexplored area [13].
In recent years, large language models (LLMs) have emerged as a hot topic in the field of Natural Language Processing (NLP). This growth can be attributed to the introduction of transformers in 2017 by Vaswani et al. [14], which provided a more efficient way of processing sequences of data. Subsequently, other researchers have introduced various methods based on transformer encoder/decoder structures and different pre-training techniques [15, 16, 17]. These approaches have allowed sophisticated language models to perform a range of tasks with high accuracy and efficiency. These improvements in LLMs paved the way not only for advances on NLP problems, but also on many multi-modal problems such as Visual Question Answering [18] and Caption Generation [19]. This success can be attributed to the ability of these models to understand human language and to store knowledge in their extensive neural networks. At the same time, the extent to which these language models can perceive human emotion remains an open question.
In this study, we aim to answer the following questions: how well do LLMs perceive human emotions? And which parts of the information enable them to determine emotions? We first created an annotation interface that allows for annotating images with various factors related to emotion, such as physical signs, social interactions, environmental cues, and demographic information. Using this information, we created an image caption describing a person's facial expressions and body poses, their social contexts with other people in an image, and their environmental surrounding. We then passed the image caption to a GPT-3.5 model to predict an emotion from the text description only.
Footnote 1: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
We conducted an emotion prediction experiment using full image captions, followed by two ablation studies using cropped image captions. For the ablation studies, we altered
our image captions by selectively removing certain types of contextual information, such as social interactions and environmental features. Through the ablation studies, we aim to evaluate the impact of each type of data on the emotion detection output. To summarize, our contributions are:
* Compiling a set of physical signals for each of our emotion labels using LLMs and a writer's thesaurus [20].
* Developing an interface to annotate the image data, and collecting emotion labels and descriptions showing the physical signals, human interactions, and environmental features for each person.
* Providing an initial analysis of GPT-3.5's ability to predict human emotion from image captions, and of how well it can predict the emotion given information on physical signals and contextual information.
* Analyzing the importance of context in how large language models perceive emotions, and how different types of context affect the prediction.
## II Methodology
The ultimate goal of our research is to explore whether automatic emotion estimation of people in images could be implemented by first captioning images appropriately, then feeding the caption to a large language model for inference on the text. Our approach comprises three steps: a) generating a large list of physical signals used for writing about emotion, b) annotating images using these signals along with questions about demographic information, interaction, and environment, and c) using a large language model to predict an emotion based on the image caption.
### _Generation of Physical Signals_
Because existing algorithms perform relatively poorly on negative labels compared to positive ones (e.g. 14.5% vs. 40.3% [8]), we focused on the 13 negative emotion labels from the EMOTIC dataset [8]: Anger, Annoyance, Aversion, Confusion, Disapproval, Disconnection, Disquietment, Embarrassment, Fatigue, Fear, Pain, Sadness, and Suffering. To mitigate an annotator's potential confusion with the labels Pain and Suffering, we merged and replaced these labels with Pain/Suffering - Emotional and Pain/Suffering - Physical.
As a first step, we generated descriptions of physical signals indicative of our 13 target emotions. We used an Emotion Thesaurus, "A Writer's Guide to Character Expression" by Becca Puglisi and Angela Ackerman [20], which provided a range of physical signals associated with commonly recognized emotions including Anger, Annoyance, Confusion, Embarrassment, Fear, and Sadness. For emotions not listed in the book, we utilized the Large Language Models ChatGPT and GPT-3.5 to generate a list of physical descriptions or expressions associated with each emotion label. The prompts used to generate the physical descriptions were of the form, _"List physical cues/physical expressions that would indicate the emotion of 'disapproval' in an image."_ and _"Give a list of facial expressions/physical descriptions/physical movements that might indicate that a person is feeling 'fatigued."_
The generated descriptions were then filtered and combined to create a comprehensive set of physical signals for our set of emotion labels. This resulted in a total of 222 distinct physical signals that could indicate the emotion an individual is experiencing in an image. It should be noted that the remainder of the study did not assume that any specific physical signals were associated with any particular emotion.
Fig. 1: Manual Annotation for the given image produces the following caption: _Sean is a male adult. Sean is a(n) passenger. Sean is or has raising eyebrows, side-eyeing. Mia is a child and she is sitting behind Sean and kicking Sean’s chair. Sean’s physical environment is on an airplane._
### _Annotations_
The interface shown in Fig. 1 was created to facilitate image annotation. To assess a large language model's ability to predict human emotions from images, we annotated a set of images from the EMOTIC dataset [8]. These images contained bounding boxes of different colours surrounding the people in the scene. This allowed us to focus on one person (e.g. marked with a "red" bounding box) at a time within an image.
During the annotation process, both physical signals and contextual components were considered. To make the annotation process easier, we divided the physical signals into multiple categories based on body parts, and annotators could use checkboxes to select relevant descriptions. The annotator could also tag the person within a bounding box with various attributes, including their perceived age group, perceived sex, and social identity or occupation.
Finally, contextual information including factors such as their social interactions (e.g. alone, surrounded by people), social relationships with others in the image (e.g. mother and daughter, husband and wife), and their environmental setting could be input into an open text box. In the end, the annotation interface generated an appropriate image caption based on all the chosen tags (e.g. creating a sentence with a first name), allowing the human annotator to double-check the caption before saving their work.
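To make the template concrete, the following is a minimal sketch of how such a caption could be assembled from an annotation record; the record fields and the `build_caption` helper are illustrative, not the interface's actual data model.

```python
# Hedged sketch of the caption-assembly step; field names are hypothetical.
def build_caption(ann: dict) -> str:
    parts = [f"{ann['name']} is a {ann['sex']} {ann['age_group']}."]
    if ann.get("identity"):                  # social identity or occupation
        parts.append(f"{ann['name']} is a(n) {ann['identity']}.")
    if ann.get("signals"):                   # checked physical-signal boxes
        parts.append(f"{ann['name']} is or has {', '.join(ann['signals'])}.")
    parts += ann.get("interactions", [])     # free-text interaction sentences
    if ann.get("environment"):
        parts.append(f"{ann['name']}'s physical environment is {ann['environment']}.")
    return " ".join(parts)

print(build_caption({
    "name": "Sean", "sex": "male", "age_group": "adult", "identity": "passenger",
    "signals": ["raising eyebrows", "side-eyeing"],
    "interactions": ["Mia is a child and she is sitting behind Sean and kicking Sean's chair."],
    "environment": "on an airplane",
}))  # reproduces the caption of Fig. 1
```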
Out of 222 physical signals proposed to the annotators, 153 were ultimately used to describe the images in the dataset in this study and are reported in Table I. A full listing of interactions and environmental contexts derived from annotators is provided in Supplementary Materials.
### _Assessing Prediction Abilities of Large Language Models_
Once the annotations were complete, GPT-3.5 was used to predict emotion labels with the help of a prompt. The prompt was structured to elicit single emotion prediction when presented with an image annotation.
The prompt was as follows (considering the annotation from Fig. 1): _"Sean is a male adult. Sean is a(n) passenger. Sean is or has raising eyebrows, side-eyeing. Mia is a child and she is sitting behind Sean and kicking Sean's chair. Sean's physical environment is on an airplane. Sean is likely feeling a high level of {placeholder}? Choose one emotion from the list: Anger, Annoyance, Aversion, Confusion, Disapproval, Disconnection, Disquietment, Embarrassment, Fatigue, Fear, Pain/Suffering (emotional), Pain/Suffering (physical), and Sadness."_
To evaluate the performance of the models, we compared the LLM's predictions to the ground truth of the images established by the annotators. For instance, from the manual annotation as shown in Fig. 1, the ground truth for the person within the green bounding box was determined to be "Annoyance". It should be noted that the ground truth labels in our study were different from that of the EMOTIC dataset [8], which was a multi-label dataset. For this reason, it is
not straightforward to evaluate the existing multilabel baseline algorithms on our dataset.
## III Experiments
We conducted three experiments with our manually annotated image captions to test a large language model's ability to estimate human emotions. The first experiment used the full image captions that included all the visually contextual information in an image. We then performed two ablation studies to test the contribution of social interactions and environmental contexts in predicting emotions. Table II shows how every image caption could differ between experiments.
### _Dataset and Annotation_
The image samples used in this study are from the EMOTIC dataset [8]. An image could contain one or more bounding boxes and each bounding box enclosed a person. Every person could depict multiple emotions, but only one emotion label mutually agreed upon by two annotators was picked as the ground truth and counted towards a sample for that emotion. If an image contained multiple people where one person showed _Anger_ while the other person showed _Fear_, then the image counted towards a sample for both _Anger_ and _Fear_. If two people in an image were showing the same emotion such as _Sadness_, then the image was counted twice as a sample for _Sadness_. Table III shows the sample distribution. To summarize, our sample dataset2 consisted of:
Footnote 2: [https://rosielab.github.io/emotion-captions/](https://rosielab.github.io/emotion-captions/)
* 331 unique images
* 360 samples
* 360 captions generated through manual annotation
* Two types of images: _One person_ and _Multiple people_
All emotion categories had a sample size of 30, except for _Confusion_ (16) and _Embarrassment_ (14), and the ground truth was established by mutual agreement of two annotators.
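The counting rule above can be stated compactly in code; a small sketch, assuming hypothetical per-person annotation records:

```python
# Hedged sketch of the sample-counting rule: each person whose label both
# annotators agree on contributes one sample, so one image can yield several.
from collections import Counter

# (image_id, person_box, label_annotator1, label_annotator2) -- toy records
annotations = [
    ("img_001", "red",   "Anger",   "Anger"),
    ("img_001", "green", "Fear",    "Fear"),
    ("img_002", "red",   "Sadness", "Sadness"),
    ("img_002", "blue",  "Sadness", "Sadness"),  # same image counted twice
]

samples = Counter(l1 for _, _, l1, l2 in annotations if l1 == l2)
print(samples)  # Counter({'Sadness': 2, 'Anger': 1, 'Fear': 1})
```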
### _Model Parameters and Stability_
We used OpenAI's Completions API 3 to provide our image captions as prompts to the GPT-3.5 model, and it returned predicted emotions through completions. The model version was _text-davinci-003_, which is part of the GPT3.5 family. To ensure that the results generated from GPT-3.5 were stable and reproducible, we set the model's _temperature_ parameter to 0, allowing the model to give a nearly deterministic answer to every prompt. To further the reproducibility of our results, we ran the GPT-3.5 model over each caption ten times to generate a list of ten predicted emotions. We limited the emotions that GPT-3.5 could output to the 13 negative emotions that we focused on in this study. For each caption, the emotion with the maximum number of occurrences was selected as the final prediction. This prediction generation and selection process was done for all three experiments.
Footnote 3: [https://platform.openai.com/docs/api-reference/completions](https://platform.openai.com/docs/api-reference/completions)
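A minimal sketch of this prediction loop, assuming the legacy 0.x `openai` Python client (the Completions endpoint has since been deprecated); the exact prompt wording below is illustrative and follows Section II-C, with the person's name substituted into the placeholder sentence, and the string matching on the model's reply is deliberately naive:

```python
# Hedged sketch: ten temperature-0 completions per caption, majority vote.
import openai
from collections import Counter

openai.api_key = "sk-..."  # assumed to be provided by the user

EMOTIONS = ["Anger", "Annoyance", "Aversion", "Confusion", "Disapproval",
            "Disconnection", "Disquietment", "Embarrassment", "Fatigue", "Fear",
            "Pain/Suffering (emotional)", "Pain/Suffering (physical)", "Sadness"]

def predict_emotion(caption, name, n_runs=10):
    prompt = (f"{caption} {name} is likely feeling a high level of what? "
              f"Choose one emotion from the list: {', '.join(EMOTIONS)}.")
    votes = []
    for _ in range(n_runs):                       # ten runs, temperature 0
        resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                        temperature=0, max_tokens=16)
        text = resp["choices"][0]["text"].lower()
        votes.append(next((e for e in EMOTIONS if e.lower() in text), "unmatched"))
    return Counter(votes).most_common(1)[0][0]    # majority vote
```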
### _Experiment A: Full Captions_
Experiment A used the full image captions generated through manual annotation. A full caption includes a person's perceived age, perceived sex, and social identity if apparent, facial expression and body poses if applicable. It also includes their interactions with other people and their environmental surrounding if applicable. Table II shows a full caption for Fig. 1.
### _Experiment B: Ablation Study on Interactions with People_
Experiment B studies how describing a person's interactions and relationships with other people in an image contributes to determining a person's emotional state. It used the same dataset as Experiment A. We removed all the information about a person's social interactions and relationships in an image from the full captions. Thus, the captions used in this experiment contained only perceived age, perceived sex, applicable social identity, face and body signals, and environment. Table II shows a caption without social interaction for Fig. 1. Fig. 2 (a) and (b) show two examples where social interactions may need to be considered to fully understand _Embarrassment_ depicted in the images.
### _Experiment C: Ablation Study on Environmental Contexts_
Experiment C studies how describing a person's environmental context in an image contributes to predicting a person's emotion. Environmental contexts can range from location and time to different types of animals and activities, and this information provides valuable insight into what a person may be feeling. Especially when facial and body features are missing in an image, we can rely on scene context to predict an emotion. Fig. 2 (c) and (d) show two examples where a person's face is not visible in the images, and therefore, scene context becomes important to accurately predict their emotion.
This experiment used the same dataset as Experiment A, but all information about a person's environment and physical surrounding was removed from the caption. Therefore, the captions used in the experiment contained only perceived age, perceived sex, applicable social identity, face and body signals, and interactions and relationships with others. Table II shows a caption without environmental context for Fig. 1.
## IV Results and Analysis
The results of our GPT-3.5 emotion prediction are shown in Table IV. The table contains precision, recall, and F1 score for each emotion and the total accuracy for each experiment. Experiment A with full captions has the highest accuracy, and Experiment C with environments removed has the lowest. The confusion matrix results are in Fig. 3, Fig. 4, and Fig. 5. From the confusion matrix for full captions in Fig. 3, we note that _Anger_ and _Sadness_ were the most frequently predicted labels but, according to F1 scores, _Physical Pain/Suffering_ was estimated best overall. GPT-3.5 was not able to predict _Aversion_. _Disconnection_ and _Disquietment_ were also not well recognized. _Emotional Pain/Suffering_ was frequently recognized as _Sadness_, which may be reasonable. _Annoyance_ and _Confusion_ were often recognized as _Disapproval_. _Fear_ appeared to need environmental cues to be well predicted. _Disapproval_ and _Fatigue_ seem not to be impacted by social and environmental contexts. _Embarrassment_ was fairly well
Fig. 3: Confusion matrix from the experiment using full captions.
predicted with social interactions.
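The per-emotion scores and confusion matrices of this section can be reproduced from the prediction lists with standard tooling; a sketch with scikit-learn, using toy labels in place of our 360 samples:

```python
# Sketch of the evaluation step: per-emotion precision/recall/F1, overall
# accuracy, and the confusion matrix; y_true are the annotator ground truths,
# y_pred are GPT-3.5's majority-vote predictions (toy values below).
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

y_true = ["Anger", "Fear", "Sadness", "Annoyance"]           # toy ground truth
y_pred = ["Anger", "Disapproval", "Sadness", "Disapproval"]  # toy predictions

labels = sorted(set(y_true) | set(y_pred))
print(classification_report(y_true, y_pred, labels=labels, zero_division=0))
print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=labels))
```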
### _Importance of Interactions in Emotion Estimation_
The lack of context about a person's social interactions with other people seems to impact how _Embarrassment_ was perceived by GPT-3.5 the most. Its F1 score for Experiment B without interactions is \(0.33\), which is lower than in the two experiments with interactions (\(0.41\) and \(0.38\)). A potential reason is that in all the sample images annotated as _Embarrassment_, as shown in Fig. 2 (a) and (b), multiple people were present, and there were interactions between the people. In this case, removing social interaction contexts from the captions removes a critical piece of information that may indicate a person is feeling embarrassed.
_Physical Pain/Suffering_ is another emotion that may need social interactions to be well recognized. Its F1 score dropped from \(0.73\) with full captions to \(0.52\) without interaction and was predicted as _Fear_ by GPT-3.5 more times when compared to full captions.
### _Importance of Environments in Emotion Estimation_
The lack of description about environment in the image captions seemed to impact _Physical Pain/Suffering_ the most. The F1 score dropped from \(0.73\) with full captions to \(0.46\) and it is even lower than the F1 score for captions without interactions. As a result, we observed that _Physical Pain/Suffering_ needed both the interaction and environment descriptions to be well recognized by GPT-3.5, especially environments out of the two. An example is shown in Fig. 6 (a) where the predicted emotion changed from _Physical Pain/Suffering_ to _Disapproval_ after removing the environmental description. The full caption for this image with the removed part in italics is: Jack is a male adult. Jack is or has frowning, rubbing the back. _Jack's physical environment is on a bed with medication on the side._
Another emotion that may benefit from environmental context is _Fear_. Its F1 score dropped from \(0.52\) and \(0.48\) with
Fig. 4: Three emotions (Excitement, Happiness, and Joy) that were not on the list of emotions we provided to GPT-3.5 to choose from were predicted.
Fig. 5: Two emotions (Happiness and Love) that were not on the list of emotions we provided to GPT-3.5 to choose from were predicted.
Fig. 6: Image (a) shows that when _medication on the side_ was removed from the full caption, the predicted emotion changed from Physical Pain/Suffering to Disapproval. Image (b) shows that when _right in front of an alien hand in the dark_ was removed from the full caption, the predicted emotion changed from Fear to Disapproval.
Fig. 7: Examples of images where new positive emotions such as Excitement, Happiness, and Love were predicted by GPT-3.5 when interactions and environments were removed from the captions. The first emotion above each image was generated using full captions.
environments to \(0.34\) without environments. An example is shown in Fig. 6 (b) where the predicted emotion changed from _Fear_ to _Disapproval_. The full caption for this image with the removed part in italics is: Chloe is a female adult. Chloe is or has frowning, open mouth. _Chloe's physical environment is right in front of an alien hand in the dark._
### _Importance of Facial and Body Signals_
Across the three experiments, we noticed _Anger_ was well recognized as _Anger_ by GPT-3.5 with F1 scores of \(0.67\), \(0.63\), and \(0.56\). A reason for this may be that our samples had distinct facial expressions and body postures that differentiated it from the other emotions, like furrowed eyebrows, gritting teeth, and wrinkling nose.
GPT-3.5's understanding of _Disapproval_ also seems to not be affected by the existence of interaction and environment descriptions in the captions. The F1 scores are \(0.32\), \(0.31\), and \(0.32\). This can suggest that _Disapproval_ was recognized through a person's facial expressions and body poses more than a person's physical surrounding. Body gestures such as crossed arms, pointing finger, and thumbs down may all be indicators of _Disapproval_ to GPT-3.5.
_Sadness_ was almost always predicted as _Sadness_, but _Emotional Pain/Suffering_ was frequently predicted as _Sadness_ in all three experiments as well. In fact, the F1 scores for _Emotional Pain/Suffering_ are only \(0.06\), \(0\), and \(0.05\). This may indicate that these two emotions share similar physical signals, such as crying, downturned mouth, and tilting head downward, and thus GPT-3.5 was not capable of differentiating the two emotions. Or, it could be that GPT-3.5 tends to select the more commonly known emotion from the list that it was provided.
### _New Emotions Predicted by GPT-3.5_
Interestingly, four positive emotions were predicted but they were not on the list of 13 negative emotions that we provided to GPT-3.5 to choose from. The emotions and their number of occurrences are Excitement (1), Happiness (2), Joy (1), and Love (1). All four cases happened in the two ablation studies when interaction or environmental features were missing from the captions. Fig. 7 shows the image for these cases and the corresponding captions. We also show the emotions that were originally predicted using the full captions and the new emotions they were changed to. The removed interactions and environments are in italics:
* Fig. 7 (a) **Fear to Excitement.** Terry is a male adult. _Karl is a security guard and he is grabbing onto Terry and carrying him out from the stadium._ Terry's physical environment is at a sports game.
* Fig. 7 (b) **Embarrassment to Happiness.** Jack is a male adult. Jack is or has smiling. _Beth is a customer and she is side-eyeing Jack. Zoe is a customer and she is staring at Jack._ Jack's physical environment is eating in a movie theatre.
* Fig. 7 (c) **Embarrassment to Happiness.** Lucas is a male adult. Lucas is a(n) groom. Lucas is or has lips that flatten, palms open. Mia is Lucas' bride and she is smiling. _Lucas' physical environment is cake falling down at wedding._
* Fig. 7 (d) **Sadness to Love.** Jane is a female adult. Jane is or has taking off eyeglasses. Mia is Jane's daughter and Jane is putting her hand on Mia's shoulder while Mia has her back turned to Jane. _Jane's physical environment is on a couch._
There was no new negative emotion.
## V Discussion and Future Work
We proposed a new approach for emotion estimation which couples text-based contextual descriptions of people in images with LLMs. Towards this goal, this study provided a benchmark of GPT-3.5 on a set of image captions depicting negative emotions. We also investigated the contributions of social cues and broader contextual information when perceiving human emotions. For example, we observed that _Embarrassment_ and _Sadness_ contained overlapping physical signals, such as "covering own face" or "tilting head downwards", and that social interactions could help distinguish between these two labels, i.e., a person covering their face with a tilted down head, along with being pointed at and laughed at by others, could appear to suggest Embarrassment. Moreover, our results showed that _Aversion_ was never predicted as an emotion across all three experiments on 360 captions, while _Disquietment_ was only predicted three times. _Emotional Pain/Suffering_ and _Disconnection_ were also frequently predicted as _Sadness_, even in the presence of scene contexts. A possible explanation for this may be that GPT-3.5 might not have been sufficiently trained on language data that contained such emotion words.
The study is not without limitations. Firstly, we only focused on the negative emotions of the EMOTIC dataset, and the social signals list employed for annotations was restricted to those associated with our set of negative emotions. Secondly, the list of social signals was partially generated by LLMs and subsequently tested on them. The resulting contextual descriptions were ultimately determined and validated by our team of annotators. Aside from that, the size of the resulting list of descriptions, as well as the number of annotated images, was relatively small. Finally, our study did not delve into the individual contribution of each physical description or demographic information for emotion detection, making it an interesting area to explore for future work. In the future, an independent perception study of the captions and ablations could also help provide a comparison to the GPT-3.5 results, as well as address the challenge of fully automatic captioning, and evaluate over all EMOTIC labels.
Overall, our approach may be used to enhance transparency and facilitate an effective breakdown of scene representation for contextual emotion estimation. It is hoped that our study can also serve as a catalyst for future research in interpretability of LLMs, as well as understanding human perception of emotions, especially if reproduced with other languages and cultures.
## VI Ethical Impact Statement
_Issues Related to Human Subjects._ In this study, all social signal and context coding and emotion annotation of the images was performed by two members of the research team. The photos are from the EMOTIC dataset, which contains images from the internet, some of which belong to the public datasets MSCOCO and Ade20k. Access to the EMOTIC dataset requires a request to the database authors. The members of the research team have no relation to the people in the pictures. The images are of people who may be experiencing negative emotions, including grief at a funeral, protesting, or war. The dataset does not contain images stronger than those a person might encounter in news media (e.g. no nudity, torture, etc.).
_Potential Negative Societal Impact._ Software that can detect negative emotions from images accurately could potentially be used for surveillance by authorities for intervention and restriction of autonomy. The application of this research for such use is not condoned by the authors.
_Limits of Generalizability_. The proposed list of physical signals is not claimed to be exhaustive, not only because we focus on a limited set of negative emotions, but also because different cultures express emotions with different facial and bodily signals. The source Emotion Thesaurus is written by North American authors, and similar writing guides in other languages may produce differing results. ChatGPT, used to supplement the Emotion Thesaurus, also contains its own biases [21]. In addition, GPT-3.5, trained in English, carries biases in the association of facial, bodily, and contextual signals with the final emotion. Finally, there were only two annotators, and we acknowledge that they may also carry their own cultural biases.
_Other Issues._ This work relied on a pre-trained large language model GPT-3.5. While this work did not perform any additional training, the carbon cost of training LLMs cannot be underestimated [22].
|
2309.12228 | Generalized Mie theory for full-wave numerical calculations of
scattering near-field optical microscopy with arbitrary geometries | Scattering-type scanning near-field optical microscopy is becoming a premier
method for the nanoscale optical investigation of materials well beyond the
diffraction limit. A number of popular numerical methods exist to predict the
near-field contrast for axisymmetric configurations of scatterers on a surface
in the quasi-electrostatic approximation. Here, a fully electrodynamic approach
is given for the calculation of near-field contrast of several scatterers in
arbitrary configuration, based on the generalized Mie scattering method.
Examples for the potential of this new approach are given by showing the
coupling of hyperbolic phonon polaritons in hexagonal boron nitride layers and
showing enhanced scattering in core-shell systems. In general, this method
enables the numerical calculation of the near-field contrast in a variety of
strongly resonant scatterers and is able to accurately recreate spatial
near-field maps. | Dániel Datz, Gergely Németh, László Rátkai, Áron Pekker, Katalin Kamarás | 2023-09-21T16:23:39Z | http://arxiv.org/abs/2309.12228v1 | Generalized Mie theory for full-wave numerical calculations of scattering near-field optical microscopy with arbitrary geometries
###### Abstract
Scattering-type scanning near-field optical microscopy is becoming a premier method for the nanoscale optical investigation of materials well beyond the diffraction limit. A number of popular numerical methods exist to predict the near-field contrast for axisymmetric configurations of scatterers on a surface in the quasi-electrostatic approximation. Here, a fully electrodynamic approach is given for the calculation of near-field contrast of several scatterers in arbitrary configuration, based on the generalized Mie scattering method. Examples for the potential of this new approach are given by showing the coupling of hyperbolic phonon polaritons in hexagonal boron nitride layers and showing enhanced scattering in core-shell systems. In general, this method enables the numerical calculation of the near-field contrast in a variety of strongly resonant scatterers and is able to accurately recreate spatial near-field maps.
## 1 Introduction
Scattering-type scanning near-field optical microscopy (s-SNOM) has become one of the leading methods for determining local optical information of materials with spatial resolution well below the diffraction limit. This method is especially effective in visualizing and otherwise investigating exotic optical phenomena, such as plasmon and phonon polaritons in 2D van der Waals crystals [1, 2] and detecting strong coupling between hexagonal boron nitride (hBN) and nanotube plasmons [3] or molecular vibrations [4, 5, 6].
The extreme spatial focusing and amplification of the illuminating light happens by approaching the (often metallized) probing tip of an atomic force microscope (AFM) to close proximity of the investigated sample. In this electromagnetic environment the AFM tip acts as an antenna and scatters light in every direction. The complex scattering processes between the sample/substrate and the tip slightly modify the scattering character of the probe. This small change in the amplitude and the phase of the scattered light can be detected by interferometric techniques, such as the heterodyne or the pseudo-heterodyne method [7].
The exact nuances of the tip-sample interaction are still not entirely understood. A number of methods exist, with different complexity levels, that try to capture the essential details of the scattering process. Full-wave calculations with large program packages, such as finite element modeling (FEM) with COMSOL [8, 9, 10] or finite difference time domain (FDTD) [11] include all the complexities in exchange for largely increased computational time and effort. Simpler models, such as the point dipole model (PDM) [12], the finite dipole model (FDM) [13] or the extended finite dipole model (EFDM) [14] provide a quasi-static approximation to the solution of the scattering problem. In the PDM, the AFM tip is approximated by a point dipole, or a sphere with much smaller effective radius than the exciting laser wavelength. In the FDM and EFDM, spheroidal scatterers are included for more realistic tip shape approximation. Lately, full-wave finite element calculations were combined with PDM and FDM formulations to achieve efficient extraction of the near-field contrast, albeit still in the quasi-static limit [9, 10].
More involved quasi-static calculations include the polarizability of spheroidal and more complicated tip shapes and multilayer structures [15]. These models all severely underestimate the penetration depth
which is important in the examination of multilayered thin films. A common drawback of these approximations is the use of fitting parameters with questionable physical interpretation to adjust the calculated results to measured ones.
More sophisticated numerical methods without _ad hoc_ fitting parameters have also been developed for the calculation of the scattered signal in the fully electrodynamical limit. The so-called "lightning rod" model [16] does give a full description of electrodynamic effects, such as retardation effects, by using a method similar to the one presented in this paper.
Reference [17] uses the generalized spectral method for the description of the near-field contrast in case of highly resonant samples and realistic tip shapes.
While these models are able to calculate the near-field contrast for realistic tip shapes in the fully electrodynamic limit, they require translational symmetry in the plane (not accounting for the tip), which limits their applications regarding multiple scatterers in arbitrary configuration.
In this paper, we present a full-wave, tractable, relatively time-efficient method of calculating complex far-field scattering amplitudes and phases related to s-SNOM measurements, without spurious fitting parameters, using the "generalized Mie scattering" or multipole reflection theory (MRT) method.
## 2 Methods
Mie's theory yields the scattering properties of a sphere suspended in a homogeneous, non-absorbing medium by expanding the electromagnetic field in terms of spherical vector functions. The generalization of the classical Mie theory involves the inclusion of multiple, possibly layered and non-spherical scatterers in the vicinity of a plane interface representing the surface of a possibly layered substrate.
Following the MRT method [18, 19, 20, 21, 22, 23], the fields can be expanded in terms of the spherical multipoles \(\mathbf{J}_{lm}^{(p)}\) and \(\mathbf{H}_{lm}^{(p)}\). The expansion of the scattered electric field takes the form
\[\mathbf{E}_{sca}=\sum_{\eta}E_{sca,\eta}\sum_{plm}\mathbf{H}_{lm}^{(p)}( \mathbf{r}^{\prime})A_{\eta lm}^{(p)}, \tag{1}\]
where \(\eta\) is the index for the different polarizations in a given polarization basis, and \(A_{\eta lm}^{(p)}\) are the expansion coefficients. The index \(p\) is either 1 or 2 and distinguishes between electric and magnetic type of vector functions. The indices \(l\) and \(m\) are the azimuthal and magnetic indices, respectively.
In the presence of multiple scatterers, the usual boundary conditions on the surface of each scatterer (the continuity of the electric and the magnetic field) result in a system of linear equations for the expansion coefficients:
\[\sum_{p^{\prime}l^{\prime}m^{\prime}}M_{lm,l^{\prime}m^{\prime}}^{(pp^{\prime })}A_{\eta l^{\prime}m^{\prime}}^{(p^{\prime})}=-\mathcal{W}_{\eta lm}^{(p)}, \tag{2}\]
where the matrix \(M\) is the matrix that describes the boundary conditions, while the coefficients \(\mathcal{W}_{\eta lm}^{(p)}\) are the expansion coefficients of the exciting field.
In the presence of a plane interface, the reflected spherical multipoles are described by the formulation of Bobbert and Vlieger [24]. Solving Equation 2 for the expansion coefficients of the scattered field allows the formulation of the scattering amplitude matrix that connects the incident electric field to the observed electric field scattered in a given direction
\[\mathbf{E}_{obs}(\theta,\phi)=\mathbf{f}(\theta,\phi)\cdot\mathbf{E}_{inc}. \tag{3}\]
This equation is analogous to the commonly cited relation between the near field and the far field in SNOM literature (\(E_{N}\,=\,\sigma E_{I}\)) and makes the scattering amplitude matrix the final derived quantity of this formulation.
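Numerically, once the boundary-condition matrix \(M\) and the incident-field coefficients \(\mathcal{W}\) of Equation 2 have been assembled (the physics of the method lives entirely in that assembly, which is omitted here), the expansion coefficients follow from a dense linear solve; a schematic sketch, with random placeholder matrices standing in for the physical ones and a toy multipole cutoff:

```python
# Schematic solve of Eq. (2); M and W below are random placeholders, not
# physical boundary-condition data.
import numpy as np

l_max = 10                                   # multipole cutoff (toy value)
n_coeff = 2 * l_max * (l_max + 2)            # p in {1,2}; (l, m) with l <= l_max
rng = np.random.default_rng(0)
M = rng.standard_normal((n_coeff, n_coeff)) + 1j * rng.standard_normal((n_coeff, n_coeff))
W = rng.standard_normal(n_coeff) + 1j * rng.standard_normal(n_coeff)

A = np.linalg.solve(M, -W)                   # coefficients A_{eta l m}^{(p)} of Eq. (2)
```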
The scattering parameters of objects of varying shapes can be given by the extended boundary condition method (EBCM). This method does not limit the shape of the particle as long as the surface is parametrizable [25, 26].
Details about the calculations can be found in the supporting information.
## 3 Results
### Phonon polariton coupling in hexagonal boron nitride
Hexagonal boron nitride is a naturally occurring hyperbolic material [27]. This type of anisotropy causes the bulk phonon polaritons in the crystal to propagate on the surface of a cone. For the near-field detection of these polaritons, the phonon polaritons have to be coupled into the crystal by sharp features of the surface, such as the SNOM tip itself [27], the crystal edge or other objects on the surface [28] that are able to provide the missing momentum components for the coupling. The s-SNOM method is then able to visualize the phonon polaritons by mapping the fringe pattern arising from the interference of the polaritons coupled in by the tip of the SNOM device and the sharp feature.
The main strength of the MRT formulation is the ability to handle additional particles besides the probing tip. To illustrate the power of this method, we calculated the phonon polariton interference fringes in a thin layer of hexagonal boron nitride (hBN). A gold nanosphere is placed on the surface of the hBN layer to couple phonon polaritons into the layer that can interfere with the tip-launched polaritons.
The layered structure consists of a 40 nm thick hBN layer on top of a 4 nm SiO\({}_{2}\) layer on a semi-infinite silicon substrate (see Figure 1a). The refractive index of the 3 nm radius gold sphere is extrapolated from measured data of Johnson and Christy [29]. The SNOM tip is modeled as a platinum spheroid of 600 nm length and 20 nm tip-inscribed sphere radius. The refractive index of platinum is also extrapolated from measured data [30]. The illumination is a p-polarized plane wave with 60\({}^{\circ}\) angle of incidence. The calculations were conducted at 1540 cm\({}^{-1}\) excitation. A measurement value (amplitude and phase) is obtained by imitating the vibration of the tip, i.e., by repeating the calculation at different separations between the tip and the sample. All the steps of the pseudo-heterodyne detection technique are replicated in the background-free extraction of the near-field amplitude and phase.
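As an illustration of the demodulation step, the sketch below extracts the complex \(n\)-th harmonic of the scattered signal over one tip-oscillation period; `scattered(z)` is a toy stand-in for the full MRT far-field calculation at tip-sample separation \(z\), and the oscillation parameters are illustrative:

```python
# Hedged sketch of n-th harmonic demodulation of the tip-modulated signal.
import numpy as np

def demodulate(scattered, n=3, z0=1e-9, amp=20e-9, n_steps=64):
    phi = 2 * np.pi * np.arange(n_steps) / n_steps        # one oscillation period
    z = z0 + amp * (1 + np.cos(phi)) / 2                  # tip-sample separation
    s = np.array([scattered(zi) for zi in z])             # complex scattering amplitude
    return np.sum(s * np.exp(-1j * n * phi)) / n_steps    # complex n-th harmonic

s3 = demodulate(lambda z: np.exp(-z / 25e-9) * np.exp(1j * 0.3))  # toy decay model
print(abs(s3), np.angle(s3))                              # near-field amplitude & phase
```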
The near-field amplitude, demodulated at the third harmonic, is shown in Figure 1b. It clearly shows a high contrast peak at the position of the gold nanosphere caused by the high refractive index of gold. On either side of the sphere peak, the amplitude shows a characteristic ripple pattern. The approximate wavelength of this pattern (120-130 nm) agrees reasonably well with measured data [31]. This shows the ability of the generalized Mie scattering model to capture the propagation of phonon polaritons in multilayered surfaces containing hBN.
Figure 1: (a) Modeled geometry of a gold sphere and platinum tip above the multilayered Si/SiO\({}_{2}\)/hBN surface. (b) Calculated near-field amplitude (third harmonic) acquired along the dashed line in (a).
A feature of the calculated linescan in Figure 1 is the enhanced forward scattering of phonon polaritons marked by the larger amplitude in the forward scattering direction. This can be understood by analyzing the electric field on the surface of the multilayer structure. In Figure 2a, the in-plane components of the electric field are visualised on the surface of the substrate in the presence of only the SNOM tip. The asymmetry of the in-plane field distribution in the forward and backward scattering direction (see Figure 2b for the z component) results in significantly different amplitude Fourier components for the propagating modes in the multilayer pattern. In Figure 2c the available Fourier amplitudes are depicted for 1540 cm\({}^{-1}\) excitation, showing the larger in-plane amplitude in the forward scattering direction.
### Enhanced scattering from encapsulated molecules in boron nitride nanotubes
The near-field optical properties of boron nitride nanotubes (BNNTs) are exciting due to the nanotubes' ability to enhance the scattering of the encapsulated material inside its cavity [32]. Using the generalized Mie scattering model, the approximate numerical modeling of BNNT scattering is possible. For simplicity, the BNNT is modeled as a hollow sphere (3 nm outer, 2 nm inner radius) with its refractive index set to that of hBN's out-of-plane refractive index (see Figure 3a). The substrate is 4 nm SiO\({}_{2}\) on top of a semi-infinite silicon half-space. The calculation parameters otherwise coincide with the ones mentioned in the previous section.
The calculated near-field amplitude and phase (third harmonic) are shown in Figure 3b. Since the scattering properties of anisotropic spheres are hard to calculate numerically [34], the particle under the SNOM tip is practically a hollow, metallic shell. Since the average diameter of BNNTs is rather large (5-20 nm), the TO phonon polariton peak at 1370 cm\({}^{-1}\) coincides well with the same peak in hBN. The results in Figure 3b are thus the sum of the results from the hollow sphere and from an hBN layer of the same thickness as the BNNT wall. These results are compared to nano-FTIR measurements reported in Reference [33]. The calculated spectra show reasonable agreement for the position and the symmetry of the peaks. The difference in the peak widths can be attributed to the large difference in diameter between the measured nanotubes and the one used for the calculation.
An important conclusion drawn from the calculated results is that the BNNT peak at around 1500 cm\({}^{-1}\) is caused by the characteristic Mie scattering of hollow nanospheres and not by any additional polaritonic excitation inherent to only boron nitride materials. This example shows that the generalized Mie scattering method is able to reproduce the near-field spectral features of more complicated geometries and give meaningful information about the origin of the peaks.
In Reference [32], we showed that the signal of weakly absorbing C\({}_{60}\) molecules can be detected inside BNNTs due to the enhancing effect of the highly confined electromagnetic fields inside the walls of the nanotube. This configuration can also be handled with the MRT model, using core-shell layered spheres representing the nanotubes. The inner sphere's refractive index is given by a single Lorentzian with parameters describing a weak oscillator. In Figure 4a, the resonance frequency of the inner sphere is set to 1428 cm\({}^{-1}\) which coincides with a \(T_{1u}\) vibrational mode of C\({}_{60}\)[35, 36, 37]. Using the MRT method, a small peak in both the near-field amplitude and phase is present, the shape of which is typical for weak vibrations. On the other hand, if the resonance frequency of the Lorentzian of the inner sphere is shifted to 1358 cm\({}^{-1}\) (see Figure 4b), which approximately coincides with a \((C_{60})_{3}\) trimer vibration mode [38] and is outside of the Reststrahlen band, this effect disappears. The results show that the field enhancement in the inner sphere predicted by Mie's theory of a core-shell spherical system [39] is enough for a significant enhancement of the near-field signal. In a more realistic configuration, the additional field confinement due to the hyperbolic nature of the nanotube can result in even higher electric field amplitude, which can explain the amplitude of the detected peaks in Reference [32].
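For reference, a one-oscillator dielectric function of the form used for the inner sphere can be written as follows; the oscillator strength, damping, and background permittivity below are illustrative placeholders for a weak vibration, not the values used in our calculations:

```python
# Hedged sketch of a single-Lorentzian dielectric function (parameters are toy).
import numpy as np

def eps_lorentz(nu, nu0=1428.0, strength=2e4, gamma=8.0, eps_inf=2.0):
    """nu, nu0, gamma in cm^-1; returns the complex dielectric function."""
    return eps_inf + strength / (nu0**2 - nu**2 - 1j * gamma * nu)

nu = np.linspace(1300, 1500, 201, dtype=complex)
n_complex = np.sqrt(eps_lorentz(nu))   # refractive index fed to the Mie solver
```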
Figure 2: (a) Magnitude of the in-plane Fourier components of the near field above the surface of the multilayered structure in the presence of the platinum tip. Black and red dashed lines indicate the forward and backward scattering directions respectively. (b) Amplitude of the z component of the electric field (logarithmic scale), showing the nanofocus and the asymmetry of the field distribution. (c) (Below) Imaginary part of the reflection coefficient of the multilayered structure. (Above) The Fourier components in the forward (black) and backward (red) scattering directions.
Figure 3: MRT calculations of the layered sphere representing a BNNT (a) Schematic geometry of the system. (b) Calculated spectrum with peaks at 1380 cm\({}^{-1}\) and 1500 cm\({}^{-1}\). The 1380 cm\({}^{-1}\) peak comes from a separate calculation of an hBN layer of the same thickness as the BNNT wall. (c) Extracted nano-FTIR spectrum of 20 nm thick BNNT from [33]. The peak positions match well with the calculated results. The observed broadening of the measured peaks can be caused by the significantly larger diameter of the BNNTs.
Figure 4: MRT calculations of layered spheres representing filled BNNTs. (a) The resonance frequency of the dielectric function of the inner sphere is set to 1428 cm\({}^{-1}\) (see inset). A small peak is visible at this wavenumber. (b) The resonance frequency of the dielectric function is set to 1358 cm\({}^{-1}\). The additional peak from the inner sphere disappears.
## 4 Conclusion
In this paper, we introduced a novel approach to calculate near-field contrast in s-SNOM measurements using a generalized Mie theory approach, the multipole reflection theory. This model is able to calculate the near-field contrast of several complex scatterers in arbitrary configuration in a fully electrodynamic treatment. The scattering parameters of all of the interacting particles are calculated using Mie's theory, therefore the use of arbitrary fitting parameters is unnecessary.
Using the MRT model, we showed that it is suitable to calculate hyperbolic polariton interference fringes coupled into a thin hBN layer by a nanoparticle on its surface. Furthermore, the polaritonic enhancement of scattering from the inner sphere of core-shell nanoparticles can also be calculated, which provides valuable insight into the enhancement of molecular vibrations inside BNNTs.
Further improvements to the presented numerical model are still possible. The scattering properties of disks [40] and cylinders [41] can be calculated numerically and included in the model without a great increase in complexity. The illumination is also better described by a Gaussian beam [42] rather than a plane wave. The results presented in this paper are the first steps in accurate numerical modeling of non-symmetric geometries with multiple particles using the scattering theory approach.
**Supporting Information**
Supporting Information is available from the Wiley Online Library or from the author.
**Acknowledgements**
We gratefully acknowledge the support from the National Research, Development and Innovation Office - NKFIH FK-138411 and K-143153. Research infrastructure was provided by the Hungarian Academy of Sciences (MTA).
|
2309.15547 | Trainability and Expressivity of Hamming-Weight Preserving Quantum
Circuits for Machine Learning | Quantum machine learning (QML) has become a promising area for real world
applications of quantum computers, but near-term methods and their scalability
are still important research topics. In this context, we analyze the
trainability and controllability of specific Hamming weight preserving
variational quantum circuits (VQCs). These circuits use qubit gates that
preserve subspaces of the Hilbert space, spanned by basis states with fixed
Hamming weight $k$.
In this work, we first design and prove the feasibility of new heuristic data
loaders, performing quantum amplitude encoding of $\binom{n}{k}$-dimensional
vectors by training an $n$-qubit quantum circuit. These data loaders are
obtained using dimensionality reduction techniques, by checking the Quantum
Fisher Information Matrix (QFIM)'s rank. Second, we provide a theoretical
justification for the fact that the rank of the QFIM of any VQC state is
almost-everywhere constant, which is of separate interest. Lastly, we analyze
the trainability of Hamming weight preserving circuits, and show that the
variance of the $l_2$ cost function gradient is bounded according to the
dimension $\binom{n}{k}$ of the subspace. This proves conditions of
existence/lack of Barren Plateaus for these circuits, and highlights a setting
where a recent conjecture on the link between controllability and trainability
of variational quantum circuits does not apply. | Léo Monbroussou, Eliott Z. Mamon, Jonas Landman, Alex B. Grilo, Romain Kukla, Elham Kashefi | 2023-09-27T10:11:07Z | http://arxiv.org/abs/2309.15547v2 | # Trainability and Expressivity of Hamming-Weight Preserving Quantum Circuits for Machine Learning
###### Abstract
Quantum machine learning has become a promising area for real world applications of quantum computers, but near-term methods and their scalability are still important research topics. In this context, we analyze the trainability and controllability of specific Hamming weight preserving quantum circuits. These circuits use gates that preserve subspaces of the Hilbert space, spanned by basis states with fixed Hamming weight \(k\). They are good candidates for mimicking neural networks, by both loading classical data and performing trainable layers. In this work, we first design and prove the feasibility of new heuristic data loaders, performing quantum amplitude encoding of \(\binom{n}{k}\)-dimensional vectors by training an \(n\)-qubit quantum circuit. Then, we analyze more generally the trainability of Hamming weight preserving circuits, and show that the variance of their gradients is bounded according to the size of the preserved subspace. This proves the conditions of existence of Barren Plateaus for these circuits, and highlights a setting where a recent conjecture on the link between controllability and trainability of variational quantum circuits does not apply.
## I Introduction
Variational quantum circuits (VQCs) are promising candidates for near term quantum computing [1], but existing and near-term quantum devices still offer limited resources to implement important tasks for quantum machine learning (QML), such as encoding and training. Considering fault-tolerant quantum computation, more advanced QML algorithms that present potential to achieve a quantum advantage exist. The key step of some of these "quantum linear algebra" algorithms [2; 3; 4] is amplitude encoding, where the input vector's components become the quantum amplitudes of the input state in the computational basis. Amplitude encoding on the entire Hilbert space is unlikely to be achieved in the near term, and recent work proposed to use amplitude encoding in small subspaces [5; 6; 7]. In other methods for variational QML, the data encoding is done by directly using the vector components as gate parameters, which leads to some limitations [8; 9].
In this work, we propose a method to achieve the amplitude encoding of any \(\binom{n}{k}\)-dimensional vector using \(n\) qubits by training a VQC made of Hamming weight (HW) preserving gates. In recent work [5], an \(n\)-qubit quantum data loader using HW preserving gates was presented to encode any \(n\)-dimensional vector using a VQC, but without training. HW preserving quantum circuits allow one to restrict the state created to a superposition of states of the same HW as the input, i.e., to keep the number of qubits in state \(\ket{1}\) constant. Such subspace invariant quantum circuits can tackle an important scaling problem of QML methods called Barren Plateaus (BP) [10]. It has been recently shown that one can avoid BP while using input data on an invariant subspace of low dimension, under certain conditions on controllability and expressivity [11]. However, knowing whether HW preserving VQCs are prone to BP without those assumptions is still an open question that we tackle in this work.
One of our main results can be informally stated as follows: for an \(n\)-qubit circuit made of specific HW preserving gates (RBS and FBS), in the subspace spanned by Hamming weight \(k\), the gradient of the cost function vanishes as \(O(1/\binom{n}{k})\). This is neither exponentially decreasing with \(n\) (Barren Plateau), nor quadratically decreasing as one could have expected for the FBS case, despite its lower controllability.
We first develop in Section II a framework to design a quantum data loader on any invariant subspace of dimension \(\binom{n}{k}\), with an \(n\)-qubit circuit and \(k\) the chosen HW (see Fig. 1). In addition, we propose a study of the trainability of HW preserving VQCs in Section III (this concerns both blocks in Fig. 1). We show that one can avoid BP using HW preserving gates, depending on the choice of the subspace used for the encoding, without any hypothesis on the controllability or expressivity of the VQC.
Figure 1: Representation of a Hamming weight preserving quantum circuit for Quantum Machine Learning purposes: (1) is the encoding part trained to represent the classical input, and (2) is the trainable layer or quantum neural network. The gates represented with B and S signs are RBS.
### Related work
The preservation of the HW is a symmetry that we use in this work in order to propose encoding and trainable layers with theoretical guarantees on their trainability. Quantum machine learning models that present symmetries have been proposed as potentially more efficiently trainable than common models [12; 13; 14; 15; 16]. More recently, problem-inspired ansatzes have been studied using tools from quantum optimal control to highlight a link between the dimension of their corresponding dynamical Lie algebra and their trainability [11], under a certain set of hypotheses. One main conjecture left open in [11] concerned the link between trainability and controllability _in a subspace_. Very recent works prove that this conjecture applies to many commonly used ansatzes [17; 18] such as the Hamiltonian Variational Ansatz [19], Quantum Alternating Operator Ansatz [20; 21], and many equivariant quantum neural networks. However, in our work, we show that this conjecture does not hold in the case of HW preserving ansatzes, which have received increasing attention [5; 6; 7; 22]. Our results are consistent with and independent of the two works recently released [17; 18], as HW preserving ansatzes do not satisfy the assumptions these papers rely on. One can notice that [18] studies the same ansatzes as we do (see their Appendix C), and proposes an upper bound on their controllability in a specific setting. In Section III, we give tighter theoretical guarantees on the trainability of those ansatzes, using a different analytical proof. See Section V for discussion.
## II Space-efficient amplitude encoding.
We first define the Amplitude Encoding and HW preserving quantum data loaders. Then we show how to achieve it efficiently using HW preserving gates and their subspace preserving properties.
### Hamming weight preserving quantum data loaders
Let us first define the Amplitude Encoding scheme and explain what HW preserving quantum data loaders are.
**Definition 1** (Amplitude Encoding).: _A data loader is a parameterized \(n\)-qubit quantum circuit that, given a classical vector \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\), prepares the quantum state:_
\[\ket{x}=\frac{1}{||x||}\sum_{i=1}^{d}x_{i}\ket{e_{i}}, \tag{1}\]
_where \(\ket{e_{i}}\) are \(n\)-qubit orthogonal quantum states._
**Definition 2** (Data Loader in a HW preserving subspace).: _We define a n-qubit data loader in the subspace of HW \(k\), the quantum circuit that performs amplitude encoding on the basis:_
\[B_{k}^{n}=\{\ket{e}\,|\,e\in\{0,1\}^{n}\text{ and HW}(e)=k\} \tag{2}\]
_with \(d_{k}=|B_{k}^{n}|=\binom{n}{k}\)._
For example, when considering \(n=3\) qubits and a HW \(k=2\), the resulting basis state is:
\[B_{2}^{3}=\{\ket{110},\ket{101},\ket{011}\}\]
One could use the Reconfigurable Beam Splitter (RBS) gate to perform such an encoding. This HW preserving gate is easy to implement, or even native, on many quantum devices. Notice that our results hold for another HW preserving gate named the Fermionic Beam Splitter (FBS), which was already used for QML applications in [22] but has weaker controllability properties (see Section II.2 and Appendix C).
**Definition 3** (Reconfigurable Beam Splitter gate).: _The Reconfigurable Beam Splitter (RBS) gate is a 2-qubit gate that corresponds to a \(\theta\)-planar rotation between the states \(\ket{01}\) and \(\ket{10}\):_
\[RBS(\theta)=e^{i\theta H_{RBS}}=\begin{pmatrix}1&0&0&0\\ 0&\cos(\theta)&\sin(\theta)&0\\ 0&-\sin(\theta)&\cos(\theta)&0\\ 0&0&0&1\end{pmatrix} \tag{3}\]
_with its corresponding Hamiltonian:_
\[H_{RBS}=\begin{pmatrix}0&0&0&0\\ 0&0&-i&0\\ 0&i&0&0\\ 0&0&0&0\end{pmatrix} \tag{4}\]
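As a quick consistency check, exponentiating this Hamiltonian numerically reproduces the planar rotation of Eq. (3):

```python
# Verify that exp(i*theta*H_RBS) of Eq. (4) gives the RBS matrix of Eq. (3).
import numpy as np
from scipy.linalg import expm

H = np.array([[0, 0,   0, 0],
              [0, 0, -1j, 0],
              [0, 1j,  0, 0],
              [0, 0,   0, 0]])
theta = 0.7
print(np.round(expm(1j * theta * H).real, 3))  # cos/sin block on |01>, |10>
```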
**Definition 4** (Fermionic Beam Splitter).: _Let \(i,j\in[n]\) be qubits and \(S=s_{1}\ldots s_{n}\in\{0,1\}^{n}\) a binary word corresponding to a single state \(\ket{S}\) with \(n\) the total number of qubits. Then the Fermionic Beam Splitter (FBS) acts on the qubits \(i\) and \(j\) as the following unitary:_
\[FBS_{i,j}(\theta)=\begin{pmatrix}1&0&0&0\\ 0&\cos(\theta)&(-1)^{f}\sin(\theta)&0\\ 0&(-1)^{f+1}\sin(\theta)&\cos(\theta)&0\\ 0&0&0&1\end{pmatrix} \tag{5}\]
_with \(f=f_{i,j,S}=\sum_{i<l<j}s_{l}\) the number of qubits in state \(\ket{1}\) between qubits \(i\) and \(j\)._
The Hilbert space spanned by a subspace preserving VQC can be expressed as a direct sum of invariant subspaces. By rearranging the order of the computational basis, we can express the HW preserving VQC unitary as a block matrix. Each block is an orthogonal matrix \(W^{k}\) that corresponds to a unitary in the subspace of states of a certain Hamming weight \(k\).
We now explain our data loading scheme. First, we initialize the quantum state to be \(\ket{e_{s}}\), a single basis state of HW \(k\). Then we spread this amplitude over the states in \(B_{k}^{n}\) using RBS gates. In [5], the authors used a similar method on the unary basis \(B_{1}^{n}\). Notice that achieving an amplitude encoding with such a basis would allow us to encode many more parameters, namely \(\binom{n}{k}\gg n\), in an \(n\)-qubit state. To design our quantum data loader, we need to ensure that any \(\binom{n}{k}\)-dimensional real vector \(x\) can be encoded, i.e., that there exists a set of RBS gate parameters \(\Theta=\{\theta_{1},\ldots,\theta_{D}\}\) such that:
\[W^{k}(\Theta)\ket{e_{s}}-\frac{1}{||x||}\sum_{i=1}^{\binom{n}{k}}x_{i}\ket{e_ {i}}=0, \tag{6}\]
Finding the corresponding set of variational parameters, or even proving their existence, is very hard when \(k>1\); we discuss how one could try to do it efficiently in future work (see Appendix A). In this work, we focus on the existence of a solution, and we find the solution by defining an equivalent optimization problem that can be solved using a gradient descent based method. Theoretical guarantees on the tractability of this optimization problem are described in Section III.
\[\Theta^{*}=\arg\min_{\Theta}||\frac{1}{||x||}\sum_{i=1}^{\binom{n}{k}}x_{i} \ket{e_{i}}-W^{k}(\Theta)\ket{e_{s}}||_{2}^{2} \tag{7}\]
Notice that the previous cost function does not induce a Barren Plateau, even though it is a global cost function [23], as the Hilbert space described by the states of HW \(k\) is not exponentially large for small choices of \(k\). We confirm in Section III that a large choice of \(k\) results in the existence of Barren Plateaus.
Subspace preserving quantum circuits are easier to simulate in small subspaces than random quantum circuits over the entire Hilbert space [24]. In the case of a HW preserving VQC, the speedup of using a quantum computer grows with \(k\). Classical simulability can be an asset for the encoding part when it is combined with a trainable layer that is hard to simulate.
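To illustrate this simulability, the sketch below applies an RBS circuit directly in the \(\binom{n}{k}\)-dimensional HW-\(k\) subspace and minimizes the cost of Eq. (7) on a toy instance; the gate layout, sizes, and sign convention (fixed up to qubit ordering) are illustrative, and whether a given gate sequence can load every \(x\) is precisely the controllability question studied next:

```python
# Hedged sketch: simulate RBS gates in the HW-k subspace and fit Eq. (7).
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def hw_basis(n, k):
    return [frozenset(c) for c in combinations(range(n), k)]   # positions of the 1s

def rbs_subspace(n, k, i, j, theta):
    basis = hw_basis(n, k); idx = {b: t for t, b in enumerate(basis)}
    U = np.eye(len(basis)); c, s = np.cos(theta), np.sin(theta)
    for b in basis:
        if i in b and j not in b:                # pair |..1_i..0_j..> <-> |..0_i..1_j..>
            b2 = (b - {i}) | {j}
            U[idx[b], idx[b]] = U[idx[b2], idx[b2]] = c
            U[idx[b], idx[b2]], U[idx[b2], idx[b]] = s, -s
    return U

n, k = 4, 2                                      # d_k = 6
gates = [(0, 1), (1, 2), (2, 3)] * 3             # toy nearest-neighbor layout
rng = np.random.default_rng(1)
x = rng.standard_normal(6); x /= np.linalg.norm(x)
e_s = np.eye(6)[0]

def state(thetas):
    psi = e_s.copy()
    for (i, j), th in zip(gates, thetas):
        psi = rbs_subspace(n, k, i, j, th) @ psi
    return psi

res = minimize(lambda t: np.sum((x - state(t))**2), np.zeros(len(gates)))
print(res.fun)   # residual of Eq. (7); zero only if this layout is controllable enough
```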
### Existence of Quantum Data Loader
Previously, we proposed the use of a HW preserving ansatz, such as an \(n\)-qubit RBS based VQC, to create a quantum data loader. This circuit is trained to perform the amplitude encoding of a \(\binom{n}{k}\)-dimensional vector, with \(k\) the chosen HW. In this Section, we give tools to determine whether a given circuit always admits a solution to the optimization problem of Eq. (7). More precisely, we show how to decide if, given a qubit connectivity and RBS gates, one can design a quantum data loader. If so, we give a method to design the circuit in Section II.3.
We propose to use quantum optimal control tools to show that, according to the circuit connectivity, we can prove the existence of a quantum data loader. This method can be generalized to find quantum data loaders for any subspace invariant ansatz. We first define an essential tool to study the controllability of a quantum circuit in the unitary space.
**Definition 5** (Dynamical Lie Algebra).: _Let us consider that we have an Hamiltonian of the form:_
\[H=H_{0}+\sum_{k\geq 1}^{G}u_{k}(t)H_{k} \tag{8}\]
_where each \(u_{k}\) is a real function that we can freely choose. We call \(\{H_{k}\}_{k\in\llbracket 0,G\rrbracket}\) the set of generators of our quantum system. The Dynamical Lie algebra is defined as:_
\[\mathcal{L}=\text{span}\left\langle iH_{0},\ldots,iH_{G}\right\rangle_{ \mathcal{L}}\subseteq\mathfrak{su}(d) \tag{9}\]
_with \(\left\langle S\right\rangle_{\mathcal{L}}\) the Lie closure, i.e., the set of all Lie commutators between the elements in \(S\)._
In the case of RBS based quantum circuits, the generators are given by the qubit connectivity as we restrict ourselves to the use of a unique 2-qubit gate (see Fig. 4). We show in Appendix B how to compute the dimension of the DLA. We can restrict this study of the DLA to a particular subspace of HW \(k\). Then, its dimension indicates the maximal number of coefficients we can independently fix in \(W^{k}\). As this matrix is orthogonal (see Appendix A), the dimension of the DLA is upper bounded by \(\frac{1}{2}d_{k}(d_{k}-1)\).
We can thus compute the DLA corresponding to our data loader circuit in the subspace of the chosen HW \(k\).
Figure 2: Block representation of the HW preserving unitaries. \(\tilde{W}\) is the \(2^{n}\times 2^{n}\) unitary corresponding to a n-qubit HW preserving quantum circuit. Each block \(k\) is the unitary matrix corresponding to the preserved subspace of HW \(k\), and the state basis \(B_{k}^{n}\). Their size are \(d_{k}\times d_{k}\) where \(d_{k}=\binom{n}{k}\).
If the dimension of the DLA in the subspace of HW \(k\) is lower than \(d_{k}-1\) (with \(d_{k}=\binom{n}{k}\)), we cannot control enough coefficients to achieve the encoding described in Eq. (6). If the dimension is maximal (equal to \(\frac{1}{2}d_{k}(d_{k}-1)\)), we can perfectly control \(W^{k}\), and thus we can design a loader. Between those two values, we cannot ensure the existence of a loader using the DLA dimension alone, as we may control at least \(d_{k}-1\) coefficients but not necessarily those in the columns corresponding to \(|e_{s}\rangle\). In practice, one could choose to reduce \(k\) to achieve the full controllability of the subspace, or could proceed with our method to find the data loader given in Section II.3. Indeed, one can use the rank of the Quantum Fisher Information matrix to ensure sufficient controllability in the state space (see Section II.3).
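A direct, if brute-force, way to obtain these DLA dimensions (the structured computation is the subject of Appendix B) is to project each edge's RBS generator into the HW-\(k\) subspace and close the set under commutators, tracking the rank of the real linear span; a sketch:

```python
# Hedged sketch of the DLA-dimension computation in the HW-k subspace.
import numpy as np
from itertools import combinations

def rbs_generator(n, k, i, j):
    basis = [frozenset(c) for c in combinations(range(n), k)]
    idx = {b: t for t, b in enumerate(basis)}
    G = np.zeros((len(basis), len(basis)))       # i*H_RBS is real antisymmetric here
    for b in basis:
        if i in b and j not in b:
            b2 = (b - {i}) | {j}
            G[idx[b], idx[b2]], G[idx[b2], idx[b]] = 1.0, -1.0
    return G

def lie_closure_dim(gens, tol=1e-9):
    basis = []                                   # orthonormal flattened span
    def add(m):
        v = m.flatten().copy()
        for b in basis:
            v -= (b @ v) * b                     # Gram-Schmidt against the span
        nrm = np.linalg.norm(v)
        if nrm > tol:
            basis.append(v / nrm)
            return True
        return False
    frontier = [g for g in gens if add(g)]
    while frontier:                              # bracket new elements vs. the span
        new = []
        for a in frontier:
            for bv in list(basis):
                c = a @ bv.reshape(a.shape) - bv.reshape(a.shape) @ a
                if add(c):
                    new.append(c)
        frontier = new
    return len(basis)

gens = [rbs_generator(4, 2, i, i + 1) for i in range(3)]   # line graph, n=4, k=2
print(lie_closure_dim(gens))   # compare with Fig. 3; the upper bound is 15 here
```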
We know that having enough control to design a quantum data loader requires considering a connected graph, in order to reach any state in our encoding basis. Therefore, nearest-neighbor connectivity is the worst case, as it has a minimal number of edges, while full connectivity maximizes the dimension of the DLA. The scaling of the DLA dimension for specific graph families remains to be established in future work, but Fig. 3 gives numerical evidence of good scaling in terms of controllability for an RBS based quantum circuit.
We observe in Fig. 3 that the evolution of the DLA dimension in the nearest-neighbor connectivity setting seems to follow the maximum controllability of the first subspace, which is given by \(\frac{1}{2}n(n-1)\). The DLA dimension for the maximum connectivity seems to evolve according to the upper bound given by the maximum controllability of a \(d_{n/2}\times d_{n/2}\) orthogonal matrix, equal to \(\frac{1}{2}d_{n/2}(d_{n/2}-1)\), when considering RBS gates. In the case of FBS gates, we know that each block \(W^{k}\) is perfectly determined by the first one, \(W^{1}\)[22] (see Appendix C). As a result, the dimension of the DLA of a FBS based quantum circuit is upper-bounded by \(\frac{1}{2}n(n-1)\), as observed in the previous figure. The limitation of the controllability of FBS gates is recalled in Appendix C.
In this section we have shown that the study of the DLA gives us tools to determine whether a quantum data loader can be designed from a given qubit connectivity and a specific quantum gate.
### Finding the quantum data loader
Using previous results, we know that if we have quantum hardware such that the DLA dimension is high enough, there exists a quantum data loader circuit. A remaining question is how to design the quantum data
Figure 4: Hamiltonian representation of a HW preserving VQC: (1) highlights the link between the qubit connectivity and the generators of the circuit, and (2) represents our quantum data loader. The bit flips used to prepare the initial state are represented in green, and the RBS gates in red.
Figure 3: Evolution of the dimension of the DLA in the subspace of HW \(k=\lfloor\frac{n}{2}\rfloor\) for: (1) the use of RBS gates; (2) the use of FBS gates with nearest-neighbor connectivity; (3) the use of FBS gates with full connectivity. This plot highlights the difference in controllability potential between RBS- and FBS-based quantum circuits.
loader from a given subspace and a connectivity that guarantees the existence of such a circuit. In this Section, we present two algorithms to design the quantum data loader based on the study of controllability in the state space.
A subspace preserving circuit's ability to achieve amplitude encoding on one of its preserved subspaces is equivalent to saying that this circuit perfectly controls the state space spanned by its output. In particular, an RBS-based VQC can achieve amplitude encoding (see Definition 1) on the subspace of HW \(k\) if the output state can be any superposition of states in \(B_{k}^{n}\). The state space spanned by the output of the VQC is thus a sphere of dimension \(d_{k}-1\), denoted \(S^{d_{k}-1}\) and illustrated in Fig. 5.
Now we define an essential tool to study the controllability of a quantum circuit in the state space.
**Definition 6** (Quantum Fisher Information Matrix).: _Let us consider an initial state \(|e_{s}\rangle\) and \(U(\Theta)\) the unitary that represents a quantum circuit with \(\Theta=\{\theta_{1},\ldots,\theta_{D}\}\) the set of variational parameters. The Quantum Fisher Information Matrix (QFIM) is a \(D\times D\) matrix defined as:_
\[[\text{QFIM}_{s}(\Theta)]_{i,j} =4\text{Re}[\langle\partial_{\theta_{i}}\psi_{s}(\Theta)\big{|} \partial_{\theta_{j}}\psi_{s}(\Theta)\rangle-\] \[\langle\partial_{\theta_{i}}\psi_{s}(\Theta)|\psi_{s}(\Theta) \rangle\,\langle\psi_{s}(\Theta)|\partial_{\theta_{j}}\psi_{s}(\Theta)\rangle] \tag{10}\]
_with \(|\psi_{s}(\Theta)\rangle=U(\Theta)\,|e_{s}\rangle\)_
The maximal rank of the QFIM is a metric of controllability in the state space [25], as it gives us the number of independent directions that can be taken by the state when tuning the gate parameters \(\Theta\). For our encoding method in the subspace of HW \(k\), the maximum rank is given by the topology of the sphere \(S^{d_{k}-1}\):
\[\max_{\Theta}\;\text{rank}[QFIM_{s}(\Theta)]\leq d_{k}-1 \tag{11}\]
As in [26], we find numerical evidence (see Fig. 6) that:
\[\forall\Theta\in[0,2\pi]^{D}\quad\text{rank}[QFIM_{s}(\Theta)]=\max_{\Theta} \;\text{rank}[QFIM_{s}(\Theta)] \tag{12}\]
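As an illustration of how the QFIM and its rank can be evaluated numerically, here is a minimal NumPy sketch implementing Eq. (10) with central finite differences; the helper `state(theta)`, returning the normalized output state \(|\psi_{s}(\Theta)\rangle\) as a complex vector, is an assumed interface for this example.

```
import numpy as np

def qfim(state, theta, eps=1e-6):
    """QFIM of Eq. (10), estimated with central finite differences.
    `state(theta)`: normalized output state as a complex NumPy vector,
    `theta`: 1-D real parameter array."""
    psi = state(theta)
    D = len(theta)
    dpsi = []
    for i in range(D):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        dpsi.append((state(tp) - state(tm)) / (2.0 * eps))
    F = np.zeros((D, D))
    for i in range(D):
        for j in range(D):
            term = np.vdot(dpsi[i], dpsi[j]) \
                 - np.vdot(dpsi[i], psi) * np.vdot(psi, dpsi[j])
            F[i, j] = 4.0 * term.real
    return F

def qfim_rank(F, tol=1e-6):
    """Numerical rank of the QFIM from its eigenvalues."""
    return int(np.sum(np.linalg.eigvalsh(F) > tol))
```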
By a property of the orthogonal group, we know that the unit sphere is a homogeneous space under the action of the Lie group describing our encoding method. As a result, we can state the following lemma:
**Lemma 1**.: _Let us consider the subspace of HW \(k\) of our \(n\)-qubits encoding method. If the rank of the QFIM in the subspace is maximal on one point:_
\[\exists\Theta\in[0,2\pi]^{D}\;|\;\operatorname{rank}[QFIM(\Theta)]=d_{k}-1 \tag{13}\]
_Then:_
\[\forall\Theta\in[0,2\pi]^{D},\quad\text{rank}[QFIM(\Theta)]=d_{k}-1 \tag{14}\]
Using the QFIM on a given subspace of HW \(k\), we propose a first algorithm to design a quantum data loader in this subspace from an initial state created using bit-flips, and from the possible generators \(\mathcal{G}\) given by the qubit connectivity and the RBS gate Hamiltonian. When the rank of the QFIM of a quantum data loader circuit is equal to \(d_{k}-1\), the state space is \(S^{d_{k}-1}\). It can therefore achieve any superposition of states from \(B_{k}^{n}\), i.e., achieve amplitude encoding on the subspace of HW \(k\). Algorithm 1 consists in adding RBS gates and measuring the new rank of the QFIM until it reaches its maximal value, which ensures the data loading capability.
```
Require: \(\mathcal{G}\) the generators, \(|e_{s}\rangle\) the initial state
1: circuit = \(\emptyset\)
2: while \(\max_{\Theta}\text{rank}[QFIM(\text{circuit},\Theta)]<d_{k}-1\) do
3:   for \(RBS\in\mathcal{G}\) do
4:     circuit' = circuit + \(RBS\)
5:     if \(\max_{\Theta^{\prime}}\text{rank}[QFIM(\text{circuit'},\Theta^{\prime})]>\max_{\Theta}\text{rank}[QFIM(\text{circuit},\Theta)]\) then
6:       circuit = circuit'
7: return circuit
```
**Algorithm 1** to design a HW preserving quantum data loader
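A minimal Python sketch of Algorithm 1 could look as follows; the helper `max_qfim_rank(circuit)`, returning \(\max_{\Theta}\text{rank}[QFIM(\text{circuit},\Theta)]\) (in practice evaluated at a single random \(\Theta\), as justified by Eq. (12) and Lemma 1), is an assumed interface for this example.

```
import math

def build_data_loader(generators, n, k, max_qfim_rank):
    """Greedy construction of a HW-k quantum data loader (Algorithm 1).
    `generators`: candidate RBS placements allowed by the connectivity."""
    d_k = math.comb(n, k)
    circuit, rank = [], 0
    while rank < d_k - 1:
        improved = False
        for rbs in generators:
            r = max_qfim_rank(circuit + [rbs])
            if r > rank:  # keep the gate if it increases the QFIM rank
                circuit, rank = circuit + [rbs], r
                improved = True
        if not improved:  # the connectivity cannot reach full rank
            raise RuntimeError("no candidate gate increases the QFIM rank")
    return circuit
```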
Figure 5: Representation of the unitary and output state spaces. The DLA is the tangent space of the unitary space. The possible directions for the evolution of the output state are given by the Quantum Fisher Information Matrix eigenvectors.
Figure 6: Evolution of the rank of the QFIM for a periodic structure ansatz presented in (1). Each block is represented in (2), and the evolution of the rank of the corresponding QFIM is given by (3). The derivation of the QFIM rank is done in the largest subspace (\(k=n/2=3\)).
Using the conjecture given by Eq. (12), we need only consider one point of the state space to derive the maximum rank of the QFIM. To avoid relying on this conjecture, we propose Algorithm 2, which uses overparametrization, a concept introduced in [11]:
**Definition 7** (Overparametrization).: _A VQC is overparametrized if the number of parameters \(D\) is such that the QFIMs, for all the states in the training set, simultaneously saturate their ranks \(R_{s}\):_
\[\max_{D\geq D_{c},\Theta}\operatorname{rank}[QFIM_{s}(\Theta)]=R_{s} \tag{15}\]
The authors showed that for a general type of periodic-structured VQCs, we have:
\[D_{c}\sim\dim(DLA) \tag{16}\]
A quantum circuit that saturates the rank of the QFIM can easily be obtained using overparametrization according to Eq. (16). Using Lemma 1, we know that we only need to derive the rank of the QFIM at a single point to find its maximal value.
```
Require: circuit, an overparametrized circuit
1: flag = True
2: while flag do
3:   flag = False
4:   for \(RBS\in\) circuit do
5:     circuit' = circuit - \(RBS\)
6:     if \(\max_{\Theta^{\prime}}\operatorname{rank}[QFIM(\text{circuit'},\Theta^{\prime})]=\max_{\Theta}\operatorname{rank}[QFIM(\text{circuit},\Theta)]\) then
7:       circuit, flag = circuit', True
8:       break
9: return circuit
```
**Algorithm 2** to design a HW preserving quantum data loader
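Under the same assumed `max_qfim_rank` interface, a minimal Python sketch of the pruning procedure of Algorithm 2 is:

```
def prune_data_loader(circuit, max_qfim_rank):
    """Pruning of an overparametrized circuit (Algorithm 2): remove gates
    one at a time as long as the maximal QFIM rank is preserved."""
    target = max_qfim_rank(circuit)
    flag = True
    while flag:
        flag = False
        for i in range(len(circuit)):
            candidate = circuit[:i] + circuit[i + 1:]
            if max_qfim_rank(candidate) == target:
                circuit, flag = candidate, True
                break  # restart the scan on the pruned circuit
    return circuit
```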
The reason we can naively increase the rank of the QFIM up to its maximal value in Algorithm 1 comes from the results of [11] on the theory of overparametrization, recalled in Definition 7. In practice, Algorithm 2 is initialized with a quantum circuit made of a large number of gates, chosen according to the dimension of its DLA as explained in Eq. (16). Those gates can be chosen randomly or in such a way as to reduce the circuit depth.
In practice, and with both algorithms, one must pay particular attention to the order in which the generators are tested, as it affects the resulting circuit depth.
## III Trainability of HW preserving quantum circuits
In the previous Section, we discussed how to prove the existence of a quantum data loader using RBS gates and how to design such an encoding circuit for a specific subspace. In this Section, we present a study of the trainability of HW preserving quantum circuits, which can be used for our encoding but also for other types of trainable layers, such as the one represented in Fig. 1. It is known that some QML proposals suffer from unfavorable optimization landscape properties [10] that strongly limit their trainability. In this Section, we give strong results on the gradient of the cost function for VQCs made of RBS or FBS gates. First, we present in Section III.1 the backpropagation formalism applied to RBS- and FBS-based VQCs. Then we present the resulting theorems on the variance and expectation value of the cost function gradient in Section III.2.
### Backpropagation for gradient calculus
We consider a HW preserving quantum circuit made only of RBS gates, decomposed as a sequence of \(\lambda_{max}\) gates. At each time step \(\lambda\), we call the quantum states in the circuit the inner layers \(\zeta^{\lambda}\), and define the inner errors \(\delta^{\lambda}=\partial\mathcal{C}/\partial\zeta^{\lambda}\). We call \(w^{\lambda}\) the unitary of each RBS in the considered basis \(B_{k}^{n}\). The cost function is \(\mathcal{C}(\Theta)=||z-y||_{2}^{2}\) with \(z\) the output of the quantum circuit, and \(y\) the desired output.
The equivalent weight matrix of our VQC is \(W^{k}=w^{\lambda_{max}}\ldots w^{1}w^{0}\). To train our circuit, we want to update each RBS parameter \(\theta_{i}\) with respect to the gradient of the cost function \(\mathcal{C}\). We derive the gradient by decomposing it using each component, indexed by the integer \(p\), of the inner layer and inner error vectors:
\[\frac{\partial\mathcal{C}}{\partial\theta_{i}}=\sum_{p}\frac{\partial\mathcal{ C}}{\partial\zeta_{p}^{\lambda+1}}\frac{\partial\zeta_{p}^{\lambda+1}}{ \partial\theta_{i}}=\sum_{p}\delta_{p}^{\lambda+1}\frac{\partial(w_{p}^{ \lambda}\cdot\zeta^{\lambda})}{\partial\theta_{i}} \tag{17}\]
Each parameter \(\theta_{i}\) corresponds to applying a \(\theta_{i}\)-planar rotation between two qubits. We call \((l,j)\) the tuples of states affected by the rotation. Using RBS gates, this yields:
\[\begin{split}\frac{\partial\mathcal{C}}{\partial\theta_{i}}=\sum_{(l,j)}&\delta_{l}^{\lambda+1}(-\sin(\theta_{i})\zeta_{l}^{\lambda}+\cos(\theta_{i})\zeta_{j}^{\lambda})+\\ &\delta_{j}^{\lambda+1}(-\cos(\theta_{i})\zeta_{l}^{\lambda}-\sin(\theta_{i})\zeta_{j}^{\lambda})\end{split} \tag{18}\]
Figure 7: Decomposition of the HW preserving quantum circuit for the backpropagation method.
One can decompose in the same way an FBS-based quantum circuit in a specific subspace of a given HW:
\[\begin{split}\frac{\partial\mathcal{C}}{\partial\theta_{i}}=\sum_{(l,j)}\delta_{l}^{\lambda+1}(-\sin(\theta_{i})\zeta_{l}^{\lambda}+(-1)^{f(a,b,\zeta_{j}^{\lambda})}\cos(\theta_{i})\zeta_{j}^{\lambda})+\\ \delta_{j}^{\lambda+1}((-1)^{f(a,b,\zeta_{l}^{\lambda})+1}\cos(\theta_{i})\zeta_{l}^{\lambda}-\sin(\theta_{i})\zeta_{j}^{\lambda})\end{split} \tag{19}\]
with \(f(a,b,\zeta_{l}^{\lambda})=\sum_{a<p<b}s_{p}\), where \(s\in\{0,1\}^{n}\) is the binary word corresponding to the state of index \(l\): \(|\zeta_{l}\rangle=|s_{1}\cdots s_{n}\rangle\) (\(a\) and \(b\) are the qubits affected by the FBS).
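For illustration, Eq. (18) can be implemented directly; the following NumPy sketch computes the gradient contribution of one RBS parameter, assuming the list of coupled basis-state pairs \((l,j)\) and the inner layer and inner error vectors are available (the FBS case of Eq. (19) only adds the \((-1)^{f}\) sign factors).

```
import numpy as np

def rbs_param_grad(theta_i, pairs, zeta, delta_next):
    """Gradient of the cost w.r.t. one RBS parameter, following Eq. (18).
    `pairs`: (l, j) index tuples of the basis states coupled by the gate,
    `zeta`: inner layer before the gate (real d_k-vector),
    `delta_next`: inner error back-propagated to the gate output."""
    s, c = np.sin(theta_i), np.cos(theta_i)
    grad = 0.0
    for l, j in pairs:
        grad += delta_next[l] * (-s * zeta[l] + c * zeta[j]) \
              + delta_next[j] * (-c * zeta[l] - s * zeta[j])
    return grad
```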
### Avoiding Barren Plateaus
We can use the analytic backpropagation expression of the cost function gradient to study the phenomenon of Barren Plateaus, where certain circumstances lead to exponentially vanishing gradients.
**Definition 8** (Barren Plateau).: _The cost function \(\mathcal{C}(\Theta)\) landscape of a \(n\)-qubit VQC is said to exhibit a Barren Plateau (BP) if:_
\[\forall\theta_{i}\in\Theta,\quad\mathbb{E}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]=0\quad\text{and}\quad\mathrm{Var}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]=O\Big{(}\frac{1}{b^{n}}\Big{)} \tag{20}\]
_with \(b>1\)_
It is possible to determine the existence of BP under the hypothesis of an approximate 2-design. This allows one to derive the value of \(\mathrm{Var}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]\) using the integration formulas from [27], and results in a variance inversely proportional to the size of the Hilbert space. In recent work [28], the authors have shown, under this hypothesis and the hypothesis of full controllability of the subspace (meaning that the dimension of the DLA is maximal), that for a subspace invariant quantum circuit the variance is inversely proportional to the dimension of the subspace. As a result, one could avoid BP using a subspace invariant quantum circuit with a subspace of small dimension.
In the following, we show that one can avoid BP for subspace invariant quantum circuits based on RBS or FBS gates by choosing any subspace of HW \(k\) with a fixed \(k\), without the 2-design hypothesis and for any degree of controllability.
**Theorem 1** (Evolution of the variance for RBS and FBS based quantum circuits).: _Let us consider a n-qubit HW preserving VQC made of RBS or FBS gates. We consider here the subspace of HW \(k\), i.e., corresponding to the basis \(B_{k}^{n}\). If the initial state and the desired output are Haar distributed on the basis \(B_{k}^{n}\), we have that:_
\[\mathbb{E}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]=0,\quad\mathrm{Var}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]\approx\frac{k(n-k)}{n(n-1)}\frac{8}{d_{k}} \tag{21}\]
_for any \(\theta_{i}\in\{\theta_{1},\ldots,\theta_{D}\}\), and with \(d_{k}=\binom{n}{k}\)._
The complete proof of this Theorem is presented in Appendix D. We use the hypothesis that the input and the desired output of the RBS quantum circuit are Haar distributed on the considered state basis. This hypothesis might not be verified in all cases. In the encoding use case, we may consider a Haar distributed desired output but a single input state. In the case of a trainable layer made of RBS or FBS gates (see Appendix D), we may consider a Haar distributed input, but the desired output may be very concentrated. We illustrate this result in Fig. 9 in the additional simulations of Section IV.2.
**Theorem 2** (Evolution of the variance for RBS and FBS based quantum data loaders).: _Let us consider a n-qubit HW preserving VQC made of RBS or FBS gates. We consider here the subspace of HW \(k\), i.e., corresponding to the basis \(B_{k}^{n}\). If the initial state is a single state \(|e_{s}\rangle\in B_{k}^{n}\) and the desired output is Haar distributed on the basis \(B_{k}^{n}\), we have that:_
\[\forall i\geq\lambda_{0},\quad\mathbb{E}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]=0,\quad\mathrm{Var}_{\Theta}[\partial_{\theta_{i}}\mathcal{C}(\Theta)]\approx\frac{k(n-k)}{n(n-1)}\frac{8}{d_{k}} \tag{22}\]
_where \(\lambda_{0}\) is such that from the gate with variational parameter \(\theta_{\lambda_{0}}\), every state in \(B_{k}^{n}\) can be reached. We recall that \(d_{k}=\binom{n}{k}\)._
The complete proof of this Theorem is presented in Appendix E. After a certain number of gates \(\lambda_{0}\), the expectation value is uniformly spread over the basis states, which is very similar to the situation of Theorem 1. The same Theorem could be stated for the case where the desired output is restricted to a small number of states and the input is Haar distributed on the basis \(B_{k}^{n}\), corresponding to the use of a VQC in a subspace of HW \(k\) as a neural network. We illustrate this result in Fig. 10 in the additional simulations of Section IV.2.
We can conclude from these results that there are no Barren Plateaus for subspace invariant RBS- and FBS-based quantum circuits for a fixed choice of subspace \(k\), according to the scaling of \(d_{k}\), the dimension of the subspace of HW \(k\).
### Complexity and simulation of subspace preserving RBS circuit
Owing to their better controllability properties, we focus on RBS gates. In the subspace of HW \(k\), the effect of an RBS gate is a rotation between \(\binom{n-2}{k-1}\) couples of states in the basis \(B_{k}^{n}\). Therefore, an RBS gate is easy to simulate classically in the unary basis but requires an exponential number of operations when \(k\) is close to \(n/2\), as \(d_{n/2}\) grows exponentially with the number of qubits \(n\). Nevertheless, using a larger subspace may be a poor choice in terms of circuit depth. For example, achieving the full controllability of a larger subspace requires significantly more gates for a limited number of qubits.
This results in a rise in circuit depth. In addition, using a larger subspace leads to the appearance of BP, as shown in Section III.
However, subspace preserving RBS quantum circuits are well suited to providing a speedup for applications that require processing a large amount of information with a limited number of degrees of freedom (and thus of RBS gates). In Section IV.4, we give an example of such an application for a fully connected orthogonal neural network that is not fully controllable. Problem-inspired architectures using symmetries are perfect candidates for this type of application.
## IV Additional simulations
In this section, we present additional simulations. We illustrate the use of Algorithm 1 for existing hardware and report the corresponding simulations in Section IV.1. In Section IV.2, we present additional simulations illustrating the results of Section III.
### Using our encoding method for an existing hardware
In this part, we use Algorithm 1 to design a 5-qubit quantum data loader that fits the Rigetti ASPEN M2 connectivity given in Fig. 8. This hardware is well suited to implementing such a data loader thanks to its high connectivity and to the fact that the RBS gate is native to this platform (where it is called the XY gate).
Using Algorithm 1, we can design a quantum data loader for this connectivity. To minimize the depth, one can use in practice a variation of this algorithm that tests first the gates that are most likely to be parallelized. The resulting circuit is given in Fig. 8.
In order to highlight the performance of our quantum data loader, we simulate our method using Python NumPy and also using Qiskit, a quantum software framework developed as a Python library by IBM. We test our data loader on the well-known Fashion MNIST dataset.
To achieve this simulation, we apply a Principal Component Analysis (PCA) to reduce our dataset to \(\binom{n}{k}\)-dimensional vectors, with \(n=5\) the number of qubits and \(k=2\) the chosen HW. We used 1000 samples to derive those values.
The approximation error is measured using the following cost function between the target state \(x^{*}\) and the output of our quantum system given by \(x=W^{k}(\Theta)\cdot e_{s}\) (with \(e_{s}\) the vector representation of the initial state in our method):
\[\mathcal{C}(x)=||W^{k}(\Theta)\cdot e_{s}-x^{*}||_{2}^{2} \tag{23}\]
For an actual implementation on a Quantum Processing Unit, one can generalize the tomography procedure described in [6].
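As an illustration of this evaluation pipeline, a minimal sketch (assuming the image dataset is already loaded in an array `images`, and that `W_k` is the effective orthogonal matrix of the circuit in \(B_{k}^{n}\)) could be:

```
import numpy as np
from math import comb
from sklearn.decomposition import PCA

n, k = 5, 2                                  # 5 qubits, HW 2 -> C(5,2) = 10
pca = PCA(n_components=comb(n, k))
x_red = pca.fit_transform(images.reshape(len(images), -1))  # `images`: dataset (assumed)
x_star = x_red / np.linalg.norm(x_red, axis=1, keepdims=True)  # unit-norm amplitudes

def loader_error(W_k, e_s, target):
    """Approximation error of Eq. (23): ||W^k(Theta) . e_s - x*||_2^2."""
    return float(np.sum((W_k @ e_s - target) ** 2))
```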
### Trainability of Hamming weight preserving quantum circuits
In this part, we illustrate the results obtained in Section III on the trainability of RBS- and FBS-based quantum circuits in a specific subspace corresponding to a choice of Hamming weight \(k\).
In Fig. 9, we plot the average value and the variance of the gradient for quantum circuits made only of RBS
| Simulation tool used | Average Error | Error Variance |
| --- | --- | --- |
| NumPy | 0.0097 | 0.00030 |
| Qiskit | 0.0094 | 0.00028 |

Table 2: Performance of the quantum data loader presented in Fig. 8.
| Algorithm | Feedforward | Training |
| --- | --- | --- |
| Quantum RBS VQCs | \(\mathcal{O}(D/(p\cdot\delta^{2}))\) | \(\mathcal{O}((D/p)^{2})\) |
| Classical RBS VQCs | \(\mathcal{O}(D\binom{n-2}{k-1})\) | \(\mathcal{O}((D\binom{n-2}{k-1})^{2})\) |

Table 1: Running time summary. \(n\) is the number of qubits, \(k\) the chosen Hamming weight corresponding to the selected subspace, and \(\delta\) the error parameter in the quantum implementation. We call \(p\) the average number of gates we can parallelize at the same time (upper-bounded by \(\lfloor\frac{n}{2}\rfloor\)).
Figure 8: Rigetti ASPEN M2 connectivity graph and the Quantum Data loader quantum circuit obtained using our Algorithm 1.
and FBS gates with a random choice of parameters, input, and target output. This setting corresponds to the case of Theorem 1. We observe that the average gradient is close to zero and that its variance matches the theoretical values. In Theorem 1, the result is an average over Haar distributed inputs and target outputs. In our simulation, we study the cost function while choosing the inputs and target outputs at random. To do so, we considered those states as \(\binom{n}{k}\)-dimensional vectors with coefficients uniformly distributed in \([-1,1]\), and normalized them afterwards. As a result, the distribution of the input and target states is not exactly Haar distributed.
In the previous figure, one can notice that the variance of the gradient is unchanged with the number of periodic ansätze \(L\). As the number of periodic ansätze increases, the number of gates increases, but so does the controllability in the state space. Indeed, Fig. 6 shows the evolution of the controllability in the state space through the evolution of the QFIM rank with \(L\) (the maximal controllability is achieved with \(L=4\)). Therefore, Fig. 9 highlights an independence between the controllability in the state space and the trainability.
In Fig. 10, we plot the average value and the variance of the gradient for quantum circuits made only of RBS and FBS gates with a random choice of parameters and target output. We fix the input to be the initial state where the first \(k\) qubits are initialized in state \(\ket{1}\). This setting corresponds to the case of Theorem 2, very similar to the case of our encoding described in Section II. This is the reason why the variance of the gradient is null for some of the first gates, which do not affect the initial state.
The theoretical variance values in this Theorem are stated for the parameters of the gates beyond a certain index \(\lambda_{0}\). Depending on the subspace choice \(k\), the index \(\lambda_{0}\) represents the number of gates necessary to reach any state in the state basis \(B_{k}^{n}\). Therefore, we can see in Fig. 10 that the first points for any subspace \(k\) are quite far from the theoretical values, but as the gate index increases, the points get closer to our theoretical results.
### HW preserving VQC for Neural Networks
In our work, we introduce an encoding method using subspace invariant quantum circuits with a focus on HW preserving systems and the RBS and FBS gates. In this particular setting, the ability to classically simulate such a circuit is not an issue, as one may combine our encoding method with an additional circuit that is hard to simulate classically. Moreover, the results given in Section III highlight the interest of using such circuits as neural networks. In addition, the use of Hamming weight preserving quantum circuits for machine learning purposes has already been proposed in [5; 6; 7; 22], and we could generalize those methods to larger subspaces with better speedups.
In this Section, we present how to use our results for Neural Networks. First, we recalled in Section III.3 the complexity of an RBS-based quantum circuit. Then we present
Figure 10: Gradient study for a quantum circuit made of 5 consecutive block circuits as described in Fig. 6. The plots are numerical evidence of Theorem 2, as the expectation values of the gradient are close to 0 and the variances follow the theoretical values given by the dotted lines. The initial state is fixed as a single state in \(B_{k}^{n}\) for any subspace choice \(k\). We use 10000 random target outputs and sets of parameters.
Figure 9: Gradient study for the same Periodic Structure Ansatz as in Fig. 6. Using 100 random choices of input, desired output, and parameters, we plot the average value of the gradient for each number of periodic ansätze \(L\) in (1). In (2), we plot the average gradient in the case where the RBS gates are replaced by FBS gates. In (3), we derive the variance of the gradient for the RBS case, and in (4), the variance for the FBS case. The dotted lines correspond to the theoretical values from Theorem 1.
in Section IV.4 the use of an RBS-based VQC for image classification on a toy example.
### RBS based Quantum Orthogonal Neural Network
In this section, we illustrate the fact that our results can be used for Quantum Neural Network (QNN) applications. In particular, one can consider an RBS circuit in a particular HW basis \(B_{k}^{n}\) in order to design an orthogonal neural network, as the equivalent unitary \(W^{k}(\Theta)\) in the corresponding Hilbert space is orthogonal according to Theorem 3. In Fig. 11, we plot the training and testing accuracy for binary classification on the Fashion MNIST dataset, where we use the quantum circuit described in Fig. 8 as a QNN.
We do not claim that this plot exhibits any advantage of using RBS-based quantum circuits as neural networks, but it illustrates that such an architecture can easily be used. The fact that the testing accuracy is higher than the training one, as well as larger simulations of more complex HW preserving quantum neural networks for large values of \(k\), must be tackled in future work.
According to the complexity of such RBS models described in Table 1, there is no exponential advantage in using a quantum orthogonal neural network. In [6], the authors present a specific case of such an RBS model in the unary basis and show a quadratic advantage for a fully controllable QNN in the unary basis called the Pyramidal Quantum Neural Network (PQNN). For specific use cases that require less controllability but higher-dimensional input data, one may prefer to use an RBS-based orthogonal neural network in a larger Hamming weight basis and achieve a more significant speedup.
## V Discussion
In this work, we study the controllability and the trainability of subspace-preserving variational quantum circuits through the prism of designing a quantum data loader using RBS gates. We show that the encoding capacity is linked to the controllability of the quantum system and that, when using only a certain type of gate, the controllability is linked to the connectivity of the hardware. In addition, we show how to use mathematical tools such as the Dynamical Lie Algebra or the Quantum Fisher Information Matrix in order to design a data loader. Finally, we show how to avoid Barren Plateaus for variational quantum circuits made only of RBS or FBS gates.
The results we show for the trainability of Hamming weight preserving quantum circuits must be compared with our results on the controllability. Indeed, it is now clear that there is a trade-off between the expressivity of a variational quantum circuit and its trainability. In recent work [11], the authors have shown the connection between the dimension of the Dynamical Lie Algebra (DLA) and the gradient variance of subspace preserving variational quantum circuits in the specific setting of full subspace controllability and with a 2-design hypothesis on the ansatz used. They show that the gradient variance evolves inversely with the dimension of the DLA, and they left open Conjecture 1. More recently, [17] and [18] have shown that this evolution holds in a more general setting and hence proved Conjecture 1 to be true, under some assumptions on the initial state and the observable. In [18], the authors also note that RBS- and FBS-based quantum circuits are not part of their framework, and show a result consistent with ours by giving an upper bound on the abstract variance of such circuits. Our results are independent of those papers, and show theoretical guarantees for RBS- and FBS-based VQCs in a general setting.
**Conjecture 1** (from [11]).: _Let the state \(\rho\) belong to a subspace \(\mathcal{H}_{k}\) associated with a subspace DLA \(\mathfrak{g}_{k}\) (or sub-DLA, the subrepresentation in \(\mathfrak{g}\) where \(\rho\) has support on). Then, the scaling of the cost function partial derivative is inversely proportional to the scaling of the dimension of the DLA, i.e._
\[\mathrm{Var}_{\Theta}[\partial_{\mu}\mathcal{C}(\Theta)]\in\mathcal{O}\left(\frac{1}{poly(\dim(\mathfrak{g}_{k}))}\right) \tag{24}\]
In our work, we show that for the specific cases of RBS/FBS circuits and preserved subspaces based on Hamming weight, the cost gradient variance evolves inversely with the dimension \(d_{k}\) of the subspace, and not as stated in Conjecture 1. In general, it is very hard to compute the expectation value and the variance of the gradient of a subspace preserving variational circuit. The 2-design hypothesis gives tools to study the gradient [27], but it is a strong hypothesis on the expressivity of the quantum system that often leads to
Figure 11: Binary classification on the Fashion MNIST dataset with 10000 training samples and 5000 testing samples. The optimization method is ADAM with a batch size of 5, and we used the Cross Entropy cost.
considering that the system is fully controllable. In the specific case of RBS and FBS gates, the expressions of those gates are easy to manipulate analytically, and we are able to avoid the use of this hypothesis.
Although our work gives theoretical guarantees on the trainability of a specific type of VQC, we show that those methods could be useful for near-term QML. In addition, we would like to insist on the fact that one cannot state in general that the trainability of a VQC is perfectly determined by its controllability through the dimension of the corresponding DLA. First, because the dimension of the DLA is only an upper bound on the controllability, but also because we show an example of a VQC for which the previous conjecture from [11] is refuted. In our setting, only the dimension of the subspace is involved in the scaling of the cost gradient variance. Therefore, one needs to be careful to consider the smallest subspace possible. For example, if one considers an RBS-based quantum circuit where some states in \(B_{k}^{n}\) cannot be reached, one may need to consider a smaller, more appropriate subspace.
## VI Acknowledgment
This work is also supported by the H2020-FETOPEN Grant PHOQUSING (GA no.: 899544), the Engineering and Physical Sciences Research Council (grants EP/T001062/1), and the Naval Group Centre of Excellence for Information Human factors and Signature Management (CEMIS). ABG is supported by ANR JCJC TCS-NISQ ANR-22-CE47-0004, and by the PEPR integrated project EPiQ ANR-22-PETQ-0007 part of Plan France 2030. This work is part of HQI initiative (www.hqi.fr) and is supported by France 2030 under the French National Research Agency award number ANR-22-PNCQ-0002.
|
2302.14503 | Can We Use Diffusion Probabilistic Models for 3D Motion Prediction? | After many researchers observed fruitfulness from the recent diffusion
probabilistic model, its effectiveness in image generation is actively studied
these days. In this paper, our objective is to evaluate the potential of
diffusion probabilistic models for 3D human motion-related tasks. To this end,
this paper presents a study of employing diffusion probabilistic models to
predict future 3D human motion(s) from the previously observed motion. Based on
the Human 3.6M and HumanEva-I datasets, our results show that diffusion
probabilistic models are competitive for both single (deterministic) and
multiple (stochastic) 3D motion prediction tasks, after finishing a single
training process. In addition, we find out that diffusion probabilistic models
can offer an attractive compromise, since they can strike the right balance
between the likelihood and diversity of the predicted future motions. Our code
is publicly available on the project website:
https://sites.google.com/view/diffusion-motion-prediction. | Hyemin Ahn, Esteve Valls Mascaro, Dongheui Lee | 2023-02-28T11:34:55Z | http://arxiv.org/abs/2302.14503v1 | # Can We Use Diffusion Probabilistic Models for 3D Motion Prediction?
###### Abstract
After many researchers observed fruitfulness from the recent diffusion probabilistic model, its effectiveness in image generation is actively studied these days. In this paper, our objective is to evaluate the potential of diffusion probabilistic models for 3D human motion-related tasks. To this end, this paper presents a study of employing diffusion probabilistic models to predict future 3D human motion(s) from the previously observed motion. Based on the Human 3.6M and HumanEva-I datasets, our results show that diffusion probabilistic models are competitive for both single (deterministic) and multiple (stochastic) 3D motion prediction tasks, after finishing a single training process. In addition, we find out that diffusion probabilistic models can offer an attractive compromise, since they can strike the right balance between the likelihood and diversity of the predicted future motions. Our code is publicly available on the project website: [https://sites.google.com/view/diffusion-motion-prediction](https://sites.google.com/view/diffusion-motion-prediction).
## I Introduction
Estimating how a human would move in the near future is an essential task for various applications such as surveillance [1, 2], autonomous driving [3, 4], and human-robot/computer-interaction [5]. Many approaches have been proposed to solve this problem, often based on motion capture datasets such as Human3.6M [6] or SMPL [7]-based datasets such as AMASS [8]. In this paper, we consider the task of predicting a sequence of 3D pose skeletons in the Human3.6M and HumanEva-I [9] datasets, when a previously observed 3D pose sequence is given as an input.
Existing works on 3D skeleton motion prediction can be categorized as follows. One line of research focuses on models for deterministic motion prediction [10, 11, 12, 13, 14, 15]. These works aim at predicting a single motion that is most likely to be observed in the future. Therefore, their performance is usually evaluated based on an \(L2\)-distance between a prediction and a ground truth. Another line of research focuses on generative models for stochastic motion prediction [16, 17, 18, 19]. Their performance is evaluated based on the metrics for likelihood and diversity. After generating a fixed number of prediction samples from a single observation, the likelihood is measured based on the minimum distance between the prediction samples and ground truth, and the diversity is measured based on the average distance between all pairs of prediction samples.
However, we cannot judge that one approach is always better than the other, since their usefulness depends on the target application. For instance, when one needs only the most precise sample with low latency, deterministic approaches would be better. If both approaches are necessary, our next question would be whether we can propose a model that is efficient for both types of prediction. To answer this question, we study the possibility of using diffusion probabilistic models [20, 21] for both deterministic and stochastic 3D motion prediction tasks.
If we propose a diffusion probabilistic model [20, 21] as a solution, one might ask us whether this is because we are fascinated by its performance in image generation [22, 23]. Frankly speaking, yes, we initiated this study out of our curiosity - can we use diffusion probabilistic models for 3D motion prediction? Unfortunately, our experimental results show that the diffusion model cannot perfectly replace existing state-of-the-arts for both deterministic and stochastic motion prediction tasks. However, we found a glimpse of hope in diffusion models, due to their effectiveness in both prediction types after a single training procedure, and their ability to properly balance the trade-off between diversity and likelihood.
Fig. 1: Example results when diffusion probabilistic models are used for 3D human motion prediction tasks, when the observed motion is 'walking'. After a single training procedure, diffusion models can be effectively used for both deterministic (Deter.) and stochastic (Sto.) motion prediction tasks.
Figure 1 shows the example results when the diffusion models are used for both deterministic and generative motion prediction tasks. Although a diffusion model is essentially a generative model, we found that the deterministic sample with a fair performance can be obtained from the diffusion model when all randomness is excluded from its denoising process. In addition, we found out that the diffusion models can fix the flaws of several generative methods [18], which highlight the diversity of generated samples. Existing works as [18] claim that the likelihood of predicted samples is high when the minimum distance between samples and ground truth is low. Because of this, [18] can often generate the motions that are out-of-context as [24] pointed out. Compared to this, our diffusion models can generate prediction samples that are more likely to occur, so the generated motion does not diverge too much to be called out-of-context.
The remainder of this paper is organized as follows. After presenting our literature survey in Section II, Section III explains how general diffusion models work as well as how we design ours to solve 3D motion prediction tasks. Section IV presents both qualitative and quantitative experimental results, along with a related discussion. Finally, this paper ends in Section V by mentioning limitations and future work.
## II Related Work
### _3D Motion Prediction_
**Deterministic Models.** The goal of deterministic 3D motion prediction is to minimize the distance between a predicted motion and ground truth. To solve this problem, early works relying on deep neural networks [10, 11, 12] often employed recurrent neural networks (RNNs) [25, 26], which are still well-known for their effectiveness in processing time-series data. Among RNN-based works, a notable model is a structure RNN (S-RNN) [12], which considers the spatio-temporal information of human motion, by manually designing the high-level spatio-temporal graph to explicitly model the human body structure (i.e., spine, arm, and leg).
While S-RNN understands the human body structure based on a handcrafted network structure, there is another line of research [13, 27] that uses graph convolutional networks (GCNs) to avoid this manually designed understanding of spatial relationships. For instance, [13] suggested a model named DCT-GCN, where a discrete cosine transform (DCT) captures the temporal information of motion and a GCN learns the spatial relationship between human body joints. DCT-GCN obtains state-of-the-art results when evaluated on Euler-angle-based mean squared error, but its best result is obtained when the model is trained separately for each short- or long-term prediction.
Recently, several works for deterministic motion prediction [14, 15] have been based on the Transformer [28], which was originally suggested for language understanding problems. The Spatio-Temporal Transformer (ST-TR) [14] and the 2-Channel Transformer (2CH-TR) [15] understand the spatio-temporal relationship of human motion by applying self-attention to the pose-parameter (spatial) and time (temporal) dimensions. After understanding the spatial and temporal information in parallel, outputs from both attention mechanisms are properly combined. The difference between ST-TR and 2CH-TR comes from when and how often the model combines spatial and temporal information.
**Generative Models.** The goal of stochastic 3D motion prediction is to build a generative model which can sample out several future motions that are likely to happen after the observed human motion. To solve this problem, early works [16, 17] employed deep generative models such as variational autoencoders (VAEs) [29] or generative adversarial networks (GANs) [30]. For instance, [17] suggested a generative model based on the conditional VAEs, and showed that VAEs can sample out several future motions that are reasonable as well as diverse. Compared to VAE, [16] showed that GANs based on the Wasserstein loss function can be effectively used in stochastic motion prediction tasks.
While these works [16, 17] focused on exploring the potential of deep generative models in stochastic motion prediction tasks, another line of work [18, 19] focused on sampling motions that are as diverse as possible while containing the most plausible motion at the same time. For instance, [18] proposed to train a post-hoc model which can be attached to a pre-trained deep generative model. This post-hoc model maps a random variable to several latent vectors of the pre-trained generative model. Based on a _diversity-promoting prior_, the post-hoc model is trained to improve the diversity between samples, which can be obtained by decoding the mapped latent vectors.
Experiments in [18, 19] evaluate the likelihood of prediction samples based on the _minimum_ distance between the samples and the ground truth(s). They denote the prediction samples as plausible based on the sample that is closest to the ground truth(s). However, this can make it difficult for users to choose the most plausible motion among the prediction samples, since not all samples will be distributed near the most plausible motion. For instance, if the observed motion is a human sitting down and drinking something, [18] and [19] can produce motion samples that predict the human suddenly standing up and starting to discuss something with others. As [24] has pointed out, we would also like to focus on the necessity of contextually plausible and diverse motion sampling. Therefore, our paper also evaluates the likelihood of predictions based on the mean and standard deviation of distances between the samples and the ground truth.
### _Diffusion Probabilistic Models_
Diffusion probabilistic models [20] have become a new rising star in generative models after showing excellent performance in image synthesis. In particular, their performance on text-conditioned image synthesis [22] left researchers as well as the public in awe. Diffusion models consider two processes: a forward process that slowly destructs the data sample by gradually injecting random noise, and a reverse process that learns how to reconstruct the data
sample by gradually denoising the random noise. While the advantage of diffusion models can be shown empirically through their performance, their disadvantage is the speed of the sampling process. If the reverse process includes \(1000\) denoising steps, the data sample can be obtained only after feed-forwarding the random noise through the denoising network \(1000\) times. Of course, this disadvantage can be circumvented if the application does not require prediction samples with low latency.
Aside from image generation tasks, researchers are nowadays suggesting the use of diffusion models in various generation tasks, such as text-to-speech [31], text-to-sound [32], and video [33]. Focusing on motion-related tasks like ours, several works incorporate diffusion models in text-conditioned motion generation tasks [34, 35]. For the motion of intelligent agents, [36] suggests using diffusion models to sample trajectories for properly solving a given task. In our paper, we use diffusion models in 3D human motion prediction tasks; to the best of our knowledge, there is no attempt yet to use diffusion models for the 3D motion prediction task. But we believe more researchers will get involved in using diffusion models to answer this question - can diffusion models be our new savior in any kind of data generation task?
## III Method
### _Preliminaries_
We will provide a short description of diffusion probabilistic models first. Note that our description relies on [20] and [21], which provide a basis for our work.
**Diffusion Probabilistic Model.** Let \(\mathbf{x}^{0}\sim q(\mathbf{x}^{0})\) denote a data point sampled from its distribution \(q\). In order to learn \(p_{\theta}(\mathbf{x}^{0})\) which can model \(q(\mathbf{x}^{0})\), diffusion probabilistic models consider two processes. One is a _forward process_ which gradually deconstructs \(\mathbf{x}^{0}\) by injecting a subtle Gaussian noise for \(K\) times, such that \(\mathbf{x}^{0}\) can be destroyed into \(\mathbf{x}^{1},\ldots,\mathbf{x}^{K}\), where \(p(\mathbf{x}^{K})=\mathcal{N}(\mathbf{0},\mathbf{I})\). This process can be formulated as below, which is to follow a Markov chain \(q(\mathbf{x}^{k}|\mathbf{x}^{k-1})\) for \(K\) times:
\[q(\mathbf{x}^{1:K}|\mathbf{x}^{0}) =\prod_{k=1}^{K}q(\mathbf{x}^{k}|\mathbf{x}^{k-1}) \tag{1}\] \[q(\mathbf{x}^{k}|\mathbf{x}^{k-1}) =\mathcal{N}(\sqrt{1-\beta_{k}}\mathbf{x}^{k-1},\beta_{k}\mathbf{ I}), \tag{2}\]
where \(\beta_{k}\) denotes a constant for a noise level. Note that \(\mathbf{x}^{k}\) can be sampled from \(\mathbf{x}^{0}\) directly with a closed-form solution:
\[\mathbf{x}^{k}=\sqrt{\alpha_{k}}\mathbf{x}^{0}+\sqrt{1-\alpha_{k}}\mathbf{ \epsilon},\ \ \mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{3}\]
where \(\hat{\alpha}_{k}=1-\beta_{k}\) and \(\alpha_{k}=\prod_{i=1}^{k}\hat{\alpha}_{i}\).
Another is a _reverse process_, whose goal is to obtain \(\mathbf{x}^{0}\) starting from \(\mathbf{x}^{K}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), by gradually denoising \(\mathbf{x}^{K}\). This process can also be formulated as following a Markov chain \(p_{\theta}(\mathbf{x}^{k-1}|\mathbf{x}^{k})\) for \(K\) times:
\[p_{\theta}(\mathbf{x}^{0:K}) =p(\mathbf{x}^{K})\prod_{k=1}^{K}p_{\theta}(\mathbf{x}^{k-1}|\mathbf{x}^{k}), \tag{4}\] \[p_{\theta}(\mathbf{x}^{k-1}|\mathbf{x}^{k}) =\mathcal{N}\big{(}\mathbf{x}^{k-1};\mathbf{\mu}_{\theta}(\mathbf{x}^{k},k),\sigma^{2}(k)\mathbf{I}\big{)}, \tag{5}\]
where \(p(\mathbf{x}^{K})=\mathcal{N}(\mathbf{0},\mathbf{I})\). To obtain \(\mathbf{\mu}_{\theta}\) and \(\sigma\), [20] suggests denoising diffusion probabilistic models (DDPM), which set \(\sigma^{2}(k)=\frac{1-\alpha_{k-1}}{1-\alpha_{k}}\beta_{k}\), parameterize \(\mathbf{\mu}_{\theta}\) with \(\theta\), and sample \(\mathbf{x}^{k-1}\sim p_{\theta}(\mathbf{x}^{k-1}|\mathbf{x}^{k})\) as below:
\[\mathbf{\mu}_{\theta}(\mathbf{x}^{k},k) =\frac{1}{\sqrt{\hat{\alpha}_{k}}}\bigg{(}\mathbf{x}^{k}-\frac{ \beta_{k}}{\sqrt{1-\alpha_{k}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}^{k},k)\bigg{)}. \tag{6}\] \[\mathbf{x}^{k-1} =\mathbf{\mu}_{\theta}(\mathbf{x}^{k},k)+\sigma(k)\mathbf{z},\ \ \mathbf{z} \sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{7}\]
In practice, \(\mathbf{\epsilon}_{\theta}\) is modeled with a neural network, and it learns how much to denoise from \(\mathbf{x}^{k}\). To train this, [20] suggested a simplified loss function as below:
\[\mathcal{L}(\theta) =\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\mathbf{x}^{k},k)\|^{2}\] \[=\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\alpha_{k}}\mathbf{x}^{0}+\sqrt{1-\alpha_{k}}\mathbf{\epsilon},k)\|^{2}. \tag{8}\]
In a training process, \(k\) is randomly sampled to obtain \(\mathcal{L}(\theta)\). For more details, please refer to [20] and [21].
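As an illustration, a minimal PyTorch sketch of this training step (our own, with an assumed noise-prediction network `eps_net(x_k, k)` and a precomputed tensor `alphas` of cumulative products \(\alpha_{k}\)) could be:

```
import torch

def ddpm_loss(eps_net, x0, alphas):
    """Simplified DDPM objective of eq. (8): predict the injected noise."""
    k = torch.randint(0, len(alphas), (x0.shape[0],))     # random step per sample
    a = alphas[k].view(-1, *([1] * (x0.dim() - 1)))       # broadcast to x0's shape
    eps = torch.randn_like(x0)
    x_k = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps  # forward process, eq. (3)
    return ((eps - eps_net(x_k, k)) ** 2).mean()
```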
**Conditional Diffusion Model.** A conditional score-based diffusion model for imputation (CSDI) [21] is proposed to solve a time-series imputation problem using diffusion models. It adds conditional information \(\mathbf{x}_{co}\) to eq. (4)-(5):
\[p_{\theta}(\mathbf{x}^{0:K}) =p(\mathbf{x}^{K})\prod_{k=1}^{K}p_{\theta}(\mathbf{x}^{k-1}| \mathbf{x}^{k},\mathbf{x}_{co}), \tag{9}\] \[p_{\theta}(\mathbf{x}^{k-1}|\mathbf{x}^{k},\mathbf{x}_{co}) =\mathcal{N}(\mathbf{x}^{k-1};\mathbf{\mu}_{\theta}(\mathbf{x}^{k},k| \mathbf{x}_{co}),\sigma^{2}(k)\mathbf{I}) \tag{10}\]
To define \(\mathbf{\mu}_{\theta}(\mathbf{x}^{k},k|\mathbf{x}_{co})\), eq. (6)-(7) can be rewritten by adding \(\mathbf{x}_{co}\) as a condition to \(\mathbf{\mu}_{\theta}\) and \(\mathbf{\epsilon}_{\theta}\). Note that \(\mathbf{\epsilon}_{\theta}(\mathbf{x}^{k},k|\mathbf{x}_{co})\) is modeled with a neural network to learn how much to denoise from \(\mathbf{x}^{k}\) given \(\mathbf{x}_{co}\). When training the network, the same loss function as eq. (8) is used, by replacing \(\mathbf{\epsilon}_{\theta}\) properly with \(\mathbf{x}_{co}\) as a condition.
### _Problem Formulation_
Let \(\mathbf{p}_{t}\in\mathbb{R}^{D}\) be a 3D pose vector at time \(t\), which can be denoted with various representations such as axis-angle, Euler-angle, or \(xyz\)-position. Here, \(D=3n\) and \(n\) denotes the number of joints. A task of 3D human motion prediction can be defined as predicting future \(L\) poses, \(P_{pre}=\{\mathbf{p}_{T+1},\ldots\mathbf{p}_{T+L}\}\in\mathbb{R}^{L\times D}\), when \(T\) poses, \(P_{obs}=\{\mathbf{p}_{1},\ldots\mathbf{p}_{T}\}\in\mathbb{R}^{T\times D}\) are observed.
We utilize CSDI [21] for obtaining \(P_{pre}\) from given \(P_{obs}\). Starting from \(P_{pre}^{0}=P_{pre}\), our forward process can obtain \(P_{pre}^{k}\) as below:
\[P_{pre}^{k}=\sqrt{\alpha_{k}}P_{pre}^{0}+\sqrt{1-\alpha_{k}}\mathbf{\epsilon},\ \ \mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}) \tag{11}\]
For a reverse process, we propose a denoiser network which models \(\mathbf{\epsilon}_{\theta}(\mathbf{x}^{k},k|\mathbf{x}_{co})=\mathbf{\epsilon}_{\theta}(P_{ pre}^{k},k|P_{obs})\). This network is trained by minimizing \(\mathcal{L}(\theta)=\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(P_{pre}^{k},k|P_{obs})\|^{2}\).
After training, we can sample \(P^{0}_{pre}\) by repeating below reverse process for \(K\) times, starting from \(P^{K}_{pre}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\):
\[P^{k-1}_{pre}=\boldsymbol{\mu}_{\theta}(P^{k}_{pre},k|P_{obs})+ \sigma(k)\mathbf{z},\ \ \mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{12}\]
where \(\boldsymbol{\mu}_{\theta}(P^{k}_{pre},k|P_{obs})\) is defined with \(\boldsymbol{\epsilon}_{\theta}(P^{k}_{pre},k|P_{obs})\) and a properly modified version of eq. (6). After training, if our denoiser network is used for deterministic prediction in the test phase, we set \(P^{K}_{pre}\) and \(\mathbf{z}\) as zero-vectors, so that all randomness in eq. (12) is removed.
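A minimal sketch of this sampling loop (assuming a denoiser `eps_net(x, k, p_obs)` modeling \(\boldsymbol{\epsilon}_{\theta}(P^{k}_{pre},k|P_{obs})\), and 1-D tensors `betas` and `alphas` holding \(\beta_{k}\) and the cumulative \(\alpha_{k}\)) could look as follows:

```
import torch

@torch.no_grad()
def sample_prediction(eps_net, p_obs, shape, betas, alphas, deterministic=False):
    """Reverse process of eq. (12). With deterministic=True, P_pre^K and z
    are set to zero vectors so that all randomness is removed."""
    K = len(betas)
    x = torch.zeros(shape) if deterministic else torch.randn(shape)
    for k in reversed(range(K)):                      # k = K-1, ..., 0
        eps = eps_net(x, k, p_obs)
        mu = (x - betas[k] / (1.0 - alphas[k]).sqrt() * eps) \
             / (1.0 - betas[k]).sqrt()                # mean of eq. (6)
        if k > 0:
            sigma = ((1.0 - alphas[k - 1]) / (1.0 - alphas[k]) * betas[k]).sqrt()
            z = torch.zeros_like(x) if deterministic else torch.randn_like(x)
            x = mu + sigma * z                        # denoising step, eq. (12)
        else:
            x = mu
    return x
```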
### _Transformer-based Motion Denoiser_
Since \(P^{k}_{pre}\) and \(P_{obs}\) are time-series of human pose vectors, one can model \(\boldsymbol{\epsilon}_{\theta}(P^{k}_{pre},k|P_{obs})\) with neural network architectures which can understand time-series data. For example, network architectures such as RNNs [25, 26] or Transformers [28] can be candidates. We empirically found out that the denoisers based on the Transformers that process both spatial and temporal information are most effective.
Figure 2 shows the two designs of our Transformer-based denoisers. Inspired by [21], the first denoiser, shown on top of the figure, processes both kinds of information in series. After concatenating \(P^{k}_{pre}\in\mathbb{R}^{L\times D}\) and \(P_{obs}\in\mathbb{R}^{T\times D}\) into an input \(P^{k}_{inp}\in\mathbb{R}^{(T+L)\times D}\), \(P^{k}_{inp}\) passes spatial and temporal transformer layers in series, where each layer applies self-attention to the pose-parameter and time dimensions, respectively. Before each transformer layer, positional encoding is added to the input as [28] suggests, with respect to the pose-parameter \(d\in[0,D]\) (spatial) or time \(t\in[0,T]\) (temporal) dimension. Also, an additional learnable positional encoding that projects the diffusion step \(k\) into a vector space is added to the input as [21] suggests. Let \(P^{k}_{out}\in\mathbb{R}^{(T+L)\times D}\) denote the output obtained after \(P^{k}_{inp}\) passes the two layers. Then, the last \(L\times D\) part of \(P^{k}_{out}\) is taken as \(\boldsymbol{\epsilon}_{\theta}(P^{k}_{pre},k|P_{obs})\), which is used for denoising \(P^{k}_{pre}\).
The second denoiser, shown on the bottom of Figure 2, is inspired by [14] and [15], and processes spatio-temporal information in parallel. After \(P^{k}_{inp}\) passes both spatial and temporal transformer layers in parallel, two matrices with the same size as \(P^{k}_{inp}\) are obtained and concatenated into a 3rd-order tensor of size \(2\times(T+L)\times D\). After this tensor passes a 2-dimensional convolutional layer with a \((1\times 1)\)-sized kernel, the output \(P^{k}_{out}\in\mathbb{R}^{(T+L)\times D}\) is obtained. From \(P^{k}_{out}\), \(\boldsymbol{\epsilon}_{\theta}(P^{k}_{pre},k|P_{obs})\) is obtained as in the first denoiser.
Note that we do not use an encoder-decoder structure, which would encode a set of feature vectors from \(P_{obs}\) and decode \(\boldsymbol{\epsilon}_{\theta}(P^{k}_{pre},k|P_{obs})\) from the encoded feature vectors and \(P^{k}_{pre}\). We tried various Transformer- or RNN-based encoder-decoder denoisers, but none of them turned out to be effective.
### _Implementation Details_
Our transformer-based motion denoisers have a self-attention module with 8 multi-heads and 512-dimensional query, key, and value vectors. Each temporal or spatial transformer layer shown in Figure 2 consists of a single-layered transformer encoder. To train the denoisers, we set the batch size to 512 and update parameters for 50,000 iterations with the Adam optimizer and a learning rate of \(0.0001\). The diffusion step is set as \(k\in[0,20]\), with linearly scheduled noise levels \(\beta_{k}\) that range between \(0.001\) (\(k\downarrow\)) and \(0.333\) (\(k\uparrow\)).
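For reference, the stated linear noise schedule can be written in a few lines (a sketch consistent with the hyperparameters above):

```
import torch

K = 20
betas = torch.linspace(0.001, 0.333, K)     # linearly scheduled noise levels
alphas = torch.cumprod(1.0 - betas, dim=0)  # cumulative products alpha_k
```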
Fig. 2: Two designs of our Transformer-based motion denoiser. Inspired by ST-TR [14], 2CH-TR [15] and CSDI [21], our motion denoiser processes both spatial and temporal information in series (top) or in parallel (bottom). Here, \(d\) and \(t\) stand for the dimension of each pose-parameter and time, and TF stands for Transformer [28]. Note that the positional encoding also involves adding a learnable vector that represents a diffusion step \(k\) as [21] suggests.
## IV Experiment
### _Dataset and Metric_
**Dataset.** We conduct our experiment for both deterministic and stochastic motion prediction tasks. For deterministic experiments, we use the Human3.6M dataset [6] and measure the Euler-angle mean square error (MSE) for evaluation as other works [12, 13, 14, 15] do. Here, with 25 fps, input observation has 50 frames, and output prediction has 25 frames. For stochastic experiments, we preprocess Human3.6M [6] and HumanEva-I [9] datasets into \(xyz\)-based representation as [18, 19] do. Based on that, various metrics for evaluating likelihood and diversity are measured. Here, with 50 fps, an input observation has 25 frames, output prediction has 100 frames, and the number of prediction samples is 50.
**Metrics.** As mentioned above, we measure the performance of our denoiser based on the Euler-angle MSE when it is used for deterministic prediction. For stochastic prediction, we use several metrics suggested by [18] to evaluate likelihood and diversity. In addition, we propose further metrics, namely aDE, sDE, aFDE, and sFDE, to measure how the samples are distributed near the ground truth. Note that some of the sentences below describing the metrics are borrowed from [18].
(1) **Average Pairwise Distance (APD)**: average \(L2\) distance between pairs from \(N\) predictions \(\hat{\mathbf{x}}\in\mathbb{R}^{L\times D}\), computed as \(\frac{1}{N(N-1)}\sum_{i=1}^{N}\sum_{j\neq i}^{N}\|\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j}\|_{2}\). This measures the diversity within \(N\) predictions.
(2) **minimum Displacement Error (mDE)**: the minimum \(L2\) distance between all \(N\) predictions \(\hat{\mathbf{x}}\) and ground truth \(\mathbf{x}\), computed as \(\min_{\hat{\mathbf{x}}}\frac{1}{L}\|\hat{\mathbf{x}}-\mathbf{x}\|_{2}\). This metric was defined as ADE in [18].
(3) **average Displacement Error (aDE)**: the average \(L2\) distance between all \(N\) predictions \(\hat{\mathbf{x}}\) and ground truth \(\mathbf{x}\), computed as \(\frac{1}{NL}\sum_{i=1}^{N}\|\hat{\mathbf{x}}_{i}-\mathbf{x}\|_{2}\).
(4) **standard deviation of Displacement Error (sDE)**: the standard deviation of \(L2\) distances between all \(N\) predictions and ground truth.
(5) **minimum Final Displacement Error (mFDE)**: the minimum \(L2\) distance between final poses of \(N\) predictions and ground truth, calculated as \(\min_{\hat{\mathbf{x}}}\|\hat{\mathbf{x}}(L)-\mathbf{x}(L)\|_{2}\). This metric was defined as FDE in [18].
(6) **average Final Displacement Error (aFDE)**: the average \(L2\) distance between final poses of \(N\) predictions and ground truth, calculated as \(\frac{1}{N}\sum_{i=1}^{N}\|\hat{\mathbf{x}}_{i}(L)-\mathbf{x}(L)\|_{2}\).
(7) **standard deviation of Final Displacement Error (sFDE)**: the standard deviation of \(L2\) distances between final poses of \(N\) predictions and ground truth.
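To make these definitions concrete, a minimal NumPy sketch (our own; the normalization in the reference implementations may differ slightly) is given below, assuming predictions of shape \((N,L,D)\) and a ground truth of shape \((L,D)\):

```
import numpy as np

def stochastic_metrics(preds, gt):
    """Compute APD, mDE/aDE/sDE and mFDE/aFDE/sFDE for N sampled futures.
    `preds`: (N, L, D) array of predictions, `gt`: (L, D) ground truth."""
    N, L, _ = preds.shape
    flat = preds.reshape(N, -1)
    # APD: average pairwise L2 distance between the N samples
    pair = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    apd = pair.sum() / (N * (N - 1))
    # (final) displacement errors w.r.t. the ground truth
    de = np.linalg.norm(flat - gt.reshape(-1), axis=-1) / L
    fde = np.linalg.norm(preds[:, -1, :] - gt[-1], axis=-1)
    return {"APD": apd,
            "mDE": de.min(), "aDE": de.mean(), "sDE": de.std(),
            "mFDE": fde.min(), "aFDE": fde.mean(), "sFDE": fde.std()}
```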
### _Quantitative Results_
**Deterministic Prediction.** Table I compares Euler-angle MSEs when our diffusion model is used for deterministic motion prediction. Here, bold fonts denote the best results among all approaches, and underlines denote the best results among our denoisers (series or parallel). The overall performance of DCT-GCN [13] is still the best. Among our approaches, the denoiser that processes spatial and temporal information in series outperforms the parallel denoiser. Although our models do not achieve state-of-the-art results, they are better at long-term prediction (1000ms) than other transformer-based models [14, 15]. This is a notable result, since (1) our models are originally generative ones, and (2) they require no additional training for deterministic prediction: ignoring all randomness in the denoising process is all they need.
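As a sketch of how a single trained denoiser serves both tasks, assume the standard DDPM ancestral sampler with an \(\epsilon\)-predicting network and sampling variance \(\beta_k\) (details the paper leaves unspecified); the only difference between the two modes is whether Gaussian noise is injected at each step:

```python
import numpy as np

def reverse_diffusion(eps_model, x_K, betas, deterministic=False, rng=np.random):
    """eps_model(x, k) predicts the noise added at step k (hypothetical interface).
    With deterministic=True, the injected noise is dropped and the same trained
    denoiser becomes a deterministic predictor, with no retraining."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = x_K
    for k in reversed(range(len(betas))):
        eps = eps_model(x, k)
        # posterior mean of x_{k-1} given x_k under the DDPM parameterization
        x = (x - betas[k] / np.sqrt(1.0 - alpha_bars[k]) * eps) / np.sqrt(alphas[k])
        if k > 0 and not deterministic:
            x = x + np.sqrt(betas[k]) * rng.standard_normal(x.shape)
    return x
```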
**Stochastic Prediction.** Table II compares the metrics for measuring likelihood and diversity. Here, bold fonts denote the best result and underlines denote the second-best result among all approaches. Previous works [18, 19] that focus on sample diversity perform best in APD, and they are also generally better in terms of mDE and mFDE. We would argue, however, that high diversity in prediction increases the probability of having one sample closest to the ground truth. How, then, can we choose the most plausible result among predictions that are sampled to be diverse?
This is the same question that [24] pointed out, and [24] therefore proposed metrics for measuring quality and context. For measuring quality, [24] used a pre-trained binary classifier that discriminates ground truths (real) from predictions (fake); if this classifier fails to discriminate the predicted motions as fake, a higher quality score is obtained. For measuring context, [24] used a pre-trained model that classifies the action from the motion; if it estimates that the action label of the prediction is the same as that of the observed motion, a higher context score is obtained.
However, we were not able to use the same metrics as [24] since its pre-trained classifiers were not openly released. We therefore propose the metrics aDE, sDE, aFDE, and sFDE instead, to measure how closely the samples are distributed around the ground truth. Results show that our approaches generally perform better in terms of these new metrics, and that the parallel denoiser performs better than the series one. We also present results from VAEs [29] as implemented by [18], to check how other non-diffusion generative models fare. The overall performance of our series/parallel denoisers in diversity and likelihood is generally better than that of the VAEs, especially on the HumanEva-I dataset.
### _Qualitative Results_
Figure 3 shows two example results from our transformer-based motion denoiser. Predictions to the left of the dotted line are obtained from a motion observation labeled as 'smoking'. The deterministic prediction is similar to the ground truth, while the stochastic predictions show diversity between samples. Note, however, that the context of 'smoking' appears to remain in all samples. The same phenomenon is observed in the predictions on the right, which are obtained from a motion observation of 'walking': while the deterministic prediction resembles the ground truth, the stochastic predictions are diverse yet retain the context of 'walking'. For better visualization, please refer to our supplementary video.
## V Conclusion
In this work, we study the potential of diffusion probabilistic models for 3D human motion prediction tasks. We propose two types of diffusion models based on transformers, which process the motion's spatial and temporal information in series or in parallel. Since the diffusion model is originally a generative model, its main usage is for the stochastic motion prediction task. Once trained, however, we show that it can also be used for deterministic prediction if all randomness in its denoising process is ignored.
To show the effectiveness of diffusion models in both deterministic and stochastic motion prediction tasks, we conduct experiments based on various metrics. Results from deterministic prediction show that the diffusion model is not superior to the state-of-the-art, though our long-term (1000ms) prediction performance is better than that of other transformer-based approaches. When evaluating stochastic predictions, it is conventional to use metrics measuring both likelihood and diversity. However, we argue that the conventional likelihood metrics do not capture how the samples are distributed around a plausible motion, since they measure only the _minimum_ distance between samples and the ground truth. Therefore, we propose additional metrics that measure the mean and standard deviation of those distances, and the results show that our diffusion models can properly balance the trade-off between diversity and likelihood.
Although our results provide encouraging answers to our first question - can we use diffusion probabilistic models for 3D motion prediction? - the most concerning disadvantage of a diffusion model is its sampling cost. Since our diffusion model requires \(K=20\) denoising steps to obtain prediction samples, it may incur relatively high latency. To overcome this issue, one might consider recent works on efficient sampling [37]; this is our future work, so that efficient 3D human motion prediction can be achieved for various real-time applications.
## Acknowledgment
This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01336, Artificial Intelligence Graduate School Program (UNIST)), and funded by Marie Sklodowska-Curie Action Horizon 2020 (Grant agreement No. 955778) for project 'Personalized Robotics as Service Oriented Applications' (PERSEO).
Fig. 3: Deterministic (Deter.) and stochastic (Sto.) predictions from our transformer-based motion denoiser. Note that two results are given and divided based on the vertical dotted line. Predictions are obtained from observed motions labeled as ‘smoking’ (left) and ‘walking’ (right). |
2309.06275 | Re-Reading Improves Reasoning in Large Language Models | To enhance the reasoning capabilities of off-the-shelf Large Language Models
(LLMs), we introduce a simple, yet general and effective prompting method, Re2,
i.e., \textbf{Re}-\textbf{Re}ading the question as input. Unlike most
thought-eliciting prompting methods, such as Chain-of-Thought (CoT), which aim
to elicit the reasoning process in the output, Re2 shifts the focus to the
input by processing questions twice, thereby enhancing the understanding
process. Consequently, Re2 demonstrates strong generality and compatibility
with most thought-eliciting prompting methods, including CoT. Crucially, Re2
facilitates a "bidirectional" encoding in unidirectional decoder-only LLMs
because the first pass could provide global information for the second pass. We
begin with a preliminary empirical study as the foundation of Re2, illustrating
its potential to enable "bidirectional" attention mechanisms. We then evaluate
Re2 on extensive reasoning benchmarks across 14 datasets, spanning 112
experiments, to validate its effectiveness and generality. Our findings
indicate that, with the exception of a few scenarios on vanilla ChatGPT, Re2
consistently enhances the reasoning performance of LLMs through a simple
re-reading strategy. Further analyses reveal Re2's adaptability, showing how it
can be effectively integrated with different LLMs, thought-eliciting prompting,
and ensemble strategies. Our code is available at
\url{https://github.com/Tebmer/Rereading-LLM-Reasoning/} | Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou, Shuai Ma | 2023-09-12T14:36:23Z | http://arxiv.org/abs/2309.06275v3 | # Re-Reading Improves Reasoning in Language Models
###### Abstract
Reasoning presents a significant and challenging issue for Large Language Models (LLMs). The predominant focus of research has revolved around developing diverse prompting strategies to guide and structure the reasoning processes of LLMs. However, these approaches, based on decoder-only causal language models, often process the input question in a single forward pass, potentially missing the rich, back-and-forth interactions inherent in human reasoning. Scant attention has been paid to a critical dimension, i.e., the input question itself embedded within the prompts. In response, we introduce a deceptively simple yet highly effective prompting strategy, termed question "re-reading". Drawing inspiration from human learning and problem-solving, re-reading entails revisiting the question information embedded within input prompts. This approach aligns seamlessly with the cognitive principle of reinforcement, enabling LLMs to extract deeper insights, identify intricate patterns, establish more nuanced connections, and ultimately enhance their reasoning capabilities across various tasks. Experiments conducted on a series of reasoning benchmarks underscore the effectiveness and generality of our method. Moreover, our findings demonstrate that our approach seamlessly integrates with various language models, thought-eliciting prompting methods, and ensemble techniques, further underscoring its versatility and compatibility in the realm of LLMs.
## 1 Introduction
In the ever-evolving landscape of artificial intelligence, large language models (LLMs) have emerged as a cornerstone of natural language understanding and generation [8; 56; 56; 40]. However, as these models have grown in size and complexity, a pivotal challenge has come to the forefront: imbuing them with the ability to reason effectively. The capacity to engage in sound reasoning is a hallmark of human intelligence, enabling us to infer, deduce, and solve problems. In LLMs, this skill is paramount for enhancing their practical utility across a multitude of tasks. Despite their remarkable capabilities, LLMs often struggle with nuanced reasoning [6; 4], prompting researchers to explore innovative strategies to bolster their reasoning prowess [65; 17; 5; 32].
The existing body of research in this domain has predominantly concentrated on designing diverse thought-eliciting prompting strategies to guide and channel the reasoning processes of these models. Noteworthy strategies such as Chain-of-Thought (CoT) [65], Tree of Thoughts (ToT) [69], Graph of Thoughts [5], Plan-and-Solve (PS) [60], and program-aided language models (PAL) [17] have emerged to structure and elicit logical trains of thought. However, existing decoder-only causal language modeling (CLM) architectures often operate in a single forward pass, which may miss the richer, back-and-forth interactions that humans use when reasoning through a challenging problem. Meanwhile, an intriguing observation emerges - while significant efforts have been directed towards
molding the path of reasoning, the CoT family for example, scant attention has been paid to a critical dimension, i.e., the input problem itself embedded within the prompts.
Herein lies the foundation of our approach, wherein we present a deceptively simple yet profoundly effective prompting strategy: _re-reading_, or Re2 for short. Drawing inspiration from human learning and problem-solving processes, we posit that revisiting the question information embedded in the input prompts can reassess the context, refine understanding, and correct potential misconceptions. This strategy aligns with the cognitive principle of reinforcement, allowing the models to iteratively build upon their initial understanding of the problem. By engaging in multiple passes over the input, the models can glean deeper insights, identify intricate patterns, and construct more nuanced connections that contribute to heightened reasoning outcomes. Our re-reading mechanism is far simpler than existing approaches that either perform the reasoning with multiple stages of prompting [72] or sample multiple reasoning paths [62] to improve generation quality. Moreover, our re-reading works off-the-shelf with various pre-trained language models and prompting strategies as a "plug & play" module, avoiding any intricate stages of prompting or sampling.
To substantiate the efficacy of our proposed re-reading strategy, we conducted a comprehensive series of experiments across different reasoning tasks, including arithmetic, commonsense, and symbolic reasoning. Our evaluation encompassed both qualitative and quantitative assessments, assessing the performance of LLMs equipped with the re-reading strategy against conventional as well as contemporary prompting techniques. The results of our study illuminate a remarkable trend: models employing the re-reading strategy exhibit a fairly consistent improvement in reasoning performance on most datasets, especially when applied to the chain-of-thought prompting method. Moreover, extensive experiments demonstrate that our re-reading strategy generally extends across various prompting methodologies and is also compatible with the self-consistency approach.
## 2 Related Work
**Reasoning with Large Language Models.** LLMs represent a significant milestone on the journey towards artificial general intelligence (AGI) [40; 57]. Their remarkable abilities span a broad range of tasks, facilitated through a unified natural language interface that operates in a generative manner. Here, reasoning ability is particularly crucial on the way towards AGI, where artificial intelligence needs to act or think like human beings [43; 21]. In the literature on LLMs, performing reasoning tasks via interaction in natural language plays a significant role in evaluating an LLM, and academia and industry have dedicated many endeavors to it [64; 52; 58]. In principle, most works on reasoning with large language models fall into the paradigm of "Chain-of-Thought" [65; 26], which assists LLMs in fulfilling complex reasoning tasks by generating intermediate steps explicitly. Therefore, most endeavors are dedicated to improving this basic principle along the following aspects: i) the structure of the "chain", e.g., tree [69], graph [70]; ii) the modality of the chain, e.g., program [17]; iii) the reliability of the chain, e.g., self-consistency [62], faithful [36], retrieval-based verifying [19]; and iv) decomposition of the chain, e.g., least-to-most [71], decomposed [44], plan-to-solve [60]. In contrast, our simple re-reading strategy for LLMs is orthogonal to these improvements via a trade-off between the intermediate steps and the query itself. Moreover, our re-reading strategy complements many previous works by preventing the answer from being derived overwhelmingly from the CoT while overlooking the original query.
**Re-reading Strategy in Text Understanding.** In deep learning, the success of text-understanding tasks [51; 34; 68; 27] draws on heuristics of human reading strategies, e.g., pre-reading, reading and post-reading [47; 55; 42]. Specifically, many effective algorithms have been crafted around the idea of re-reading. Although deep architectures, from multi-layer Bi-LSTMs [22] to Transformer encoders [59], have mechanisms that provide a form of "re-reading", the notion that processing an input only once might not suffice for understanding or generating a complex output is long-standing. Initially, [48] and [49] found that repeated reading mechanisms do improve performance on some tasks, e.g., sentiment analysis, semantic relation classification, and event extraction. Then, [31] propose to mimic the repeated reading strategy and present neural networks with multi-level attention, proven effective in recognizing implicit discourse relations. Subsequently, [73] propose a multi-glance mechanism, modeling the habit of reading behavior, which can benefit a wide range of tasks. More recently, [35] adopt a network to encode the gist of paragraphs for rough reading and a decision-making policy for careful reading, which
can improve extractive summarization. Therefore, it is natural to introduce a re-reading strategy to large language models since i) the Transformer-decoder architecture of LLMs, with mono-directional attention mechanisms, hinders the implicit re-reading capability, and ii) the context combined with the input query to prompt LLMs could be intricate, including streamed text, background information, external knowledge, intermediate rationale, and few-shot demonstrations, thus overwhelming the original target query.
**Instruction Following.** To handle prompts with intricate context and difficult queries, instruction-following capability is fundamental for LLMs to perform as expected, especially in zero-shot scenarios. A straightforward solution is to build supervised fine-tuning datasets to align an LLM with intricate, difficult, richly constrained, and specific prompts, e.g., retrieval-aware [33], hard [24], complex [67], and multi-skill [38]. These methods boost the performance of an LLM in the corresponding aspects. However, the challenge of following the intrinsic context still exists because the LLM is trained with biases toward specific parts of the input, e.g., the front and rear [30]. In the literature on reasoning with intermediate steps (e.g., CoT), one source of intricacy comes from chains of thought involving spurious and wrong rationales [26; 66; 28]. Such failure rationales can dominate the answer-deriving procedure and lead to incorrect answers, which motivates this work to increase the exposure of the original query.
**Knowledge Recall.** From the perspective of information seeking, prompting an LLM can be seen as a sort of "knowledge recall" in a parametric fashion, where the prompt serves as a retrieval query. In contrast to conventional non-parametric retrieval - vector databases [25; 23], for example - the LLM as a neural knowledge model [7; 1] can easily generalize to huge knowledge coverage, contributing to its efficacy in broad applications. In the context of CoT-based reasoning, [9] conjecture that an LLM can be exposed to certain CoTs during training and easily complete reasoning by knowledge recall. As such, it is natural to adapt the basic but prevalent query augmentation technique from the term-based retrieval domain [14], which repeats the original query multiple times over the augmented part [61; 50], to prompting LLMs.
## 3 Methodology
We begin with a unified formulation to leverage LLMs as a general solver for natural language processing (NLP) and natural language understanding (NLU) tasks.
### Large Language Models as Task Solver
In general, given an input \(x\) in natural language, an arbitrary task aims to predict its target \(y\) with respect to a task-specific instruction (which can also be described in natural language, denoted as \(t\)). As such, a conventional way to solve a specific task is to find or learn a \(t\)-specific mapping function \(f^{t}\) from \(x\in\mathcal{X}\) to \(y\in\mathcal{Y}\), i.e.,
\[f^{t}:\mathcal{X}\rightarrow\mathcal{Y}. \tag{1}\]
This formula is widely applicable to almost every natural language task, regardless of its output type (i.e., discriminative or generative), category (e.g., sentiment analysis, question answering, machine translation), or dataset (e.g., NQ and TriviaQA for open-domain QA) [11]. In the deep representation learning literature, we usually learn a \(t\)-specific model, parameterized by \(\theta^{(0)}\), via a supervised or semi-supervised learning strategy to fulfill the task \(t\), i.e.,
\[y\sim P(\text{y}|x;\theta^{(0)}). \tag{2}\]
Despite aligning closely with the independent and identically distributed (i.i.d.) assumptions of classical statistics, task-specific models come with certain shortcomings: 1) limited intra-task flexibility and 2) neglected inter-task transfer. These shortcomings lead to an inferior capability in zero-shot transfer - training once but benefiting beyond. Empowered by LLMs pre-trained on trillions of tokens, a unified mapping function \(f\)[63] applicable to a broad spectrum of tasks, \(\mathcal{T}\), is more desirable:
\[f:\mathcal{X}\times\mathcal{T}\rightarrow\mathcal{Y}. \tag{3}\]
To implement \(f\), an LLM (parameterized as \(\theta^{(\text{llm})}\)) is utilized as the unified task solver:
\[y\sim P(\text{y}|\operatorname{c}(t,x);\theta^{(\text{llm})}), \tag{4}\]
where \(\mathrm{c}(\cdot)\) denotes combining the task instruction \(t\) and an input \(x\) using a template in line with the prompt heuristics of \(\theta^{(\text{llm})}\).
Given its superior capability in knowledge transfer, the above simple LLM-based task-solving paradigm excels at most traditional natural language tasks in both zero-shot and few-shot scenarios. However, its superiority can be significantly reduced when encountering reasoning tasks, e.g., math, which require a long rationale chain to derive the final answer [43; 21].
### Vanilla Chain-of-Thought for Reasoning Task
Therefore, improving the performance of reasoning tasks in natural language is more urgent for LLMs. This motivates the recent simple yet fundamental solving paradigm for reasoning tasks, known as Chain-of-Thought (CoT) [65]. In formal, CoT rewrites Eq.(4) as
\[y\sim\sum_{z\sim P(\mathrm{z}|\,\mathrm{c}^{(\text{cot})}(t,x);\theta^{(\text{llm})})}P(\mathrm{y}|\,\mathrm{c}^{(\text{cot})}(t,x,z);\theta^{(\text{llm})})\cdot P(z|\,\mathrm{c}^{(\text{cot})}(t,x);\theta^{(\text{llm})}), \tag{5}\]
where two changes are highlighted [26]: i) \(\mathrm{c}^{(\text{cot})}(\cdot)\) involves CoT-specific instructions like '_let's think step by step_' and ii) \(\mathrm{z}\) stands for a latent variable of rationale, with \(z\) denoting a sampled rationale in natural language. As such, the LLMs can break down complex tasks into more manageable reasoning steps, treating each step as a piece of the overall solution chain.
We take CoT as a baseline for solving reasoning tasks without sacrificing generality. More broadly, our proposed method can function as a "plug & play" module for most other algorithms (§3.4).
### Re-reading (Re2) Improves Reasoning
Essentially, prior reasoning methods, including CoT, are vulnerable to two inherent limitations of LLMs, leading to inferior query-focusing capability: i) a lack of bidirectional contextualization: although mechanisms like self-attention in Transformer architectures provide a way to weigh the importance of various tokens in the input, they are limited by the decoder-only causal language modeling (CLM) architecture, which often operates in a single forward pass; as such, they may miss the richer, back-and-forth interactions that humans use when reasoning through a challenging problem. And ii) a bias toward specific parts of the input context: often, due to the nature of the training data or the inherent biases of the model, LLMs tend to focus disproportionately on certain aspects of the input, possibly neglecting other crucial information. For example, recent work has shown that LLMs are prone to being 'lost in the middle' and biased toward the rear parts of the input [30].
To mitigate these limitations, we introduce the Re-reading (Re2) strategy. Re-reading emerges as a fundamental strategy in human cognition when faced with intricate questions or statements. Especially in complex reasoning scenarios, individuals tend to revisit the information sources, be it a text or a diagram, to reassess the context, refine understanding, and correct potential misconceptions. Analogously, for LLMs to effectively tackle such complex tasks, implementing a re-reading strategy can be advantageous.
Intuitively, our proposed Re2 strategy encapsulates a dual-pass mechanism, where the first pass scans the input context in its entirety, and the subsequent re-read pass emphasizes refining understanding by focusing on salient regions, which is defined as
\[y\sim\sum_{z\sim P(\mathrm{z}|\,\mathrm{c}^{(\text{cot})}(t,\mathrm{re2}(x));\theta^{(\text{llm})})}P(\mathrm{y}|\,\mathrm{c}^{(\text{cot})}(t,\mathrm{re2}(x),z);\theta^{(\text{llm})})\cdot P(z|\,\mathrm{c}^{(\text{cot})}(t,\mathrm{re2}(x));\theta^{(\text{llm})}). \tag{6}\]
We do not seek complex adjustments or intricate computational overhead for LLMs, but rather a general and strikingly simple implementation of \(\mathrm{re2}(x)\), as follows:
Question: {Input Query}
Read the question again: {Input Query}
#Thought-eliciting prompt (e.g., "Let's think step by step")# (7)
where '{Input Query}' is a placeholder for the core target query, \(x\). As such, the Re-reading (Re2) strategy attempts to emulate human-like revisitation of textual information to improve comprehension and reasoning in LLMs.
Beyond that, emphasizing the input through re-reading can enhance knowledge recall in a parametric retrieval manner. Given the vast amounts of data on which models like ChatGPT have been trained, it is plausible that they have encountered tasks with CoT instructions or similar methodologies. Thus, by reintroducing the input query, the model can better align its response with pre-existing knowledge or patterns. It is analogous to how a human, upon revisiting a problem statement or question, might remember a similar problem they have solved before. This approach harnesses the implicit memory of the model to follow previously learned structures, even if the CoT instruction is absent.
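As a concrete illustration, a minimal sketch of the Eq. (7) template as a prompt builder; the function name and the default CoT suffix are ours, not part of any released code:

```python
def re2_prompt(question: str,
               elicit: str = "A: Let's think step by step.") -> str:
    """Implements the Re2 template of Eq. (7): state the question, re-read it
    once, then append an optional thought-eliciting suffix such as CoT."""
    return (f"Question: {question}\n"
            f"Read the question again: {question}\n"
            f"{elicit}")
```

Dropping or swapping the `elicit` suffix recovers the Vanilla+Re2 variant or, e.g., a PS+Re2 variant as discussed in §3.4.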
### Generality of Re2
The true power of the Re-reading (Re2) strategy lies in its universality, offering adaptability across a range of tasks without necessitating significant architectural modifications. At its core, Re2 taps into the primary cognitive mechanisms by which humans process information, promoting depth of understanding through iterative engagement with textual data. This is particularly salient in the world of language models, where context reigns supreme.
Hence, this general approach can be seamlessly integrated into various models and algorithms, as evidenced by its application in both CoT with self-consistency and CoT with Few-shot Demonstration, which we elaborate upon below.
**CoT with Self-consistency.** Self-consistency [62] is an approach aimed at ensuring that a model's outputs are reliable and aligned over multiple runs. It is based on the principle that repeated evaluations of the same input, under minor randomness, should yield consistent answers if the model truly understands the underlying context.
Incorporating the Re2 strategy within this framework, we enable the model to be consistently accurate in its readings. The enhanced focus on context ensures that any inconsistencies arising from earlier readings are ironed out, bolstering the model's overall reliability. The model's outputs are then aggregated with voting, with an emphasis on the most consistent outcomes, as depicted:
\[y=\operatorname{Vote}(\{\hat{y}\mid\hat{y}\sim P\}),\ \text{where} \tag{8}\]
\[P\coloneqq\sum_{z\sim P(\mathrm{z}|\,\operatorname{c}^{(\text{cot})}(t,\operatorname{re2}(x));\theta^{(\text{llm})})}P(\mathrm{y}|\,\operatorname{c}^{(\text{cot})}(t,\operatorname{re2}(x),z);\theta^{(\text{llm})})\cdot P(z|\,\operatorname{c}^{(\text{cot})}(t,\operatorname{re2}(x));\theta^{(\text{llm})}).\]
This method reinforces model confidence and reduces susceptibility to any singular anomalies in its output.
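Operationally, Eq. (8) amounts to sampling several Re2-prompted completions at a non-zero temperature and taking a majority vote over the extracted answers; a sketch, where sample_fn is a stand-in for any LLM call that returns one answer string:

```python
from collections import Counter

def self_consistent_answer(sample_fn, prompt: str, n: int = 10) -> str:
    """Draws n stochastic completions for the same Re2 prompt and returns
    the most frequent final answer, as in Eq. (8)."""
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```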
**Compatibility with Thought-Eliciting Prompt Strategies.** The prevailing body of research in this field has primarily emphasized the development of diverse thought-eliciting prompting strategies to guide and channel reasoning processes in generating output. In contrast, Re2 shifts its focus towards the input, engaging in multiple passes over the provided information and thereby enhancing its comprehension of the question at hand. Consequently, Re2 exhibits fair compatibility with these thought-eliciting prompting strategies and can seamlessly serve as a 'plug & play' module alongside them. This synergy holds the potential to further enhance the reasoning abilities of LLMs.
With a specific strategy \(s\) for eliciting thoughts from the LLMs, like Chain-of-Thoughts, Plan-and-Solve, Program-Aided Prompt, and so on, the Eq. (6) is rewritten as:
\[y\sim\sum_{z\sim P(\mathrm{z}|\,\operatorname{c}^{(s)}(t,\operatorname{re2}(x));\theta^{(\text{llm})})}P(\mathrm{y}|\,\operatorname{c}^{(s)}(t,\operatorname{re2}(x),z);\theta^{(\text{llm})})\cdot P(z|\,\operatorname{c}^{(s)}(t,\operatorname{re2}(x));\theta^{(\text{llm})}). \tag{9}\]
## 4 Experiments
We carried out a set of experiments to confirm the efficacy of the proposed re-reading prompts across various reasoning assessments. Our findings indicate that across a wide range of model scales and prompting methods, re-reading generally enhances the accuracy of reasoning in language models.
### Benchmarks
We have assessed the effectiveness of our re-reading prompting strategy across a range of reasoning benchmarks. Our evaluation encompasses three key categories:
**Arithmetic Reasoning.** We consider the following seven arithmetic reasoning benchmarks: (1) the GSM8K benchmark of math word problems [13], (2) the SVAMP dataset of math word problems with varying structures [41], (3) the ASDiv dataset of diverse math word problems [37], (4) the AQuA dataset of algebraic word problems [29], (5) the AddSub dataset [20] of addition and subtraction word problems for third, fourth, and fifth graders, (6) the MultiArith dataset [45] of math problems with multiple steps, and (7) the SingleEQ dataset [46] of elementary math word problems with a single operation.
**Commonsense and Symbolic Reasoning.** For commonsense reasoning tasks, we used CommonsenseQA [54], StrategyQA [18], and the AI2 Reasoning Challenge (ARC) [12]. The CommonsenseQA dataset consists of multiple-choice questions that necessitate various forms of common-sense knowledge to arrive at correct answers. The StrategyQA benchmark comprises questions that demand multi-step reasoning, with the reasoning steps left implicit and requiring inference. The ARC dataset (denoted as ARC-t), designed for grade-school level questions, promotes advanced question-answering research. It is divided into two sets: a Challenge Set (denoted as ARC-c), containing questions that both retrieval-based and word co-occurrence algorithms answered incorrectly, and an Easy Set (denoted as ARC-e). We evaluate two symbolic reasoning tasks: date understanding [53] and Coinflip [65]. Date understanding is a subset of the BigBench datasets [53], which have posed challenges for previous fine-tuning efforts. Coinflip is a dataset of questions on whether a coin is still heads up after it is flipped or not flipped based on steps given in the questions.
### Language Models and Implementations
In our implementation, we rigorously evaluate the performance of our Re2 model on two baseline prompting methods: Vanilla and CoT. The Vanilla approach aligns with the standard prompting method outlined in [65; 26], wherein no specific prompts are employed to elicit thoughts from the Language Models (LLMs). Conversely, the CoT method guides the model through a step-by-step thought process. We incorporate our Re2 strategy into these baseline methods to assess its impact, denoted as Vanilla+Re2 and CoT+Re2. To avoid the impact of randomness introduced by the demonstrations in a few-shot setting, we assess our method in a zero-shot setting, following [9; 60; 15]. Additionally, for different tasks, we design answer-format instructions in prompts to regulate the structure of the final answer, facilitating precise answer extraction. Detailed information regarding the method prompts and answer-format instructions can be found in the paper's Appendix. Moreover, we investigate the effectiveness of employing the re-reading mechanism in conjunction with various thought-eliciting prompting strategies, as detailed in Section 4.4 of this paper. Our decoding strategy involves using greedy decoding with a temperature setting of 0, as well as self-consistency prompting with a temperature setting of 0.7. For these experiments, we employ two
| **LLMs** | **Methods** | **GSM** | **SVAMP** | **ASDiv** | **AQuA** | **MultiArith** | **SingleEQ** | **AddSub** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| davinci-003 | Vanilla | 19.48 | 67.60 | 69.00 | 28.74 | 31.33 | 86.22 | 89.87 |
| davinci-003 | Vanilla+Re2 | **24.79** (↑5.31) | **70.90** (↑3.30) | **71.20** (↑2.20) | **30.31** (↑1.57) | **42.33** (↑11.00) | **87.20** (↑0.98) | **92.15** (↑2.28) |
| davinci-003 | CoT | 58.98 | 78.30 | 77.60 | 40.55 | 89.33 | 92.32 | 91.39 |
| davinci-003 | CoT+Re2 | **61.64** (↑2.68) | **81.00** (↑2.70) | **78.60** (↑1.00) | **44.49** (↑3.94) | **93.33** (↑4.00) | **93.31** (↑0.99) | **91.65** (↑0.26) |
| ChatGPT | Vanilla | 77.79 | 81.50 | 87.00 | **63.39** | **97.83** | **95.28** | **92.41** |
| ChatGPT | Vanilla+Re2 | **79.45** (↑1.66) | **84.20** (↑2.70) | **88.40** (↑0.60) | 58.27 (↓5.12) | 96.67 (↓1.16) | 94.49 (↓0.79) | 91.65 (↓0.76) |
| ChatGPT | CoT | 78.77 | 78.70 | 85.60 | 55.91 | 95.50 | 93.70 | 88.61 |
| ChatGPT | CoT+Re2 | **80.59** (↑1.82) | **80.00** (↑1.30) | **86.00** (↑0.40) | **59.06** (↑3.15) | **96.50** (↑1.00) | **95.28** (↑1.58) | **89.87** (↑1.26) |

Table 1: Evaluation results on arithmetic reasoning benchmarks. Bold marks the better of each base method and its +Re2 variant; parenthesized arrows give the change from adding Re2.
powerful backbones: ChatGPT (gpt-3.5-turbo-0613) [39] and davinci-003 (text-davinci-003)², across all prompting methods, including Vanilla, CoT, Vanilla+Re2, and CoT+Re2.
Footnote 2: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
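To make the setup concrete, the sketch below builds the four prompt variants and queries ChatGPT with the pre-v1 OpenAI Python client matching the gpt-3.5-turbo-0613 era; the answer-format instruction shown is the one appearing in the paper's GSM case studies, while the helper names are our own:

```python
import openai  # pre-v1 interface, matching the gpt-3.5-turbo-0613 era

FORMAT = ("Your final answer should be a single numerical number, "
          "in the form \\boxed{answer}, at the end of your response.")

def build_prompt(question: str, re2: bool = False, cot: bool = False) -> str:
    """Vanilla, CoT, Vanilla+Re2, or CoT+Re2 prompt (layout is illustrative)."""
    body = f"Q: {question}\n"
    if re2:
        body += f"Read the question again: {question}\n"
    body += FORMAT + "\n"
    body += "A: Let's think step by step." if cot else "A:"
    return body

def ask_chatgpt(prompt: str, temperature: float = 0.0) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 for greedy decoding, 0.7 for self-consistency
    )
    return resp["choices"][0]["message"]["content"]
```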
### Evaluation Results
Table 1 presents a comprehensive performance comparison between our method and existing zero-shot techniques on arithmetic reasoning datasets. Our analysis reveals a consistent enhancement in arithmetic reasoning attributable to re-reading: with davinci-003, adding Re2 clearly improves over both chain-of-thought prompting and vanilla prompting on almost all benchmarks.
Furthermore, when applied to ChatGPT, re-reading exhibits a substantial improvement in arithmetic reasoning performance on most datasets when combined with chain-of-thought prompting. For the vanilla prompting strategy, however, our method results in a notable performance drop on several benchmarks, including AQuA, MultiArith, SingleEQ, and AddSub.
Without a clear instruction (i.e., "let's think step by step") for the chain-of-thought mindset, as in the CoT prompting strategy, some general dialogue-based LLMs (e.g., ChatGPT, Claude) will likely still perform chain reasoning towards a final answer instead of writing the answer directly. The kinds of reasoning chains depend heavily on certain mindsets present in the alignment data, including repetition of the user's task instruction [3]. The case in [3] shows that some existing LLMs have been trained to retell or paraphrase users' instructions to enhance their query-understanding capability, sharing a high-level inspiration with our method but incurring high learning costs to acquire this capability. As such, an overlay between this instruction-retelling mindset and our re-reading strategy leads to more frequent repetition of users' instructions. As analyzed in the first part of §4.4 (i.e., _Times of Question Reading_) and empirically verified in Table 3, repeating the question too many times yields worse results, which is closely aligned and consistent with the experimental results in Table 1. From another perspective, initial findings outlined in [9] suggest that during instruction fine-tuning (IFT), ChatGPT was exposed to training samples containing CoT explanations, so vanilla ChatGPT is prone to internalizing some of the CoTs and recalling them even without specific instructions; [9] found that explicit CoT instructions sometimes yield worse results than vanilla prompts with ChatGPT. As such, introducing our re-reading prompt may not align closely with this CoT-recalling mechanism, since paying more attention to the query itself can distract from the implicit CoT instruction. Although davinci-003 has also undergone IFT training, it is worth noting that the generated outputs of vanilla davinci-003 tend to lack CoT explanations. In situations where explanations are absent, understanding the problem becomes even more crucial. Consequently, the re-reading strategy shows great potential for enhancing performance in this scenario.
Table 2 presents the evaluation results for both commonsense reasoning and symbolic reasoning. We can discern a generally consistent performance trend mirroring that of the arithmetic reasoning tasks,
| **LLMs** | **Methods** | **CommonsenseQA** | **StrategyQA** | **ARC-e** | **ARC-c** | **ARC-t** | **Date** | **Coin** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| davinci-003 | Vanilla | 74.20 | 59.74 | 84.81 | 72.01 | 80.58 | 40.92 | 49.80 |
| davinci-003 | Vanilla+Re2 | **76.99** (↑2.79) | **59.91** (↑0.17) | **88.22** (↑3.41) | **75.68** (↑3.67) | **84.07** (↑3.49) | **42.01** (↑1.09) | **52.40** (↑2.60) |
| davinci-003 | CoT | 71.66 | **67.55** | 85.69 | 73.21 | 81.57 | 46.07 | 95.60 |
| davinci-003 | CoT+Re2 | **73.05** (↑1.39) | 66.24 (↓1.31) | **87.84** (↑2.15) | **76.02** (↑2.81) | **83.94** (↑2.37) | **52.57** (↑6.50) | **99.60** (↑4.00) |
| ChatGPT | Vanilla | 76.66 | 62.36 | **94.32** | **85.41** | **91.37** | 47.43 | 52.00 |
| ChatGPT | Vanilla+Re2 | **78.38** (↑1.72) | **66.99** (↑4.63) | 93.81 (↓0.51) | 83.19 (↓2.22) | 90.30 (↓1.07) | **47.97** (↑0.54) | **57.20** (↑5.20) |
| ChatGPT | CoT | 69.94 | 67.82 | **93.35** | 83.53 | 90.11 | 43.63 | 88.80 |
| ChatGPT | CoT+Re2 | **71.66** (↑1.72) | **69.34** (↑1.52) | 93.14 (↓0.21) | **84.47** (↑0.94) | **90.27** (↑0.16) | **47.15** (↑3.52) | **95.20** (↑6.40) |

Table 2: Evaluation results on commonsense and symbolic reasoning benchmarks. The first five columns are commonsense benchmarks; Date and Coin are symbolic. Bold and arrows are as in Table 1.
and notably, our re-reading approach exhibits enhanced robustness and substantial improvements, particularly with davinci-003 and with ChatGPT under the CoT method.
### Discussions
**Times of Question Reading.** We delve deeper into the impact of the number of times the question is read on reasoning performance. Table 3 illustrates how the performance of two distinct language models evolves as the question is re-read different numbers of times. An overarching pattern emerges across all models: performance improves until the number of reads reaches 2 or 3, after which it begins to decline with further increases in question reading times. The potential reasons for inferior performance when reading the question many times are two-fold: i) repeating the question in a brute-force manner may interfere with the self-attention mechanism of the LLMs, leading to over-weighted attention on the question alone, and ii) repeating the question significantly increases the inconsistency between our inference and the LLMs' pretraining/alignment (intuitively, in the learning corpora a question is usually repeated twice to emphasize the key part, rarely more). It is noteworthy that reading the question two times tends to be optimal across most scenarios in our experiments, which is why we refer to this practice as "re-reading" in our paper.
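For reference, the n-times-reading variants ablated in Table 3 can be generated by a straightforward generalization of Eq. (7); the helper below is ours, with n_reads=2 recovering Re2:

```python
def n_read_prompt(question: str, n_reads: int,
                  elicit: str = "A: Let's think step by step.") -> str:
    """Generalizes the Re2 template to n_reads readings of the question."""
    lines = [f"Question: {question}"]
    lines += [f"Read the question again: {question}" for _ in range(n_reads - 1)]
    lines.append(elicit)
    return "\n".join(lines)
```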
**Compatibility with Thought-Eliciting Prompt Strategies.** Compared to previous methods attempting to elicit thoughts in the output from LLMs, our Re2 emphasizes bidirectional understanding of the input. Therefore, we are intrigued to explore whether the proposed re-reading mechanism is effective with various thought-eliciting prompting strategies aside from CoT. To investigate this, we apply re-reading to two other recently introduced prompting methods, namely plan-and-solve (PS) [60] and Program-Aided Language models (PAL) [17]. The former devises a plan to divide the entire task into smaller subtasks and then carries out the subtasks according to the plan, while the latter generates programs as the intermediate reasoning steps. We directly apply our re-reading to these two methods by making a simple alteration to the input, following the prompt in Equation 7. Table 4 presents the evaluation findings on the GSM benchmark. Our observations reveal a consistent trend, akin to what was observed with chain-of-thought prompting. These results suggest that the effectiveness of our re-reading mechanism generally extends across various prompting methodologies.
Even when self-consistency samples multiple answers, our re-reading mechanism still contributes improvements in most scenarios, indicating its compatibility with the self-consistency approach.
**Performance across Different Question Complexities.** We further investigate the impact of input question complexity on the reasoning performance of both CoT and CoT with re-reading (referred to as CoT+Re2). In accordance with [16], we measure question complexity by the number of reasoning steps in the ground-truth answer. Figure 2 illustrates how these models' performance evolves with varying question complexity. Our findings reveal a noticeable trend: the performance of all models generally diminishes as question complexity increases, suggesting that current models still struggle with intricate queries. Notably, while employing a re-reading strategy leads to a slight drop in performance on less complex questions (<=3), the introduction of re-reading significantly enhances performance on more complex questions (e.g., those with a complexity level exceeding 5). This observation underscores the benefits of a re-reading strategy for improving question comprehension and reasoning capabilities on more complex questions.
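A sketch of the bucketing used for this kind of analysis, assuming each evaluated example records the number of reasoning steps in its ground-truth answer and whether the model answered correctly:

```python
def accuracy_by_complexity(records):
    """records: iterable of (steps, correct) pairs, where steps is the number
    of reasoning steps in the ground-truth answer and correct is a bool.
    Returns a {steps: accuracy} mapping, sorted by complexity."""
    buckets = {}
    for steps, correct in records:
        buckets.setdefault(steps, []).append(correct)
    return {steps: sum(v) / len(v) for steps, v in sorted(buckets.items())}
```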
**The impact of different re-reading instructions.** We further conduct experiments to examine the influence of re-reading instructions within the context of chain-of-thought prompting. Specifically, we initiate the investigation by comparing various instructions for question re-reading. As depicted by P1 and P2 in Table 6, instruction P1, which includes the phrase "Read the question again:", exhibits superior performance compared to directly repeating the question twice (referred to as P0). These results suggest that providing more detailed re-reading instructions to the language models is advantageous. Subsequently, we explore the possibility of introducing re-reading for the chain-of-thought instruction (i.e., "Let's think step by step"), as exemplified by P3 and P4. However, we observe that repeating the thinking instruction twice does not yield any discernible benefits. Since this aspect
is not the primary focus of this paper, we have deferred it to future research endeavors. It's noteworthy that, in general, question re-reading consistently improves reasoning performance compared to the standard chain-of-thought prompt without question re-reading (P0).
**Case Study.** We end this section with a case study to show the effectiveness of our proposed re-reading prompting over chain-of-thought. We choose two examples from GSM, and the results are listed in Tables 7 and 8. It is evident that our method better aligns the evidence in the question with the corresponding explanation. We observe that CoT+Re2 tends to highlight the important evidence in the question before generating the explanation, for example, "_In the morning, she gives 15 cups of feed, and in the afternoon, she gives another 25. So..._" in Table 7 and "_The bonus is worth half a month's salary, which is..._" in Table 8. To further validate this observation, we calculated the n-gram recall between the output explanations and the input questions, as illustrated in Figure 2. The results indicate that Re2 indeed improves the n-gram (n=1,2,3,4) recall of the question in the output explanations, underscoring that our method enhances the model's focus on the question during reasoning to a certain extent. The Appendix provides more examples.
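The n-gram recall used here measures the fraction of question n-grams that reappear in the generated explanation; a sketch assuming simple whitespace tokenization (the paper does not specify its tokenizer):

```python
def ngram_recall(question: str, explanation: str, n: int) -> float:
    """Fraction of the question's n-grams that also occur in the explanation."""
    def ngrams(text):
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    q, e = ngrams(question), ngrams(explanation)
    return len(q & e) / len(q) if q else 0.0
```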
## 5 Conclusion and Future Works
In this paper, we have delved into the concept of Re2 prompting, specifically focusing on "re-reading" the question. This method stands out as a straightforward and widely applicable approach to enhancing the reasoning capabilities of language models. Notably, Re2 aids in fostering bidirectional comprehension of questions within the context of decoder-only causal language models. Crucially, it operates independently of other thought-eliciting prompting strategies and ensemble techniques. Our extensive experiments covered arithmetic, commonsense, and symbolic reasoning tasks. These experiments confirmed the effectiveness and versatility of Re2, with a particular emphasis on its performance when used in conjunction with other thought-eliciting prompting strategies, such as CoT. Our findings encourage the research community to focus on a deeper understanding of input questions, complementing the exploration of thought-eliciting prompting strategies.
**Future Work.** While our Re2 method exhibits commendable performance across a wide spectrum of tasks in the zero-shot setting, our ongoing and future research will aim to further extend its capabilities. This will involve: (1) expanding the backbones utilized, which may encompass pretrained large language models such as Llama1 [56], Llama2 [57], and Falcon [2], as well as chat-based LLMs like Vicuna [10] and Llama2-Chat [57], across various model sizes; (2) investigating the method's efficacy on more general prompting tasks and few-shot reasoning scenarios; and (3) exploring its applicability to multi-modal tasks, especially those involving interactive or pairwise combinations of image and text. These future research directions are aimed at deepening our understanding of Re2's potential and expanding its utility across a broader range of natural language understanding challenges.
## Acknowledgment
We express our sincere gratitude to Yao Fu for his constructive discussions and insightful comments.
**Q:** Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens? (Answer: 20)

**CoT**

**Input:**

Q: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens?

Your final answer should be a single numerical number, in the form \boxed{answer}, at the end of your response.

A: Let's think step by step.

**Output:**

First, let's calculate how much feed Wendi gives her chickens in the morning. She has 20 chickens, and she gives each chicken 3 cups of feed, so in total, she gives them 20 x 3 = 60 cups of feed in the morning.

Next, let's calculate how much feed Wendi gives her chickens in the afternoon. Again, she has 20 chickens, and she gives each chicken 3 cups of feed, so in total, she gives them 20 x 3 = 60 cups of feed in the afternoon.

To find out how much feed she needs to give her chickens in the final meal of the day, we need to subtract the total feed given in the morning and afternoon from the total feed needed for the day. The total feed needed for the day for 20 chickens is 20 x 3 x 3 = 180 cups of feed.

So, the feed needed in the final meal of the day is 180 - 60 - 60 = 60 cups of feed.

Therefore, Wendi needs to give her chickens 60 cups of feed in the final meal of the day.

Answer: 60

**CoT+Re2**

**Input:**

Q: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens?

Read the question again: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi's flock is 20 chickens?

Your final answer should be a single numerical number, in the form \boxed{answer}, at the end of your response.

A: Let's think step by step.

**Output:**

First, we need to determine the total amount of feed Wendi gives her chickens in the morning and afternoon. In the morning, she gives 15 cups of feed, and in the afternoon, she gives another 25 cups. So, the total amount of feed given in the morning and afternoon is 15 + 25 = 40 cups.

Next, we need to determine the total amount of feed needed for Wendi's flock of 20 chickens. Since each chicken requires 3 cups of feed, we multiply 3 by 20 to get 60 cups.

Finally, we subtract the total amount of feed given in the morning and afternoon from the total amount needed. 60 - 40 = 20

Therefore, Wendi needs to give her chickens 20 cups of feed in the final meal of the day.

Answer: 20

Table 7: Case study on GSM comparing CoT and CoT+Re2. CoT answers 60 (incorrect), while CoT+Re2 answers 20 (correct).
**Q:** Tim gets a promotion that offers him a 5% raise on his $20000 a month salary. It also gives him a bonus worth half a month's salary. How much money will he make in a year? (Answer: 262500)

**CoT**

**Input:**

Q: Tim gets a promotion that offers him a 5% raise on his $20000 a month salary. It also gives him a bonus worth half a month's salary. How much money will he make in a year?

Your final answer should be a single numerical number, in the form \boxed{answer}, at the end of your response.

A: Let's think step by step.

**Output:**

First, let's calculate the 5% raise on Tim's $20000 monthly salary. To do this, we multiply $20000 by 5% (or 0.05):

$20000 * 0.05 = $1000

So, Tim's raise is $1000 per month.

Next, let's calculate the bonus worth half a month's salary. Half of $20000 is $10000, so Tim's bonus is $10000.

Now, let's calculate Tim's total monthly salary after the raise and bonus:

$20000 (original salary) + $1000 (raise) + $10000 (bonus) = $31000

Finally, let's calculate Tim's annual salary by multiplying his monthly salary by 12:

$31000 * 12 = $372000

Therefore, Tim will make $372000 in a year. So, the answer is \(\boxed{372000}\).

**CoT+Re2**

**Input:**

Q: Tim gets a promotion that offers him a 5% raise on his $20000 a month salary. It also gives him a bonus worth half a month's salary. How much money will he make in a year?

Read the question again: Tim gets a promotion that offers him a 5% raise on his $20000 a month salary. It also gives him a bonus worth half a month's salary. How much money will he make in a year?

Your final answer should be a single numerical number, in the form \boxed{answer}, at the end of your response.

A: Let's think step by step.

**Output:**

First, let's calculate the 5% raise on Tim's $20000 monthly salary.

The raise amount would be 5% of $20000, which is equal to 0.05 * $20000 = $1000.

So, Tim's new monthly salary would be $20000 + $1000 = $21000.

Next, let's calculate the bonus amount. The bonus is worth half a month's salary, which is equal to 0.5 * $21000 = $10500.

Now, to find out how much money Tim will make in a year, we need to calculate the total salary for 12 months and add the bonus.

Total salary for 12 months = $21000 * 12 = $252000.

So, Tim will make a total of $252000 + $10500 = $262500 in a year.

Therefore, Tim will make \(\boxed{262500}\) in a year.

Table 8: Case study on GSM comparing CoT and CoT+Re2. CoT answers 372000 (incorrect), while CoT+Re2 answers 262500 (correct).
2309.10417 | Investigating the fluxes and physics potential of LHC neutrino
experiments | The initiation of a novel neutrino physics program at the Large Hadron
Collider (LHC) and the purpose-built Forward Physics Facility (FPF) proposal
have motivated studies exploring the discovery potential of these searches.
This requires resolving degeneracies between new predictions and uncertainties
in modeling neutrino production in the forward kinematic region. The present
work investigates a broad selection of existing predictions for the parent
hadron spectra at FASER$\nu$ and the FPF to parameterize expected correlations
in the neutrino spectra produced in their decays and to determine the highest
achievable precision for their observation based on Fisher information. This
allows for setting constraints on various physics processes within and beyond
the Standard Model, including neutrino non-standard interactions. We also
illustrate how combining multiple neutrino observables could lead to
experimental confirmation of the enhanced-strangeness scenario proposed to
resolve the cosmic-ray muon puzzle already during the ongoing LHC Run 3. | Felix Kling, Toni Mäkelä, Sebastian Trojanowski | 2023-09-19T08:29:34Z | http://arxiv.org/abs/2309.10417v2 | # Investigating the fluxes and physics potential of LHC neutrino experiments
###### Abstract
The initiation of a novel neutrino physics program at the Large Hadron Collider (LHC) and the purpose-built Forward Physics Facility (FPF) proposal have motivated studies exploring the discovery potential of these searches. This requires resolving degeneracies between new predictions and uncertainties in modeling neutrino production in the forward kinematic region. The present work investigates a broad selection of existing predictions for the parent hadron spectra at FASER\(\nu\) and the FPF to parameterize expected correlations in the neutrino spectra produced in their decays and to determine the highest achievable precision for their observation based on Fisher information. This allows for setting constraints on various physics processes within and beyond the Standard Model, including neutrino non-standard interactions. We also illustrate how combining multiple neutrino observables could lead to experimental confirmation of the enhanced-strangeness scenario proposed to resolve the cosmic-ray muon puzzle already during the ongoing LHC Run 3.
Footnote †: preprint: DESY-23-131
## I Introduction
The subtle role of neutrinos in the Standard Model (SM) constantly motivates measurements of their interactions across a broad energy spectrum, which also remains essential for testing beyond the Standard Model (BSM) scenarios, cf. Refs. [1; 2; 3] for reviews. The far-forward region of the Large Hadron Collider (LHC) is particularly suitable for such studies [4; 5; 6; 7; 8; 9; 10], as it offers a highly-collimated flux of the most energetic neutrinos ever produced in a laboratory setup. A new neutrino physics program has recently been initiated in this region with the dedicated FASER [11; 12; 13; 14; 15] and SND@LHC [16; 17] experiments. Strikingly, this has already led to the first observations of collider neutrinos [18; 19; 20]; see also Refs. [21; 22] for earlier analyses and discussion. The initial measurements pave the way for further studies during the ongoing LHC Run 3, and in the future high-luminosity LHC (HL-LHC) era in the proposed purpose-built Forward Physics Facility (FPF) [23; 24].
While neutrinos in the SM interact via electroweak gauge bosons, their studies can also indirectly teach us about QCD. This is due to their origin from decays of various mesons produced in hadronic collisions. Due to the uncertainties in modeling the parent hadron production at large pseudo-rapidities, various theoretical predictions currently differ by as much as an order of magnitude in the expected neutrino charged-current (CC) event rates in the far-forward region of the LHC. Reducing these uncertainties is among the primary goals of the new neutrino experimental program. This will have far-reaching consequences for our understanding of strong interactions, including parton distribution function (PDF) determination and non-perturbative effects, and also broad implications for astroparticle physics and BSM searches, cf. Refs. [23; 24; 25; 26].
The dominant impact of modeling the parent hadron production is also expected to generate notable correlations between neutrino spectra for different flavors and at specific energy ranges. For instance, charm hadron decays determine the forward tau neutrino flux and can contribute substantially to the high-energy part of the electron and muon neutrino spectrum [27]. In this study, we propose to utilize these expected correlations to improve the projected constraining power of the ongoing and future neutrino measurements at the LHC.
To this end, we construct an effective parameterization of the far-forward neutrino spectra by interpolating between the leading predictions, each based on a distinct modeling of the hadron production.1 We combine observations of interactions for different neutrino flavors, energies, and pseudorapidities to determine the expected precision of such analyses using the Hessian-based approach, similar to
PDF fits [28]. According to the Cramer-Rao bound, this expected precision is given by the Fisher Information, which can be easily computed [29; 30]. Despite existing uncertainties, a multi-channel approach to studying \(\nu\)-induced events allows for identifying new effects that cannot be easily mimicked by leading SM predictions of the far-forward neutrino spectra or their combinations. This can be used to place strong constraints or discover such phenomena. We illustrate this for an enhanced strangeness production hypothesis with possible groundbreaking implications for cosmic-ray physics [31; 32; 33] and for BSM-induced neutrino non-standard interactions (NSI) that can also be probed this way at the LHC [34; 35; 36; 37].
The paper is organized as follows. In Sec. II, we discuss our modeling, and provide projected bounds on the far-forward neutrino spectra in Sec. III. Sec. IV is devoted to discussing applications of this methodology to constrain enhanced strangeness production and BSM operators describing neutrino NSI. We conclude in Sec. V. Further details about our statistical analysis are given in Appendix A.
## II Methodology
In our analysis, we first obtain a set of neutrino flux predictions to determine the energy and pseudo-rapidity distribution of far-forward neutrinos at the LHC. The latter distribution can be well described by the radial distribution of events away from the beam collision axis. These predictions are based on different Monte Carlo (MC) generators and other results in the literature, as discussed below. We then define a parameterized flux model, which is constructed from linear combinations of the individual predictions. Using this input, we estimate an expected number of neutrino CC scattering events in existing and proposed on-axis forward neutrino experiments at the LHC. We discuss the necessary ingredients of this analysis in this section. We then estimate how well the LHC neutrino experiments can constrain the flux model on a statistical level and present the results in Sec. III.
### Incident Neutrino Fluxes and Spectra
Neutrinos that can reach the far-forward detectors of our interest are produced most abundantly near the ATLAS Interaction Point (IP). The meson decays can be either prompt, e.g., for charm mesons, or displaced from the IP, like for charged pions and kaons. In the latter case, the impact of the LHC magnets and infrastructure must be considered in precise modeling. It effectively suppresses neutrino production at distances larger than about 100 m away from the \(pp\) collision point. Importantly, for LHC neutrino energies, \(E_{\nu}\sim\) few hundred GeV, and the distance between the IP and the detectors, \(L\sim\) few hundred meters, one expects a negligible impact from neutrino oscillations unless it is enhanced by BSM effects [13]. Hence, the measured neutrino spectra are directly inherited from the parent hadrons.
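The negligible size of standard oscillations over these baselines can be estimated with the usual two-flavor formula; a quick sketch with representative numbers (the mass splitting, baseline, and energy below are our illustrative choices, not inputs of the analysis):

```python
import math

# Two-flavor oscillation probability: P = sin^2(2*theta) * sin^2(1.267 * dm2 * L / E),
# with dm2 in eV^2, L in km, E in GeV. Values below are illustrative assumptions.
dm2 = 2.5e-3    # eV^2, atmospheric mass splitting
L = 0.5         # km, roughly the IP-to-detector distance
E = 300.0       # GeV, a typical far-forward neutrino energy
phase = 1.267 * dm2 * L / E
print(phase, math.sin(phase) ** 2)   # phase ~ 5e-6, so P is negligible (< 1e-10)
```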
Various hadrons contribute to the total neutrino flux measured in the far-forward experiments, although the dominant contributions come from charged pions, kaons, D-mesons, and charmed baryons, cf. Ref. [27] for detailed discussion. The pion decays dominate the muon neutrino spectrum for energies up to a few hundred GeV, while electron neutrinos with these energies mostly come from kaon decays. Charm contributions might become important at larger energies above TeV and they also determine the tau neutrino flux. Given differences in modeling of the forward hadronic fluxes between charm and light mesons, i.e., pions and kaons, we treat both contributions separately in our analysis. Below, we briefly discuss the MC tools and predictions used in our study, cf. Table 1 for a summary.
**Light mesons (\(\pi\), \(K\)):** Light meson production in the forward kinematic region of the LHC cannot be described within perturbative QCD (pQCD). Instead, it is typically modeled using hadronic interaction models, many of which were originally designed for cosmic-ray physics. In our analysis, we employ several of the most commonly used and publicly available MC generators: EPOS-LHC [42], DPMJET 3.2019.1 [44; 45], QGSJET II-04 [49], and SIBYLL 2.3d [38; 40]. We follow their implementation in the CRMC package [53]. We additionally use light meson spectra predictions obtained with a new dedicated forward-physics Pythia 8.2 tune [51].
\begin{table}
\begin{tabular}{c|c||c|c} \hline \hline \multicolumn{2}{c||}{Light mesons (\(\pi\), \(K\))} & \multicolumn{2}{c}{Charm hadrons (\(D\), \(\Lambda_{c}\))} \\ Name & Refs & Name & Refs \\ \hline SIBYLL 2.3d & [38; 39; 40; 41] & SIBYLL 2.3d & [38; 39; 40; 41] \\ EPOS-LHC & [42] & BKRS & [43] \\ DPMJET 3.2019.1 & [44; 45] & BDGJKR & [46; 47; 48] \\ QGSJET II-04 & [49] & BKSS \(k_{T}\) & [50] \\ Pythia 8.2 (forward) & [51] & MS \(k_{T}\) & [52] \\ \hline \hline \end{tabular}
\end{table}
Table 1: A list of Monte Carlo tools and predictions with references used to obtain far-forward neutrino spectra employed in our study. We treat pions, kaons, and charm hadrons separately in the statistical analysis. See the text for details.
Notably, these tools use different approaches to model forward hadron production, and their variation incorporates a variety of underlying physics effects, cf. Refs. [26; 54] for reviews. The corresponding predictions form an envelope around the LHCf data on neutral hadron spectra, although there remain sizable variations between them, cf. Refs. [55; 56; 57; 58] for comparison. The first forward muon [18; 19] and electron [20] neutrino data obtained during the current LHC Run 3 show a broad overall agreement with theoretical predictions that we use, albeit with large statistical uncertainties. We treat pions and kaons independently in our analysis. To study the robustness of our results, we have performed several numerical tests with a limited set of only three MC generators out of the list of five above and found similar bounds. However, we use the above complete MC generator list in the following.
**Charmed hadrons:** Unlike light mesons, charm hadron production can also be described using pQCD. In addition, many of the above generators do not treat forward charm production, or it has not been validated and tuned to LHC data. For this reason, we model the charmed hadron spectra differently in our study. We consider predictions from SIBYLL 2.3d [39; 41] and, additionally, use several recent results prepared specifically for the far-forward neutrino searches at the LHC. We denote them in the following by acronyms: BDGJKR [46; 47; 48], BKRS [43], BKSS \(k_{T}\) [50], and MS \(k_{T}\) [52]. Forward charm production in SIBYLL is modeled phenomenologically by replacing the production of a strange pair \(s\bar{s}\) by a charm \(c\bar{c}\) pair with a small probability determined by fitting to the data [39]. Instead, the remaining predictions employ pQCD calculations of the charm production cross section. The next-to-leading order (NLO) results are used to obtain the BKRS and BDGJKR spectra within the collinear factorization approach. The former calculation uses POWHEG [59; 60; 61; 62] and the NNPDF3.1sx+LHCb set of parton distribution functions (PDFs) with \(\alpha_{s}=0.118\) at NLO+NLL\({}_{x}\) accuracy as input [63; 64]. The latter results, using the framework of Ref. [47], are obtained with the PROSA FFNS PDF [65] with renormalization and factorization scales proportional to the transverse mass, set by fitting to the LHCb data. The BDGJKR predictions include additional Gaussian \(k_{T}\) smearing introduced to mimic the effect of the intrinsic transverse momentum of initial state partons and soft gluon emissions. In contrast, the BKSS \(k_{T}\) and MS \(k_{T}\) predictions model these effects within the hybrid \(k_{T}\) factorization approach [66; 67]. The Kutak-Sapeta gluon unintegrated PDF (uPDF) [68] is used in this case. An important effect on the forward charm hadron spectra is related to modeling hadronization and fragmentation. The BDGJKR and MS \(k_{T}\) results are based on applying the Peterson fragmentation function (FF) [69] by assigning a fraction of the momentum of the parent charm quark to the final-state hadron in the partonic center-of-mass frame and laboratory frame, respectively. We note, however, that this calculation neglects the impact of hadronization with beam remnants. Hence, in general, FFs are not expected to be applicable in forward collisions at the LHC, cf. section 6.2.2 in Ref. [24] for further discussion. In particular, using them implies that charm hadrons are always less energetic than charm quarks, which reduces the flux of high-energy neutrinos. In the MS \(k_{T}\) case, additional hadronization with beam remnants is also considered via a recombination formalism, which is sizeable for \(D^{0}\) and \(D^{\pm}\) mesons but negligible for \(D_{s}\). This effect dominates at high energies and forward rapidities. On the other hand, the SIBYLL, BKRS, and BKSS \(k_{T}\) predictions rely on string fragmentation to include hadronization with beam remnants. The latter two results employ the string fragmentation model implemented in Pythia 8.2 [70].
### Neutrino Flux Parameterization
The forward hadron spectra predictions mentioned above are used to obtain neutrino spectra arising from the decays of the light mesons \(\pi^{\pm}\), \(K^{\pm}\), \(K^{0}_{L}\), \(K^{0}_{S}\), and the charmed hadrons \(D^{\pm}\), \(D^{0}\), \(\overline{D}^{0}\), \(D^{\pm}_{s}\), \(\Lambda^{\pm}_{c}\). To treat possible variations in the normalization and shape of the neutrino spectra, we take the actual spectra used in our analysis as an interpolation (or extrapolation) between these predictions. For simplicity, we neglect subdominant production modes of neutrinos in hyperon and B-meson decays, as well as secondary production modes in hadronic showers induced in the elements of the LHC infrastructure away from the ATLAS IP.
To rescale the flux components and to obtain the corresponding binned spectra, we define a model parametrizing the contributions of different predictions in a weighted sum that yields the total prediction. The parent hadrons are divided into three classes: pions (\(\pi\)), kaons (\(K\)), and charmed hadrons (\(c\)), each with a dedicated weight in the sum. Then, with \(p\in\{\pi,K,c\}\), we employ \(N_{p}\) predictions for the number of CC scattering events in the detector in a given energy and radial bin, \(G^{(p)}_{i\geq 0}\), by introducing \(N_{p}-1\) nuisance parameters \(\lambda_{i\geq 1}^{(p)}\) to obtain the interpolated prediction with the following expression
\[m=\sum_{p\in\{\pi,K,c\}}\frac{1}{N_{p}}\left[G_{0}^{(p)}\left(1-\sum_{i=1}^{N_{p}- 1}\lambda_{i}^{(p)}\right)+\sum_{i=1}^{N_{p}-1}G_{i}^{(p)}\left(1+N_{p}\lambda_ {i}^{(p)}-\sum_{j=1}^{N_{p}-1}\lambda_{j}^{(p)}\right)\right]. \tag{1}\]
The model then reduces to the \(i\)-th prediction \(G_{i}\) (for \(i\geq 1\)) when \(\lambda_{i}=1\) and \(\lambda_{j\neq i}=0\), while \(\lambda_{i}=-1\)\ \(\forall i\) returns the spectrum of \(G_{0}\). Setting \(\lambda_{i}=0\)\ \(\forall i\) yields the average of all predictions, chosen as the baseline for the discussion below. Note that such a setting is not imperative for implementing the model calculation, and choosing the baseline as a general set of parameter values is also possible. In particular, we will discuss the result obtained for the SIBYLL baseline prediction in Sec. IV.1.
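To make the construction concrete, a minimal numerical sketch of Eq. (1) for a single hadron class is given below; the full model simply sums such terms over \(p\in\{\pi,K,c\}\). The toy spectra are illustrative assumptions, and the assertions check the limiting cases quoted above:

```python
import numpy as np

def interpolated_spectrum(G, lam):
    """Eq. (1) for a single hadron class p: G has shape (N_p, n_bins) and
    holds the binned predictions G_0..G_{N_p-1}; lam has length N_p - 1."""
    N = len(G)
    out = G[0] * (1 - lam.sum())
    for i in range(1, N):
        out = out + G[i] * (1 + N * lam[i - 1] - lam.sum())
    return out / N

# Toy check of the limiting cases quoted in the text (3 predictions, 2 bins):
G = np.array([[10.0, 8.0], [12.0, 9.0], [14.0, 7.0]])
assert np.allclose(interpolated_spectrum(G, np.array([0.0, 0.0])), G.mean(axis=0))
assert np.allclose(interpolated_spectrum(G, np.array([1.0, 0.0])), G[1])
assert np.allclose(interpolated_spectrum(G, np.array([-1.0, -1.0])), G[0])
```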
The effective description of the neutrino data obtained this way is characterized by 12 nuisance parameters, on top of additional free parameters that we introduce when constraining specific new effects discussed in Sec. IV. While future studies will keep refining the choice of the nuisance parameters in analyses of this kind, the present work is the first quantitative assessment of employing such parameterizations to study LHC neutrinos. These are introduced to relate far-forward neutrino data to fundamental hadronic physics, instead of treating neutrino spectra as fully uncorrelated.
We then perform a likelihood-based analysis and estimate the minimal variance of the model parameters via the Fisher information matrix, as dictated by the Cramer-Rao bound [29; 30]; see also Refs. [71; 72; 73] for similar discussions for other LHC data analyses. Our procedure thus yields the most robust projected bounds attainable with the data gathered in the considered experimental searches, after profiling over the nuisance parameters that represent theoretical uncertainties. At the same time, we also comment on expected deviations from this picture in the presence of finite efficiency factors affecting the measurements. The results are, eventually, translated into physically meaningful quantities for their interpretation. We provide more details about the statistical analysis in Appendix A.
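Assuming Poisson-distributed bin counts, the Fisher information reduces to a sum over bins of products of model derivatives divided by the expected counts. A minimal numerical sketch of how such a bound can be computed (toy inputs only; the actual analysis uses the full binned model of Eq. (1) across experiments):

```python
import numpy as np

def fisher_matrix(model, lam0, eps=1e-5):
    """Fisher information for Poisson-distributed bin counts,
    I_ab = sum over bins of (dm/dlam_a)(dm/dlam_b) / m, at the point lam0."""
    lam0 = np.asarray(lam0, dtype=float)
    m0 = model(lam0)
    grads = np.empty((lam0.size, m0.size))
    for a in range(lam0.size):
        shifted = lam0.copy()
        shifted[a] += eps
        grads[a] = (model(shifted) - m0) / eps   # forward-difference derivative
    return np.einsum('ab,cb->ac', grads / m0, grads)

# Toy model: two predictions per bin, one nuisance parameter interpolating
# between them (the N_p = 2 case of Eq. (1)); expected counts in two bins.
G = np.array([[1000.0, 400.0], [1400.0, 300.0]])
model = lambda lam: 0.5 * (G[0] * (1 - lam[0]) + G[1] * (1 + lam[0]))
I = fisher_matrix(model, [0.0])
cov = np.linalg.inv(I)               # Cramer-Rao: cov(lam) >= I^{-1}
print(np.sqrt(np.diag(cov)))         # minimal 1-sigma uncertainty on lam, ~0.16 here
```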
In the following, we will focus on the constraints on the combined neutrino and antineutrino spectrum for each flavor, \(\nu_{\ell}+\bar{\nu}_{\ell}\). We note that the forward LHC detectors have capabilities to disentangle between neutrinos and antineutrinos, especially for \(\nu_{\mu}\). This allows for measuring their spectra separately. We leave the discussion about the potential consequences of such measurements for future studies while we concentrate in this analysis on the dominant impact of meson decays that can be well constrained by the combined spectra.
### Neutrino Detection
The collimated flux of high-energy forward neutrinos produced at the LHC can be detected in relatively small experiments that allow for detailed studies of neutrino interactions. We will illustrate the prospects of these searches for a selection of such ongoing and future proposed detectors.
**FASER\(\nu\):** Focusing first on the current LHC Run 3, we will study the projected capabilities of the FASER\(\nu\) emulsion detector [13; 14]. It consists of tungsten target material layers with a total mass of 1.1 ton. These are interleaved with emulsion films with the transverse size of 25 cm\(\times\)30 cm that store information about the tracks of charged particles produced in neutrino scatterings. High-energy muons produced this way can travel through the entire detector, and their momentum is measured in the FASER spectrometer placed downstream of the emulsion detector. The excellent spatial resolution of emulsion films allows for measuring \(\nu_{\tau}\)-induced tau lepton tracks with a few hundred GeV energy and, therefore, study \(\nu_{\tau}\) charged current (CC) interactions on an event-by-event basis.
The expected vertex detection efficiency of FASER\(\nu\) is of order 90% for the most energetic neutrinos produced at the LHC, while it decreases to about \((30\%-40\%)\) for \(E_{\nu}\sim 100~{}\mathrm{GeV}\). We implement it following Fig. 9 in Ref. [13]. We additionally employ a geometrical acceptance factor of 80% and lepton identification efficiencies of 86% for muons and 75% for taus following that study. We assume that electrons can be identified with nearly 100% detection efficiency in emulsion due to their expected showering. We note, however, that this identification might become more challenging at lower energies. In particular, in the current analysis, electron neutrino interactions in FASER\(\nu\) are studied only above 100 GeV energy. We include this effective cut when analyzing FASER
prospects for probing the cosmic-ray muon puzzle, as discussed in Sec. IV.1. Considering all the effects above, we estimate that, e.g., one can identify a CC scattering of the 1 TeV muon neutrino with more than 60% efficiency in FASER\(\nu\). In this analysis, we use 5 energy bins per decade in the likelihood analysis, which can reproduce expected 30% neutrino energy resolution in this detector [13]. We assume \(\mathcal{L}=150\) fb\({}^{-1}\) of integrated luminosity in LHC Run 3.
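The quoted combined efficiency for a 1 TeV \(\nu_{\mu}\) follows from multiplying the individual factors listed above; a quick check:

```python
# Combined detection efficiency for a 1 TeV muon-neutrino CC event in FASERnu,
# multiplying the factors quoted in the text.
vertex_eff = 0.90     # vertex detection at the highest energies
geometry_eff = 0.80   # geometrical acceptance
muon_id_eff = 0.86    # muon identification
print(vertex_eff * geometry_eff * muon_id_eff)   # 0.619..., i.e. "more than 60%"
```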
**FASER\(\nu\)2:** The emulsion detector technology has also been proposed for the FASER\(\nu\)2 detector in the FPF. The assumed transverse size of \(40\) cm \(\times\) 40 cm and total tungsten mass of 20 tons, as well as the larger integrated luminosity in the HL-LHC era, \(\mathcal{L}=3\) ab\({}^{-1}\), result in significantly increased expected neutrino event statistics in this detector, up to 1M muon neutrino CC scatterings [23; 24]. The larger detector size of FASER\(\nu\)2 permits better event containment than in FASER\(\nu\). This results in an expected improvement in energy resolution. We, therefore, employ 10 bins per decade of the incident neutrino energy in this case. Similarly to FASER\(\nu\), the neutrino detection efficiency in FASER\(\nu\)2 will be flavor-dependent. Given the lack of detailed efficiency studies for FASER\(\nu\)2, we present the results below assuming 100% efficiency. However, we also comment on the impact of employing efficiency cuts similar to those discussed above for the currently operating FASER\(\nu\) detector.
**FLArE:** We also present the results for the proposed FLArE detector [23; 24; 74] employing liquid argon (LAr) time-projection chamber (TPC) technology. FLArE will offer improved calorimetric capabilities and dynamical information about events to disentangle neutrino-induced signals from muon backgrounds. The outgoing muons from neutrino interactions can be measured with a dedicated muon tagger and with the help of the FASER2 spectrometer. Studying tau neutrinos might be more challenging in this case due to the expected lower spatial resolution of LArTPCs than in emulsion detectors. However, \(\nu_{\tau}\)-induced events can still be searched for as fluctuations over the expected backgrounds from other neutrino flavors. In the following, we assume a \(1\) m \(\times\) 1 m transverse area and 10-ton fiducial mass of the LAr target in FLArE, and an integrated luminosity of \(\mathcal{L}=3\) ab\({}^{-1}\). We take 100% efficiency for neutrino detection in FLArE while commenting on the case with a decreased 50% efficiency.
All the detectors discussed above are centered around the beam-collision axis. Importantly, off-axis far-forward detectors have also been proposed, namely the SND@LHC [16; 17] and AdvSND [23; 24] experiments for the ongoing LHC Run 3 period and the HL-LHC era, respectively. These extend pseudo-rapidity coverage of far-forward searches at the LHC toward lower values of \(\eta\). In the following, we focus on the on-axis experiments and present representative results obtained for the ongoing measurements in FASER\(\nu\) and the proposed FASER\(\nu\)2 and FLArE searches. We note, however, that additional data gathered off-axis may further improve the projected constraints discussed below.
When modeling neutrino interactions in the detectors of our interest, we convolute the neutrino flux with the interaction cross-sections predicted by GENIE[75] as obtained in Ref. [13]. These results are based on a Bodek-Yang model used to describe deep inelastic scattering (DIS) events [76; 77]. The alternative NNFS\(\nu\) approach has been recently discussed in Ref. [78], which generally agrees with the Bodek-Yang model at TeV-scale energies, cf. also Refs. [79; 80] for other recent analyses. However, uncertainties in the predicted scattering cross section up to a few percent for \(E_{\nu}\sim\) TeV have been reported that are driven by PDF uncertainties [78]. This is not expected to significantly affect the interpretation of the results presented below for the ongoing FASER\(\nu\) measurements. On the other hand, improved sensitivity of the FPF experiments will allow us to reach the level of precision where PDF uncertainties are anticipated to become important. In fact, by using additional kinematic variables, the FPF is expected to constrain PDFs, especially for strange quarks [23; 24]. The proposed Electron-Ion Collider (EIC) will further improve relevant bounds on up and down quark PDFs [81]. The corresponding uncertainties should then be reduced during the FPF data-taking period. In the following, we focus on the dominant uncertainties affecting neutrino fluxes and spectra in the far-forward kinematic region of the LHC related to the differences in parent hadron spectra predictions. We leave the discussion of a joint fit considering both production and interaction rate uncertainties for the future.
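For orientation, the expected event counts scale as flux times cross section times target column density; a rough sketch with purely illustrative numbers (the analysis itself convolutes GENIE/Bodek-Yang cross sections, as described above):

```python
import numpy as np

# Per-neutrino interaction probability in a thin target: P = sigma(E) * n * L,
# with n the nucleon number density and L the target length. Expected CC events
# per energy bin are then phi_i * P(E_i). All numbers are illustrative
# assumptions, not inputs of the paper's analysis.
N_A = 6.022e23
rho_lar = 1.4                          # g/cm^3, liquid argon density
L = 700.0                              # cm, ~7 m of LAr (10 t over a 1 m^2 face)
n = rho_lar * N_A                      # nucleons per cm^3 (A grams hold A*N_A nucleons)
E = np.array([100.0, 300.0, 1000.0])   # GeV, bin centers
sigma = 0.7e-38 * E                    # cm^2, DIS-like linear rise with energy
phi = np.array([1e10, 3e9, 2e8])       # toy neutrino counts crossing the detector
print(phi * sigma * n * L)             # expected CC interactions per bin
```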
## III Neutrino spectra and projected constraints
In the upper panels of Fig. 1, we illustrate single-differential neutrino energy distributions for CC scattering events in the FLArE detector using several combinations of the abovementioned MC predictions for parent meson spectra. We present the results for all three neutrino flavors. We denote different predictions by p\({}_{1}+\)p\({}_{2}\) in the plots, where p\({}_{i}\) stands for the prediction name, and \(i=1\) and 2
correspond to light and charm hadron spectra, respectively. In each case, the plots show the combined neutrino and antineutrino spectra.
As can be seen, various predictions agree remarkably well for the electron and muon neutrinos with energies up to \(E_{\nu}\sim 300\ \mathrm{GeV}\). In this energy regime, an observed discrepancy between different MC results is about a factor of 2. This reflects a relatively better understanding of light meson spectra production in the far-forward region of the LHC, and these mesons dominate the \(\nu_{e}\) and \(\nu_{\mu}\) fluxes up to a few hundred GeV of energy. Instead, the larger the neutrino energy becomes, the uncertainties grow both for light mesons and especially for the possible charm hadron contributions. The latter also determine the \(\nu_{\tau}\) flux predictions over the entire energy range. The charm-induced spectra currently show an order-of-magnitude discrepancy between various predictions.
Focusing on the tau neutrino spectrum plot, we find that the lack of beam-remnant-induced effects in hadronization, e.g., the beam drag effect in modeling the \(D_{s}\)-meson production, suppresses the high-energy part of charm-induced neutrino spectra. This is evident when comparing the BDGJKR and MS \(k_{T}\) predictions with the BKRS and BKSS \(k_{T}\) results. We note that even though the high-energy part of the BKSS \(k_{T}\) spectrum is suppressed by considering gluon saturation, this prediction remains the most optimistic in terms of the expected number of \(\nu\)-induced events in the detector. The difference between this prediction and the least optimistic MS \(k_{T}\) result is the largest for the most energetic tau neutrinos with \(E_{\nu_{\tau}}\sim\mathrm{few}\ \mathrm{TeV}\). Furthermore, we have verified that the uncertainties in the charm predictions also partially propagate to the high-energy part of the \(\nu_{e}\) and \(\nu_{\mu}\) spectra, adding to uncertainties in determining light meson spectra.
We also show in the plots the baseline model prediction obtained as an average of all the considered predictions, assuming equal weights. In the bottom panels in Fig. 1, we assume that the baseline prediction correctly describes the data to be gathered in the FPF. The gray-shaded regions illustrate the projected statistical precision with which our flux model can be constrained at \(1\sigma\) level; see Appendix A for details of the statistical analysis.
The uncertainty bands found this way illustrate excellent precision in constraining the neutrino spectra in the FPF experiments. This is especially evident for muon neutrinos with energies \(100\ \mathrm{GeV}\lesssim E_{\nu_{\mu}}\lesssim 1\ \mathrm{TeV}\), as shown in the bottom central panel in the figure. Due to the largest expected event statistics, the projected bounds, in this case, are at the percent level. This translates into a narrow gray uncertainty band over the baseline neutrino spectrum in the central upper panel, which is barely visible in the plot. In particular, the FPF data will allow for differentiating between the baseline hypothesis and specific MC results presented in Fig. 1 with high precision.
Figure 1: In the upper panels, the colorful histograms correspond to different predictions of the combined energy distributions of neutrinos and antineutrinos interacting via CC scatterings in FLArE, as indicated in the plot. The left (central, right) panel corresponds to the electron (muon, tau) neutrinos. An average of the predictions employed in the analysis gives the baseline spectrum shown with a black solid line. The bottom panels illustrate the expected Cramer-Rao uncertainty bands (\(1\sigma\)) on the baseline spectrum as gray-shaded regions. The robustness of the obtained uncertainties against varying event statistics is shown with purple and green histograms, where the number of events is changed up and down by a factor of two.
Due to reduced event statistics, the uncertainty bands grow at the spectrum's low- and high-energy tails. High-energy neutrinos with \(E_{\nu}\gtrsim\) a few TeV are only rarely produced at the LHC. Instead, low-energy neutrinos with \(E_{\nu}\lesssim 10\) GeV are produced more isotropically and often miss far-forward experiments. Between these two regimes, however, we find the projected uncertainty to be of order several percent. This remains at the level of PDF uncertainties affecting the neutrino DIS cross-section predictions, as discussed above. The same holds for the electron neutrinos, for which the expected number of events is only a factor of a few lower than for \(\nu_{\mu}\)s. We show the electron neutrino uncertainty bands in the bottom left panel.
The bottom right panel illustrates the results for the tau neutrinos. In this case, the projected uncertainties are larger but, remarkably, also stay below 5% for 100 GeV \(\lesssim E_{\nu_{\tau}}\lesssim 3\) TeV. At first, this result might seem odd, given significantly lower event statistics of \(\nu_{\tau}\)-induced events than for the other neutrino flavor. However, we note that the analysis for the tau neutrinos implicitly concerns the results obtained for both \(\nu_{e}\) and \(\nu_{\mu}\). This is because the spectra of these neutrinos are also affected by the forward charm production, especially in their high-energy tails. Possible enhanced production of charm hadrons is then strongly constrained in this energy regime by the electron and muon neutrino data, which then translates into stronger bounds on \(\nu_{\tau}\). Instead, in the low-energy part of the spectrum, below 100 GeV, both the tau neutrino flux is decreased, and the correlation with the electron and muon neutrino spectra is lost. As a result, the constraining power for \(\nu_{\tau}\) in this energy regime is significantly weaker.
We have also verified numerically that the expected uncertainty bands on the \(\nu_{\tau}\) energy spectrum depend only mildly on the choice of the baseline spectrum. For instance, after switching to the baseline spectrum defined as \(\mathtt{DPMJET}(\pi,K)\) + \(\mathtt{BKRS}(c)\) shown in red in Fig. 1, one finds reduced uncertainties, by up to a factor of 2, in some of the low-energy bins for \(E_{\nu_{\tau}}\lesssim 100\) GeV. The improvement in high-energy bins is, however, much smaller, even though the new baseline spectrum predicts a larger number of \(\nu_{\tau}\)-induced events up to \(E_{\nu_{\tau}}\sim\mathrm{TeV}\). This additionally illustrates that the high-energy tail of the tau neutrino spectrum is not only sensitive to the \(\nu_{\tau}\) spectrum, but the charm contribution to the spectra of other neutrino flavors strongly constrains it too. The latter constraining power is not significantly affected by changing the baseline spectrum. This is because \(\mathtt{DPMJET}\) predictions accidentally lie close to the average spectra for \(\nu_{e}\) and \(\nu_{\mu}\) over the entire energy range, as can be seen by comparing red and black histograms in the left and central upper panels of Fig. 1.
In Fig. 1, we also illustrate the expected uncertainty bands for each neutrino flavor that assume only 50% of event statistics. We show this with purple histograms in the bottom panels. As discussed above, this could correspond to a more realistic treatment of the neutrino detection efficiency factors in FLArE. Importantly, as can be seen, this has only a mild impact on the expected constraining power of this experiment. Similarly, we present the expected results for increased event statistics up to 200% of events with green histograms in the bottom panels. This could be due to increasing the fiducial volume of the detector. Again, the predicted impact on the neutrino spectrum uncertainty bands is relatively small. Hence, small variations in efficiency factors or detector sizes in the FPF are not expected to affect the neutrino physics program significantly.
However, adding spatial information about events can improve the neutrino spectrum uncertainty bands. This allows for constraining double-differential neutrino production cross section in the far-forward region of the LHC, which takes into account additional information about the pseudorapidity distribution on top of the previously discussed energy distribution. We illustrate this in Fig. 2, in which the spatial distribution of neutrino scattering events in FLArE is considered by virtue of radial bins, using the same baseline spectrum as considered in Fig. 1. In the upper panels, we show the neutrino interaction spectrum in three radial bins defined as \(R<0.1\) m, \(0.1\) m \(<R<0.25\) m, and \(R>0.25\) m, where \(R\) is the radial distance away from the beam collision axis. The detector is assumed to be centered around the beam collision axis (\(R=0\)), and the last radial bin extends to the edges of the detector transverse size defined by the square of size \(1\) m \(\times\) 1 m. The spectra are normalized to the bin area to illustrate better the concentration of neutrino-induced events around the beam collision axis.2
Footnote 2: In the analysis below, we also use radial bins for the other experiments that are defined as follows. For FASER\(\nu\), with the smallest transverse size, we use \(R<0.06\) m, \(0.06\) m \(<R<0.13\)m, and \(R>0.13\) m up to the edge of the detector. In the case of FASER\(\nu\)2, we define the bins differently: \(R<0.1\) m, \(0.1\) m \(<R<0.2\) m, and \(R>0.2\) m up to the edge of the detector.
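For concreteness, assigning events to the FLArE radial bins and computing the per-area normalization used in Fig. 2 can be sketched as follows (geometry as stated above; the helper function is illustrative):

```python
import numpy as np

# FLArE radial bins from the text: R < 0.1 m, 0.1 m < R < 0.25 m, R > 0.25 m,
# inside a 1 m x 1 m square detector centered on the beam axis (R = 0).
edges = np.array([0.0, 0.10, 0.25, np.inf])

def radial_bin(x, y):
    """Return the radial-bin index for a transverse event position in meters."""
    return int(np.searchsorted(edges, np.hypot(x, y), side='right')) - 1

# Bin areas (m^2) used to normalize the spectra per unit area; the outermost
# bin is the square minus the inner disk, which fits fully inside the square.
areas = [np.pi * 0.10**2,
         np.pi * (0.25**2 - 0.10**2),
         1.0 - np.pi * 0.25**2]
print(radial_bin(0.05, 0.05), [round(a, 3) for a in areas])   # bin 0; ~[0.031, 0.165, 0.804]
```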
As shown with solid black lines in the upper panels, the central parts of the detector (\(R<0.1\) m) can constrain well the most uncertain high-energy parts of the neutrino spectra. Instead, the outermost radial bin in this energy regime is characterized by more than an order of magnitude lower neutrino
flux per unit area, as shown with yellow solid lines. This is, however, compensated by a larger area of this radial bin when counting the total number of events. Hence, each radial bin has similar constraining power in our analysis in the high-energy tails of the distributions. Instead, neutrinos with lower energies, below a few hundred GeV, are dominantly constrained by the data gathered in the parts of the detector with a larger total transverse area. This is understood as their parent mesons are often less energetic and less forward-focused after production at the LHC.
Considering this spatial information further improves the FPF detectors' constraining power. We illustrate this in the bottom panels of Fig. 2. In the plots, gray-shaded regions correspond to the previously discussed results with only one radial bin. In this case, only a single-differential distribution in the energy of the neutrino production cross section is used to constrain neutrino spectra. Instead, red and purple lines in the plots show the results obtained for three or eight radial bins. As can be seen, adding spatial information reduces the uncertainties to the sub-percent level for the muon neutrinos with \(100~{}\mathrm{GeV}\lesssim E_{\nu_{\mu}}\lesssim\mathrm{TeV}\). A similar reduction is observed for the electron neutrinos. The improvement by up to a factor of a few in the expected uncertainty band is also found in the low- and high-energy tails of the respective neutrino spectra. Increasing the number of radial bins further does not substantially improve the uncertainty bands. This is due to reduced event statistics in each of the bins observed in this case.
The baseline spectrum uncertainty for \(\nu_{\tau}\)s is, similarly, reduced over the entire energy range by using spatial information. In particular, the low-energy tail of the spectrum obtained for \(E_{\nu_{\tau}}\sim\) a few tens of GeV can now be better constrained. Charm-induced neutrinos are characterized by a noticeably different pseudorapidity distribution than those produced in decays of light mesons. The latter tend to be more collimated around the beam collision axis, as dictated by their characteristic transverse momentum, \(p_{T}\sim m/p\), where \(m\) is the hadron mass and \(p\) is its total momentum. Therefore, including information about the double-differential distribution allows for better disentangling charm-induced excess of \(\nu_{e}\) and \(\nu_{\mu}\) scattering events over the dominant events associated with the neutrino production in light meson decays. The improved charm constraining power also reduces uncertainty bands on the \(\nu_{\tau}\) spectrum.
Figure 2: The upper panel illustrates the combined neutrino and antineutrino CC event scattering rates in FLArE, using the same baseline spectrum as Fig. 1. The results are shown for each neutrino flavor in three radial bins, as indicated in the plot. The spectra are divided by the corresponding bin area. The lower panel indicates the improvement in uncertainty obtained by combining the information from three (red) or eight (purple) radial bins.

In Fig. 3, we show a comparison between the baseline neutrino spectra and uncertainty bands obtained for FLArE and FASER\(\nu\)2 in the FPF and the currently operating FASER\(\nu\) detector. As can be seen, the FPF experiments will offer more than two orders of magnitude larger neutrino event statistics than FASER\(\nu\). The highest number of events is expected for FASER\(\nu\)2, which, according to the current design, has a larger target mass by a factor of two than FLArE. Additional improvement comes from an increased tungsten density with respect to LAr. This allows for concentrating the target mass better around the beam collision axis, where the high-energy neutrino flux is collimated. Because of the larger transverse size of FLArE, the peak of the expected neutrino spectrum in this detector is slightly shifted toward lower energies when compared to emulsion detectors.
The increased event statistics in the FPF detectors translate into significantly narrower uncertainty bands than for FASER\(\nu\), as shown in the bottom panels. These have been obtained assuming 3 radial bins for each detector. The relevant ranges of \(R\) have been changed for each detector, depending on its transverse size. Notably, the ongoing measurements in FASER\(\nu\) will be able to constrain the electron and muon neutrino spectra with \(\mathcal{O}(10\%)\) precision for the energy between a few hundred GeV and several TeV. However, the uncertainties in determining the tau neutrino flux will remain much larger. The FPF detectors are needed to reduce them to a few percent level.
## IV Physics applications
As discussed above, detailed information about the neutrino flavor, energy spectrum, and the spatial distribution of events in the detector will allow one to differentiate between various predictions. It can also be used to constrain other effects. Employing complete information about events allows for better identification of the unique impact of such phenomena on the far-forward neutrino data. We illustrate this below for two sample effects. One is related to proposed enhanced strangeness production in hadronic collisions at large energies and pseudorapidities. The other effect concerns potential NSI contributions to neutrino event rates in the far-forward neutrino experiments at the LHC.
### Enhanced Strangeness
Far-forward searches at the LHC are naturally connected to ultra-high energy cosmic-ray (UHECR) physics. This is due to the sensitivity of both physics programs to high-energy hadronic collisions and the importance of large pseudorapidity regimes of such interactions. We have already shown how LHC data can help differentiate between available MC generators that are also routinely used in modeling cosmic ray (CR) air showers to tune them better in the future. Here, we focus on the expected impact of these searches on explaining anomalies in cosmic-ray data.
Figure 3: Similar to Fig. 1, but comparing the baseline neutrino CC scattering interaction rates obtained for FLArE, FASER\(\nu\)2, and FASER\(\nu\), assuming luminosities of 150 fb\({}^{-1}\) for FASER\(\nu\) and 3 ab\({}^{-1}\) for the remainder. The bottom panels show the relevant uncertainty bands.

A striking example of such an anomaly is the so-called muon puzzle, first observed in the Pierre Auger Observatory data [82; 83; 84]. Other experimental collaborations subsequently confirmed it, and the anomaly is currently considered to have a combined statistical significance of \(8\sigma\), cf. Ref. [54] for a review. The anomaly is related to an apparent enhancement in muon rates at the level of a few tens of percent in hadronic components of CR-induced showers. This corresponds to high energies of the incident CR starting at \(E\sim 10^{8}\) GeV, which translates into \(\sqrt{s}\simeq\sqrt{2\,E\,m_{p}}\simeq 14\) TeV in the CM frame of the \(pp\) collision between the CR and a proton inside oxygen or nitrogen nuclei in the atmosphere. Notably, this is the energy scale characteristic of the LHC. The discrepancy between the observed and predicted muon rates grows with increasing energy. It has been shown that the dominant explanation of the anomaly is likely a reduced transfer of energy from the hadronic to the electromagnetic component of the shower, e.g., by suppressing the neutral pion production or decay rate in atmospheric air showers [85].
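The quoted CM energy follows from the fixed-target relation \(\sqrt{s}\simeq\sqrt{2\,E\,m_{p}}\); a quick check:

```python
import math

# Fixed-target CM energy: sqrt(s) ~ sqrt(2 * E * m_p) for E >> m_p.
E = 1e8        # GeV, primary cosmic-ray energy at the onset of the anomaly
m_p = 0.938    # GeV, proton mass
print(math.sqrt(2 * E * m_p) / 1e3)   # ~13.7 TeV, i.e. the LHC energy scale
```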
Among the models proposed to accommodate such an effect, particularly important is the enhanced strangeness hypothesis, in which suppressed pion to kaon production ratio in the final state of high-energy \(pp\) collisions is assumed, cf. Refs. [31; 32] for possible underlying mechanisms. In a simple phenomenological approach, this can be achieved by introducing a finite swapping probability that turns a fraction of pions into kaons. A detailed study of this effect has been performed in Ref. [33]. It has been shown that the relevant \(\pi\to K\) swapping fraction \(f_{s}\) at the level of a few tens of percent can explain the anomaly. To this end, and to be reconciled with other experimental data, the swapping probability should primarily affect high-energy collisions in the large pseudorapidity regime. Interestingly, hints of enhanced strangeness production have also been found in the mid-rapidity region in the ALICE data [86].
In the following, we will analyze a simple phenomenological model introduced in Ref. [33]. In this case, in the presence of the non-zero \(f_{s}\) parameter, the number of neutrinos produced from pion decays in the forward region of the LHC is reduced by a common energy-independent factor, \(N_{\pi\to\nu}\to(1-f_{s})\,N_{\pi\to\nu}\). Simultaneously, the number of neutrinos produced in kaon decays is increased as \(N_{K\to\nu}\to(1+6.6\,f_{s})N_{K\to\nu}\). Here, the numerical factor of 6.6 is related to the relative difference in the pion and kaon production rates at large pseudorapidities at the LHC. It has been determined numerically to best reproduce the complete treatment of the model, in which individual pions are changed into kaons in simulations of the forward neutrino spectra. The difference in the production rates of both mesons is due to their different masses and quark compositions. Additional effects considered in these simulations are due to finite kaon lifetimes and the change of \(\pi^{0}\) into \(K^{0}_{S,L}\). In the latter case, the neutrino can only be produced after the swapping, while the initial neutral pion would typically decay into two photons. Assuming SIBYLL as a baseline MC generator, it has been shown that introducing such a universal swapping fraction \(f_{s}\) for collisions characterized by projectile energies above PeV and pseudorapidities \(|\eta|>4\) in the CM frame in CR air shower simulations allows for fitting the muon data. This requires \(f_{s}\) to lie between about 0.3 and 0.8, with larger values favored at increasing primary energy.
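Since the model enters only as an energy-independent rescaling of the pion- and kaon-induced yields, it can be applied on top of any baseline spectrum; a minimal sketch:

```python
def apply_strangeness_swap(N_pi_nu, N_K_nu, f_s):
    """Energy-independent rescaling of the pion- and kaon-induced neutrino
    yields in the phenomenological enhanced-strangeness model of Ref. [33]."""
    return (1.0 - f_s) * N_pi_nu, (1.0 + 6.6 * f_s) * N_K_nu

# For f_s = 0.5, a benchmark that can solve the muon puzzle, the pion-induced
# yield is halved while the kaon-induced one grows by a factor of 4.3.
print(apply_strangeness_swap(1.0, 1.0, 0.5))   # (0.5, 4.3)
```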
Such effects can be particularly prominent in the forward LHC neutrino data if they change \(\nu\) interaction rates in kinematic regions less affected by variations in MC predictions. We illustrate this for the enhanced strangeness effect in the upper left panel of Fig. 4 with two plots obtained for electron and muon neutrinos. In the plots, we present green histograms representing the expected neutrino CC event scattering rate in the FLArE detector obtained for SIBYLL and \(f_{s}=0.5\). This should be compared with black solid lines in the plots representing the baseline scenario obtained for \(f_{s}=0\). As can be seen, the enhanced strangeness production, in this case, would manifestly increase the electron neutrino event rates over the entire energy range, especially for \(E_{\nu_{e}}\lesssim 1~{}\text{TeV}\). This is due to the dominant \(\nu_{e}\) production mode in kaon decays. A similar enhancement is predicted for muon neutrinos above \(100~{}\text{GeV}\). Instead, for lower energies, one expects a decrease in the \(\nu_{\mu}\) event statistics, albeit this is a less significant effect driven by a reduced number of forward-going pions. Applying a non-zero swapping probability does not affect the tau neutrino spectrum. A combined impact of these modifications of the neutrino spectra measured in the far-forward region of the LHC provides a strong signature of this effect, which cannot be easily reproduced by changing and interpolating between various MC predictions in our analysis. To illustrate this, we have added yellow-shaded prediction envelopes in the plots around the baseline distributions that correspond to various MC results shown in Fig. 1.
We first note that essential bounds on the \(f_{s}\) parameter will be obtained thanks to the data gathered in FASER\(\nu\) during the ongoing LHC Run 3. Using the procedure outlined above, we have found that already within the next few years, FASER\(\nu\) will be able to constrain the enhanced strangeness hypothesis up to the level of \(f_{s}\simeq 0.013~{}(1\sigma)\) assuming SIBYLL as a baseline (measured) neutrino spectrum. These results only mildly depend on the precise choice of the baseline spectrum. In particular, we have also verified this for the spectra generated with the EPOS-LHC and QGSJET MC tools and found similar expected bounds at the level of \(f_{s}\simeq 0.013\) and \(0.012\), respectively. As discussed in Ref. [87], these MC generators predict either smaller or larger enhancement effects in the CR shower data. Notably, regardless of the precise choice of the generator, the constraining power of FASER\(\nu\) significantly exceeds the preferred value of \(f_{s}\sim(0.3-0.8)\) obtained by fitting the UHECR data.
This motivates studying potential discovery prospects in FASER\(\nu\). We have tested them assuming that the neutrino data gathered in FASER\(\nu\) will correspond to SIBYLL predictions enhanced by an additional impact of the non-zero \(f_{s}\) parameter. We find in this case that the unique features of this scenario differ from other SM predictions sufficiently strongly to allow for excluding the \(f_{s}=0\) hypothesis at the \(5\sigma\) level for the swapping fraction \(f_{s}=0.06\) or so. We recall that this result has been obtained by considering realistic FASER\(\nu\) efficiency factors, as discussed in Sec. II.3.
To obtain even more baseline-independent results, we similarly study the discovery prospects for FASER\(\nu\) focusing only on the muon neutrino data and electron neutrinos with energies in the range \(100~{}\mathrm{GeV}\lesssim E_{\nu_{e}}\lesssim 300~{}\mathrm{GeV}\). This excludes high-energy electron neutrinos and the tau neutrino data that are currently subject to the largest theoretical uncertainties based on various MC predictions, cf. Fig. 1 and yellow-shaded bands in the upper left panels of Fig. 4. After limiting the dataset for the enhanced strangeness analysis this way, we still find good discovery prospects in FASER\(\nu\). The \(f_{s}=0\) hypothesis will be then excluded at \(5\sigma\) for \(f_{s}\gtrsim 0.2\). This is driven by the low-energy part of the \(\nu_{e}\) spectrum, in which significant deviations from all the MC predictions are expected for \(f_{s}\) of order tens of percent. The capabilities of FASER\(\nu\) in probing this effect will be further enhanced by combining the data gathered by this detector and the SND@LHC experiment. We conclude that the ongoing far-forward neutrino physics program at the LHC will be able to decisively test benchmark models predicting a few tens of percent pion to kaon swapping fractions in forward collisions at the relevant energy and probe this solution to the CR muon puzzle.
Figure 4: _Left:_ The top panels show the electron (left) and muon (right) neutrino CC event scattering rates in FLArE obtained using SIBYLL as the baseline MC generator and with three radial bins. The solid black histograms correspond to \(f_{s}=0\), while the dashed orange (blue, black) ones to \(f_{s}=0.01\), \(0.003\), \(0.001\). The latter remain barely distinguishable from the \(f_{s}=0\) baseline in the plots. These values of \(f_{s}\) roughly correspond to \(1\sigma\) exclusion bounds obtained for FASER\(\nu\) and FLArE with \(10\%\) or \(100\%\) of total data. For FASER\(\nu\), the efficiency factors arising from geometry, energy dependence, and charged lepton identification have been applied. The green histograms represent the \(f_{s}=0.5\) case, for which the cosmic-ray muon puzzle can be solved. The variations in the neutrino event rate due to different MC predictions from Fig. 1 are shown with yellow-shaded bands. The bottom panels zoom in on the uncertainty bands on the neutrino spectrum, shown as gray-shaded bands similar to Fig. 1. Expected deviations from the \(f_{s}=0\) case are also shown as colorful lines that correspond to the aforementioned exclusion bounds from FASER\(\nu\) and FLArE. _Right:_ The \(2\sigma\) constrained values (gray) for \(f_{s}\) obtained using FLArE and FASER\(\nu\), also demonstrating the effect of choosing different predictions as the baseline for the latter. These are compared to less constraining values obtained for the discovery potential at FASER\(\nu\) (turquoise), with and without the information on tau neutrinos and high-energy contributions to the \(\nu_{e}\) spectrum. Notably, all of the predicted constraints cover the \(0.3<f_{s}<0.8\) region shown in dark green, i.e., the values of \(f_{s}\) favored by the enhanced strangeness solution to the CR muon puzzle. The light green band extending to lower values of \(f_{s}\sim 0.005\) is added to indicate that the effect might manifest in a more subtle way in \(pp\) collisions at the LHC.
While LHC Run 3 searches will already place strong constraints on this scenario, it is also possible that the swapping probability might not be a constant factor. In particular, it can depend on the mass number of colliding nuclei and become more substantial for increasing \(A\), while it could be less pronounced in \(pp\) collisions [31, 33]. In addition, the impact of energy and pseudorapidity dependence of \(f_{s}\) on the CR data has recently been studied in Ref. [88]. It has been shown that introducing such dependence can, e.g., allow for solving the puzzle with an \(f_{s}\) parameter that increases linearly with energy. This would predict smaller values of \(f_{s}^{(\rm LHC)}\) at LHC energies, while the maximum value \(f_{s}^{(\rm max)}\) would still be large and substantially modify the kaon production rate at higher energies. In the example discussed therein, one can estimate \(f_{s}^{(\rm LHC)}\sim 0.005\) if \(f_{s}^{(\rm max)}\sim 0.5\) is assumed. The muon puzzle can still be solved in this case. It is then possible that only a more subtle impact of the enhanced strangeness scenario could be seen in \(pp\) collisions at the LHC.
Going beyond a few percent precision might then be crucial for probing this scenario in the far-forward LHC searches. This will be possible with the proposed FPF experiments. In the bottom left panels of Fig. 4, we show in gray the expected uncertainty bands on the electron and muon neutrino spectra in FLArE obtained similarly to Fig. 1. On top of this, we show the predicted deviations for the pion-to-kaon swapping probability of 1.3%, 0.43%, and 0.14%. These correspond to the FASER\(\nu\) \(1\sigma\) exclusion bound discussed above and to FLArE constraints obtained with either 10% of data or the full dataset. As can be seen, within the first one to two years of data taking, FLArE will surpass the ongoing LHC searches by a factor of a few in probing the \(f_{s}\) parameter. The improvement by about an order of magnitude in \(f_{s}\) is expected after the entire HL-LHC era such that sub-percent values of this parameter will be tested.
We summarize the expected bounds on \(f_{s}\) in the right panel of Fig. 4. In the plot, we indicate with a dark green color the preferred range of values of the \(f_{s}\) parameter that could explain the CR muon puzzle. We put it between 0.3 and 0.8 following the results present in Ref. [33] for concreteness. We also show in the plot an extended light green band towards lower values of \(f_{s}\sim 0.005\), which refers to the possible smaller magnitude of this effect in \(pp\) collisions at the LHC. On top of this, we show in turquoise the \(f_{s}\) ranges that can lead to discovery in the ongoing FASER\(\nu\) searches, based on either the full neutrino data or a limited dataset to \(\nu_{\mu}\) and low-energy electron neutrinos. We also present in the figure a set of gray-shaded exclusion bands at \(2\sigma\) obtained for FASER\(\nu\) with three different baseline MC generators and for FLArE with the entire or limited data sets, as discussed above. The proposed FLArE experiment will probe this scenario up to \(\mathcal{O}(0.1\%)\) level in \(f_{s}\), below which barely any effect on the \(pp\) final-state meson distribution is expected.
### Neutrino Charged Current Non-Standard Interactions
One of the major developments of the far-forward neutrino physics program at the LHC is the possibility of studying CC interactions of the tau neutrinos at the TeV energy scale on an event-by-event basis. This is thanks to the exceptional capabilities of the currently operating emulsion detectors that could be further improved in the future in the FPF experiments. Below, we discuss how these searches can help to constrain possible new physics contributions to high-energy neutrino interactions, cf. Refs. [34, 35, 36, 37, 38, 39, 100, 101] for other studies regarding far-forward neutrinos and new physics.
In the SM, the CC neutrino scatterings off nuclei are driven by the \(W\) boson exchange. BSM contributions that could modify these interaction rates are typically associated with new physics at a scale above the characteristic momentum transfer in neutrino interactions at the LHC, especially if they go beyond the SM-like V-A interactions that could be affected by purely neutrino-philic species; cf. Ref. [94] for a sample analysis of this kind for forward LHC searches. Therefore, a convenient way to describe such BSM-induced interactions is via an effective field theory (EFT) approach. The typical momentum transfer in CC DIS neutrino scatterings at the LHC is \(Q\sim\mathcal{O}(10\ \text{GeV})\), and we require the new physics scale to remain higher, \(\Lambda\gg Q\), for the validity of the EFT.
The sensitivity reach of FASER\(\nu\) to a number of such operators that could arise, e.g., within the framework of the weak EFT [102, 103, 104], has been studied in Ref. [36] and competitive exclusion bounds have been found for some of them, primarily related to \(\nu_{\tau}\)-like CC scattering signatures. Here, for illustration, we focus on two such right-handed operators that are described by the following Lagrangian
\[\mathcal{L} = -\frac{2\,V_{ud}}{v^{2}}\times(\bar{u}\gamma^{\kappa}P_{R}d)\times\left[\epsilon_{R}^{\mu\tau}\left(\bar{\ell}_{\mu}\gamma_{\kappa}P_{L}\nu_{\tau}\right)+\epsilon_{R}^{\tau e}\left(\bar{\ell}_{\tau}\gamma_{\kappa}P_{L}\nu_{e}\right)\right],\]
where we use \(V_{ud}\) as the relevant entry of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, \(v\simeq 246\ \text{GeV}\) is the SM Higgs vacuum expectation value, and \(\epsilon_{R}^{\alpha\beta}\) are the respective Wilson coefficients describing neutrino NSI.
The presence of neutrino NSI would affect both production and interaction rates of neutrinos. We follow the discussion of Ref. [36] and apply the neutrino detection and production coefficients modified by new physics contributions derived therein. In particular, it has been found that these coefficients are not expected to vary significantly with the incident neutrino energy in the range relevant to the far-forward LHC searches. Hence, they are not strongly sensitive to precise modeling of the neutrino energy spectrum. Still, new physics can lead to distinct features in the LHC data by modifying the spectra for only selected neutrino flavors and parent mesons.
We extend the previous analysis by including the modeling of MC prediction uncertainties, as discussed in Sec. II. The bounds presented below are obtained after profiling over all the nuisance parameters describing the neutrino spectra variations. These variations could a priori surpass the impact of neutrino NSI and should be considered in estimating the new physics reach. As we present below, however, this effect does not significantly limit the sensitivity of the FPF experiments, at least for the EFT operators selected in our analysis. We consider both the energy and spatial distributions of events in the detectors; for the latter, we use three radial bins for both FASER\(\nu\) and FASER\(\nu\)2. We focus on the emulsion detectors with the best capabilities to study \(\nu_{\tau}\) interactions.
We present the results of our analysis in Fig. 5. In the left panel, we show gray-shaded uncertainty bands on the electron, muon, and tau-neutrino CC scattering rates in FASER\(\nu\)2. In this case, no impact of new physics has been assumed. The baseline model is chosen to be an average of the predictions, similar to the results discussed in Sec. III. On top of this, we also present colorful lines representing predicted deviations from the baseline scenario due to the presence of neutrino NSI. These have been obtained by simultaneously changing both the Wilson coefficients mentioned above and the nuisance parameters describing MC variations. We subsequently profile over all the parameters besides either \(\epsilon_{R}^{\mu\tau}\) or \(\epsilon_{R}^{\tau e}\).
The former Wilson coefficient \(\epsilon_{R}^{\mu\tau}\) is related to the operator which couples the (charged) muon and tau neutrino. It primarily affects the neutrino production rate by inducing a non-zero branching fraction for the process, \(\pi\to\mu\nu_{\tau}\), which enhances the tau neutrino flux. This operator could also induce CC scatterings of the tau neutrinos leading to the final state muons, \(\nu_{\tau}N\to\mu X\). Such a process would reduce the number of events reconstructed as \(\nu_{\tau}\)-like CC scattering interactions, given the lack of the final-state tau lepton \(\tau\). However, the net impact on the \(\nu_{\tau}\) production rate, i.e., the increase of the \(\nu_{\tau}\) flux, is significantly more substantial. It is driven by a large flux of parent pions that, otherwise, never produce tau neutrinos.
On the other hand, the Wilson coefficient \(\epsilon_{R}^{\tau e}\) couples \(\nu_{e}\) and the tau lepton \(\tau\). In this case, the impact on the \(\nu_{\tau}\)-like detection rate is more significant, and it is determined by the NSI-induced CC electron neutrino scatterings, \(\nu_{e}N\to\tau X\), which mimic interactions of the tau neutrinos. The presence of this operator does not induce any additional significant production modes for the electron neutrinos together with the tau lepton. The dominant such modes would be associated with decays of charm hadrons and then related to operators involving quarks from the second generation.
The projected \(1\sigma\) bounds for FASER\(\nu\)2 on each of the two coefficients considered individually read: \(|\epsilon_{R}^{\tau e}|<0.0158\) and \(|\epsilon_{R}^{\mu\tau}|<0.0034\). The resulting deviations from the baseline tau neutrino spectrum are at the \(\mathcal{O}(1\%)\) level for the \(\tau e\) operator, as shown with the purple line in the top left panel of Fig. 5. They do not depend significantly on the incident neutrino energy. This is because the corresponding impact of new physics on the tau neutrino detection rate only mildly depends on \(E_{\nu}\). Instead, in the \(\mu\tau\) case, the deviations from the baseline spectrum show a clear energy dependence. Notably, in the SM, the pion decay contribution to the muon neutrino far-forward spectrum at the LHC dominates at energies below a few hundred GeV. It is then in this energy regime that one expects the most significantly enhanced production of \(\nu_{\tau}\)s from rare NSI-induced pion decays, which is the reason behind the observed effect.
We note that the observation of new physics in interactions at \(E_{\nu_{\tau}}\) of a few tens of GeV could be affected by a decreasing vertex detection efficiency in emulsion at lower energies [13]. In order to estimate the impact of this effect on our NSI results, we have additionally studied FASER\(\nu\)2 bounds after applying this effect together with the lepton detection efficiency. To this end, we have employed the same efficiency functions as in FASER\(\nu\), cf. Sec. II.3 for discussion. The projected bounds found this way are about 20% weaker for the \(\tau e\) operator. The weakening of the predicted constraints is more pronounced for the \(\mu\tau\) operator. The excluded value grows by about 30%, as expected from a stronger energy dependence of the NSI effect in this case. In general, however, we find that both operators can be constrained well in FASER\(\nu\)2 even for decreasing detection efficiency at lower energies. The precise constraining power will be further sensitive to PDF uncertainties, as discussed in Sec. II.3.
In the central and bottom left panels of Fig. 5, we also show with colorful lines the expected NSI-driven deviations from the baseline CC scattering rates for the electron and muon neutrinos. As can be seen, these are significantly smaller than for the tau neutrinos. The observed difference is due to much larger expected scattering rates for \(\nu_{e}\) and \(\nu_{\mu}\) that are less sensitive to small variations in the number of events than \(\nu_{\tau}\)s. We note that the results of such analysis would be much different in the presence of non-negligible neutrino oscillations in long-baseline neutrino experiments. Instead, far-forward neutrino searches at the LHC combine capabilities of short-baseline neutrino experiments with the potential to detect \(\nu_{\tau}\)-induced CC scattering events directly.
The right panel of Fig. 5 corresponds to the results obtained after profiling over all the nuisance parameters but without profiling over both the Wilson coefficients. The projected bounds found this way are similar in constraining power to the ones discussed above. At 90% CL they read \(|\epsilon_{R}^{\tau e}|<0.026\) and \(|\epsilon_{R}^{\mu\tau}|<0.0057\). Both considered EFT operators affect the tau neutrino CC event scattering rate almost independently. We also confirm this by finding that the relevant information matrix is close to the diagonal. The expected constraining power of the far-forward neutrino physics program at the LHC can be compared with other searches. In the case of the \(\tau e\) operator, the dominant such bounds of \(|\epsilon_{R}^{\tau e}|<0.12\) at 90% CL have been derived in Ref. [105] based on past NOMAD constraints on \(\nu_{e}\) oscillations into \(\nu_{\tau}\)[106; 107]. The \(\mu\tau\) operator can be currently best constrained by using the ratio of pion decay widths to the electron and muon, \(\Gamma(\pi\to e\nu_{e})/\Gamma(\pi\to\mu\nu_{\mu})\)[108; 109]. The bounds derived this way are at the level of \(|\epsilon_{R}^{\mu\tau}|<0.071\) at 90% CL [36]. As can be seen in the right panel of Fig. 5, the projected FPF bounds can improve past limits by up to an order of magnitude and find new leading limits already with the first 10% of data. We additionally note that in the presence of multiple Wilson coefficients describing non-vanishing neutrino NSI, interesting cancellations can appear that might significantly weaken these bounds in fine-tuned scenarios [108]. In order to better resolve such issues, measuring the final-state neutrino flavor remains crucial, which further highlights the importance of neutrino NSI searches in the FPF experiments.
We also comment on the importance of using double differential distributions in these analyses. Given a relatively small transverse size of both FASER\(\nu\) and FASER\(\nu\)2, we find only mild improvement in using three radial bins over not considering the spatial distribution of events. However, going to larger pseudorapidity regimes could visibly strengthen the bounds. We have numerically studied this by extending the search to 1 m away from the beam-collision axis, i.e., to the distance characteristic for FLArE.
Figure 5: _Left:_ The uncertainties for the neutrino CC event scattering rates at FASER\(\nu\)2, assuming 100% of the data collected and using three radial bins, along with the NSI parameters \(\epsilon_{R}^{\tau e}\) and \(\epsilon_{R}^{\mu\tau}\) set to the obtained constraints. _Right:_ The projected FASER\(\nu\)2 constraints are compared to those obtainable using only 10% of the expected data and those attainable with 100% of the expected FASER\(\nu\) data. Current bounds on the respective Wilson coefficients are shown with gray-shaded bands.
The proposed AdvSND detector could extend this coverage even further. Based on our analysis, we expect a further \(\mathcal{O}(10\%)\) improvement in the NSI bounds on \(\epsilon_{R}^{\mu\tau}\) and \(\epsilon_{R}^{\tau e}\) from analyzing the data in the full pseudorapidity range of the FPF experiments.
Finally, it is instructive to comment on an approximate scale of heavy new physics species \(\Lambda\), which could be involved in generating the low-energy operators of our interest. This could be obtained by matching our operators to the SMEFT operators above the electroweak (EW) scale [110; 111; 102]. In this case, off-diagonal right-handed EFT operators receive only \(\Lambda^{-4}\) corrections [103]. The FASER\(\nu\)2 bounds found above could then be translated into about \(\Lambda=v/\epsilon^{1/4}\simeq 600\) GeV and 900 GeV at 90% CL for the \(\tau e\) and \(\mu\tau\) operators, respectively.
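For orientation, the quoted translation between the Wilson-coefficient bounds and the heavy mass scale can be reproduced in a few lines; only the \(\Lambda=v/\epsilon^{1/4}\) scaling from the text is used, and any additional matching factors are ignored:

```python
v = 246.0  # SM Higgs vacuum expectation value in GeV

for name, eps in [("tau-e", 0.026), ("mu-tau", 0.0057)]:
    print(f"{name}: |eps| < {eps} -> Lambda ~ {v / eps**0.25:.0f} GeV")
# tau-e gives ~610 GeV and mu-tau ~900 GeV, matching the values in the text
```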
## V Conclusions
When estimating the discovery potential of a novel experimental program, it always remains crucial to properly consider possible Standard Model effects and related uncertainties that could mimic new phenomena. Breaking this degeneracy is also essential for understanding the expected impact of the recently started far-forward neutrino physics program at the LHC. In the current work, we have made an important step in this direction.
We have proposed parameterizing the expected neutrino spectra by combining the leading predictions based on various approaches to modeling forward parent hadron spectra. The parameterized flux model obtained this way is characterized by 12 nuisance parameters describing the variations in neutrino spectrum normalization and shape. Importantly, these variations take into account expected correlations between the neutrino spectra of different flavors. We then estimated how well the current and proposed forward LHC neutrino experiments can constrain this model. Our analysis considers information about the neutrino charged-current interaction rates for different flavors, energies, and pseudorapidities.
In particular, we have shown that the future Forward Physics Facility data will allow for constraining the LHC neutrino fluxes even at a sub-percent level for \(\nu_{e}\) and \(\nu_{\mu}\), i.e., to a precision at which additional PDF uncertainties affecting neutrino interaction rates become important. These will be reduced thanks to future EIC and FPF measurements. The FPF data will then allow for differentiating between various MC predictions with high precision. Instead, the expected uncertainty bands are of order a few percent for the tau neutrinos.
The forward LHC neutrino data will also allow for further improving the tunes of the MC tools used to predict the parent hadron spectra. This will profoundly affect our understanding of cosmic-ray physics, including the possibility of solving the puzzling excess of the muon rate observed in CR-induced air showers at ultrahigh energies. We have analyzed a recently proposed solution to this problem based on the pion-to-kaon swapping among products of high-energy \(pp\) collisions at large pseudorapidities. Our study shows that the currently operating FASER\(\nu\) detector offers excellent capabilities to probe this scenario within the next few years of LHC Run 3. Future FPF searches could further improve relevant bounds on the swapping fraction up to sub-percent precision.
New physics contributions to neutrino interactions can also be probed this way. We have illustrated this for a \(\nu_{\tau}\)-like signature of CC interactions for TeV-scale energies of incident neutrinos. These can be measured on an event-by-event basis in the far-forward emulsion detectors at the LHC. We have tested a scenario in which two Wilson coefficients describing BSM right-handed couplings of quarks to charged leptons and neutrinos are varied simultaneously. We show that the unique effect of new physics can be identified by employing full forward LHC neutrino data to disentangle NSI from variations in MC predictions attributed to an insufficient understanding of the forward hadron production. We have shown that selected Wilson coefficients can be then constrained in the future FASER\(\nu\)2 detector with up to about an order of magnitude better precision than current bounds.
One can extend the current work to other physics analyses. This includes, i.a., specific effects predicted to modify neutrino production rates, e.g., intrinsic charm [112; 113] or gluon saturation at small \(x\)[114; 115] that will affect the charm-induced tau neutrino spectrum in the far-forward kinematic region. New physics could also non-trivially manifest itself in the LHC neutrino data if oscillations into sterile neutrinos are present [13], cf. also recent discussion about the discovery prospects for neutrino-modulino oscillations [116]. The onset of a new era of precision neutrino physics at the LHC offers exciting opportunities to improve our understanding of hadronic interactions and the physics of the most elusive among SM particles.
## Acknowledgements
We thank Weidong Bai, Atri Bhattacharya, Luca Buonocore, Yu Seon Jeong, Rafal Maciula, Mary Hall Reno, Luca Rottoli, Ina Sarcevic, Anna M.
Stasto, and Antoni Szczurek for helpful discussions and for sharing the files used to obtain the charm-induced neutrino spectra. We would like to thank Luis Anchordoqui, Akitaka Ariga, Tomoko Ariga, Anatoli Fedynitch, Max Fieg, Tanguy Pierog, Felix Riehn, Dennis Soldin for useful discussions and comments on the manuscript. We are grateful to the authors and maintainers of many open-source software packages, including Rivet[117, 118] and scikit-hep[119]. FK acknowledges support by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306. TM and ST are supported by the National Science Centre, Poland, research grant No. 2021/42/E/ST2/00031. ST is also supported by the grant "AstroCeNT: Particle Astrophysics Science and Technology Centre" carried out within the International Research Agendas programme of the Foundation for Polish Science financed by the European Union under the European Regional Development Fund. ST is additionally partly supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 952480 (DarkWave).
## Appendix A Application of the Cramer-Rao bound to forward LHC neutrino measurements
As discussed in Sec. II.2, we interpolate between established predictions for the forward neutrino spectra to obtain the expected number of neutrino interaction events in each of the detectors considered in our study. Here, we discuss further steps of our statistical analysis.
The observables in the binned histogram analysis are the numbers of events \(n_{i}\) observed in each \(i\)th bin. The likelihood function is obtained as a product of the Poisson likelihoods for all bins
\[L(\text{data}|\text{model})=\prod_{\text{bins }i}\text{Pois}(n_{i}|N_{i})=\prod_{ \text{bins }i}\frac{N_{i}^{n_{i}}e^{-N_{i}}}{n_{i}!}, \tag{10}\]
where \(N_{i}\) is the expected number of events per bin in the model. In the following, we provide a function for the expected log-likelihood ratio \(\log r\), where the likelihood ratio with respect to the baseline model reads
\[r(\lambda^{\pi},\lambda^{K},\lambda^{c})=\frac{L(\text{expected data}|\lambda^{\pi},\lambda^{K},\lambda^{c})}{L(\text{expected data}|\lambda^{\pi}=0,\lambda^{K}=0,\lambda^{c}=0)} \tag{11}\]
with the expected data corresponding to \(\lambda^{\pi}=\lambda^{K}=\lambda^{c}=0\).
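In code, Eqs. (10) and (11) amount to a few lines; the sketch below is a minimal transcription, with the log-factorial written via the gamma function so that the non-integer "expected data" used later in the Fisher-information computation are allowed:

```python
import numpy as np
from scipy.special import gammaln

def log_likelihood(n_obs, n_model):
    """Sum of log Poisson probabilities over all bins, Eq. (10); the
    gamma-function form of log(n!) allows non-integer 'expected data'."""
    n_obs = np.asarray(n_obs, dtype=float)
    n_model = np.asarray(n_model, dtype=float)
    return np.sum(n_obs * np.log(n_model) - n_model - gammaln(n_obs + 1.0))

def log_ratio(expected, model_counts, baseline_counts):
    """log r of Eq. (11): tested model relative to the baseline spectra."""
    return (log_likelihood(expected, model_counts)
            - log_likelihood(expected, baseline_counts))
```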
The expected likelihood ratio is approximated as
\[-2\log r=-\frac{d^{2}\log r}{d\lambda^{(i)}d\lambda^{(j)}}\Delta\lambda^{(i)} \Delta\lambda^{(j)}=I_{ij}\Delta\lambda^{(i)}\Delta\lambda^{(j)}, \tag{12}\]
where \((i),(j)\) run over all parent hadrons \(\pi,K,c\) for all generators, and \(I_{ij}\) are the components of the Fisher Information matrix. By the Cramer-Rao bound [29, 30], the smallest uncertainty achievable in the measurement corresponds to the covariance matrix \(\text{C}_{ij}=(I^{-1})_{ij}\). To avoid introducing additional numerical uncertainty in the computation of the Fisher information, the expected number of events per bin in the model is generalized into a real positive parameter in Eq. (10). The uncertainty bands for the neutrino spectra are obtained by solving for the eigenvalues and eigenvectors of the information matrix. The model is then varied from the baseline along the direction of each eigenvector individually, and the uncertainty in each bin is obtained as the square root of the quadratic sum of the differences of each variation to the baseline. When using multiple radial bins, the uncertainty \(\delta_{i}\) for each \(i\)-th radial bin is first computed in the aforementioned way. These are then combined as \(\delta_{\text{tot}}=\sqrt{\sum_{i}\delta_{i}^{2}}\left(\sum_{i}\delta_{i}\right)^{-1}\), separately for all energy bins, yielding the total uncertainty shown in the spectrum plots. In the present work, the uncertainties of all spectra are reported at the \(1\sigma\) level. Results corresponding to different statistical significance are also provided in selected cases in Sec. IV.
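A compact numerical version of this construction might look as follows; it is a sketch that assumes a user-supplied function `counts(lam)` returning the expected event counts per bin for nuisance parameters `lam`, and the finite-difference step is illustrative:

```python
import numpy as np

def fisher_matrix(counts, n_par, h=1e-3):
    """I_ij = sum over bins of (dN/dlam_i)(dN/dlam_j)/N at lam = 0, which
    is the Poisson expectation of -d^2 log r / dlam_i dlam_j, Eq. (12)."""
    N0 = counts(np.zeros(n_par))
    grad = np.empty((n_par, len(N0)))
    for i in range(n_par):
        lp, lm = np.zeros(n_par), np.zeros(n_par)
        lp[i], lm[i] = h, -h
        grad[i] = (counts(lp) - counts(lm)) / (2.0 * h)
    return grad @ np.diag(1.0 / N0) @ grad.T

def uncertainty_band(counts, n_par, h=1e-3):
    """Vary the model along each eigenvector of I by 1/sqrt(eigenvalue)
    and combine the per-bin shifts in quadrature (1-sigma band)."""
    I = fisher_matrix(counts, n_par, h)
    w, V = np.linalg.eigh(I)
    N0 = counts(np.zeros(n_par))
    shifts = [counts(V[:, k] / np.sqrt(w[k])) - N0 for k in range(n_par)]
    return np.sqrt(np.sum(np.square(shifts), axis=0))
```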
We use a profiling procedure amounting to a parallel projection of a generalized ellipsoid in the parameter space to estimate the constraints that can be obtained for a parameter used in the model computation. To profile over the \(n\)-th parameter in the information matrix \(I\), the \(n\)-th column (or row) of \(I\), with the \(n\)-th entry removed, is taken as the vector \(\mathbf{m}\) describing the mixing between the profiled parameter and the remainder. A reduced information matrix \(I^{\text{reduced}}\) is attained by removing the \(n\)-th column and row from \(I\), and the profiled information matrix is given by [72]
\[I^{\text{profiled}}=I^{\text{reduced}}-\mathbf{m}\otimes\mathbf{m}/I_{nn}. \tag{13}\]
The procedure is repeated to profile over multiple parameters, starting with the information matrix resulting from the previous step. By profiling over all but one parameter, the information matrix eventually reduces into a single entry \(a\), and the ultimate constraint for the remaining parameter is then obtained as \(a^{-1/2}\).
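The profiling step of Eq. (13) is a direct matrix operation; a minimal sketch:

```python
import numpy as np

def profile_out(I, n):
    """Profile the n-th parameter out of the information matrix I, Eq. (13)."""
    keep = [i for i in range(I.shape[0]) if i != n]
    m = I[keep, n]                  # mixing between kept and profiled parameters
    return I[np.ix_(keep, keep)] - np.outer(m, m) / I[n, n]

# Profiling over all but one parameter reduces I to a single entry a;
# the constraint on the remaining parameter is then a**-0.5.
```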
# Perturbative Asymptotic Safety and Its Phenomenological Applications

Alexander Bednyakov, Alfiia Mukhaeva
###### Abstract
Asymptotic safety is a remarkable example of how fruitful ideas borrowed from statistical physics proliferate into high-energy physics. The concept of asymptotic safety is tightly connected to fixed points (FPs) of the renormalization-group (RG) flow, and generalizes the well-known asymptotic freedom to a scale-invariant ultraviolet completion with non-vanishing interactions. In this review, we discuss the key ideas behind asymptotic safety, a mechanism for achieving it, and the conditions it imposes on general gauge-Yukawa field theories. We also pay special attention to possible phenomenological applications and provide an overview of standard model (SM) extensions potentially exhibiting asymptotic safety.
Keywords: renormalization group; asymptotic safety; new physics
## 1 Introduction
Today, we know two very successful theories of nature: The standard model (SM) and Einstein gravity. Both of them are thoroughly tested in various experiments. Despite the presence of some tensions, there is not really any conclusive indication that these theories are insufficient to describe the nature at scales at which we are currently testing them. However, neither of those two seems ultraviolet (UV) complete and can only be treated as effective field theories (EFT) valid at relatively low scales.
The SM, while being a formally renormalizable quantum field theory (QFT), exhibits singularities in the far UV--Landau poles in scale-dependent couplings, such as the Abelian hypercharge coupling and those of the Higgs-Yukawa sector. It is quite interesting that the scale at which the SM itself breaks down is trans-Planckian (far above the Planck mass). This fact opens up the possibility that quantum gravity can provide a UV-extended or even completed standard model.
When considering Einstein gravity as an EFT, the breakdown of predictivity is directly connected to the theory's perturbative non-renormalizability. This non-renormalizability necessitates the introduction of an infinite number of counterterms to absorb arising UV divergences, each associated with its own, a priori arbitrary, coupling constant that should be fixed by experiment. Consequently, the theory ends up with an infinitely large number of free parameters, which ultimately undermines predictivity.
The problem of the UV divergences and infinite number of free parameters of EFTs can be addressed in the context of Asymptotic Safety (AS). The idea of AS was proposed by S. Weinberg [1] in the late 1970s as a way of making the four-dimensional theory of gravity non-perturbatively renormalizable, and is tightly connected to the quantum version of scale invariance.
Generically, QFTs are not scale invariant, i.e., in the presence of quantum fluctuations, the scaling symmetry is broken, and the theory features a non-trivial renormalization group (RG) flow in the theory (coupling) space. This means that the couplings entering the QFT action become dependent on the energy or momentum scale and that the effective dynamics changes as one goes from scale to scale. The initial condition corresponds to the bare (microscopic) action in the UV (defined with a certain cutoff \(\Lambda\)), and the flow towards the infrared (IR) gives rise to a trajectory in the coupling space.
In general, the theory space is infinite-dimensional and accounts for all possible operators that are compatible with symmetries of the action (since operators are built from quantum fields, there is a freedom in the basis choice of the latter that can be translated to the freedom in the coupling space; in what follows, we only consider essential couplings that can not be removed by field redefinitions, see, e.g., Ref. [2]). If the RG flow features a fixed point (FP), the scaling symmetry can be recovered at the quantum level, resulting, e.g., in the possibility to remove the UV cutoff (\(\Lambda\to\infty\)). For example, the well-known asymptotic freedom corresponds to a trivial restoration of scale symmetry in the sense that it switches off all interactions (the so-called Gaussian FP) and, thus, removes the effect of quantum fluctuations completely. Another possibility is an interacting (or partially interacting) RG fixed point at finite values of the couplings. This latter case is utilised in the asymptotic safety framework.
Near FPs, the operators entering the bare action can be ordered by the corresponding critical exponents, which, at the Gaussian FP, coincide with canonical dimensions of the couplings (see below). The (combinations of) operators with negative exponents are said to be irrelevant in the IR, since the corresponding couplings are attracted to the FP values as we decrease the scale. In this respect, we have a prediction in the IR, e.g., a fixed value, or, in a more general situation, a relation between certain couplings.
On the contrary, positive critical exponents give rise to directions in the coupling space that are repelled from the FP along the RG flow towards the IR, and, thus, can not be predicted from the FP values; tiny deviations in the bare action can have drastic consequences in the IR. These relevant directions span what is called a UV-critical surface, and the number of independent directions constitutes the number of free parameters of the theory that should eventually be fixed by experiment. Contrary to general EFT, in which the couplings of different operators are thought to be independent, in AS scenarios, physical trajectories are assumed to reside on this finite-dimensional submanifold in the infinite-dimensional theory space. As a consequence, the lower the dimensionality of the UV critical surface, the more predictive the theory is. Notably, all irrelevant couplings can deviate from the fixed point along the critical surface.
In this respect, one overcomes the issue with infinite number of free parameters. The problem of possible UV singularities is also addressed in this case, since by reversing the flow towards UV (corresponding to \(\Lambda\to\infty\)), one reaches the FP with finite values of all the couplings.
This is the essence of fundamental asymptotic safety. One can also envisage a non-fundamental AS, for which the bare action (at finite cutoff) is chosen (slightly) off the UV critical surface of the considered FP. In this case, we can not safely extrapolate \(\Lambda\to\infty\) (unless we hit another FP), since the flow towards the UV is repelled from the surface. However, in the IR, the couplings are attracted to the FP, and we again have predictions at low scales.
While a non-perturbative determination of the RG flow is quite involved and usually based on the functional renormalization group (FRG), a remarkable progress is achieved in perturbative RG, in which the equations that drive the flow can be computed order-by-order in loop expansion around the Gaussian FP. As an example, we refer to the convenient possibility of extracting necessary equations in a general renormalizable quantum-field theory in \(d=4\) dimensions via various computer codes [3; 4; 5] that can combine old [6; 7; 8; 9] and new [10; 11; 12; 13; 14] results.
In this mini-review, we mainly rely on perturbative RG and consider particle-physics implications of AS. In spite of the fact that asymptotic safety was initially proposed to make quantum gravity self-consistent, we avoid this topic as much as possible in the review. Nevertheless, let us give some important comments on AS gravity.
At the end of the 1990s, M. Reuter and F. Saueressig [15; 16] considered a very simple gravitational Einstein-Hilbert action, that has only two operators parametrised by the
dimensionless Newton constant and vacuum energy. They found two types of fixed points. The first one is the non-interacting (Gaussian) FP. The second one is the UV interacting FP, at which both gravitational constants are non-zero. This fixed point would actually correspond to the high-energy regime, so other gravitational interactions may become important and spoil the existence of the FP.
To address this issue, there has been a lot of activity and more elaborated calculations, which demonstrated that such a fixed point is not really an artifact of simplification. Even if we start to add more higher order operators to this action, such FP always persists, see, for example, Refs. [17; 18; 19; 20; 21; 22; 23; 24; 25; 26], and references therein.
However, there exist open questions, which are discussed in more detail in Ref. [27]. Among the issues are the background and gauge-fixing dependence of the results obtained in quantum gravity. The authors of Refs. [28; 29; 30] are making first steps towards addressing some of these problems. Moreover, the renormalization procedure requires higher-order (in curvature) operators to be added to the Einstein-Hilbert action, giving rise to potentially ghost-like instabilities. The more recent Refs. [31; 32; 33] demonstrate ways of constructing effective dynamics involving a number of these higher-order terms, nevertheless without any tachyonic instabilities. In addition, most of the computations are carried out with metrics having Euclidean signatures. Thus, understanding how the results carry over to the Lorentzian signature is another critical open issue, see, e.g., Ref. [34].
One can also ask an important question regarding the influence of matter on gravity in the context of asymptotic safety. It is known that even minimal coupling to the gravity of a self-interacting scalar field \(\phi\) can give rise to non-zero non-minimal interactions of the form \(\xi\phi^{2}R\) with curvature \(R\), when quantum corrections from matter fields are taken into account (see, e.g., Ref. [35]). This coupling seems to violate a strong equivalence principle but is very important for (Higgs) inflation scenarios such as that given, e.g., in Ref. [36]. A recent study of Ref. [37] considers the issue of obtaining correct values of the slow-roll parameters within AS in the SM-like models with scalars and fermions. While we appreciate the importance of these kind of studies, we also refrain from touching this topic in this review and will return back to particle physics.
In recent years, asymptotic safety has been quite extensively used when dealing with the triviality problem of the \(U(1)\) gauge couplings by making the latter reach an interacting fixed point at some scale [38]. Moreover, after the discovery of the Higgs boson [39; 40], we know that the standard model can consistently be extended up to the Planck scale [41; 42; 43; 44]. Subsequently, the interaction of the standard model with quantum fluctuations of gravity has also been actively studied in the framework of quantum field theory [45; 46; 47]. Progress in studying asymptotically safe theories has also been made in the context of supersymmetric models [48], conformal windows of parameters [49], and within models possessing large particle multiplicities [50; 51; 52; 53; 54].
Recently, proposals have been put forward that connect asymptotic safety with flavour physics within and beyond the SM [55; 56]. Indeed, it has been demonstrated that AS models may be able to explain measurements in the flavour sector, in particular, with discrepancies with the SM predictions. Moreover, asymptotically safe SM extensions both with and without taking into account quantum gravity effects can explain the flavour pattern of the SM [56; 57; 58]. Altogether, asymptotically safe UV completions of the SM can present strong implications for flavour physics.
This paper is organised as follows. In Section 2, we introduce key ideas and notions of asymptotic safety. We consider the RG flow in a simple, yet general, gauge-Yukawa model, discuss the fixed points of the flow and enumerate different phases that can be achieved by varying the gauge group and matter-field representations in Section 3. We switch to realistic SM extensions in Section 4 and review some of BSM scenarios available on the market together with their phenomenological applications. When considering models with matter coupled to gravity in Section 5, we follow a pragmatic approach to gravity-induced
corrections and discuss how the ideas behind AS can enhance the predictive power of New Physics (NP). Our conclusions can be found in Section 6.
## 2 Asymptotic Safety in Gauge-Yukawa Theories
As a starting point, we consider the space of dimensionless couplings \(g_{i}\) that enter a general action of a, not necessarily renormalizable, theory in \(d\) space-time dimensions:
\[S=\int d^{d}x\,\mu^{d-\Delta_{i}}g^{i}O_{i}(x) \tag{1}\]
with \(O_{i}\) being a set of local operators with scaling dimensions \(\Delta_{i}\), and \(\mu\) denoting the RG scale. The RG flow is driven by beta functions and is described by first-order differential renormalization-group equations (RGE):
\[\partial_{t}\alpha_{i}=\beta_{i}(\alpha),\qquad\alpha_{i}\equiv\frac{g_{i}^{2 }}{16\pi^{2}},\quad t=\ln\mu, \tag{2}\]
where for convenience, we introduce \(\alpha_{i}\) for every \(g_{i}\). In perturbation theory, we have the following expansion
\[\beta_{i}(\alpha)=\beta_{i}^{(1)}+\beta_{i}^{(2)}+\ldots \tag{3}\]
with \(\beta_{i}^{(l)}\) corresponding to the \(l\)-loop correction. For given initial values \(\alpha_{i}(0)\) of the couplants, the flow towards infrared (IR) corresponds to \(t\to-\infty\), while in the limit \(t\to\infty\) we approach the UV region. As required by asymptotic safety, the \(\beta\)-functions of the theory must admit a fixed point, i.e., some set of non-trivial coupling values \(\alpha^{*}\) for which all \(\beta\)-functions vanish. This condition can be expressed as
\[\beta_{i}(\alpha)|_{\alpha=\alpha^{*}}=0. \tag{4}\]
An RG trajectory that ends in the UV at such a fixed point corresponds to a UV-complete theory [59], which remains meaningful at all scales. Such RG trajectories give rise to a "fundamental" asymptotic safety. However, it is also worth considering a "non-fundamental" case arising when an FP is a saddle-point possessing both UV- and IR-attractive directions. In such a situation, it provides a UV completion only for some RG trajectories, while acting as an IR attractor for a more fundamental description [60].
When looking for fixed points, we will demand the following:
1. The coordinates must be physical, fulfilling \(\alpha^{*}\geq 0\);
2. Couplings must be perturbative (for more elaborate conditions of perturbativity, see, e.g., Ref. [46]), which requires \(\alpha^{*}\leq 1\).
The former condition reflects the fact that \(\alpha_{i}\) is a square of \(g_{i}\), while the latter allows one to choose weakly interacting fixed points that can potentially render the theory predictive at all scales. In a model with some external parameters, e.g., the number of colours \(N_{c}\) or field species \(N_{f}\), the solution \(\alpha_{i}=\alpha_{i}^{*}\) of (4) depends on these quantities and usually exists only for values lying in particular intervals ("windows").
In order to illustrate the instances in which a model can present such fixed points, we now study a simple renormalizable gauge-Yukawa theory in \(d=4\) dimensions containing one gauge (\(\alpha_{g}\)) and one Yukawa (\(\alpha_{y}\)) coupling. Following Refs. [61; 62] and related works, we use \(kmn\)-ordering corresponding to a \(k\)-loop RGE for the gauge, \(m\)-loop RGE for Yukawa, and \(n\)-loop RGE for scalar self-couplings, and consider, for simplicity, the 210-case.
Here, we should comment on the Weyl consistency conditions (WCC), which relate derivatives of beta functions [63]. They arise by considering a model on a curved (but fixed) background and performing Weyl rescalings of the metric. Due to the fact that two subsequent Weyl rescalings commute, it follows that \(\frac{\partial\beta^{i}}{\partial g_{j}}=\frac{\partial\beta^{j}}{\partial g_{i}}\). Herein, \(\beta^{i}=\chi^{ij}\beta_{j}\), where \(\chi^{ij}\) is a metric in the space of couplings that depends on the latter. An expression for \(\chi^{ij}\) for gauge-Yukawa models in the 321-approximation has been derived in [64], while the 432-case in a general renormalizable field theory was considered in [11]. These conditions must be satisfied for the full RG flow and can be imposed on the perturbative expansion. It is worth mentioning that Ref. [49] discusses different ordering schemes for beta functions in the context of gauge-Yukawa theories (see also Section 3).
In the 210-approximation, the scalar self-interactions decouple and we can restrict ourselves to the \(\beta\)-functions
\[\beta_{g} =\alpha_{g}^{2}(-B+C\alpha_{g}-D\alpha_{y}), \tag{5}\] \[\beta_{y} =\alpha_{y}(E\alpha_{y}-F\alpha_{g}). \tag{6}\]
Here \(B,E,F\) are one-loop coefficients, while \(C,D\) come from two loops. While \(E\) is assumed to be positive, and \(F\) and \(D\) non-negative, the signs of \(C\) and \(B\) depend on the specific particle content and symmetries of a theory.
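To make the flow structure tangible, the system (5) and (6) can be integrated numerically; in the sketch below, the coefficient values are arbitrary placeholders chosen only to land in the \(B<0\), \(C^{\prime}<0\) regime discussed below, not values derived from any particular matter content:

```python
import numpy as np
from scipy.integrate import solve_ivp

# one- and two-loop coefficients of Eqs. (5)-(6); placeholder values chosen
# to realize B < 0 and C' = C - D*F/E = -9 < 0
B, C, D, E, F = -1.0, 3.0, 6.0, 2.0, 4.0

def beta(t, a):
    ag, ay = a
    return [ag**2 * (-B + C * ag - D * ay),  # beta_g, Eq. (5)
            ay * (E * ay - F * ag)]          # beta_y, Eq. (6)

# integrate towards the IR (t -> -infinity) from a point near the nullcline
sol = solve_ivp(beta, (0.0, -30.0), [0.05, 0.11], rtol=1e-8)
print("couplings in the deep IR:", sol.y[:, -1])  # both flow towards zero
```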
For a single-gauge group with \(n_{f}\) charged Weyl (\(\kappa=1/2\)) or Dirac (\(\kappa=1\)) fermions, and \(n_{s}\) charged scalars, we can write [65; 66; 67]
\[B =2\bigg{[}\frac{11}{3}C_{A}-\frac{4}{3}\kappa(T_{f}n_{f})-\frac{1 }{6}(T_{s}n_{s})\bigg{]}, \tag{7}\] \[C =2\bigg{[}-\frac{34}{3}C_{A}^{2}+\kappa\bigg{(}4C_{f}+\frac{20}{ 3}C_{A}\bigg{)}(T_{f}n_{f})+\bigg{(}2C_{s}+\frac{1}{3}C_{A}\bigg{)}(T_{s}n_{s })\bigg{]}. \tag{8}\]
Here, \(C_{A}\) is the second Casimir for adjoint representation, while \(C_{R}\) and \(T_{R}\) refer to the quadratic Casimirs and the Dynkin index, respectively, for fermion (\(R=f\)) and scalar (\(R=s\)) representations. For \(SU(N)\) gauge theory with fermions in fundamental representation, we have \(C_{A}=N_{c}\), \(C_{f}=(N_{c}^{2}-1)/(2N_{c})\), and \(T_{f}=1/2\). Obviously, for Abelian gauge groups \(C_{A}=0\); thus, we always have \(B<0\) irrespectively of matter content, while for non-Abelian theories, negative contributions from charged fermions and scalars can be compensated by that of gauge field fluctuations.
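As a quick cross-check of Eq. (7), the one-loop coefficient can be evaluated for familiar cases; the sketch below reproduces the textbook statement that an \(SU(3)\) theory with fundamental Dirac fermions loses asymptotic freedom above \(n_{f}=16.5\):

```python
def B_coefficient(CA, Tf, nf, Ts, ns, kappa=1.0):
    """One-loop gauge coefficient of Eq. (7); kappa = 1 (Dirac), 1/2 (Weyl)."""
    return 2.0 * (11.0 / 3.0 * CA - 4.0 / 3.0 * kappa * Tf * nf - Ts * ns / 6.0)

# SU(3) with nf fundamental Dirac fermions (CA = 3, Tf = 1/2, no scalars):
# asymptotic freedom (B > 0) is lost once nf exceeds 16.5
for nf in (6, 16, 17):
    print(nf, B_coefficient(CA=3.0, Tf=0.5, nf=nf, Ts=0.0, ns=0))
```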
In Ref. [62], Bond and Litim studied possible signs of \(B\) and \(C\). Utilizing the relation
\[C=\frac{2}{11}\Big{[}2\kappa\Big{(}11C_{f}+7C_{A}\Big{)}(n_{f}T_{f})+2(11C_{s }-C_{A})(n_{s}T_{s})-17C_{A}\cdot B\Big{]} \tag{9}\]
they demonstrated that for \(B\leq 0\) all the contributions are positive and render \(C>0\) irrespectively of matter representations, while for \(B>0\), the two-loop coefficient \(C\) can be both negative and positive. This information is crucial when studying the behaviour of the RG flow and the possibility of asymptotic safety in the gauge-Yukawa models.
Several types of fixed points exist for the system (5) and (6). Firstly, the Gaussian FP is given by
\[\alpha_{g}^{*}=\alpha_{y}^{*}=0, \tag{10}\]
and may present itself in different energy regimes (IR or UV). The second option is when Equations (5) and (6) admit a fixed point for which the Yukawa \(\alpha_{y}\) is asymptotically free (in the IR), but the gauge \(\alpha_{g}\) is interacting:
\[\alpha_{g}^{*}=\frac{B}{C},\qquad\alpha_{y}^{*}=0. \tag{11}\]
The above solution is known as the Caswell-Banks-Zaks (BZ) FP [68; 69]. It requires \(B/C>0\) in order to be physical and for \(B/C<1\), it can be treated in perturbation theory.
Finally, the system develops another type of FP, where both couplings are non-vanishing. This is the gauge-Yukawa (GY) FP, which is characterised by the coordinates
\[\alpha_{g}^{*}=\frac{B}{C^{\prime}},\qquad\alpha_{y}^{*}=\frac{F}{E}\alpha_{ g}^{*}=\frac{FB}{EC^{\prime}}, \tag{12}\]
where the coefficient
\[C^{\prime}=C-\frac{DF}{E}\leq C \tag{13}\]
can take either sign, so that the fixed point can be physical for both \(B<0\) and \(B>0\).
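The three fixed-point candidates can be tabulated directly from the loop coefficients; a sketch reusing the placeholder values from the flow example above:

```python
def fixed_points(B, C, D, E, F):
    """Gaussian, Banks-Zaks, and gauge-Yukawa fixed points, Eqs. (10)-(12)."""
    Cp = C - D * F / E                       # C' of Eq. (13)
    fps = {"Gaussian": (0.0, 0.0)}
    if C != 0:
        fps["Banks-Zaks"] = (B / C, 0.0)     # physical only if B/C > 0
    if Cp != 0:
        fps["gauge-Yukawa"] = (B / Cp, F * B / (E * Cp))
    return fps

print(fixed_points(-1.0, 3.0, 6.0, 2.0, 4.0))
# only the gauge-Yukawa candidate, (1/9, 2/9), is physical for these values
```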
When examining the fixed points of RGEs, an important question is whether FPs can be reached in the UV or IR, and in which particular directions in theory space it is possible. In what follows, we characterize the directions in the coupling space as (IR) relevant if they allow one to reach the fixed point in the UV, and as (IR) irrelevant if they draw couplings away from FP with the increase in the RG scale (Obviously, the IR-irrelevant directions correspond to the UV relevant ones and vice versa). Thus, the notion of relevant or irrelevant we employ refers to the orientation of RG flow direction with respect to a particular fixed point.
If we want to observe how the couplings flow around a given fixed point, we should expand the \(\beta\)-functions in its vicinity, which leads to the linearised flow for \(\delta_{i}=\alpha_{i}-\alpha_{i}^{*}\)
\[\partial_{t}\delta_{i}=\partial_{j}\beta_{i}(\alpha^{*})\delta_{j}+O(\delta^{2})\equiv-\omega_{ji}\delta_{j}+O(\delta^{2}) \tag{14}\]
The stability matrix \(\omega_{ij}\) is given by the first derivatives of beta-functions and is not necessarily symmetric. The eigenvalues \(\theta_{k}\) of \(\omega\) and the corresponding left eigenvectors \(c_{i}^{(k)}\),
\[c_{i}^{(k)}\omega_{ij}=\theta_{k}c_{j}^{(k)}, \tag{15}\]
can be used to solve the linearised RGE (14) in the form
\[(\alpha_{i}(\mu)-\alpha_{i}^{*})=\sum_{k}c_{i}^{(k)}\left(\frac{\mu}{\mu_{0}}\right)^{-\theta_{k}}\!\!c_{(k)}^{\ j}(\alpha_{j}(\mu_{0})-\alpha_{j}^{*}), \tag{16}\]
where the flow "starts" from scale \(\mu_{0}\), and we assume that the matrix \(c_{i}^{(k)}\) is not degenerate; thus, it can be inverted to give \(c_{(k)}^{\ i}\). Equation (16) encapsulates the features of the _powerlaw-like_ flow around a fixed point. The eigenvalues \(\theta_{k}\) play a role of critical exponents of the RG flow, and their sign determines whether the corresponding eigendirections \(\delta_{i}\propto c_{i}^{(k)}\) drives \(\alpha_{i}\) away from or closer to the fixed point. More explicitly, the fixed point can only be reached in the UV (\(\mu\gg\mu_{0}\)) if at least one of the eigenvalues is positive. On the contrary, for \(\mu\ll\mu_{0}\), the difference \(\delta_{i}\) increases for \(\theta_{k}>0\). Thus, if we are interested in the flow towards IR, the eigenvectors associated with positive (negative) eigenvalues correspond to relevant (irrelevant) IR directions. Finally, eigenvalues may be encountered that vanish exactly. The directions associated with them are called marginal, and do not change the flow near the fixed point at the first order. However, at higher orders, they may bring couplings to the UV fixed point, in which case they are marginally IR-irrelevant, or away from it, when they are marginally (IR) relevant.
In the following, we briefly discuss the phase diagram for weakly coupled gauge-Yukawa theories. There are four different cases: in addition to the Gaussian fixed point, gauge theories display either none, the Banks-Zaks, the gauge-Yukawa, or both the Banks-Zaks and gauge-Yukawa fixed points, depending on the values of \(B\), \(C\), and \(C^{\prime}\). For convenience, we summarise here explicit expressions for the stability matrices, the corresponding eigenvalues, and left eigenvectors for the BZ FP:
\[\omega_{BZ} =-\frac{B^{2}}{C}\begin{pmatrix}1&-\frac{D}{C}\\ 0&-\frac{F}{B}\end{pmatrix}, \tag{17}\] \[\theta_{BZ}^{-} =-\frac{B^{2}}{C},\quad\theta_{BZ}^{+}=F\frac{B}{C},\] (18) \[c_{BZ}^{-} =[1,0],\quad c_{BZ}^{+}=\left[\frac{D}{C},1+\frac{F}{B}\right] \tag{19}\]
and the GY FP:
\[\omega_{\text{GY}} =-\frac{B^{2}}{C^{\prime}}\begin{pmatrix}\frac{C}{C^{\prime}}&\frac{E}{F}\left(1-\frac{C}{C^{\prime}}\right)\\ -\frac{F}{E}\frac{F}{B}&\frac{F}{B}\end{pmatrix}, \tag{20}\] \[\theta_{\text{GY}}^{\pm} =-\frac{B^{2}}{2C^{\prime}}\left[\frac{F}{B}+\frac{C}{C^{\prime}}\pm\sqrt{\left(\frac{F}{B}+\frac{C}{C^{\prime}}\right)^{2}-4\frac{F}{B}}\right],\] (21) \[c_{\text{GY}}^{\pm} =\left[\frac{EB}{F^{2}}\left(\frac{F}{B}-\frac{C}{C^{\prime}}\mp\sqrt{\left(\frac{F}{B}+\frac{C}{C^{\prime}}\right)^{2}-4\frac{F}{B}}\right),2\right]. \tag{22}\]
From the previous discussion, one can observe that the BZ FP can exist only for \(B>0\). As a consequence, \(\theta_{\text{BZ}}^{-}\) corresponds to the IR-attractive direction, while \(\theta_{BZ}^{+}>0\) and is relevant in the IR. For the perturbative GY fixed point, \(\theta_{\text{GY}}^{-}\) also gives rise to the IR-irrelevant direction, while \(\theta_{\text{GY}}^{+}>0\) is IR-relevant only for \(B<0\). The different phase diagrams are presented qualitatively in Figure 1, projected onto the (gauge, Yukawa) plane. In the following, we provide short comments on Figure 1 [62]:
1. For \(B>0\) and \(C<0\), there is no weakly coupled interacting fixed points. At weak coupling, the phase diagram exhibits only asymptotic freedom and a Gaussian UV FP. The set of UV free trajectories emerging from it is indicated by the red shaded region. Its upper bound is indicated by the Yukawa nullcline \((E\alpha_{y}=F\alpha_{g})\), which also plays the role of an infrared attractor since below it the sign of \(\beta_{y}\) (6) is negative and controlled by gauge field fluctuations. UV-free trajectories start near Gaussian FP and continue into the strong coupling region, where the theory is expected to exhibit confinement and chiral symmetry breaking, or perhaps a strongly coupled IR-fixed point. One can observe that no trajectories have been found above the Yukawa nullcline that can reach Gaussian FP in the UV. On such trajectories, the theory technically loses asymptotic freedom. Then, the predictivity is restricted to a finite UV scale, unless there is a strongly coupled UV-fixed point somewhere in this region.
2. For \(B>0\) and \(C>0>C^{\prime}\), the theory additionally develops a Banks-Zaks FP that turns out to be perturbative if \(B/C\) is sufficiently small. The Banks-Zaks fixed points are always weakly IR-attractive \(\theta_{BZ}^{-}<0\) in the gauge and strongly IR-repulsive (\(\theta_{BZ}^{+}>0\)) in the Yukawa direction. The first one is due to (5) and follows from the asymptotic freedom, while the second one is from (6). Moreover, near the BZ point (and at weak coupling), the flow is parametrically slower in the gauge direction than in the \(y\) direction. As a consequence, BZ FP and the Yukawa nullcline play the role of a strong infrared attractive funnel for all flow trajectories emerging from the Gaussian UV FP. This translates into low-energy relations between the Yukawa and the gauge coupling (at weak coupling), irrespective of their UV initial conditions.
3. The \(B>0\) and \(C>C^{\prime}>0\) case gives rise to a fully interacting gauge-Yukawa fixed point in addition to the Banks-Zaks FP. The main new effect in theories with \(C^{\prime}>0\) as compared to theories with \(C^{\prime}<0\) is that the funneling of flow trajectories along the IR-attractive Yukawa nullcline stops, terminating at the interacting IR fixed point
(12). Moreover, the GY point is indeed attractive both in the gauge direction and in the Yukawa direction (\(\theta_{GY}^{\pm}<0\)).

4. For \(B<0\) and \(C^{\prime}<0\) [61], we observe that there is no asymptotic freedom, and the Gaussian FP has become an IR fixed point. The Yukawa interaction has effectively transformed the positive two-loop coefficient \(C>0\) to \(C^{\prime}<0\), which allows us to create an interacting gauge-Yukawa fixed point (12). This fixed point shows both IR-attractive (\(\theta_{GY}^{-}<0\)) and repulsive (\(\theta_{GY}^{+}>0\)) directions (see blue and red vectors in Figure 1). The former is a consequence of the IR-attractive nature of the Yukawa nullcline, and the latter is due to the infrared freedom of the gauge coupling. The GY FP in this case can be qualified as an asymptotically safe fixed point, since there are two UV-finite trajectories emerging from it. The trajectory that connects the GY FP with the Gaussian one in the infrared remains perturbative at all scales. The RG flow in the opposite direction leads to strong coupling, where perturbative analysis can not be trusted and should be supplemented by other considerations. Away from the Yukawa nullcline, no trajectories are found that can reach the GY FP in the UV. On such trajectories, the theory technically loses fundamental asymptotic safety and can only be considered as an effective description. Nevertheless, it has limited predictability (a relation between couplings in the IR due to attraction to the nullcline).
Figure 1: Phase diagrams of gauge–Yukawa theories. RG flow is towards the IR. Gaussian (G), Banks–Zaks (BZ), and gauge–Yukawa (GY) fixed points are indicated. We also demonstrate IR-relevant (irrelevant) eigendirections for BZ (19) and GY (22) FP in red (blue) colour. Shaded areas correspond to UV-complete regions. Adapted from Ref. [62].
## 3 A Toy Model towards Asymptotic Safety
The authors of Refs. [61; 70] considered a particular realization of the case with \(B<0\) and \(C^{\prime}<0\) and demonstrated that asymptotic safety of gauge-Yukawa theories can be realised under strict perturbative control in models with a singlet scalar, vector-like fermions, and non-Abelian gauge fields. In this section, we review this setup (the Litim-Sannino model) and its features, and further motivate its role in constructing SM extensions.
As the starting point, both of these papers introduce an \(SU(N_{c})\) gauge theory with \(N_{F}\) generations of vector-like fermions \(\psi_{i}\). Since vector-like fermions do not contribute to chiral anomalies, their gauge-group representations can be chosen arbitrarily. In what follows, we assume that \(\psi_{i}\) transform in the fundamental representation under \(SU(N_{c})\). The spectrum of the model also includes \(N_{F}\times N_{F}\) complex scalars \(S_{ij}\) that are singlets under the \(SU(N_{c})\) symmetry. The model is described by renormalizable interactions
\[\mathcal{L}_{AS}=\operatorname{Tr}[\bar{\psi}i\hat{D}\psi]+\operatorname{Tr}[(\partial_{\mu}S)^{\dagger}(\partial^{\mu}S)]-y\operatorname{Tr}[\bar{\psi}_{L}S\psi_{R}+h.c.]-V(S), \tag{23}\]
where \(\hat{D}=\gamma^{\mu}D_{\mu}\) with \(D_{\mu}\) being covariant derivative, and the traces are over gauge and flavour indices. The scalar potential includes single-trace (\(u\)) and double-trace (\(v\)) interactions:
\[V(S)=u\operatorname{Tr}[S^{\dagger}SS^{\dagger}S]+v(\operatorname{Tr}[S^{\dagger}S])^{2}. \tag{24}\]
The key feature of the Lagrangian (23) is the presence of the Yukawa coupling \(y\), which is required for the emergence of interacting UV fixed points. It should be noted that here we neglect all possible mass terms and trilinear scalar interactions. It is also worth pointing out that the single coupling \(y\) in Equation (23) does not account for the most general form of Yukawa interactions. Indeed, the flavour structure of the model allows one to write
\[y_{ijkl}\bar{\psi}_{Li}S_{jk}\psi_{Rl} \tag{25}\]
with indices of the tensor coupling \(y_{ijkl}\), each taking values \(i,j,k,l=1\dots N_{F}\). However, we can drastically reduce the number of parameters by utilizing flavour symmetries. In the absence of all Yukawa terms, the Lagrangian \(\mathcal{L}_{AS}\) respects the following global flavour symmetry
\[U(N_{F})^{2}_{\psi} =U(N_{F})_{\psi_{L}}\otimes U(N_{F})_{\psi_{R}}\] \[U(N_{F})^{2}_{S} =U(N_{F})_{S_{L}}\otimes U(N_{F})_{S_{R}}, \tag{26}\]
corresponding to independent unitary rotations of \(\psi_{L,R}\) under \(U(N_{F})_{\psi_{L,R}}\), and bi-unitary transformations of the matrix scalar field under \(U(N_{F})_{S_{L}}\otimes U(N_{F})_{S_{R}}\). The Yukawa coupling \(y\) breaks (26) down to \(U(N_{F})^{2}\), with \(U(N_{F})^{2}_{S}\) identified with \(U(N_{F})^{2}_{\psi}\). Obviously, a coupling of the form (25) completely destroys the flavour symmetry. As a consequence, we can restrict ourselves to Equation (23) by demanding that the theory respect \(U(N_{F})^{2}_{\psi}\).
The crucial fact used in the analysis of the model (23) is that its \(\beta\)-functions give rise to a gauge-Yukawa FP, which is perturbative in the Veneziano [71] limit. The latter consists of taking \(N_{F},N_{c}\to\infty\) simultaneously, while keeping the ratio \(N_{F}/N_{c}\) fixed. To observe the effect of this approximation on the \(\beta\)-functions, let us rewrite them in terms of a small parameter
\[\epsilon=\frac{N_{F}}{N_{c}}-\frac{11}{2}. \tag{27}\]
For \(\epsilon>0\), the screening due to fermions dominates the antiscreening of the gauge degrees of freedom (resulting in \(B<0\)), while for \(\epsilon<0\), the opposite happens (\(B>0\)). Expansion in powers of \(\epsilon\) indicates that the gauge-Yukawa fixed point and its critical exponents stay perturbative as long as \(\epsilon\) remains small; see details in Refs. [61; 70]. Here, it suffices to note that in theories containing non-Abelian gauge interactions together with fermionic and scalar matter, large-\(N\) methods confirm the viability of ultraviolet gauge-Yukawa fixed points.
In the Veneziano limit, the fixed-point values are controlled by \(\epsilon\) and remain perturbative for \(\epsilon\ll 1\). For large \(N_{c}\), AS is achieved in appropriately rescaled couplings
\[\tilde{\alpha}_{g}=\frac{N_{c}g^{2}}{(4\pi)^{2}},\quad\tilde{\alpha}_{y}=\frac {N_{c}y^{2}}{(4\pi)^{2}}. \tag{28}\]
The beta-functions for (28) in the 210-scheme have the form (5) and (6) with
\[B =-\frac{4}{3}\epsilon, C =25+\frac{26}{3}\epsilon, C^{\prime} =-\frac{2(57-46\epsilon-8\epsilon^{2})}{3(13+2\epsilon)}, \tag{29}\] \[D =\frac{1}{2}(11+2\epsilon)^{2}, E =13+2\epsilon, F =6, \tag{30}\]
where we neglected \(\mathcal{O}(1/N_{c}^{2})\) terms (the corrections are studied in Ref. [72]) in the limit \(N_{c}\to\infty\). For \(\epsilon>0\), the one- and two-loop gauge contributions to \(\beta_{\tilde{\alpha}_{g}}\) are positive; thus, the Gaussian FP is IR-attractive in the gauge direction, and there is no Banks-Zaks fixed point (\(B<0,C>0\)). However, the model features interacting GY FP [61]
\[\tilde{\alpha}_{g} =\frac{2\epsilon(13+2\epsilon)}{57-46\epsilon-8\epsilon^{2}}= \frac{26}{57}\epsilon+\mathcal{O}(\epsilon^{2}),\] \[\tilde{\alpha}_{y} =\frac{12\epsilon}{57-46\epsilon-8\epsilon^{2}}=\frac{4\epsilon }{19}+\mathcal{O}(\epsilon^{2}). \tag{31}\]
To the leading order in \(\epsilon\), the critical exponents are given by [49]
\[\theta_{GY}^{+}=\frac{104}{171}\epsilon^{2},\qquad\theta_{GY}^{-}=-\frac{52}{ 19}\epsilon, \tag{32}\]
which corresponds to one IR-repulsive and one IR-attractive direction. The latter fixes the Yukawa coupling at all scales in terms of the gauge coupling (or vice versa). In other words, the value of one of the couplings in terms of the other is a prediction of the setting.
In Figure 2, we demonstrate the flow towards the IR from the fixed point in Equation (31) for a particular value of \(\epsilon\). Since \(\theta_{GY}^{+}\sim\epsilon^{2}\) and \(\theta_{GY}^{-}\sim\epsilon\), for \(\epsilon\ll 1\), the flow features one strongly IR-attractive (driven by \(\theta_{GY}^{-}\)) and one weakly IR-repulsive (corresponding to \(\theta_{GY}^{+}\)) direction.
As discussed earlier, there are two UV-complete (fixed-point) trajectories (red lines in Figure 2) that originate from gauge-Yukawa FP: One ends at the Gaussian FP in the IR, while the other flows to infinity. Initial conditions in the UV away from the GY FP result in trajectories that are indistinguishably close to the fixed-point trajectories in the IR. In Figure 2, one can observe two of them (dashed green lines) that start (green dots) below and above the blue curve. The latter separates the regions of weakly and strongly coupled theories in the IR.
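The flow of Figure 2 can be reproduced numerically. The sketch below is our own illustration, using the same assumed parametrization of Equations (5) and (6) as in the earlier sketch; it integrates the 210 beta functions with the coefficients (29) and (30) at a hypothetical value \(\epsilon=0.05\) inside the conformal window (53), and illustrates the rapid attraction towards the Yukawa nullcline in the IR.

```python
import math
from scipy.integrate import solve_ivp

eps = 0.05                          # hypothetical value inside (53)
B = -4 * eps / 3                    # Equations (29) and (30)
C = 25 + 26 * eps / 3
D = 0.5 * (11 + 2 * eps) ** 2
E = 13 + 2 * eps
F = 6.0
Cp = C - D * F / E
# Consistency check against the quoted C' of Equation (29):
assert math.isclose(Cp, -2 * (57 - 46 * eps - 8 * eps**2) / (3 * (13 + 2 * eps)))

def beta(t, a):
    ag, ay = a
    return [ag**2 * (-B + C * ag - D * ay),   # Equation (5), assumed form
            ay * (E * ay - F * ag)]           # Equation (6), assumed form

ag_star = B / Cp                    # Equation (31)
ay_star = F / E * ag_star
# Kick the couplings off the nullcline and flow towards the IR
# (t = ln mu decreasing); the trajectory is pulled back onto E*ay = F*ag.
sol = solve_ivp(beta, [0.0, -200.0], [0.9 * ag_star, 1.1 * ay_star])
ag, ay = sol.y[:, -1]
print("GY FP:", ag_star, ay_star)
print("IR endpoint:", ag, ay, "nullcline ratio:", E * ay / (F * ag))
```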
In Ref. [61], the authors also studied the model at the next order consistent with WCC (321-approximation) and took into account the quartic scalar self-interactions. However, in a more careful study [49], it was argued that instead of a \((n+1,n,n-1)\)-approximation, one has to use \((n+1,n,n)\) beta functions to completely determine FPs together with critical exponents up to order \(\mathcal{O}(\epsilon^{n})\). In what follows, we consider the 322 case. For the large-\(N_{F}\)-rescaled scalar couplings
\[\tilde{\alpha}_{u}=\frac{N_{F}u}{(4\pi)^{2}},\quad\tilde{\alpha}_{v}=\frac{N_ {F}^{2}v}{(4\pi)^{2}} \tag{33}\]
the two-loop beta functions are given by:
\[\beta^{(1)}_{\tilde{\alpha}_{u}} =-\tilde{\alpha}_{y}^{2}(11+2\epsilon)+4\tilde{\alpha}_{u}(\tilde{\alpha}_{y}+2\tilde{\alpha}_{u}), \tag{34}\] \[\beta^{(2)}_{\tilde{\alpha}_{u}} =-24\tilde{\alpha}_{u}^{3}-16\tilde{\alpha}_{u}^{2}\tilde{\alpha}_{y}+10\tilde{\alpha}_{u}\tilde{\alpha}_{g}\tilde{\alpha}_{y}\] \[-(11+2\epsilon)\Big(2\tilde{\alpha}_{g}\tilde{\alpha}_{y}^{2}+3\tilde{\alpha}_{u}\tilde{\alpha}_{y}^{2}-(11+2\epsilon)\tilde{\alpha}_{y}^{3}\Big),\] (35) \[\beta^{(1)}_{\tilde{\alpha}_{v}} =12\tilde{\alpha}_{u}^{2}+4\tilde{\alpha}_{v}(\tilde{\alpha}_{v}+4\tilde{\alpha}_{u}+\tilde{\alpha}_{y}),\] (36) \[\beta^{(2)}_{\tilde{\alpha}_{v}} =-8\tilde{\alpha}_{u}^{2}(12\tilde{\alpha}_{u}+5\tilde{\alpha}_{v})+10\tilde{\alpha}_{g}\tilde{\alpha}_{v}\tilde{\alpha}_{y}-8(\tilde{\alpha}_{u}+\tilde{\alpha}_{v})(3\tilde{\alpha}_{u}+\tilde{\alpha}_{v})\tilde{\alpha}_{y}\] \[+(11+2\epsilon)\Big(\tilde{\alpha}_{y}^{2}(4\tilde{\alpha}_{u}-3\tilde{\alpha}_{v})+\tilde{\alpha}_{y}^{3}\Big). \tag{37}\]
The gauge beta function is extended to three loops and that of the Yukawa coupling to two loops, where there is now also a contribution due to \(\tilde{\alpha}_{u}\). One can observe that the double-trace coupling \(\tilde{\alpha}_{v}\) decouples from the gauge-Yukawa RGEs at this order. In the Veneziano limit, the two-loop correction \(\beta^{(2)}_{\tilde{\alpha}_{y}}\) to the running Yukawa coupling and the three-loop contribution \(\beta^{(3)}_{\tilde{\alpha}_{g}}\) to the gauge interaction can be cast into the following form [61]
\[\frac{\beta^{(2)}_{\tilde{\alpha}_{y}}}{\tilde{\alpha}_{y}} =\frac{20\epsilon-93}{6}\tilde{\alpha}_{g}^{2}+(49+8\epsilon)\tilde{\alpha}_{g}\tilde{\alpha}_{y}-\frac{11+2\epsilon}{8}\Big[(35+2\epsilon)\tilde{\alpha}_{y}^{2}+32\tilde{\alpha}_{y}\tilde{\alpha}_{u}\Big], \tag{38}\] \[\frac{\beta^{(3)}_{\tilde{\alpha}_{g}}}{\tilde{\alpha}_{g}^{2}} =\bigg[\frac{701}{6}+\frac{53}{3}\epsilon-\frac{112}{27}\epsilon^{2}\bigg]\tilde{\alpha}_{g}^{2}+\frac{(11+2\epsilon)^{2}}{4}\bigg[(20+3\epsilon)\tilde{\alpha}_{y}^{2}-\frac{27}{2}\tilde{\alpha}_{g}\tilde{\alpha}_{y}\bigg]. \tag{39}\]
To find FPs as a series in \(\epsilon\), one can introduce an ansatz (in the 322-approximation)
\[\alpha_{i}^{*}=c_{i}^{(1)}\epsilon+c_{i}^{(2)}\epsilon^{2} \tag{40}\]
and solve for \(c_{i}^{(1,2)}\). The system of equations \(\beta_{i}(\alpha^{*})=0\) admits a joint, asymptotically safe interacting fixed point with \(\tilde{\alpha}_{u}>0\), \(\tilde{\alpha}_{v}<0\), and \(\tilde{\alpha}_{u}+\tilde{\alpha}_{v}>0\), indicating that at the fixed point the scalar potential is bounded from below [61]. The coefficients of (40) are given by (\(X\equiv\sqrt{20+6\sqrt{23}}\))
\[c_{g}^{(1)} =+\frac{26}{57}, c_{g}^{(2)} =\frac{23\Big{(}75245-13068\sqrt{23}\Big{)}}{370386}, \tag{41}\] \[c_{y}^{(1)} =+\frac{4}{19}, c_{y}^{(2)} =\frac{43549-6900\sqrt{23}}{20577},\] (42) \[c_{u}^{(1)} =+\frac{1}{19}\Big{(}\sqrt{23}-1\Big{)}, c_{u}^{(2)} =\frac{365825\sqrt{23}-1476577}{631028},\] (43) \[c_{v}^{(1)} =-\frac{1}{19}\Big{(}2\sqrt{23}-X\Big{)}, c_{v}^{(2)} =-\frac{33533}{6859X}-\frac{321665}{13718\sqrt{23}}+\frac{452563}{ 13718\sqrt{23}X}+\frac{27248}{6859} \tag{44}\]
and result in [49]
\[\tilde{\alpha}_{g}^{*} =0.45614\epsilon+0.780755\epsilon^{2}, \tag{45}\] \[\tilde{\alpha}_{y}^{*} =0.210526\epsilon+0.508226\epsilon^{2},\] (46) \[\tilde{\alpha}_{u}^{*} =0.199781\epsilon+0.440326\epsilon^{2},\] (47) \[\tilde{\alpha}_{v}^{*} =-0.13725\epsilon-0.631784\epsilon^{2}. \tag{48}\]
The corresponding critical exponents can be written as [49]
\[\theta_{1} =+\frac{104}{171}\epsilon^{2}-\frac{2296}{3249}\epsilon^{3} = \phantom{-}0.60819\epsilon^{2}-0.70668\epsilon^{3}, \tag{49}\] \[\theta_{2} =-\frac{52}{19}\epsilon+\frac{22783308\sqrt{23}-136601719}{4094823}\epsilon^{2} = -2.73684\epsilon-6.67594\epsilon^{2},\] (50) \[\theta_{3} =-X\Bigg[\frac{8}{19}\epsilon-\frac{2(9153184\sqrt{23}-45155739)}{16879990}\epsilon^{2}\Bigg] = -2.94059\epsilon-1.04147\epsilon^{2},\] (51) \[\theta_{4} =-\frac{16\sqrt{23}}{19}\epsilon+\frac{4(255832864-68248487\sqrt{23})}{31393643}\epsilon^{2} = -4.03859\epsilon-9.10699\epsilon^{2}. \tag{52}\]
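As a consistency check of the quoted numbers, one can evaluate the closed-form coefficients of Equations (41)-(44) numerically and compare them with the decimals in Equations (45)-(48); the short script below (our own illustration) does exactly that.

```python
from math import sqrt

s23 = sqrt(23)
X = sqrt(20 + 6 * s23)            # as defined before Equation (41)

coeffs = {
    "alpha_g": (26 / 57, 23 * (75245 - 13068 * s23) / 370386),
    "alpha_y": (4 / 19, (43549 - 6900 * s23) / 20577),
    "alpha_u": ((s23 - 1) / 19, (365825 * s23 - 1476577) / 631028),
    "alpha_v": (-(2 * s23 - X) / 19,
                -33533 / (6859 * X) - 321665 / (13718 * s23)
                + 452563 / (13718 * s23 * X) + 27248 / 6859),
}
for name, (c1, c2) in coeffs.items():
    print(f"{name}*: {c1:+.6f} eps {c2:+.6f} eps^2")
# Reproduces 0.45614/0.780755, 0.210526/0.508226, 0.199781/0.440326,
# -0.13725/-0.631784 of Equations (45)-(48).
```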
One can observe that for \(\epsilon>0\), the scalar couplings are irrelevant, and again the full model only features one free parameter.
One important question concerns the range of values of \(\epsilon\) for which the solution for the FP can be trusted (the UV conformal window) [49; 61]. Limits arise from the requirements that the theory be weakly coupled (\(|\tilde{\alpha}^{*}|<1\)), that the vacuum be stable, and that the eigendirection corresponding to the exponent \(\theta_{1}\) remain relevant (\(\theta_{1}>0\)); vanishing of \(\theta_{1}\) indicates a collision of the UV FP with a non-perturbative IR FP studied in Ref. [72]. A careful analysis carried out in the 322 approximation, which utilizes partial information on subleading coefficients, gives rise to [49]
\[0<\epsilon<\epsilon_{max}\approx 0.09\dots 0.13. \tag{53}\]
Recently, a study [72] appeared that extends [49] and takes into account finite-\(N_{c}\) corrections to the Veneziano limit. The authors multiplied the expansion coefficients \(c_{i}^{(1,2)}\) (40) by functions \(f_{i}^{(1,2)}(N_{c})\) that tend to 1 in the limit \(N_{c}\rightarrow\infty\), and provided semi-analytical results for these factors. The expressions for the critical exponents were modified accordingly, and \(\epsilon_{max}\) was promoted to a function of \(N_{c}\). Based on these corrections, the authors of Ref. [72] concluded that the bound (53) is lowered for finite
Figure 2: An example of the RG flow for the Litim–Sannino model in the Veneziano limit. The fixed point is given in Equation (31). The trajectories lying away from the Yukawa nullcline (red line) are rapidly attracted to the latter. For the flow originating below the blue line, the theory remains weakly coupled. In the opposite case, the theory becomes strongly coupled in the IR.
\(N_{c}\). Nevertheless, the decrease in the conformal-window size turns out to be moderate (see Figure 3).
Before switching to more realistic models, let us mention another limit: large \(N_{F}\) at finite \(N_{c}\), which formally corresponds to \(\epsilon\gg 1\). In this case, matter-field fluctuations dominate and have to be re-summed to all orders [73; 74; 75]. The studies suggest the existence of a UV Banks-Zaks FP due to a negative singularity of the re-summed beta function. Since this FP may be an artifact of the large-\(N_{F}\) expansion, we do not consider this limit here but refer, e.g., to Refs. [51; 53; 72; 76; 77; 78] for more detail and discussion.
## 4 SM-like Models with Flavour Portals
As can be observed, the SM itself has many similarities with the models of Equation (23). It is a gauge theory, with Abelian and non-Abelian gauge groups, and it contains fermionic and scalar fields with Yukawa interactions. Therefore, it is natural to wonder whether the SM has ultraviolet fixed points that make it asymptotically safe. However, it is clear that the SM does not exhibit asymptotic safety in the UV, as its \(U(1)_{Y}\) coupling hits a Landau pole [79] and the Higgs quartic encounters stability problems (see, e.g., Refs. [42; 80]). Thus, in the SM, the Yukawa couplings are not able to bring all gauge couplings to a fixed point. Nevertheless, it may be possible to make the SM asymptotically safe if it is extended. This can be done by including new Yukawa interactions that can provide UV FPs for the SM gauge couplings. For example, new states coupled to the SM through either gauge or Yukawa interactions will eventually modify the RGEs of the SM couplings. Thus, a minimal extension of the SM with vector-like fermions charged under the SM gauge group may improve the situation.
Therefore, in the following, we will consider interactions which act as portals between the SM and the BSM sector, as previously explored in Ref. [70] and subsequent works [46; 47; 55; 81]. The motivation for this is twofold. Firstly, Yukawa interactions with SM fields are interesting from a phenomenological point of view, since they are testable at current
Figure 3: Conformal window \(0<\epsilon<\epsilon_{max}(N_{c})\) (from the analysis of subleading terms in beta functions) in the 321 and 322 approximations. Solid curves correspond to \(\epsilon_{max}(N_{c})\), while the dashed ones to \(\epsilon_{max}(N_{c}\to\infty)\). Blue dots indicate integer values of \(N_{F}\). Adapted from Ref. [72].
experiments. Secondly, the interplay between the SM and BSM fields through Yukawa interactions can provide valuable insights into the flavour sector and its connection to UV completions of the theory. These interactions can play a crucial role in determining the masses and mixing patterns of fermions, such as quarks and leptons, giving rise to observable effects.
Let us start from Ref. [46], which explored a large class of models based on SM matter with \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\) gauge interactions. The authors retain only the top Yukawa coupling together with the Higgs quartic self-interaction and introduce \(N_{F}\) families of vector-like fermions \(\psi\), minimally coupled to the SM gauge group, and \(N_{F}\times N_{F}\) generations of scalars \(S_{ij}\). These scalars are assumed to be singlets of the SM group. Introducing a BSM sector charged under \(U(1)_{Y}\) modifies the corresponding \(\beta\)-function, allowing one to address the Landau-pole problem that arises in the running of the hypercharge coupling. The Lagrangian characterising this minimal BSM extension is
\[\mathcal{L}_{SM,AS}=\mathcal{L}_{SM}+\mathcal{L}_{AS}. \tag{54}\]
The authors of Ref. [46] considered 378,000 models with varying numbers of vector-like fermions in different gauge group representations. They conducted a thorough investigation to identify stable, yet perturbative, fixed points within a wide range of parameters corresponding to the number of vector-like fermions and their \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\) quantum numbers. In the end, the authors conclude that the imposed perturbativity conditions are very restrictive: they were not able to find any choice of group representations and/or number of generations of the vector-like fermions that would make the SM reliably asymptotically safe. However, this does not mean that an asymptotically safe SM completion is impossible. It implies that if such an extension of the Standard Model exists, it must be different from the models considered by the authors; otherwise, its fixed point would lie beyond the reach of perturbation theory.
Subsequent Refs. [47; 55] extended the previous study [46] and considered the role of quartic self-interactions of the scalars \(S_{ij}\) as well as _portal_ Yukawa and Higgs couplings between SM and BSM. The renormalizable Lagrangian of the models is given by
\[\mathcal{L}=\mathcal{L}_{SM,AS}+\mathcal{L}_{mix}-V(H,S), \tag{55}\]
where the scalar potential fulfills
\[V(H,S)=\delta\operatorname{Tr}[S^{\dagger}S]H^{\dagger}H, \tag{56}\]
and \(\mathcal{L}_{mix}\) contains Yukawa interactions between BSM and SM matter.
When speaking about "portals", we usually distinguish the following cases. If new particles only couple to the SM gauge fields, we have a "gauge portal" (and the models with Lagrangian (54) are of this type). One may also introduce new interactions involving the Higgs and the BSM fields such as new Yukawas ("Yukawa portal") or new quartics ("Higgs portal"). For example, the main effect of gauge portals arises through modifications of the RG-running of the SM interactions due to \(\mathcal{L}\subset\bar{\psi}iD\psi\), where \(\psi\) is again a BSM fermion in a non-trivial representation under the SM gauge group. Yukawa portals arise when the Higgs \(H\) couples directly to a BSM fermion \(\psi\) and a SM fermion \(f_{SM}\): \(\mathcal{L}\subset\kappa\bar{\psi}Hf_{SM}\). The Yukawa portals not only involve new SM charge carriers, but also new interactions controlled by \(\kappa\). The new Yukawa coupling contributes to the running of the Higgs quartic and, thus, influence the vacuum stabilization. Finally, Higgs portals arise when the Higgs \(H\) couples to the BSM scalar \(S\) through a portal coupling, as in Equation (56). The inclusion of this new interaction has the advantage of enhancing vacuum stability by contributing positively to the running of the Higgs quartic at a one-loop level.
In Ref. [47], six viable models (A-F) motivated by asymptotic safety were considered. Imposing the condition that at least one Yukawa coupling between the lepton fields \(L\), \(E\) and the vector-like fermions should be present yields only a few options for the representations of the \(\psi\) fields. For example, if \(\psi_{i}\) are singlets with respect to \(SU(3)_{c}\times SU(2)_{L}\) and have hypercharge \(Y=-1\), their portal interactions \(\mathcal{L}_{mix}\) can be written in the form (model A):
\[\mathcal{L}_{mix}=\kappa\bar{L}H\psi_{R}+\kappa^{\prime}\bar{E}S^{\dagger}\psi_{L}. \tag{57}\]
The new Yukawa couplings in \(\mathcal{L}_{mix}\) can involve either the SM Higgs or the \(S\), and are denoted by \(\kappa\) and \(\kappa^{\prime}\) in each case, respectively. The models A-F (55) are distinguished solely by the electroweak charges of the vector-like fermions and the allowed Yukawa couplings; see more detail in Refs. [47; 55; 81].
#### Portals at Work
The authors of Refs. [47; 55; 81] found FPs of the \(\beta\)-functions in the above-mentioned models and explored whether matching to the SM at low energies is possible. They considered constraints from the known values of \(U(1)_{Y}\times SU(2)_{L}\times SU(3)_{C}\) gauge couplings \(g_{l}\) (\(l=1,2,3\)), the top and bottom Yukawa interactions \(y_{t,b}\), and the Higgs quartic \(\lambda\). The SM initial conditions (central values) were applied at the reference scale \(\mu_{0}=1\) TeV.
From Figure 4, we can observe that the SM couplings run slowly. Refs. [47; 81] integrated the SM RGEs from the TeV scale up to the hypercharge Landau pole. The Higgs quartic changes sign at \(\sim\)\(10^{10}\) GeV, triggering the well-known vacuum (meta)stability issue. Instead of stopping the flow at this scale, the authors extended it into the trans-Planckian region, ignoring quantum gravity effects, and found that the Higgs becomes seemingly stable again at \(\sim\)\(10^{10}M_{Pl}\). The vacuum becomes fully unstable at higher scales, \(\sim\)\(10^{23}M_{Pl}\). Thus, their conclusion is that additional mechanisms must be introduced to stabilize the vacuum, either at the Planck scale (such as higher-dimensional operators or full quantum gravity) or below it, e.g., by new particles or interactions.
In the following, we present several examples for model A with different sizes of the Yukawa couplings \(\alpha_{y}\) at the reference (matching) scale, since these interactions play a crucial role in avoiding Landau poles and stabilizing RG flows. We indicate scenarios with or without the portal couplings \(\alpha_{\delta}\), \(\alpha_{\kappa,\kappa^{\prime}}\) in Figure 5.
In addition to studying a vacuum-stability issue, the authors of Refs. [47; 55; 81] raised some phenomenological questions, such as the production and decay of BSM particles, fermion mixing, anomalous magnetic moments (\(g-2\)), effects from scalar mixing, and possible chiral enhancement. They also highlighted signatures at proton-proton and lepton
Figure 4: The figure shows the SM 3-loop running of the Higgs quartic, top Yukawa, and gauge couplings above TeV energies. The vacuum stability is compromised (\(\mu\sim 10^{10}\) GeV) before reaching the Planck scale (center gray band). However, if we disregard quantum gravity effects, the hypercharge coupling can counteract this instability (\(\mu\sim 10^{29}\) GeV) and restore stability before perturbativity, stability, and predictivity are ultimately lost at a Landau pole (\(\mu\sim 10^{41}\) GeV). Bands indicate a \(1\sigma\) uncertainty in the top pole mass. The picture is taken from Ref. [47].
colliders and prospects to detect NP in electric dipole moments or charged lepton-flavour-violating (LFV)-type processes.
Let us provide some details of these phenomenological implications. Ref. [55] considers NP contributions to the muon and electron anomalous magnetic moments. For example, the following two types of Yukawa interactions are introduced
\[\mathcal{L}^{singlet} =-\kappa\bar{L}H\psi_{R}-\kappa^{\prime}\bar{E}S^{\dagger}\psi_{L} -y\bar{\psi}_{L}S\psi_{R}+h.c., \tag{58}\] \[\mathcal{L}^{doublet} =-\kappa\bar{E}H^{\dagger}\psi_{L}-\kappa^{\prime}\bar{L}S\psi_{R }-y\bar{\psi}_{L}S\psi_{R}+h.c., \tag{59}\]
depending on whether the \(N_{F}=3\) vector-like fermions \(\psi_{L,R}\) are singlets under \(SU(2)_{L}\) (corresponding to model A) or doublets (model C). The scalar potential is the same as in Equation (55). Figure 6 demonstrates the relevant leading loop effects due to the new Yukawa (\(\kappa,\kappa^{\prime}\)) and scalar (\(\delta\)) couplings, with the additional assumption that, due to \(S=\langle S\rangle+s\), the fermion fields \(\psi\) acquire mass \(m_{f}\). Each lepton flavour \(l=e,\mu,\tau\) receives a contribution from BSM scalar-fermion loops with a chiral flip induced on the lepton line; see Figure 6a). It scales quadratically with the lepton mass [55]
\[\Delta a_{l}=\frac{N_{F}\kappa^{\prime 2}}{96\pi^{2}}\frac{m_{l}^{2}}{m_{f}^{2}}f_{1}\left(\frac{m_{S}^{2}}{m_{f}^{2}}\right), \tag{60}\]
where \(m_{S}\) is the mass of BSM scalar, and \(N_{F}\) originates from the summation over flavours in the loop in Figure 6a). The function \(f_{1}(t)=(2t^{3}+3t^{2}-6t^{2}\ln t-6t+1)/(t-1)^{4}\) satisfies \(f_{1}(t)>0\) for any \(t\geq 0\); thus, the contribution (60) is positive and dominant for \(a_{\mu}\). The corrections due to \(Z\) and \(W\) loops are suppressed parametrically [47].
If one takes into account Higgs portal coupling \(\delta\), there are chirally enhanced contributions, which are linear in the lepton mass (see Figure 6b)). The latter can account for possible deviations in the electron \(g-2\) via
\[\Delta a_{e}=\frac{m_{e}}{m_{f}}\frac{\kappa\kappa^{\prime}\sin 2\theta}{32\pi^{2}}\left(f_{2}\left(\frac{m_{s}^{2}}{m_{f}^{2}}\right)-f_{2}\left(\frac{m_{h}^{2}}{m_{f}^{2}}\right)\right)+\frac{m_{e}^{2}}{m_{\mu}^{2}}\Delta a_{\mu}, \tag{61}\]
where \(m_{h}\) is the SM Higgs mass. The loop function reads \(f_{2}(t)=(3t^{2}-2t^{2}\ln t-4t+1)/(1-t)^{3}\). The last term accounts for the additional contribution due to Equation (60). The mixing angle \(\theta\) between the scalar \(s\) and the physical Higgs \(h\) is proportional to \(\delta\)[55]
\[\tan 2\theta=\frac{\delta}{\sqrt{\lambda(u+v)}}\frac{m_{h}}{m_{s}}\left(1+\mathcal{O}\!\left(\frac{m_{h}^{2}}{m_{s}^{2}}\right)\right). \tag{62}\]
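For orientation, the following sketch (our own, not code from Refs. [47; 55]) evaluates Equations (60) and (61) with the loop functions quoted above. All masses and couplings below are illustrative placeholders rather than fit values, and we read the first argument of \(f_{2}\) in (61) as the BSM scalar mass.

```python
import math

def f1(t):
    # Loop function of Equation (60); positive for all t >= 0 (t != 1).
    return (2*t**3 + 3*t**2 - 6*t**2*math.log(t) - 6*t + 1) / (t - 1)**4

def f2(t):
    # Loop function of Equation (61).
    return (3*t**2 - 2*t**2*math.log(t) - 4*t + 1) / (1 - t)**3

# Purely illustrative inputs (masses in GeV, couplings dimensionless):
m_e, m_mu = 0.000511, 0.10566
m_f, m_S, m_h = 1500.0, 900.0, 125.0
NF, kappa, kappa_p, theta = 3, 1.0, 2.0, 0.01

# Minimal scalar-fermion loop, Equation (60):
da_mu = NF * kappa_p**2 / (96 * math.pi**2) * (m_mu / m_f)**2 * f1(m_S**2 / m_f**2)

# Chirally enhanced electron contribution, Equation (61):
da_e = (m_e / m_f) * kappa * kappa_p * math.sin(2 * theta) / (32 * math.pi**2) \
       * (f2(m_S**2 / m_f**2) - f2(m_h**2 / m_f**2)) + (m_e / m_mu)**2 * da_mu

print(f"Delta a_mu ~ {da_mu:.2e}, Delta a_e ~ {da_e:.2e}")
```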
In summary, the authors concluded that the Yukawa couplings that mix SM and BSM matter, (58) and (59), together with a Higgs portal coupling, can generate minimal (60) and chirally enhanced (61) contributions, which may account for measurements of the muon and electron anomalous magnetic moments. Moreover, as a bonus, they obtained a stable Higgs potential and well-behaved running couplings up to the Planck scale. In addition, a prediction for the deviation of the tau anomalous magnetic moment from its Standard Model value was provided.
In Ref. [47], the tree-level BSM particle production at hadron and lepton colliders was discussed in the context of the above-mentioned models. The corresponding diagrams are shown in Figure 7.
Since the fermions are assumed to be colourless, pair production in \(pp\) collisions is limited to quark-antiquark fusion into electroweak gauge bosons, as illustrated in the upper left diagram. Single production through the Yukawa portal interaction with an \(s\)-channel Higgs is also possible (upper right diagram). In lepton-lepton (\(ll\)) collisions, \(\psi\) can be produced through the \(t\)-channel Higgs or \(S\), either in pairs (lower left diagram) or singly (lower right diagram).
Ultimately, Refs. [47; 55; 81] conclude that SM extensions with vector-like fermions are particularly efficient at eliminating the instability of the SM vacuum. This is related to the fact that the gauge portal mechanism naturally enhances the Higgs quartic [47]. Further avenues towards stability arise in extensions with additional Yukawa/Higgs portals [47; 81] and anomaly-free gauge interactions [82]. Moreover, models with flavour non-diagonal Yukawas or gauge couplings give rise to NP flavour transitions [81; 82], allowing for the alleviation of flavour anomalies. Thus, it seems interesting to further explore the potential of models inspired by asymptotic safety for flavour and particle physics. However, despite the many successes of these models, there are still scenarios that suffer from Landau poles in the UV. Therefore, in the next section, we consider another approach to the AS extension of the SM, which takes gravity into account.
## 5 Models with Gravity and Matter
Further construction of asymptotically safe models based on SM extensions can proceed by adding quantum gravity effects. AS gravity is a powerful route to a Wilsonian description of the fundamental nature of quantum field theories. In the trans-Planckian regime, it has been proposed [15; 16; 34; 83] that the quantum fluctuations of the metric field can give rise to an interacting fixed point in the RG flow of the effective action for gravity, which includes the cosmological constant and the Ricci scalar (Einstein-Hilbert truncation). The question of the persistence of the gravity FP upon the inclusion of gravitational effective operators of increasing mass dimension was considered in Refs. [17; 18; 19; 20; 21; 22; 23; 24; 25], with a positive result.
The gravity + SM UV fixed point can improve the high-energy behavior of the hypercharge gauge coupling [38; 84; 85], while \(SU(3)_{c}\times SU(2)_{L}\) gauge couplings remain asymptotically free [86; 87; 88].
The presence of interacting UV FPs in such a setup may lead to important consequences for its predictivity at low energy, i.e., the actual number of free parameters in the theory can be effectively decreased. For example, one can try to predict the ratio of top and bottom
Figure 6: Leading loop contributions to \(\Delta a_{l}\) (\(l=e,\mu,\tau\)). **(a)** BSM scalar–fermion loops with a lepton chiral flip (cross on solid line), and **(b)** chirally enhanced contributions through scalar mixing (cross on dashed line), provided the vacuum expectation value \(\langle S\rangle\neq 0\), and a BSM fermion \(\psi_{l}\) chiral flip (cross on solid line).
Figure 7: Pair-production of vector-like fermions \(\psi\) at \(pp\) and \(ll\) colliders, with \(f\) indicating SM quarks or leptons. Dashed, solid and wavy lines correspond to scalar, fermion, and vector fields, respectively.
masses [89], together with the Cabibbo-Kobayashi-Maskawa [57], and Pontecorvo-Maki-Nakagawa-Sakata [58] matrix elements.
Moreover, AS gravity coupled to the SM demonstrated an early phenomenological achievement by revealing the emergence of an infrared attractive fixed point in the beta function of the Higgs quartic coupling. This finding allowed for a reasonably accurate estimation of the mass of the Higgs boson [90] years prior to its detection at the LHC. As for the recent explorations, there were predictions for the relic abundance of dark matter [91; 92], and analyses of gauged baryon \(B\) number [93; 94], as well as axion models [95].
It is fair to say that it is very hard to explicitly calculate the quantum-gravity contribution to the matter beta functions of the SM. The pioneering paper by Robinson and Wilczek [96] was criticised by subsequent works (see, e.g., Refs. [97; 98] and references therein). However, instead of computing these contributions from first principles, some recent studies [57; 89] have used an efficient approach based on a parametric description of AS gravitational interactions with matter. This phenomenological approach allows one to "guess" the strength of the gravitational impact on the matter beta functions. The method is based on the assumption that the fixed points of the matter sector should not contradict low-scale SM phenomenology. The same approach has been used to improve the predictivity of some New Physics models, for which only incomplete information about masses and couplings can be obtained experimentally (see, e.g., Refs. [56; 58; 99; 100; 101]).
It is generally believed that gravity-induced corrections to matter beta functions are linear in the matter couplings. The phenomenological approach boils down to the following modification of the beta functions of the gauge, Yukawa, and quartic system
\[\beta_{g} =\beta_{g}^{SM+NP}-gf_{g},\] \[\beta_{y} =\beta_{y}^{SM+NP}-yf_{y},\] \[\beta_{\lambda} =\beta_{\lambda}^{SM+NP}-\lambda f_{\lambda}, \tag{63}\]
i.e., we parameterize the effects of gravitational interactions with effective couplings \(f_{g}\), \(f_{y}\) and \(f_{\lambda}\). These terms exhibit universality in that gravity does not differentiate between different types of matter interactions (gauge, Yukawa, scalar quartic, etc.), but instead is blind to their internal symmetries. Note also that in Equation (63), we disregard any potential quantum gravity effects that are proportional to higher powers in the matter couplings. In the context of complete AS, \(f_{g}\), \(f_{y}\) and \(f_{\lambda}\) should be eventually determined from the gravitational dynamics [102; 103].
It should be noted that the aforementioned heuristic approach is based on several simplifying approximations. The parameters \(f_{g}\), \(f_{y}\) and \(f_{\lambda}\) are treated as constants above an arbitrarily chosen scale near the Planck mass \(M_{Pl}=10^{19}\) GeV (the trans-Planckian region), and are set to zero below it (the sub-Planckian region). In other words, gravity contributions decouple instantaneously at around \(M_{Pl}\).
### A Model with Trans-Planckian Asymptotic Safety
Let us demonstrate the method by applying it to a concrete example [101]. As in all previous cases described in Section 4, the authors of Ref. [101] extended the particle content of the SM by a set of heavy scalar and fermion fields. They add two pairs of fermions and one complex scalar field, belonging to different representations of the \(SU(2)_{L}\) group. The NP Lagrangian can be written in terms of Weyl spinors as
\[\mathcal{L}_{NP}\supset-(Y_{\rm R}\mu_{\rm R}E^{\prime}S+Y_{\rm L}F^{\prime}S ^{\dagger}l_{\mu}+Y_{1}EH^{\dagger}F+Y_{2}F^{\prime}HE^{\prime}+h.c.)-V(H,S), \tag{64}\]
where \(H\) is the Higgs boson doublet, \(l_{\mu}=(\nu_{L,\mu},\mu_{L})^{T}\) is the muon \(SU(2)_{L}\) doublet, \(E,F\) are two pairs of left-chiral fermion multiplets, and \(E^{\prime},F^{\prime}\) are their chiral conjugates. The potential \(V(H,S)\) includes quartic self-interactions of \(H\) and \(S\) and a portal coupling similar to that given in Equation (56).
While the authors of Ref. [101] consider twelve different charge assignments for the NP fields, we restrict ourselves to the following quantum numbers for new fermions and scalars, charged under the \(SU(2)_{L}\times U(1)_{Y}\):
\[S(\mathbf{1},0),\qquad E(\mathbf{1},1),\qquad F(\mathbf{2},-1/2). \tag{65}\]
Given (65), we can derive one-loop beta functions for the hypercharge \(g_{Y}\), strong \(g_{3}\) and weak \(g_{2}\) gauge couplings that have the following form near the Planck scale:
\[\frac{dg_{Y}}{dt} =\frac{53}{6}\frac{g_{Y}^{3}}{16\pi^{2}}-f_{g}g_{Y}, \tag{66}\] \[\frac{dg_{2}}{dt} =-\frac{5}{2}\frac{g_{2}^{3}}{16\pi^{2}}-f_{g}g_{2},\] (67) \[\frac{dg_{3}}{dt} =-7\frac{g_{3}^{3}}{16\pi^{2}}-f_{g}g_{3}. \tag{68}\]
To proceed further, one makes the first fundamental assumption: the couplings of the Lagrangian (64) to the gravitational field in the trans-Planckian UV give rise to interacting fixed points. Furthermore, the fixed-point values associated with the irrelevant directions provide a distinct set of boundary conditions at the Planck scale for the gauge-Yukawa system.
Since we know the measured value of the hypercharge gauge coupling (see, e.g., Refs. [42; 44]) at the electroweak scale, it is possible to run it up to the Planck scale with one-loop SM RGE to obtain \(g_{Y}(M_{Pl})\). At the Planck scale, we apply the first fundamental assumption and treat \(g_{Y}(M_{Pl})\) as the fixed-point value:
\[g_{Y}^{*}=g_{Y}(M_{Pl})=0.54. \tag{69}\]
Given \(g_{Y}^{*}\), we can determine the value of the gravity parameter \(f_{g}\) from the one-loop relation \(g_{Y}^{*}=4\pi\sqrt{\frac{6f_{g}}{53}}\):
\[f_{g}=0.016. \tag{70}\]
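The numerics behind Equations (69) and (70) amount to inverting the one-loop fixed-point condition of Equation (66); a short check (our own illustration) reads:

```python
import math

g_Y_star = 0.54                               # g_Y(M_Pl), Equation (69)
f_g = 53 * g_Y_star**2 / (96 * math.pi**2)    # inverted one-loop relation
print(f"f_g = {f_g:.3f}")                     # ~ 0.016, Equation (70)
# Cross-check that Equation (66) indeed vanishes at this point:
beta_Y = (53 / 6) * g_Y_star**3 / (16 * math.pi**2) - f_g * g_Y_star
print(f"beta(g_Y*) = {beta_Y:.1e}")           # ~ 0 by construction
```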
To agree with the low-energy phenomenology, the non-Abelian gauge couplings are assumed to be asymptotically free:
\[g_{2}^{*}=0,\qquad g_{3}^{*}=0. \tag{71}\]
Both \(g_{2}\) and \(g_{3}\) are free parameters of the theory, since they correspond to relevant directions in coupling space. On the contrary, \(g_{Y}\) corresponds to an irrelevant direction. It is worth stressing again that the value of \(f_{g}\) is the same for all gauge interactions of the model, since we invoke the second fundamental assumption, the universality of gravity. If that is true, we can immediately read off the FP values of all (additional) gauge couplings. After that, we can run the system down to low energies and read off the values of the gauge couplings at the low scale. This demonstrates how asymptotic safety predictions work.
In the same manner, we can find the second quantum-gravity parameter, \(f_{y}\). The latter can be fixed if a UV interacting FP is assumed for one of the SM Yukawa couplings, for example, \(y_{t}\). Therefore, from the beta-function zeroes for \(y_{t}\) and \(Y_{1}\) (under the assumption that \(Y_{2}^{*}=0\)), one can derive:
\[\begin{cases}\frac{9}{2}{y_{t}}^{*2}-\frac{17}{12}g_{Y}^{*2}+{Y_{1}^{*}}^{2}= 16\pi^{2}f_{y},\\ 3{y_{t}}^{*2}+\frac{5}{2}{Y_{1}^{*}}^{2}-\frac{15}{4}g_{Y}^{*2}=16\pi^{2}f_{y}.\end{cases} \tag{72}\]
Eliminating \(Y_{1}^{*}\) from this system, we obtain
\[\frac{33}{4}y_{t}^{*2}+\frac{5}{24}g_{Y}^{*2}=24\pi^{2}f_{y}, \tag{73}\]
and after substitution of the FP expression for \(g_{Y}^{*}\), we determine
\[y_{t}^{*}=4\pi\frac{\sqrt{-5f_{g}+318f_{y}}}{\sqrt{1749}}=0.41. \tag{74}\]
Hence, \(f_{y}\) is found by matching the flow of the top Yukawa coupling to the experimentally measured top quark mass:
\[f_{y}=0.006. \tag{75}\]
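Equations (73)-(75) can likewise be verified directly. In the sketch below (our own illustration), \(f_{g}=0.016\) and \(f_{y}=0.006\) are taken as inputs, and the quoted \(y_{t}^{*}=0.41\) is reproduced:

```python
import math

f_g, f_y = 0.016, 0.006                        # Equations (70) and (75)
g_Y2 = 16 * math.pi**2 * 6 * f_g / 53          # (g_Y*)**2 from Equation (66)
y_t2 = (24 * math.pi**2 * f_y - 5 / 24 * g_Y2) * 4 / 33   # Equation (73)
print(f"y_t* = {math.sqrt(y_t2):.2f}")         # 0.41
# Equivalent closed form, Equation (74):
print(4 * math.pi * math.sqrt((-5 * f_g + 318 * f_y) / 1749))
```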
For the remaining SM couplings, we have
\[y_{b}^{*}=0,\qquad y_{\mu}^{*}=0, \tag{76}\]
which are associated with relevant directions. In the BSM sector, the authors [101] selected the following fixed point:
\[Y_{1}^{*}=4\pi\frac{\sqrt{101f_{g}+106f_{y}}}{\sqrt{583}}=0.78, \qquad Y_{2}^{*}=0, \tag{77}\] \[Y_{L}^{*}=2\pi\frac{\sqrt{-18f_{g}+53f_{y}}}{\sqrt{53}}=0.15, \qquad Y_{R}^{*}=2\pi\frac{\sqrt{90f_{g}+53f_{y}}}{\sqrt{53}}=1.15, \tag{78}\]
as required for an NP contribution to \(\Delta a_{\mu}\) consistent with the measured value. It should be noted that alternative fixed-point structures can also lead to phenomenological predictions for \(\Delta a_{\mu}\).
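Plugging the same \(f_{g}\) and \(f_{y}\) into Equations (77) and (78) reproduces the quoted fixed-point values; note that a \(2\pi\) prefactor for \(Y_{R}^{*}\) is required to reproduce the quoted 1.15. The check below is our own illustration:

```python
import math

f_g, f_y = 0.016, 0.006
Y1 = 4 * math.pi * math.sqrt((101 * f_g + 106 * f_y) / 583)
YL = 2 * math.pi * math.sqrt((-18 * f_g + 53 * f_y) / 53)
YR = 2 * math.pi * math.sqrt((90 * f_g + 53 * f_y) / 53)
print(f"Y1* = {Y1:.2f}, YL* = {YL:.2f}, YR* = {YR:.2f}")   # 0.78, 0.15, 1.15
```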
In Figure 8, we illustrate the sub-Planckian flow of the parameters of the system for the discussed model.
Figure 8: RG flow of the gauge and Yukawa couplings from the Planck scale down to the reference phenomenological energy of 2 TeV. Above the Planck scale, the couplings sit at the fixed point and no longer change. The initial values at the Planck scale correspond to the fixed-point values.
### Phenomenological Implications of Trans-Planckian Asymptotic Safety
Let us now provide a brief review of some phenomenological implications of the model [101], together with other possible BSM setups [56; 58; 104]. It should be noted that the following references utilize Lagrangians of the form (55) and (64), however, with different particle content. For example, some models contain neutrinos, leptoquarks, or an additional \(U(1)^{\prime}\) gauge \(Z^{\prime}\)-boson, etc.
First of all, Ref. [56] used an asymptotic safety paradigm to derive predictions for the mass of scalar leptoquarks as solutions to the experimental anomalies noted in recent years in \(b\to s\) and \(b\to c\) transitions. Using the previously described methods, they found low-energy predictions for the new Yukawa couplings. Then, they combined these predictions with the expectations for the Wilson coefficients in weak EFT extracted from global fits to the full set of \(b\to s\) and \(b\to c\) transition data. After that, they matched those two types of information, and obtained a quite precise determination for the \(SU(2)_{L}\)-triplet leptoquark mass at 4-7 TeV from the data on \(b\to s\) transitions. These values are too large to be in reach of the high-luminosity LHC. However, according to the most conservative estimates, they are within the early reach of a 100 TeV hadron collider. As for the additional signatures, \(BR(K_{L}\to\mu\mu)\) or \(D_{0}\to\mu\mu\) require significant increases in the experimental sensitivity with respect to the current bounds.
However, when they applied these methods to the charged-current \(b\to c\) anomalies (a different model with an \(SU(2)_{L}\)-singlet leptoquark), additional complications arose due to some tension with low-energy constraints on the fermion masses. Nevertheless, the authors [56] claim that the predicted values of the leptoquark mass and Yukawa couplings are at the very edge of the current LHC bounds and well within the reach of 300 fb\({}^{-1}\) of integrated luminosity.
Returning to our example (65), the authors [101] combine the information extracted from the fixed-point UV analysis with bounds from dark matter and collider searches, the measurements of \(\Delta(g-2)_{\mu}\), and the experimental data on the \(h\to\mu^{+}\mu^{-}\) signal strength. These combinations allow them to constrain the favored regions of the parameter space. They found that these results pinpoint the mass of the scalar quite precisely, \(m_{S}\sim\) 100-800 GeV. For the other considered models, they obtained the bounds \(m_{S}\sim\) 100-430 GeV or \(m_{S}\sim\) 100-146 GeV. In addition, a strong hierarchy in the fermion spectrum was predicted: the lightest fermion needs to be close in mass to the scalar, while the mass of the heavier fermion is determined by \(\Delta(g-2)_{\mu}\) and should be around 5-80 TeV in the model considered in this review, and 200-400 GeV or 100-300 GeV for the other models. They also found a model with a large region of available parameter space that can be consistent with a TeV-scale dark-matter particle similar to the supersymmetric higgsino.
In Ref. [58], the authors considered the SM extended by right-handed neutrinos and investigated the possibility of generating a strong hierarchy in the Yukawa couplings via an interplay between an IR fixed point with zero neutrino Yukawa \(y_{\nu}\) and a UV FP with \(y_{\nu}\neq 0\). They found the allowed parameter space where Dirac-type neutrino masses can be generated naturally by the dynamical mechanism. These solutions support normal mass ordering and are consistent with the current experimental constraints on the mixing parameters. However, it was stressed that, due to the "blindness" of gravity, the mixing itself is not a prediction of the fixed-point analysis, as it is associated with relevant directions. In addition, a second scenario was considered, in which sterile right-handed neutrinos constitute a light (sub-MeV) dark matter component of the Universe. The authors demonstrated that, within the framework of asymptotic safety, the dynamical mechanism naturally produces Yukawa couplings consistent with the expected abundance of sterile neutrino dark matter. To achieve additional fixed points in the UV regime, which ensure the completeness of the theory, the introduction of an Abelian gauge interaction and a mirror Yukawa interaction with heavy particles is necessary. In summary, the mechanism proposed in this study offers a UV-complete, generic, and flexible solution that can be applied to other models of new physics with feeble Yukawa interactions.
In Ref. [104], the authors analyzed two SM extensions with an additional \(Z^{\prime}\) boson, vector-like fermions, and an SM scalar singlet in the spectrum. Within the framework of trans-Planckian asymptotic safety, they provide a solution to the flavour anomalies in \(b\to s\mu\mu\) transitions. In the course of this exploration, fairly precise constraints on the Abelian kinetic mixing \(\epsilon\), the NP Yukawa couplings, and the scalar quartic couplings were derived. Viable mass ranges compatible with the \(b\to s\mu\mu\) anomalies were then extracted, and the complete parameter space was subjected to the bounds from direct production of vector-like heavy quarks and leptons at the LHC. As a result, the authors identified the parameter space excluded at the 95% C.L. and computed projections for the planned increase in luminosity in future runs.
As one can observe, trans-Planckian AS can provide rich phenomenology; we think that the list of possible implications is still far from complete.
### On Robustness of Predictions
Recently, a study [105] appeared in which the authors evaluate the precision of the obtained predictions in asymptotically safe gravity-matter models (by the predictive power of the models, the authors mean that, given the electroweak values of the SM couplings, it is possible to predict the low-energy values of the NP couplings). As mentioned earlier, the usual assumptions in such analyses are the following: (1) the matter beta functions are computed at one loop; (2) the Planck scale is set arbitrarily at \(M_{Pl}=10^{19}\) GeV; (3) \(f_{g}\) and \(f_{y}\) are constants above the Planck scale and zero below it.
The authors drop these assumptions one-by-one and provide estimates of the associated uncertainties. In their exploration, they consider gauged \((B-L)\) and leptoquark SM extensions. This is motivated by the fact that the first type of models has an additional gauge group, for which they seek a prediction for NP gauge couplings. In the second scenario, the key prediction from asymptotic safety is the strength of the NP Yukawa interaction of the scalar leptoquark with the SM fermions.
To check the robustness of the predictions against higher-order corrections, one writes the beta functions, which in the case of gauge couplings can be cast in the following form:
\[\partial_{t}g_{Y}=\frac{1}{16\pi^{2}}(b_{Y}+\Pi_{n\geq 2}^{(Y)})g_{Y}^{3}-f_{g}g_{Y}, \tag{79}\]
where \(\Pi_{n\geq 2}^{(Y)}\) collectively denotes higher-order corrections to the one-loop coefficient \(b_{Y}\). The same equations can be written for the other gauge couplings; we omit them and discuss only the main idea. Under the assumption that there is an FP at the Planck scale, one derives the \(n\)-loop expression for \(f_{g}\):
\[f_{g}(n\ loops)\sim\frac{\big[g_{Y}^{*}(n\ loops)\big]^{2}}{16\pi^{2}}(b_{Y}+\Pi_{n\geq 2}^{(Y)}(g_{i}^{*})). \tag{80}\]
The authors [105] introduce the ratios \(r_{g_{i}}^{*}\equiv g_{i}^{*}/g_{Y}^{*}\) of the NP gauge couplings \(g_{i}\) to the SM hypercharge coupling \(g_{Y}\), which do not depend explicitly on the value of \(f_{g}\). This allows one to estimate the uncertainties of the low-energy predictions by comparing \(r_{g_{i}}^{*}\) computed at different loop orders via
\[\frac{\delta r_{g_{i}}^{*}}{r_{g_{i}}^{*}}=\frac{r_{g_{i}}^{*}(2\ loops)-r_{g_{i}}^{*}(1\ loop)}{r_{g_{i}}^{*}(1\ loop)}. \tag{81}\]
For simplicity, the authors retain only the two-loop corrections and quantify the error at the percent level. A similar but slightly more involved procedure can be carried out for the Yukawa couplings. In this case, however, the uncertainty is not negligible and, in the model with scalar leptoquarks, can reach tens of percent [105].
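Equation (81) itself is straightforward to implement; the following minimal sketch (our own, with hypothetical ratio values of the percent-level size reported in Ref. [105]) illustrates the measure:

```python
def ratio_shift(r_one_loop, r_two_loop):
    """Relative loop-order shift of a fixed-point ratio, Equation (81)."""
    return (r_two_loop - r_one_loop) / r_one_loop

# Hypothetical ratios of the percent-level size reported in Ref. [105]:
print(f"{ratio_shift(1.20, 1.21):+.1%}")      # ~ +0.8%
```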
Let us now comment on the arbitrariness related to the position of the Planck scale at which the sub-Planckian RGEs are matched to the trans-Planckian ones. In this respect, the authors consider what happens if gravity decouples from the matter RGEs sharply at a scale that differs from \(10^{19}\) GeV by a few orders of magnitude.
When evaluating the effect of the Planck-scale position on the predictions for the gauge couplings, one should keep in mind that this uncertainty is effectively equivalent to the uncertainty in the FP value of the hypercharge gauge coupling, \(g_{Y}^{*}\), and hence in \(f_{g}\). Since a ratio of gauge couplings is considered, it is easy to deduce that moving the Planck scale back and forth does not affect the predicted \(r_{g_{i}}^{*}\) ratios at the one-loop level, since the dependence on \(f_{g}\) cancels out. Although this feature is not preserved at higher loops, the influence of the Planck-scale position remains negligibly small in this case as well, \(O(0.01\%)\).
On the contrary, the Yukawa couplings depend explicitly on the fixed-point values of the Abelian gauge couplings, which enter the beta functions. Thus, changing the position of the Planck scale alters the prediction for the Yukawa couplings even at one loop. The authors [105] estimate the uncertainty by considering the ratios of the FP couplings to a reference Yukawa (usually chosen to be that of the top quark)
\[\frac{\delta r_{y_{i}}^{*}}{r_{y_{i}}^{*}}=\frac{r_{y_{i}}^{*}(M_{Pl}\neq 10^{ 19}\;GeV)-r_{y_{i}}^{*}(M_{Pl}=10^{19}\;GeV)}{r_{y_{i}}^{*}(M_{Pl}=10^{19}\;GeV)}. \tag{82}\]
Here, the index \(i\) in \(r_{y_{i}}\) labels the set of ratios of Yukawa couplings in the Lagrangian. The authors summarize their findings for two different values of the Planck scale, \(M_{Pl}=10^{16}\) GeV and \(M_{Pl}=10^{20}\) GeV, and conclude that the uncertainties do not exceed 10%.
Finally, to address the issue of potentially scale-dependent \(f_{g,y}\), one can study how the coupling ratios evolve with scale. At one loop, the gauge coupling ratios turn out to be RG-invariant and, thus, are not affected by variations of the form of \(f_{g}(t)\). On the contrary, the \(t\)-dependence of \(f_{g,y}(t)\) can impact the running of the Yukawa ratios starting from one loop. However, the authors conclude that the flow of the ratio \(y_{i}(t)/y_{j}(t)\) remains fairly stable throughout. Moreover, in this case, the possibility of determining the actual value of the Yukawa couplings at the fixed point is lost. Nevertheless, the authors state that, within the range of variability of the gravitational parameters that can realistically be expected in the FRG framework, the resulting uncertainties are moderate.
## 6 Conclusions
Asymptotically safe models are of significant theoretical interest in the quest for a comprehensive understanding of fundamental quantum field theories. In our mini-review, we have tried to cover the spectacular progress of the last few years. We started from a general description of the concept and considered simple gauge theories; we then switched to more realistic SM extensions with additional fields, portals, and gravity. The non-exhaustive list of BSM scenarios briefly reviewed here gives just an impression of the asymptotic safety paradigm's fruitfulness.
Let us mention that we have not touched upon many topics, including the important subject of asymptotic safety in models with supersymmetry. The latter imposes a relation between the bosonic and fermionic sectors of a theory and can further restrict model building (see, e.g., Refs. [106; 107; 108; 109] for details).
When discussing gravity coupled to matter fields, we intentionally left aside many issues and difficulties reviewed, e.g., in Ref. [27]. Nevertheless, we are convinced that the new asymptotically safe view on high-energy particle physics is exciting and potentially useful to explore.
**Author Contributions:** All the authors contributed equally to all the parts of this work. All authors have read and agreed to the text of the manuscript.
**Funding:** This research received no external funding.
**Institutional Review Board Statement:** Not applicable.
**Informed Consent Statement:** Not applicable.
**Data Availability Statement:** Not applicable.
**Acknowledgments:** We thank A. Baushev, I. Buchbinder, D. Fursaev, G. Kalagov, N. Lebedev, and I. Pirozhenko for fruitful discussions.
**Conflicts of Interest:** The authors declare no conflict of interest.
|
2301.00026 | Killing Horizons Decohere Quantum Superpositions | We recently showed that if a massive (or charged) body is put in a quantum
spatial superposition, the mere presence of a black hole in its vicinity will
eventually decohere the superposition. In this paper we show that, more
generally, decoherence of stationary superpositions will occur in any spacetime
with a Killing horizon. This occurs because, in effect, the long-range field of
the body is registered on the Killing horizon which, we show, necessitates a
flux of "soft horizon gravitons/photons" through the horizon. The Killing
horizon thereby harvests "which path" information of quantum superpositions and
will decohere any quantum superposition in a finite time. It is particularly
instructive to analyze the case of a uniformly accelerating body in a quantum
superposition in flat spacetime. As we show, from the Rindler perspective the
superposition is decohered by "soft gravitons/photons" that propagate through
the Rindler horizon with negligible (Rindler) energy. We show that this
decoherence effect is distinct from--and larger than--the decoherence resulting
from the presence of Unruh radiation. We further show that from the inertial
perspective, the decoherence is due to the radiation of high frequency
(inertial) gravitons/photons to null infinity. (The notion of gravitons/photons
that propagate through the Rindler horizon is the same notion as that of
gravitons/photons that propagate to null infinity.) We also analyze the
decoherence of a spatial superposition due to the presence of a cosmological
horizon in de Sitter spacetime. We provide estimates of the decoherence time
for such quantum superpositions in both the Rindler and cosmological cases.
Although we explicitly treat the case of spacetime dimension $d=4$, our
analysis applies to any dimension $d \geq 4$. | Daine L. Danielson, Gautam Satishchandran, Robert M. Wald | 2022-12-30T19:00:06Z | http://arxiv.org/abs/2301.00026v2 | # Killing Horizons Decohere Quantum Superpositions
###### Abstract
We recently showed that if a massive (or charged) body is put in a quantum spatial superposition, the mere presence of a black hole in its vicinity will eventually decohere the superposition. In this paper we show that, more generally, decoherence of stationary superpositions will occur in any spacetime with a Killing horizon. This occurs because, in effect, the long-range field of the body is registered on the Killing horizon which, we show, necessitates a flux of "soft horizon gravitons/photons" through the horizon. The Killing horizon thereby harvests "which path" information of quantum superpositions and will decohere any quantum superposition in a finite time. It is particularly instructive to analyze the case of a uniformly accelerating body in a quantum superposition in flat spacetime. As we show, from the Rindler perspective the superposition is decohered by "soft gravitons/photons" that propagate through the Rindler horizon with negligible (Rindler) energy. We show that this decoherence effect is distinct from--and larger than--the decoherence resulting from the presence of Unruh radiation. We further show that from the inertial perspective, the decoherence is due to the radiation of high frequency (inertial) gravitons/photons to null infinity. (The notion of gravitons/photons that propagate through the Rindler horizon is the same notion as that of gravitons/photons that propagate to null infinity.) We also analyze the decoherence of a spatial superposition due to the presence of a cosmological horizon in de Sitter spacetime. We provide estimates of the decoherence time for such quantum superpositions in both the Rindler and cosmological cases.
## I Introduction
Consider a stationary spacetime in which an experimentalist, Alice, is present. Alice's lab is stationary, and she has control of a charged or massive body (hereinafter referred to as a "particle"). She sends her particle through a Stern-Gerlach apparatus or other device that puts her particle in a quantum superposition of two spatially separated states1. She keeps these spatially separated components stationary for a time \(T\) and then recombines them. Will Alice be able to maintain the coherence of these components, so that, when recombined, the final state of her particle will be pure--or will decoherence have occurred, so that the final state of her particle will be mixed?
Footnote 1: Quantum spatial superpositions of massive bodies have been of recent interest in both theoretical as well as proposed experimental probes of fundamental properties of quantum gravity, e.g., [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13].
Ordinarily, any decoherence effects will be dominated by "environmental influences," i.e., additional degrees of freedom present in Alice's lab that interact with her particle. We assume that Alice has perfect control of her laboratory and its environment so that there is no decoherence from ordinary environmental effects. However, for a charged or massive particle, Alice cannot perfectly control the electromagnetic or gravitational field, since her particle acts as a source for these fields and some radiation will be emitted during the portions of her experiment where she separates and recombines her particle. Nevertheless, in Minkowski spacetime, if her lab is stationary in the ordinary, inertial sense, she can perform her experiment in a sufficiently adiabatic manner that negligible decohering radiation is emitted. In principle, she can keep the particle separated for an arbitrarily long time \(T\) and still maintain coherence when the components are recombined.
In a recent paper [14], we showed that the above situation changes dramatically if a black hole is present in the spacetime--even though the experiment is carried out entirely in the black hole's exterior. In effect, a black hole horizon harvests "which path" information about any quantum superposition in its exterior, via the long-range fields sourced by the superposed matter. We showed that this results in the unavoidable radiation of entangling "soft photons or gravitons" through the horizon that carry the "which path" information into the black hole. Consequently, the mere presence of the black hole implies a fundamental rate of decoherence on the quantum superposition2. Although the rate of decoherence will be small if the black hole is far away, the coherence decays exponentially in the time, \(T\), that the spatial superposition is maintained. Thus, in any spacetime with a black hole, there will be essentially complete decoherence within a finite time3.
Footnote 3: This maximal coherence time for superpositions in the exterior can be much smaller than the evaporation time of the black hole.
The purpose of this paper is to generalize the results of [14] to spacetimes with Killing horizons, i.e., spacetimes with a Killing vector field such that there is a null surface to which the Killing field is normal (see, e.g., [15] for a discussion of properties of Killing horizons). The event horizon of a stationary black hole is a Killing horizon [16; 17; 18], so spacetimes with Killing horizons encompass the case of stationary spacetimes that contain black holes. However, there are many cases of interest where Killing horizons are present without the presence of black holes. One such case is that of Minkowski spacetime, where the Rindler horizon is a Killing horizon with respect to the Lorentz boost Killing field. Another such case is de Sitter spacetime, where the cosmological horizon is a Killing horizon. We will show that in these cases, a spatial superposition that is kept stationary (with respect to the symmetry generating the Killing horizon) will decohere in a manner similar to the black hole case. We will also provide an estimate of the maximum amount of time during which coherence can be maintained.
The case of the Rindler horizon is particularly instructive. The relevant symmetry here is that of Lorentz boosts, so Alice's lab will be "stationary" if it is uniformly accelerating. Our analysis based upon radiation through the Rindler horizon shows that decoherence of a uniformly accelerating spatially separated superposition occurs because of the emission of "soft" (i.e., very low frequency) gravitons or photons, where the frequency is defined relative to an affine parameter on the Rindler horizon. As we shall show, the decoherence effect of this radiation of soft gravitons or photons is distinct from the (smaller) decoherence effect resulting from the presence of Unruh radiation. To gain further insight, we also analyze the decohering radiation in the electromagnetic case from the inertial point of view, using the Lienard-Wiechert solution to determine the radiation at future null infinity. As we shall show, the decohering photons are of high frequency at null infinity.
In sec. 2 we provide a general discussion of the decoherence of a quantum superposition due to radiation in a stationary spacetime. In sec. 3 we consider the decoherence of a uniformly accelerating superposition, analyzing it from both the Rindler and Minkowski viewpoints. We also show that this decoherence is distinct from (and larger than) the decoherence effects due to the presence of Unruh radiation. In sec. 4 we analyze the decoherence in de Sitter spacetime associated with the cosmological horizon. We will work in Planck units where \(G=c=\hbar=k_{\rm B}=1\) and, in electromagnetic formulas, we also put \(\epsilon_{0}=1\), but we will restore these constants in our formulas that give estimates for decoherence times. Lower case Latin indices represent abstract spacetime indices. Upper case Latin indices from the early alphabet correspond to spatial indices on horizons or null infinity.
## 2 Decoherence due to radiation in a stationary spacetime
In this section, we will give a general analysis of the decoherence of a spatial superposition in a stationary spacetime due to emission of radiation by the body. Our analysis applies both to the decoherence of a charged body due to emission of electromagnetic radiation and to the decoherence of a gravitating body due to emission of linearized gravitational radiation. The analyses of these two cases are very closely parallel. In order to avoid repetition, we will analyze only the electromagnetic case in detail, but near the end of this section, we will state the corresponding results in the linearized gravitational case, which can be obtained straightforwardly by replacing the vector potential \(A_{a}\) with the perturbed metric \(h_{ab}\), the charge-current \(j_{a}\) with the stress-energy \(T_{ab}\), etc.
Consider a charged particle4 in a stationary spacetime. We assume that the particle is initially in a stationary state. The particle is then put through a Stern-Gerlach (or other) apparatus, resulting in it being in a superposition state5
Footnote 4: As already indicated above, the “particle” need not be an elementary particle but could be a “nanoparticle” or any other body whose only relevant degree of freedom for our analysis is its center of mass.
Footnote 5: For simplicity, we have assumed that we have a 50-50 superposition of \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\), but this assumption is not necessary.
\[|\psi\rangle=\frac{1}{\sqrt{2}}\left(|\psi_{1}\rangle+|\psi_{2}\rangle\right) \tag{2.1}\]
where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are normalized states that are spatially separated after passing through the apparatus. The particle is then recombined via a reversing Stern-Gerlach (or other) apparatus and returns to a stationary state. We are particularly interested in the case where, between separation and recombination, \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are kept stationary for a long period of time, \(T\), but we do not make any such assumption in this section. We wish to estimate how much decoherence due to emission of electromagnetic radiation will have occurred by the time of recombination6.
A key assumption that we shall make is that the fluctuations in the charge-current operator \(\mathbf{j}^{a}\) in the states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are negligibly small over the scales of interest so that we can treat the charge current in each of these states as \(c\)-number sources in Maxwell's equations, given by \(j_{1}^{a}=\langle\psi_{1}|\mathbf{j}^{a}|\psi_{1}\rangle\) and \(j_{2}^{a}=\langle\psi_{2}|\mathbf{j}^{a}|\psi_{2}\rangle\), respectively. In the initial and final stationary eras, \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are assumed to coincide spatially (though they may differ in other characteristics, such as spin) so that \(j_{1}^{a}=j_{2}^{a}\) at very early and very late times.
In order to proceed further, we must specify the initial state of the electromagnetic field. Since, prior to going through the Stern-Gerlach apparatus, the charge is assumed to be stationary, at early times we may subtract the "Coulomb field" \(C_{a}^{\rm in}\) of the charge, i.e., at early times we may consider the electromagnetic field observable
\[\mathbf{A}_{a}^{\rm in}=\mathbf{A}_{a}-C_{a}^{\rm in}\mathbf{1} \tag{2.2}\]
where \(C_{a}^{\rm in}\) is the (assumed to be unique) stationary classical solution to Maxwell's equations with the early time stationary charged particle source \(j_{1}^{a}=j_{2}^{a}\) and \(\mathbf{A}_{a}\) is the vector potential operator. We need not assume any specific choice of gauge for \(\mathbf{A}_{a}^{\rm in}\). Then \(\mathbf{A}_{a}^{\rm in}\) satisfies the source-free Maxwell's equations at early times, and we may extend its definition to all times by requiring it to satisfy the source-free Maxwell equations everywhere.
The initial state of the electromagnetic field may be specified by giving the "radiation state" of \(\mathbf{A}_{a}^{\rm in}\). The choice of this state depends on the physical situation being considered. If the spacetime were globally stationary--i.e., if the stationary Killing field were everywhere timelike, so, in particular, there are no Killing horizons--it would be natural to assume that the initial state of the radiation is the stationary vacuum state, i.e., the ground state relative to the time translations. For the case of a black hole spacetime, it would be correspondingly natural to assume that the initial state of the radiation is that of the Unruh vacuum, since for a black hole formed by gravitational collapse, the state of a quantum field is expected to approach the Unruh vacuum after the black hole has "settled down" to a stationary state. For the case of Minkowski spacetime, we take the initial state of the radiation to be the ordinary (inertial) Minkowski vacuum. For de Sitter spacetime, we take the initial state of the radiation to be the de Sitter invariant vacuum7 for the electromagnetic field [20]. We denote the initial state of the radiation in all of the above cases by \(|\Psi_{0}\rangle\).
Footnote 7: A de Sitter invariant vacuum state does not exist for the massless scalar field [19] but such a state does exist for the electromagnetic field [20] and linearized gravitational field [21].
In each of the above cases, \(|\Psi_{0}\rangle\) is a pure, quasi-free (i.e., Gaussian) state. It follows (see, e.g., [22] or appendix A of [15]) that we can construct a one-particle Hilbert space \(\mathcal{H}_{\rm in}\) and corresponding Fock space \(\mathcal{F}(\mathcal{H}_{\rm in})\) wherein \(|\Psi_{0}\rangle\) plays the role of the vacuum state and the field operator \(\mathbf{A}_{a}^{\rm in}\) is represented on \(\mathcal{F}(\mathcal{H}_{\rm in})\) by
\[\mathbf{A}_{a}^{\rm in}(f^{a})=i\mathbf{a}(\overline{K\sigma_{f}})-i\mathbf{a}^{\dagger}( K\sigma_{f}). \tag{2.3}\]
Here \(f^{a}\) is a divergence-free8 test function, \(\sigma_{f}\) denotes the advanced minus retarded solution to Maxwell's equations with source \(f^{a}\), and \(K:S\rightarrow\mathcal{H}_{\rm in}\) denotes the map taking the space \(S\) of classical solutions to their representatives in the one-particle Hilbert space \(\mathcal{H}_{\rm in}\). The commutator of the creation and annihilation operators in eq. (2.3) is given by
Footnote 8: Restriction of the smearing to divergence-free test functions is necessary and sufficient to eliminate the gauge dependence of \(\mathbf{A}_{a}^{\rm in}\) (see, e.g., P.101 of [22]).
\[[\mathbf{a}(\overline{K\sigma_{f}}),\mathbf{a}^{\dagger}(K\sigma_{g})]=\langle K\sigma _{f}|K\sigma_{g}\rangle\,\mathbf{1}. \tag{2.4}\]
where \(\langle K\sigma_{f}|K\sigma_{g}\rangle\) is the inner product on \(\mathcal{H}_{\rm in}\), which is given by a natural generalization of the Klein-Gordon inner product to electromagnetic fields.
For the case of a globally stationary spacetime in the stationary vacuum state, \(K\sigma_{f}\) corresponds to taking the positive frequency part of \(\sigma_{f}\) with respect to the time translations generating the stationary symmetry. For the case of a stationary black hole in the Unruh vacuum state, \(K\sigma_{f}\) corresponds to taking the positive frequency part of \(\sigma_{f}\) with respect to affine time on the past horizon and with respect to Killing time at past null infinity. For Minkowski spacetime in the inertial Minkowski vacuum, \(K\sigma_{f}\) corresponds to taking the positive frequency part of \(\sigma_{f}\) with respect to inertial time translations. Equivalently, \(K\sigma_{f}\), in this case, corresponds to the solution obtained by taking the positive frequency part of the restriction of \(\sigma_{f}\) to any null hyperplane \(\mathcal{N}\) (i.e., any Rindler horizon) with respect to an affine parametrization of the null geodesics generating \(\mathcal{N}\). For de Sitter spacetime in the de Sitter invariant vacuum, \(K\sigma_{f}\) corresponds to the solution obtained by taking the positive frequency part of the restriction of \(\sigma_{f}\) to any cosmological horizon with respect to an affine parametrization of the null geodesics generating that horizon.
Under the above assumption that the charge-current of \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) can be treated as \(c\)-number sources, the electromagnetic field \(\mathbf{A}_{i,a}\) in the presence of the charge in state \(|\psi_{i}\rangle\) for \(i=1,2\) is given in terms of the source-free field \(\mathbf{A}_{a}^{\rm in}\) by [23]
\[\mathbf{A}_{i,a}=\mathbf{A}_{a}^{\rm in}+G_{a}^{\rm ret}(j_{i}^{b})\mathbf{1} \tag{2.5}\]
where \(G_{a}^{\rm ret}(j_{i}^{b})\) denotes the classical retarded solution for source \(j_{i}^{b}\). In particular, since the field \(\mathbf{A}_{a}^{\rm in}\) is in state \(|\Psi_{0}\rangle\), the correlation functions of the electromagnetic field \(\mathbf{A}_{i,a}\) for \(|\psi_{i}\rangle\) are given by9
Footnote 9: It is understood that each of the \(x_{k}\) variables should be smeared with a divergence-free test vector field \(f^{a}_{k}\).
\[\langle\mathbf{A}_{i,a_{1}}(x_{1})\ldots\mathbf{A}_{i,a_{n}}(x_{n})\rangle\] \[\qquad=\langle\Psi_{0}|\left[\mathbf{A}_{a_{1}}^{\rm in}(x_{1})+G_{a_{1}}^{\rm ret}(j_{i}^{b})(x_{1})\mathbf{1}\right]\] \[\qquad\qquad\ldots\left[\mathbf{A}_{a_{n}}^{\rm in}(x_{n})+G_{a_{n}}^{\rm ret}(j_{i}^{b})(x_{n})\mathbf{1}\right]|\Psi_{0}\rangle. \tag{2.6}\]
Equation (2.6) is valid at all times. However, at late times--i.e., to the future of any Cauchy surface \(\Sigma\) corresponding to the time at which recombination has occurred--we can again subtract off the common stationary Coulomb field, \(C_{a}^{\rm out}\), of \(j_{1}^{a}=j_{2}^{a}\) to obtain the source-free field10\(\mathbf{A}_{i,a}^{\rm out}\) that describes the radiation at late times for the states \(|\psi_{i}\rangle\),
Footnote 10: Note that \(\mathbf{A}_{a}^{\rm in}\) did not have a subscript “\(i\)” whereas \(\mathbf{A}_{i,a}\) and \(\mathbf{A}_{i,a}^{\rm out}\) do carry such subscripts. This is a consequence of the fact that we are working in the “in” representation—i.e., the Heisenberg representation on the Hilbert space \(\mathcal{F}(\mathcal{H}_{\rm in})\)—so \(\mathbf{A}_{a}^{\rm in}\) does not depend on the sources, but the other fields do.
\[\mathbf{A}_{i,a}^{\rm out}=\mathbf{A}_{i,a}-C_{a}^{\rm out}\mathbf{1}\,. \tag{2.7}\]
By eq. (2.6), at late times, the correlation functions of \(\mathbf{A}_{i,a}^{\rm out}\) are given by
\[\langle\mathbf{A}_{i,a_{1}}^{\rm out}(x_{1})\ldots\mathbf{A}_{i,a_{n}}^{\rm out}(x_{n})\rangle\] \[\qquad=\langle\Psi_{0}|\left[\mathbf{A}_{a_{1}}^{\rm in}(x_{1})+\mathcal{A}_{i,a_{1}}(x_{1})\mathbf{1}\right]\] \[\qquad\qquad\ldots\left[\mathbf{A}_{a_{n}}^{\rm in}(x_{n})+\mathcal{A}_{i,a_{n}}(x_{n})\mathbf{1}\right]|\Psi_{0}\rangle \tag{2.8}\]
where
\[\mathcal{A}_{i,a}=G_{a}^{\rm ret}(j_{i}^{b})-C_{a}^{\rm out}. \tag{2.9}\]
Note that \(\mathcal{A}_{i,a}\) is a classical solution of the source-free Maxwell equations in the late-time region.
The correlation functions eq. (2.8) on any late-time Cauchy surface are precisely those of the coherent state
\[|\Psi_{i}\rangle=e^{-\frac{1}{2}\|K\mathcal{A}_{i}\|^{2}}\exp\left[\mathbf{a}^{\dagger}(K\mathcal{A}_{i})\right]|\Psi_{0}\rangle\,, \tag{2.10}\]
where the norm is that of the one-particle inner product of eq. (2.4). Thus, the coherent state \(|\Psi_{1}\rangle\) describes the "out" radiation state corresponding to charged particle state \(|\psi_{1}\rangle\) and the coherent state \(|\Psi_{2}\rangle\) describes the "out" radiation state corresponding to charged particle state \(|\psi_{2}\rangle\). The joint "out" state, \(|\Upsilon\rangle\), of the particle-radiation system is given by
\[|\Upsilon\rangle=\frac{1}{\sqrt{2}}\left(|\psi_{1}\rangle\otimes|\Psi_{1}\rangle+|\psi_{2}\rangle\otimes|\Psi_{2}\rangle\right). \tag{2.11}\]
Therefore, the decoherence of \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) due to emission of electromagnetic radiation is given by
\[\mathscr{D}=1-|\,\langle\Psi_{1}|\Psi_{2}\rangle\,|. \tag{2.12}\]
We wish to evaluate \(\mathscr{D}\).
By the general formula for the inner product of coherent states, we have
\[|\,\langle\Psi_{1}|\Psi_{2}\rangle\,|=\exp\left[-\frac{1}{2}||K(\mathcal{A}_{1}-\mathcal{A}_{2})||^{2}\right]. \tag{2.13}\]
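Equation (2.13) follows from the standard overlap formula for coherent states: with one-particle states \(\chi_{i}=K\mathcal{A}_{i}\), one has
\[\langle\Psi_{1}|\Psi_{2}\rangle=\exp\left[\langle\chi_{1}|\chi_{2}\rangle-\frac{1}{2}\|\chi_{1}\|^{2}-\frac{1}{2}\|\chi_{2}\|^{2}\right],\]
whose modulus is \(\exp[-\frac{1}{2}\|\chi_{1}-\chi_{2}\|^{2}]\).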
Now, in the late-time era, \(\mathcal{A}_{1,a}-\mathcal{A}_{2,a}\) is just the difference between the classical retarded solutions with sources \(j_{1}^{a}\) and \(j_{2}^{a}\),
\[\mathcal{A}_{1,a}-\mathcal{A}_{2,a}=G_{a}^{\rm ret}(j_{1}^{b})-G_{a}^{\rm ret}(j_{2}^{b})=G_{a}^{\rm ret}(j_{1}^{b}-j_{2}^{b}). \tag{2.14}\]
Consider the coherent state associated with \(G_{a}^{\rm ret}(j_{1}^{b}-j_{2}^{b})\) in the late-time era. We refer to photons in this state as _entangling photons_. By the general properties of coherent states, the expected number, \(\langle N\rangle\), of entangling photons is given by
\[\langle N\rangle\equiv||K\left[G^{\rm ret}(j_{1}-j_{2})\right]||^{2}. \tag{2.15}\]
Thus, we have
\[|\,\langle\Psi_{1}|\Psi_{2}\rangle\,|=\exp\left[-\frac{1}{2}\langle N\rangle\right] \tag{2.16}\]
so
\[\mathscr{D}=1-|\,\langle\Psi_{1}|\Psi_{2}\rangle\,|=1-\exp\left[-\frac{1}{2}\langle N\rangle\right] \tag{2.17}\]
and we see that the necessary and sufficient condition for significant decoherence (\(\mathscr{D}\sim 1\)) is \(\langle N\rangle\gtrsim 1\).
We summarize the results obtained above as follows. Under the assumptions we have made, in order to calculate the decoherence, \(\mathscr{D}\), of the particle due to radiation, we carry out the following steps:
1. We obtain the expected charge current, \(j_{1}^{a}\) and \(j_{2}^{a}\), for the particle in states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) of the superposition.
2. We calculate the classical retarded solution \(G_{a}^{\rm ret}(j_{1}^{b}-j_{2}^{b})\) for the difference of these charge-currents, which is a source-free solution at late times, since \(j_{1}^{a}=j_{2}^{a}\) there.
3. We calculate the one-particle state \(KG^{\rm ret}(j_{1}-j_{2})\) corresponding to \(G_{a}^{\rm ret}(j_{1}^{b}-j_{2}^{b})\) at late times. In the various cases, this corresponds to the following: (i) For a globally stationary spacetime initially in the stationary vacuum state, this one-particle state is the positive frequency part of the solution with respect to the time translations generating the stationary symmetry. (ii) For the case of a stationary black hole initially in the Unruh vacuum, the one-particle state is the positive frequency part of the solution with respect to affine time on the past horizon and with respect to Killing time at past null infinity. (iii) For Minkowski spacetime initially in the Minkowski vacuum, the one-particle state is the positive frequency part of the solution with respect to inertial time or, equivalently, the positive frequency part with respect to affine time on any Rindler horizon. (iv) For de Sitter spacetime initially in the de Sitter invariant vacuum, the one-particle state is the positive frequency part of the solution with respect to affine time on any cosmological horizon.
4. We compute the squared norm, \(\|K[G^{\rm ret}(j_{1}-j_{2})]\|^{2}\), of this one-particle state at late times. This quantity is equal to the expected number of entangling photons, \(\langle N\rangle\). The decoherence due to radiation is then given by \[\mathscr{D}=1-\exp\left[-\frac{1}{2}\|K\left[G^{\rm ret}(j_{1}-j_{2})\right] \|^{2}\right].\] (2.18)
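As a minimal numerical illustration of step 4 (an aside, not part of the derivation), the following sketch evaluates eq. (2.18) and exhibits the threshold \(\langle N\rangle\gtrsim 1\) for significant decoherence:

```python
import numpy as np

# Decoherence from the expected number of entangling photons/gravitons,
# D = 1 - exp(-<N>/2), eq. (2.18).
def decoherence(N_expected):
    return 1.0 - np.exp(-0.5 * N_expected)

# Significant decoherence (D ~ 1) sets in once <N> is of order unity:
for N in [0.01, 0.1, 1.0, 10.0]:
    print(f"<N> = {N:5.2f}  ->  D = {decoherence(N):.3f}")
```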
As previously stated, the above analysis extends straightforwardly to the linearized gravitational case, where the perturbed metric, \(h_{ab}\), is treated as a linear quantum field propagating in the background classical stationary spacetime. To compute the decoherence due to gravitational radiation in this case, we carry out the above steps, replacing \(A_{a}\) by \(h_{ab}\) and the charge-current \(j^{a}\) by the stress-energy tensor \(T_{ab}\). The retarded solution \(G^{\rm ret}_{a}(j^{b})\) for Maxwell's equations is replaced by the retarded solution \(G^{\rm ret}_{ab}(T_{cd})\) for the linearized Einstein equation. The map \(K:S\rightarrow\mathcal{H}_{\rm in}\) is again obtained as in item (3) above and the inner product on \(\mathcal{H}_{\rm in}\) is again given by a natural generalization of the Klein-Gordon inner product to linearized gravitational fields. The decoherence due to gravitational radiation is then given by the analog of eq. (2.18).
The above analysis applies for any motion of the components of Alice's superposition. We are primarily interested in the case where, during a time interval \(T_{1}\), Alice puts a particle of charge \(q\) (or mass \(m\)) into a spatial superposition, where the distance between the components of the particle wavefunction is \(d\). She then keeps this superposition stationary in her lab for a time \(T\). Finally, she recombines her particle over a time interval \(T_{2}\).
In Minkowski spacetime in the case where Alice's lab is inertial, \(G^{\rm ret}_{a}(j^{b}_{1}-j^{b}_{2})\) will be nonzero at null infinity only at the retarded times corresponding to the time intervals \(T_{1}\) and \(T_{2}\). A rough estimate of the number of entangling photons was obtained in [3] using the Larmor formula for radiation in these eras, which, in natural units, yields
\[\langle N\rangle\sim\frac{q^{2}d^{2}}{[\min(T_{1},T_{2})]^{2}}\quad(\text{ Minkowski, EM}). \tag{2.19}\]
The corresponding result in the linearized gravitational case is [3]
\[\langle N\rangle\sim\frac{m^{2}d^{4}}{[\min(T_{1},T_{2})]^{4}}\quad(\text{ Minkowski, GR}). \tag{2.20}\]
Therefore, if Alice recombines her particle sufficiently slowly that \(T_{1},T_{2}\gg qd\) in the electromagnetic case or \(T_{1},T_{2}\gg md^{2}\) in the gravitational case, then she can maintain the quantum coherence of her particle. In particular, Alice can keep the components of her particle separated for as long a time \(T\) as she likes without destruction of the coherence.
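To get a feel for these estimates, eq. (2.19) can be evaluated numerically in units with \(\hbar=c=\epsilon_{0}=1\). The sketch below assumes the convention \(q^{2}=4\pi\alpha\) for the electron charge (an order-of-unity choice, adequate only for an estimate) and shows that opening and closing times of a few nanoseconds already give \(\langle N\rangle\lesssim 1\) for a one-meter electron superposition:

```python
import numpy as np

# Order-of-magnitude evaluation of eq. (2.19), <N> ~ q^2 d^2 / T^2, in units
# hbar = c = eps0 = 1. The electron charge convention q^2 = 4*pi*alpha is an
# assumption; the result is meaningful only up to O(1) factors.
alpha = 1 / 137.036
q2 = 4 * np.pi * alpha
c = 2.998e8              # m/s, converts the separation d into light-time
d = 1.0                  # meters

for T_open in (1e-10, 1e-9, 1e-8):     # opening/closing time T1 in seconds
    N = q2 * (d / (c * T_open)) ** 2
    print(f"T1 = {T_open:.0e} s  ->  <N> ~ {N:.2g}")
```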
As shown in [14], the situation is quite different if a black hole is present. In the electromagnetic case, even if \(T_{1},T_{2}\gg qd\) so that a negligible number of entangling photons is emitted to infinity, there will be entangling radiation emitted into the black hole. For large \(T\), the number of entangling photons increases with \(T\) as11
Footnote 11: In the analysis of [14], we used the fact that the Unruh vacuum is well approximated by the Hartle-Hawking vacuum at low frequencies near the horizon of the black hole.
\[\langle N\rangle\sim\frac{M^{3}q^{2}d^{2}}{D^{6}}T\qquad(\text{black hole, EM}) \tag{2.21}\]
where \(M\) is the mass of the black hole, \(D\) is the proper distance of Alice's lab from the horizon of the black hole, and we assume that \(D\gtrsim M\). The corresponding result in the linearized gravitational case is
\[\langle N\rangle\sim\frac{M^{5}m^{2}d^{4}}{D^{10}}T\qquad(\text{black hole, GR}). \tag{2.22}\]
Thus, the coherence of Alice's particle will always be destroyed within a finite time.
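For orientation, eq. (2.21) can be evaluated by expressing all quantities in Planck units. The sketch below does this for an electron superposition of one meter held at one astronomical unit from a solar-mass black hole; the parameter values are illustrative choices, O(1) factors are dropped, and the charge convention is the same assumption as above:

```python
# Illustrative evaluation of eq. (2.21), <N> ~ M^3 q^2 d^2 T / D^6, in Planck
# units (G = c = hbar = eps0 = 1). All parameter choices are hypothetical and
# the result is meaningful only as an order of magnitude.
import numpy as np

l_P, t_P, m_P = 1.616e-35, 5.391e-44, 2.176e-8   # Planck length/time/mass (SI)
alpha = 1 / 137.036

M = 1.989e30 / m_P       # one solar mass, in Planck masses
D = 1.496e11 / l_P       # lab one astronomical unit from the horizon
d = 1.0 / l_P            # one-meter superposition separation
q2 = 4 * np.pi * alpha   # electron charge squared (assumed convention)

rate = M**3 * q2 * d**2 / D**6          # <N> accumulated per Planck time
T_dec = (2.0 / rate) * t_P              # seconds until <N> ~ 2
print(f"decoherence time ~ {T_dec:.1e} s ~ {T_dec / 3.156e7:.1e} yr")
# Of order 1e50 s for these parameters: utterly negligible at this distance.
```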
In the next two sections, we will apply the above analysis to the cases of Rindler spacetime and de Sitter spacetime. Although we will explicitly analyze only the Rindler and de Sitter cases, it will be clear from our analysis of the next two sections--as well as our analysis in [14]--that it can be applied to any Killing horizon, provided only that the initial "vacuum state" \(|\Psi_{0}\rangle\) of the electromagnetic and/or linearized gravitational field corresponds to one-particle states that are positive frequency with respect to affine time on the future Killing horizon.
## 3 Rindler horizons decohere quantum superpositions
We now consider the case of Minkowski spacetime with Alice's lab uniformly accelerating with acceleration \(a\). Specifically, we take Alice's lab to follow the orbit
\[t=\frac{1}{a}\sinh(a\tau),\qquad z=\frac{1}{a}\cosh(a\tau) \tag{3.1}\]
of the boost Killing field
\[b^{a}=a\bigg{[}z\bigg{(}\frac{\partial}{\partial t}\bigg{)}^{a}+t\bigg{(} \frac{\partial}{\partial z}\bigg{)}^{a}\bigg{]}. \tag{3.2}\]
Here we have normalized \(b^{a}\) such that \(b^{a}b_{a}=-1\) on the worldline of Alice's laboratory. Thus, \(b^{a}\) is the four-velocity of Alice's laboratory and \(\tau\) is the proper time in her lab. We introduce the null coordinates
\[U\equiv t-z,\qquad V\equiv t+z \tag{3.3}\]
and the corresponding vector fields
\[n^{a}\equiv(\partial/\partial V)^{a},\qquad\ell^{a}\equiv(\partial/\partial U)^{ a}, \tag{3.4}\]
which are globally defined, future-directed null vector fields that satisfy \(\ell^{a}n_{a}=-1\). In terms of these coordinates, the Minkowski spacetime metric is
\[\eta=-dUdV+dx^{2}+dy^{2} \tag{3.5}\]
and the boost vector field is given by
\[b^{a}=a\big{[}-U\ell^{a}+Vn^{a}\big{]}. \tag{3.6}\]
The boost Killing field is null on the two "Rindler horizons," i.e., the two null planes \(U=0\) and \(V=0\), which divide Minkowski spacetime into four wedges. The orbits of the boost Killing field are future-directed and timelike within the "right Rindler wedge" \(\mathcal{W}_{\rm R}\), which is the region \(U<0\) and \(V>0\). Thus, the "right Rindler wedge" \(\mathcal{W}_{\rm R}\)--where Alice performs her experiment--is a static, globally hyperbolic spacetime where the notion of "time translations" is defined by Lorentz boosts.
We refer to the null surface \(U=0\) as the future Rindler horizon and denote it as \(\mathscr{H}_{\rm R}^{+}\). On the region \(V>0\) of \(\mathscr{H}_{\rm R}^{+}\), it is useful to introduce the coordinate \(v\) by
\[V=V_{0}e^{av} \tag{3.7}\]
where \(V_{0}\) is an arbitrary constant. Then, for \(V>0\) on \(\mathscr{H}_{\rm R}^{+}\), we have
\[b^{a}\big{|}_{\mathscr{H}_{\rm R}^{+}}=aV\bigg{(}\frac{\partial}{\partial V} \bigg{)}^{a}\bigg{|}_{\mathscr{H}_{\rm R}^{+}}=\bigg{(}\frac{\partial}{ \partial v}\bigg{)}^{a}\bigg{|}_{\mathscr{H}_{\rm R}^{+}}\,. \tag{3.8}\]
Since \((\partial/\partial V)^{a}\) on the horizon is tangent to the affinely parameterized null geodesic generators of \(\mathscr{H}_{R}^{+}\), we refer to \(V\) as the "affine time" on \(\mathscr{H}_{\rm R}^{+}\), whereas we refer to \(v\) as the "boost Killing time" on \(\mathscr{H}_{\rm R}^{+}\).
### 3.1 Decoherence Due to Radiation of Soft Photons/Gravitons Through the Rindler Horizon
We are now in position to apply the results of sec. 2 to the Rindler case. We will first analyze the electromagnetic case and then give the corresponding results in the gravitational case.
We assume that the electromagnetic field is initially in the Minkowski vacuum state. We assume that Alice possesses a charged particle that is initially stationary (with respect to the boost Killing field) in her (uniformly accelerating) lab. She then creates a quantum spatial superposition which is held stationary (with respect to the boost Killing field) for a proper time \(T\) and is then recombined. We wish to know the degree of decoherence of Alice's particle due to emission of radiation. We may directly apply the analysis of sec. 2 to answer this question.
The future Rindler horizon \(\mathscr{H}_{R}^{+}\) (\(U=0\)) does not meet the technical requirements of being a Cauchy surface for Minkowski spacetime, since there are inextendible timelike curves that remain in the past of \(\mathscr{H}_{R}^{+}\) as well as inextendible timelike curves that lie in the future of \(\mathscr{H}_{R}^{+}\). However, as argued in [24], it is effectively a Cauchy surface for determining evolution of solutions to the wave equation. This is most easily seen in the conformally completed spacetime, where \(\mathscr{H}_{R}^{+}\) is the past light cone of a point \(p\in\mathscr{I}^{+}\) except for the single generator that lies on \(\mathscr{I}^{+}\) and it also is the future light cone of a point \(p^{\prime}\in\mathscr{I}^{-}\) except for the single generator that lies on \(\mathscr{I}^{-}\). Data on the full past light cone of \(p\) would determine a solution to the past of \(\mathscr{H}_{R}^{+}\) and data on the full future light cone of \(p^{\prime}\) would determine a solution to the future of \(\mathscr{H}_{R}^{+}\), thereby determining a solution everywhere in Minkowski spacetime. However, for solutions with appropriate decay, the data on the missing null geodesic generators of \(\mathscr{I}^{+}\) and \(\mathscr{I}^{-}\) can be determined by continuity from the data on \(\mathscr{H}_{R}^{+}\). Consequently, data on \(\mathscr{H}_{R}^{+}\) suffices to uniquely characterize solutions with appropriate decay, and the "out" states \(|\Psi_{1}\rangle\) and \(|\Psi_{2}\rangle\) of the radiation are completely determined by data on \(\mathscr{H}_{R}^{+}\). Note that this contrasts sharply with the black hole case, where one would need data on both the future event horizon and future null infinity to characterize the "out" state of radiation.
The decoherence of Alice's particle due to radiation is given by eq. (2.17). In order to evaluate this, we first consider a classical point charge of charge \(q\) in the "right Rindler wedge" \(\mathcal{W}_{\rm R}\) that is stationary with respect to the boost Killing field and lies at proper distance \(D\) from the bifurcation surface of the Rindler horizon. Such a charge will be uniformly accelerating with acceleration \(a\) given by
\[a=\frac{1}{D}\,. \tag{3.9}\]
The explicit solution for such a stationary charge in the Rindler wedge has long been known [25; 26; 27; 28; 29; 30]. The only nonvanishing component of the electromagnetic field in the region \(V>0\) of \(\mathscr{H}_{R}^{+}\) is
\[E_{U}\equiv F_{ab}\ell^{a}n^{b}=\frac{2a^{2}q}{\pi(1+a^{2}\rho^{2})^{2}} \tag{3.10}\]
where \(\rho^{2}\equiv x^{2}+y^{2}\). Electromagnetic radiation through the Rindler horizon is described by the pullback, \(E_{A}\), of the electric field \(E_{a}=F_{ab}n^{b}\) to \(\mathscr{H}_{\rm R}^{+}\), where the capital Latin indices from the early alphabet denote spatial components in the \(x\) and \(y\) directions. Since \(E_{A}=0\) on the horizon for a uniformly accelerated charge, one may say that a charge held stationary in Alice's lab does not produce any radiation as determined on \(\mathscr{H}_{\rm R}^{+}\)--even though a uniformly accelerated charge radiates (inertial) energy to future null infinity12.
Footnote 12: A uniformly accelerating charge has a nonvanishing inertial energy current flux \(T_{ab}t^{a}\) through both \(\mathscr{H}_{\rm R}^{+}\) and \(\mathscr{I}^{+}\), where \(t^{a}\) denotes a Minkowski time translation. However, the flux of “boost energy” \(T_{ab}b^{a}\) vanishes at both \(\mathscr{H}_{\rm R}^{+}\) and \(\mathscr{I}^{+}\).
Now consider the case where the point charge is initially uniformly accelerating with acceleration \(a\) at a proper distance \(D=1/a\) from the bifurcation surface of the Rindler horizon. The charge is then moved in the \(z\)-direction to a different orbit of the same boost Killing field, so that it has uniform acceleration \(a^{\prime}\) and lies at proper distance \(D^{\prime}=1/a^{\prime}\) from the Rindler horizon. After the charge has reached its new location, the electric field on \(\mathscr{H}_{\rm R}^{+}\) is again given by eq. (3.10), but its value, \(E_{U}^{\prime}\), will be different from its value at early times. Maxwell's equations on \(\mathscr{H}_{\rm R}^{+}\) imply that
\[\mathcal{D}^{A}E_{A}=\partial_{V}E_{U} \tag{3.11}\]
where \(\mathcal{D}_{A}\) is the derivative operator on the \(\mathbb{R}^{2}\) cross-sections of the horizon and capital Latin indices from the early alphabet are raised and lowered with the metric, \(\delta_{AB}\), on the cross sections. Eq. (3.11) implies that \(E_{A}\neq 0\) whenever \(\partial_{V}E_{U}\neq 0\), so there will be radiation through the horizon as the charge is being moved. Most importantly, it implies that
\[\mathcal{D}^{A}\left(\int\limits_{-\infty}^{\infty}dVE_{A}\right)=\Delta E_{U} \tag{3.12}\]
where \(\Delta E_{U}=E_{U}^{\prime}-E_{U}\) is the change in the radial electric field between the charge at positions \(D^{\prime}\) and \(D\). Now, in a gauge where \(A_{a}n^{a}=0\) on the horizon, the transverse (i.e., \(x\)-\(y\)) components of the electric field are related to the corresponding components of the vector potential by
\[E_{A}=-\partial_{V}A_{A}. \tag{3.13}\]
Since the transverse components of the Coulomb field of a static charge vanish, we may replace the vector potential \(A_{A}\) by the "Coulomb subtracted" vector potential \(\mathcal{A}_{A}\) defined by eq. (2.9), so we have
\[E_{A}=-\partial_{V}\mathcal{A}_{A}. \tag{3.14}\]
It then follows immediately from eq. (3.12) that the difference, \(\Delta\mathcal{A}_{A}\), between the final and initial values of \(\mathcal{A}_{A}\) is given by
\[\mathcal{D}^{A}(\Delta\mathcal{A}_{A})=-\Delta E_{U} \tag{3.15}\]
independently of the manner in which the charge is moved from \(D\) to \(D^{\prime}\). Equation (3.15) is an exact mathematical analog of the electromagnetic memory effect at null infinity [31].
For the explicit solution eq. (3.10), we have
\[\Delta E_{U}\approx\frac{qda^{3}(1-a^{2}\rho^{2})}{(1+a^{2}\rho^{2})^{3}}. \tag{3.16}\]
where \(d=D^{\prime}-D\) and we have assumed that
\[d\ll D=\frac{1}{a}\,. \tag{3.17}\]
From eq. (3.15), we find that \(\Delta\mathcal{A}_{A}\) points in the \(\hat{\rho}\)-direction and has magnitude
\[|\Delta\mathcal{A}_{A}|=\Delta\mathcal{A}_{\rho}\approx\frac{qda^{4}\rho^{2}} {(1+a^{2}\rho^{2})^{2}}. \tag{3.18}\]
The key point is that even though \(E_{A}=0\) at both late and early times, \(\mathcal{A}_{A}\) does not return to its original value at late times, and the change, \(\Delta\mathcal{A}_{A}\), in the vector potential between late and early times is determined only by the initial and final positions of the charge.
We now consider the quantized radiation through the horizon resulting from the displacement of the charge, assuming that, after the displacement, the charge is held at its new position, \(D^{\prime}\), forever. For the Fock space associated with the Minkowski vacuum state, the map \(K:S\to\mathcal{H}_{\rm in}\) that associates one-particle states to classical solutions is given by taking the positive frequency part of the classical solution with respect to inertial time, with the inner product on \(\mathcal{H}_{\rm in}\) given by the Klein-Gordon product. For the electromagnetic field on \(\mathscr{H}_{R}^{+}\) in a gauge where \(\mathcal{A}_{a}n^{a}=0\) on \(\mathscr{H}_{R}^{+}\), the "free data" on \(\mathscr{H}_{R}^{+}\) is the pull-back, \(\mathcal{A}_{A}\), of the vector potential. For two classical solutions with data \(\mathcal{A}_{1,A}\) and \(\mathcal{A}_{2,A}\) on \(\mathscr{H}_{R}^{+}\), the inner product of their corresponding one-particle states is given by [32; 15]
\[\langle K\mathcal{A}_{1}|\,K\mathcal{A}_{2}\rangle_{\mathscr{H}_{\rm R}^{+}}=2\int\limits_{\mathbb{R}^{2}}dxdy\int\limits_{0}^{\infty}\frac{\omega d\omega}{2\pi}\delta^{AB}\overline{\hat{\mathcal{A}}_{1,A}}\hat{\mathcal{A}}_{2,B} \tag{3.19}\]
where \(\hat{\mathcal{A}}_{A}(\omega,x^{B})\) is the Fourier transform of \(\mathcal{A}_{A}(V,x^{B})\) with respect to the affine parameter \(V\). By the same reasoning that led to eq. (2.15), the expected number of photons on \(\mathscr{H}_{\rm R}^{+}\) in the coherent state associated to any classical solution \(\mathcal{A}_{A}\) is simply
\[\langle N\rangle=\|K\mathcal{A}\|_{\mathscr{H}_{\rm R}^{+}}^{2} \tag{3.20}\]
where the norm is defined by the inner product eq. (3.19). However, since \(\Delta\mathcal{A}_{A}\neq 0\), the Fourier transform, \(\hat{\mathcal{A}}_{A}(\omega,x^{B})\), of \(\mathcal{A}_{A}\) diverges as \(1/\omega\) as \(\omega\to 0\). It follows that the integrand of the expression for the norm given by the right side of eq. (3.19) also diverges as \(1/\omega\) as \(\omega\to 0\), so the integral is logarithmically divergent. Thus, \(\|K\mathcal{A}\|_{\mathscr{H}_{\rm R}^{+}}^{2}=\infty\). Therefore, if Alice displaces a charged particle to a different orbit of the boost Killing field and the particle remains on this new uniformly accelerated trajectory forever, an infinite number of "soft horizon photons" will be radiated through the Rindler horizon regardless of how quickly or slowly this process is done. This is an exact mathematical analog of the infrared divergences that occur at null infinity in QED for processes with nonzero memory (see e.g., [33; 34; 35]).
Now suppose that Alice displaces the particle a \(z\)-distance \(d\ll D=1/a\) from \(D\) to \(D^{\prime}=D+d\) as above, but instead of leaving the particle at \(D^{\prime}\) forever, she leaves it there for proper time13\(T\) and then returns it to \(D\). In this case, the transverse components of the vector potential, \(\mathcal{A}_{A}\), return to their initial values at late times, so there is no "memory effect" at the horizon. Correspondingly, there are no infrared divergences in the expected number of photons that propagate through \(\mathscr{H}_{\text{R}}^{+}\). Nevertheless, if \(T\) is very large then the expected number of photons \(\langle N\rangle\) will be correspondingly large. To see this, we note that if, for convenience, we work in a gauge where \(\mathcal{A}_{A}=0\) initially, then during the era at which the particle is at \(D^{\prime}\), \(\mathcal{A}_{A}\) will be given by the right side of eq. (3.18). If we keep the manner in which the particle is moved from \(D\) to \(D^{\prime}\) as well as from \(D^{\prime}\) to \(D\) fixed but take \(T\) to be very large, the asymptotic behavior of the norm eq. (3.19) will be dominated by the low-frequency contribution from the era of time \(T\) that the particle is displaced. The logarithmic divergence at \(\omega=0\) that would occur if the particle remained at \(D^{\prime}\) forever is now effectively cut off at frequency \(\omega\sim 1/V\), where \(V\) denotes the affine time duration on the horizon \(\mathscr{H}_{\text{R}}^{+}\) over which the particle remains at \(D^{\prime}\). We obtain
Footnote 13: We have normalized the boost Killing field \(b^{a}\) so that Killing time equals proper time on the orbit at \(D\) with acceleration \(a\). Since we assume \(d=D^{\prime}-D\ll D\), Killing time and proper time are also (nearly) equal on the orbit at \(D^{\prime}\). Thus, \(T\) is also the elapsed Killing time that Alice keeps the particle at \(D^{\prime}\).
\[\langle N\rangle=||K\mathcal{A}||^{2}_{\mathscr{H}_{\text{R}}}\sim q^{2}d^{2}a ^{2}\ln\left(\frac{V}{\min[V_{1},V_{2}]}\right) \tag{3.21}\]
where \(V_{1},V_{2}\ll V\) are the durations of affine time over which the particle is displaced from \(D\) to \(D^{\prime}\) and from \(D^{\prime}\) back to \(D\), so that \(1/\text{min}[V_{1},V_{2}]\) provides an effective high-frequency cutoff. However, the affine time \(V\) on the horizon is related to boost Killing time on the horizon by
\[V=V_{0}\exp(av) \tag{3.22}\]
and the boost Killing time \(v\) corresponds to the proper time \(T\) in Alice's lab. Thus, we obtain
\[\langle N\rangle\sim q^{2}d^{2}a^{3}T\qquad\text{(Rindler, EM)}\,. \tag{3.23}\]
Therefore, no matter how slowly the particle is displaced, it is forced to radiate a number of "soft Rindler horizon photons" through the Rindler horizon that is proportional to the time \(T\) that the particle remains on the displaced trajectory.
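The logarithmic growth underlying eq. (3.21) can be seen in a simple toy computation. The sketch below (a one-dimensional stand-in for the full horizon integral, not the actual calculation) takes a smooth profile that turns on for an affine duration \(V\) and evaluates \(\int_{0}^{\infty}(\omega/2\pi)\,|\hat{\mathcal{A}}(\omega)|^{2}d\omega\); the result grows by a fixed increment per decade of \(V\), i.e., logarithmically:

```python
import numpy as np

# Toy illustration of the scaling in eq. (3.21): a profile that ramps up over
# a width V_ramp, stays constant for an affine duration V, and ramps back
# down. The frequency integral of omega*|A_hat(omega)|^2 then grows like
# log(V/V_ramp), the hallmark of soft radiation with a long-time cutoff.
def soft_norm(V, V_ramp=1.0, n=2**20):
    span = V + 10.0 * V_ramp
    dt = span / n
    t = np.arange(n) * dt - 5.0 * V_ramp
    prof = 0.5 * (np.tanh(t / V_ramp) - np.tanh((t - V) / V_ramp))
    A_hat = np.fft.rfft(prof) * dt                 # approximate Fourier transform
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
    domega = omega[1] - omega[0]
    return np.sum(omega * np.abs(A_hat) ** 2) * domega / (2.0 * np.pi)

for V in [1e2, 1e3, 1e4, 1e5]:
    print(f"V = {V:.0e}  ->  {soft_norm(V):.3f}")
# Successive values increase by ~ln(10)/pi per decade of V: logarithmic growth.
```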
We are now in a position to fully analyze Alice's experiment. Alice's lab is uniformly accelerating with acceleration \(a\) in Minkowski spacetime. She puts her particle of charge \(q\) into a superposition of states separated by \(z\)-distance \(d\ll 1/a\) and keeps these components stationary in her lab for a proper time \(T\). She then recombines the components and determines their coherence14. By the analysis of sec. 2, the decoherence is given by eq. (2.18). However, for large \(T\), the calculation of \(||K\left[G^{\text{ret}}(j_{1}-j_{2})\right]||^{2}\) corresponds precisely to the calculation we have given above of the number of photons radiated through the Rindler horizon when a charge is displaced for a time \(T\). Thus, we obtain
Footnote 14: The coherence can be determined as described in footnote 6.
\[||K\left[G^{\text{ret}}(j_{1}-j_{2})\right]||^{2}\sim q^{2}d^{2}a^{3}T. \tag{3.24}\]
In other words, for large \(T\), Alice's superposition will decohere due to radiation of "soft Rindler horizon photons," as
\[\mathscr{D}=1-\exp(-\Gamma_{\text{rad}}T) \tag{3.25}\]
where the "decoherence rate" \(\Gamma_{\text{rad}}\), is given by,
\[\Gamma_{\text{rad}}=q^{2}d^{2}a^{3}. \tag{3.26}\]
Thus, restoring the constants \(c\), \(\hbar\), and \(\epsilon_{0}\), Alice's particle will decohere within a time
\[T_{\text{D}} \sim\frac{\epsilon_{0}\hbar c^{6}}{a^{3}q^{2}d^{2}}\qquad\text{(Rindler, EM)} \tag{3.27}\] \[\sim 10^{33}\text{ years }\left(\frac{\text{g}}{a}\right)^{3}\cdot\left(\frac{\text{e}}{q}\right)^{2}\cdot\left(\frac{\text{m}}{d}\right)^{2}. \tag{3.28}\]
Thus, if Alice's lab uniformly accelerates at one \(g\) in flat spacetime and she separates an electron into two components one meter apart, she would not be able to maintain coherence of the electron for more than \(10^{33}\) years.
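The figure quoted in eq. (3.28) is straightforward to check numerically; the sketch below evaluates eq. (3.27) with standard SI constants:

```python
# Quick numerical check of eq. (3.27) with standard SI constants.
eps0, hbar, c = 8.854e-12, 1.055e-34, 2.998e8
g, e = 9.81, 1.602e-19          # one-g acceleration, elementary charge

a, q, d = g, e, 1.0             # electron components one meter apart
T_D = eps0 * hbar * c**6 / (a**3 * q**2 * d**2)
print(f"T_D ~ {T_D:.1e} s ~ {T_D / 3.156e7:.1e} yr")    # ~1e33 years
```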
A similar analysis holds in the gravitational case15 where Alice separates a massive body with mass \(m\) across a distance \(d\) and maintains this superposition for a time \(T\). In the gravitational case, the "electric part" of the perturbed Weyl tensor \(E_{ab}=C_{acbd}n^{c}n^{d}\) plays an analogous role to the electric field \(E_{a}\) in the electromagnetic version of the gedankenexperiment. For a uniformly accelerating point mass, the only non-vanishing component of the electric part of the Weyl tensor on \(\mathscr{H}_{\text{R}}^{+}\) is \(E_{UU}=C_{acbd}\ell^{a}n^{c}\ell^{b}n^{d}\).
Footnote 15: In the gravitational case, additional stress-energy will be needed to keep Alice’s particle in uniform acceleration. We will ignore the gravitational effects of this additional stress-energy.
Gravitational radiation on the horizon is described by the pullback, \(E_{AB}\), of \(E_{ab}\), which vanishes for the static point mass. However, the process of quasistatically moving the static point mass involves a change in \(E_{UU}\) on \(\mathscr{H}_{\text{R}}^{+}\). The (once-contracted) Bianchi identity on the horizon yields
\[\mathcal{D}^{A}E_{AB}=\partial_{V}E_{UB},\qquad\mathcal{D}^{A}E_{UA}=\partial_{V}E _{UU} \tag{3.29}\]
which implies that
\[\mathcal{D}^{A}\mathcal{D}^{B}E_{AB}=\partial_{V}^{2}E_{UU} \tag{3.30}\]
which is closely analogous to eq. (3.11). As in the electromagnetic case, if a uniformly accelerating point mass is quasistatically moved, there is necessarily gravitational radiation through \(\mathscr{H}_{\rm R}^{+}\).
To determine the number of "Rindler horizon gravitons" emitted we quantize the linearized gravitational field. For a metric perturbation \(h_{ab}\) in a gauge where \(h_{ab}n^{a}=0\) and \(\delta^{AB}h_{AB}=0\), the "free data" on \(\mathscr{H}_{\rm R}^{+}\) is \(h_{AB}\). A "particle" in the standard Fock space associated to the Poincare invariant vacuum is then a positive frequency solution with respect to affine parameter \(V\) and the inner product on the one-particle Hilbert space is given by a direct analog of eq. (3.19) with the vector potential \(\mathcal{A}_{A}\) replaced with the metric perturbation \(h_{AB}\), namely
\[\langle Kh_{1}|Kh_{2}\rangle_{\mathscr{H}_{\rm R}^{+}}=\frac{1}{8}\int\limits_{\mathbb{R}^{2}}dxdy\int\limits_{0}^{\infty}\frac{\omega d\omega}{2\pi}\delta^{AB}\delta^{CD}\overline{\hat{h}_{1,AC}}\hat{h}_{2,BD}. \tag{3.31}\]
Finally, \(E_{AB}\) is related to the metric perturbation \(h_{AB}\) by
\[E_{AB}=-\frac{1}{2}\partial_{V}^{2}h_{AB}\,. \tag{3.32}\]
Equations (3.30) and (3.32) directly imply that a permanent change, \(\Delta E_{UU}\neq 0\), in the \(U\)-\(U\) component of the electric part of the Weyl tensor on \(\mathscr{H}_{\rm R}^{+}\) implies a permanent change, \(\Delta h_{AB}\neq 0\), in the perturbed metric on \(\mathscr{H}_{\rm R}^{+}\) between early and late times. In the quantum theory, as in the electromagnetic case, this implies a logarithmic infrared divergence in the number of gravitons emitted through \(\mathscr{H}_{\rm R}^{+}\) in the process where a uniformly accelerating point mass is moved to a new orbit of the same boost Killing field and then remains at the new position forever.
The analysis of Alice's experiment proceeds in a similar manner to the electromagnetic case. Alice does not maintain the relative separation of her wavefunction forever but closes her superposition after a proper time \(T\). As before, the number of entangling gravitons emitted to the Rindler horizon is logarithmically growing in affine time and therefore linearly growing in the proper time duration \(T\) of Alice's experiment. We obtain
\[\left\langle N\right\rangle\sim m^{2}d^{4}a^{5}T\qquad\text{(Rindler, GR)}\,. \tag{3.33}\]
Thus, restoring constants, we find that the Rindler horizon decoheres the quantum superposition of a uniformly accelerating massive body in a time
\[T_{D}^{\rm GR}\sim \frac{\hbar c^{10}}{Gm^{2}d^{4}a^{5}}\qquad\text{(Rindler, GR)} \tag{3.34}\] \[\sim 2\text{ fs}\,\left(\frac{\text{M}_{\rm Moon}}{m}\right)^{2}\cdot \left(\frac{\text{R}_{\rm Moon}}{d}\right)^{4}\cdot\left(\frac{\text{g}}{a} \right)^{5}. \tag{3.35}\]
Therefore, if the Moon were accelerating at one \(g\) and occupied a quantum state with its center of mass superposed by a spatial separation of the order of its own radius, then it would decohere within about 2 femtoseconds. Of course, it would not be easy to put the Moon in such a coherent quantum superposition.
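This estimate, too, is easy to reproduce; the sketch below evaluates eq. (3.34) with standard SI constants and the tabulated lunar mass and radius (values assumed here, since the text quotes only the ratio form):

```python
# Numerical check of eq. (3.34). The Moon's mass and radius are the standard
# tabulated values, inserted here as assumptions.
G, hbar, c, g = 6.674e-11, 1.055e-34, 2.998e8, 9.81
m_moon, R_moon = 7.342e22, 1.737e6      # kg, m

T_D = hbar * c**10 / (G * m_moon**2 * R_moon**4 * g**5)
print(f"T_D ~ {T_D:.1e} s")             # ~2e-15 s, i.e., about 2 fs
```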
Note that the acceleration of a stationary observer outside of a black hole who is reasonably far16 (\(D\gtrsim M\)) from the event horizon is \(a\sim M/D^{2}\). If we substitute \(a=M/D^{2}\) in eqs. (3.27) and (3.34), we obtain eqs. (2.21) and (2.22), respectively. Therefore, it might be tempting to believe that what is important in all cases is the acceleration of Alice's lab. However, this is not the case. In particular, if we replace the black hole by an ordinary star (and if there are no dissipative effects in the star), then there will not be any analogous decoherence effect, even though the acceleration of Alice's lab is the same as in the case of a black hole. Furthermore, as we shall see in sec. 4, decoherence effects associated with the cosmological horizon occur in de Sitter spacetime even for nonaccelerating observers. It is the presence of a Killing horizon that is the essential ingredient for the fundamental rate of decoherence of quantum superpositions as described in this paper.
Footnote 16: It should be emphasized that the estimates made in [14] that yielded eqs.(2.21) and (2.22) assumed that Alice’s lab is reasonably far from the black hole. If Alice’s lab is extremely close to the black hole (i.e., at a distance \(D\ll M\) from the horizon), then the black hole analysis would reduce to the Rindler case analyzed here.
We now consider another potential cause of decoherence, namely Unruh radiation.
### 3.2 Decoherence Due to Scattering of Unruh Radiation
The Minkowski vacuum state restricted to a Rindler wedge is a thermal state at the Unruh temperature
\[\mathcal{T}=\frac{a}{2\pi} \tag{3.36}\]
relative to the notion of time translations defined by the Lorentz boost Killing field \(b^{a}\), eq. (3.2). Thus, the superposition state of Alice's particle will be buffeted by this thermal bath of Unruh radiation. Scattering of this radiation will cause some decoherence of Alice's particle. Indeed, since this decoherence should occur at a steady rate while the superposition is kept stationary (and thus the decoherence will be proportional to \(T\)), one might even suspect that scattering of Unruh radiation could be the same effect as found in the previous section but expressed in a different language. The purpose of this subsection is to show that this is not the case, i.e., decoherence due to scattering of Unruh radiation and decoherence due to radiation of "soft" photons/gravitons through the horizon are distinct effects. Furthermore, we shall show that, for reasonable parameter choices, the decoherence rate due to the scattering of Unruh radiation is smaller than the decoherence rate due to emitted radiation as obtained in the previous section. We will consider only the electromagnetic case in this subsection.
The decoherence rate of a spatial superposition due to collisions with particles in an environment has been analyzed in [36; 37; 38; 39], and we will adapt this analysis to obtain a rough estimate of the decoherence caused by the scattering of Unruh radiation. As in eq. (2.1), Alice has a particle of charge \(q\) in a state \(|\psi\rangle=(|\psi_{1}\rangle+|\psi_{2}\rangle)/\sqrt{2}\), where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are spatially separated by a distance \(d\). Since we require \(d\ll 1/a\) (see eq. (3.17)) and since the typical wavelength of Unruh photons at temperature eq. (3.36) is \(\lambda\sim 1/a\), we are in the scattering regime where \(\lambda\gg d\). In an elastic scattering event between Alice's particle and a photon in the Unruh radiation, the final outgoing state of the photon will depend upon which branch of the superposition the photon scattered off of. Let \(|\chi_{1}\rangle\) denote the outgoing state of the Unruh photon for scattering off of \(|\psi_{1}\rangle\) and let \(|\chi_{2}\rangle\) denote the outgoing state for scattering off of \(|\psi_{2}\rangle\). Decoherence will occur to the extent that these outgoing states of the scattered Unruh photon are distinguishable, i.e., \(\mathscr{D}=1-|\left\langle\chi_{1}|\chi_{2}\right\rangle|\).
In order to obtain a rough estimate of the decoherence resulting from a single scattering event, we consider the corresponding Minkowski process of the scattering of a photon of momentum \(p\) off of an inertial superposition separated by \(d\), with \(d\ll 1/p\). Assuming that the charged particle states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are identical except for their location, the scattered photon states \(|\chi_{1}\rangle\) and \(|\chi_{2}\rangle\) should differ only by the action of the translation operator \(e^{-i\vec{\mathcal{P}}\cdot\vec{d}}\), i.e.,
\[|\chi_{2}\rangle\approx e^{-i\vec{\mathcal{P}}\cdot\vec{d}}\,|\chi_{1}\rangle \tag{3.37}\]
where \(\vec{\mathcal{P}}\) denotes the photon momentum operator. Expanding the exponential, we obtain the following rough estimate of the decoherence resulting from a single scattering event involving a photon of momentum \(p\)
\[1-|\left\langle\chi_{1}|\chi_{2}\right\rangle|\sim p^{2}d^{2} \tag{3.38}\]
where we have ignored any dependence on the angle between the incoming momentum \(\vec{p}\) and the separation \(\vec{d}\). We will take eq. (3.38) as our estimate of the decoherence of Alice's particle resulting from the scattering of a single Unruh photon of "Rindler momentum" \(p\) (i.e., of energy \(\epsilon=p\) with respect to the boost Killing field \(b^{a}\)).
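More explicitly, expanding \(e^{-i\vec{\mathcal{P}}\cdot\vec{d}}\) to second order gives \(\langle\chi_{1}|\chi_{2}\rangle\approx 1-i\langle\vec{\mathcal{P}}\cdot\vec{d}\rangle-\frac{1}{2}\langle(\vec{\mathcal{P}}\cdot\vec{d})^{2}\rangle\), so that
\[1-|\langle\chi_{1}|\chi_{2}\rangle|\approx\frac{1}{2}\left[\langle(\vec{\mathcal{P}}\cdot\vec{d})^{2}\rangle-\langle\vec{\mathcal{P}}\cdot\vec{d}\rangle^{2}\right]\sim p^{2}d^{2},\]
i.e., the decoherence in a single scattering event is controlled by the variance of the scattered photon momentum along \(\vec{d}\).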
The total decoherence rate due to scattering of Unruh radiation is then given by
\[\Gamma_{\rm scatt}\sim d^{2}\int\limits_{0}^{\infty}dp\ p^{2}\varrho(p) \sigma(p) \tag{3.39}\]
where \(\varrho(p)\) is the number density of photons at momentum \(p\) (so \(\varrho(p)\) is also the incoming flux of photons) and \(\sigma(p)\) is the scattering cross-section. For a thermal distribution of photons17 we have
Footnote 17: The factor of \(p^{2}\) in the numerator of eq. (3.40) arises from the density of states in Minkowski spacetime. We ignore here any differences between the Minkowski and Rindler densities of states.
\[\varrho(p)\sim\frac{p^{2}}{e^{p/\mathcal{T}}-1}. \tag{3.40}\]
We take \(\sigma\) to be given by the Thomson cross-section
\[\sigma=\frac{8\pi}{3}\frac{q^{4}}{(4\pi m)^{2}}, \tag{3.41}\]
where \(m\) is the mass of Alice's particle. Putting this all together, our estimate of the decoherence rate due to scattering of Unruh photons is
\[\Gamma_{\rm scatt}\sim\frac{q^{4}d^{2}a^{5}}{m^{2}}\qquad(\text{Rindler, EM})\,. \tag{3.42}\]
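The thermal integral behind eq. (3.42) can be made explicit. The following sketch (an aside, using the stated thermal density and Thomson cross-section) verifies that \(\int_{0}^{\infty}dx\,x^{4}/(e^{x}-1)=24\zeta(5)\approx 24.9\), so that the momentum integral contributes the factor \(\mathcal{T}^{5}\propto a^{5}\) appearing in eq. (3.42):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# With rho(p) ~ p^2/(e^{p/T} - 1) and the momentum-independent Thomson
# cross-section, eq. (3.39) reduces to
#   Gamma_scatt ~ d^2 * sigma * T^5 * Int_0^inf x^4/(e^x - 1) dx,
# and with T = a/(2*pi) this is the origin of the a^5 scaling.
val, err = quad(lambda x: x**4 / np.expm1(x), 0.0, np.inf)
print(val, 24.0 * zeta(5.0))    # both ~ 24.886
```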
Comparing eq. (3.42) to the rate of decoherence, \(\Gamma_{\rm rad}\), due to the emission of soft photons given by eq. (3.26), one can immediately see that the effects are distinct. In particular, \(\Gamma_{\rm rad}\) has no dependence on the mass, \(m\), of Alice's particle, whereas \(\Gamma_{\rm scatt}\) does depend on \(m\) on account of the mass dependence of the scattering cross-section. The ratio of these decoherence rates is given by
\[\frac{\Gamma_{\rm scatt}}{\Gamma_{\rm rad}}\sim\frac{q^{2}a^{2}}{m^{2}}= \left(\frac{q/m}{D}\right)^{2} \tag{3.43}\]
Now, \(q/m\) is the "charge radius" of Alice's particle and, as argued in [3], it represents a fundamental lower bound to the spread of a charged particle due to vacuum fluctuations of the electromagnetic field. Therefore, in order that \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) not overlap, we must have \(d>q/m\). Since \(d\ll D\), we conclude that
\[\frac{\Gamma_{\rm scatt}}{\Gamma_{\rm rad}}\ll 1 \tag{3.44}\]
i.e., the contribution to decoherence from the scattering of Unruh radiation is negligible compared with the decoherence due to emission of soft photons through the Rindler horizon.
A similar analysis holds for a charged particle superposition outside of a black hole. It is worth noting that the decoherence effects due to scattering of Hawking radiation will decrease with distance, \(D\), from the black hole only as \(1/D^{2}\) for large \(D\), giving,
\[\Gamma_{\rm scatt}\sim\frac{q^{4}d^{2}}{m^{2}M^{3}}\frac{1}{D^{2}}\quad\text{( black hole, EM)}. \tag{3.45}\]
On the other hand, by eq. (2.21), the decoherence effect of radiation of soft photons through the horizon decreases with \(D\) as \(1/D^{6}\). Thus, at sufficiently large \(D\), the decoherence effects due to scattering of Hawking radiation will dominate. However, in this regime, both effects are extremely small.
### 3.3 Decoherence From the Inertial Perspective
In our analysis of the decoherence of a spatial superposition in the presence of a black hole [14] as well as in our analysis of the decoherence of a spatial superposition in Rindler spacetime given above in sec. 3.1, it may appear that we have introduced a radical new mechanism for decoherence, namely radiation of soft photons and gravitons through a horizon. The main purpose of this subsection is to show that, in fact, the decoherence we derived in the Rindler case can also be obtained by entirely conventional means. In the Rindler case, we are simply considering a uniformly accelerating superposition in Minkowski spacetime. The radiation of entangling photons to infinity from such a superposition can be calculated in the inertial viewpoint by standard methods, without introducing concepts such as a Rindler horizon. It is instructive to calculate the decoherence from the inertial viewpoint both to validate the results of sec. 3.1 and to gain insight into how the emitted "soft photons" would be interpreted by an inertial observer. As we shall see, the entangling photons as seen by an inertial observer at large distances near \(\theta=0\) will be "hard" even though, from her point of view, Alice has performed the experiment adiabatically. We will restrict our analysis in this subsection to the electromagnetic case.
The Lienard-Wiechert solution for the potential of a point charge in Minkowski spacetime following an arbitrary worldline \(X^{\mu}(\tau)\) is, in Lorenz gauge,
\[A^{\mu}(x)=\frac{1}{4\pi}\frac{1}{\alpha}\frac{q}{|\vec{x}-\vec{X}(t_{\rm ret}) |}\frac{dX^{\mu}}{dt}(t_{\rm ret}) \tag{3.46}\]
where
\[\alpha\equiv 1-\hat{n}\cdot\frac{d\vec{X}}{dt}(t_{\rm ret})\quad\text{and} \ \hat{n}=\frac{\vec{x}-\vec{X}(t_{\rm ret})}{|\vec{x}-\vec{X}(t_{\rm ret})|}. \tag{3.47}\]
For a uniformly accelerated trajectory with acceleration \(a\), we have
\[X^{\mu}(\tau)=\bigg{(}\frac{1}{a}\sinh(a\tau),0,0,\frac{1}{a}\cosh(a\tau) \bigg{)}. \tag{3.48}\]
In Bondi coordinates \((u,r,\theta,\phi)\) with
\[u\equiv t-r \tag{3.49}\]
the future light cone of an event at proper time \(\tau\) on the worldline eq. (3.48) reaches null infinity at
\[au=\sinh(a\tau)-\cos\theta\cosh(a\tau). \tag{3.50}\]
Electromagnetic radiation is described by the pullback of the electromagnetic field, eq. (3.46), to null infinity. Taking the limit as \(r\to\infty\) at fixed \(u\), we obtain18
Footnote 18: The vector potential is not smooth at \(\mathscr{I}^{+}\) in Lorenz gauge but one can do an asymptotic gauge transformation such that \(A_{a}\) is smooth at \(\mathscr{I}^{+}\). Such a gauge transformation does not affect the angular components \(A_{A}\) at \(\mathscr{I}^{+}\)[35], so we can calculate \(A_{A}\) using our Lorenz gauge expression.
\[A_{A}(u,\theta,\phi)=\frac{-q}{4\pi}\frac{\sinh(a\tau)\sin\theta}{\cosh(a\tau )-\cos\theta\sinh(a\tau)}(d\theta)_{A} \tag{3.51}\]
where, in this subsection, capital indices from the early alphabet denote angular components on the 2-sphere cross-sections of \(\mathscr{I}^{+}\). We will be concerned with the difference, at fixed \((u,\theta,\phi)\), between the electromagnetic radiation of a particle following the trajectory eq. (3.48) and a particle following a similar trajectory that is displaced in the \(z\)-direction by a proper distance \(d\ll 1/a\) and thus has
\[\delta a=a^{2}d. \tag{3.52}\]
We denote this difference by
\[A_{A}^{\rm d}(u,\theta,\phi)\equiv A_{A}(a+\delta a)-A_{A}(a)\approx\delta a \left(\frac{\partial A_{A}}{\partial a}\right)_{u,\theta} \tag{3.53}\]
From eq. (3.51), we obtain
\[A_{A}^{\rm d}=-\frac{a^{2}qd}{4\pi}\frac{u\sin\theta}{(\cosh(a\tau)-\cos \theta\sinh(a\tau))^{3}}(d\theta)_{A} \tag{3.54}\]
where eq. (3.50) was used to compute \((\partial\tau/\partial a)_{(u,\theta)}\).
In her experiment, Alice starts with her particle in a uniformly accelerating state. Over a proper time \(T_{1}\), she separates it into two uniformly accelerating components separated by a distance \(d\) as above. She keeps these components separated for a proper time \(T\), and she then recombines them over a proper time \(T_{2}\). The difference between the radiation fields of these components is given by
\[\mathcal{A}_{A}\equiv\mathcal{A}_{1,A}-\mathcal{A}_{2,A}=F(\tau)A_{A}^{\rm d} \tag{3.55}\]
where the smooth function \(F\) is such that \(F(\tau)=0\) for \(\tau<-T_{1}\) and \(\tau>T+T_{2}\), whereas \(F(\tau)=1\) for \(0<\tau<T\).
The entangling photon content is then given by
\[\langle N\rangle=||K{\cal A}||^{2}=2\int\limits_{\mathbb{S}^{2}}d\Omega\int\limits_ {0}^{\infty}\frac{\omega d\omega}{2\pi}\ \overline{\hat{\cal A}_{A}}\hat{\cal A}^{A} \tag{3.56}\]
where \(\hat{\cal A}_{A}(\omega,\theta,\phi)\) denotes the Fourier transform of \({\cal A}_{A}(u,\theta,\phi)\) with respect to \(u\), i.e.,
\[\hat{\cal A}_{A}(\omega,\theta,\phi)=\int\limits_{-\infty}^{\infty}du\ e^{i \omega u}{\cal A}_{A}(u,\theta,\phi). \tag{3.57}\]
We are interested in estimating \(\langle N\rangle\) for large \(T\).
In order to evaluate the Fourier transform integral, it is useful to note that, at fixed \(a\), we have
\[\frac{du}{d\tau}=\cosh(a\tau)-\cos\theta\sinh(a\tau) \tag{3.58}\]
and
\[\frac{d^{2}u}{d\tau^{2}}=a^{2}u. \tag{3.59}\]
It follows that
\[\frac{d}{du}\left(\frac{1}{du/d\tau}\right) =\frac{1}{du/d\tau}\frac{d}{d\tau}\left(\frac{1}{du/d\tau}\right)\] \[=\frac{-a^{2}u}{\left(\cosh(a\tau)-\cos\theta\sinh(a\tau)\right)^ {3}} \tag{3.60}\]
Thus, we have
\[A_{A}^{\rm d}=\frac{qd\sin\theta}{4\pi}(d\theta)_{A}\frac{d}{du}\left(\frac{1} {du/d\tau}\right) \tag{3.61}\]
and
\[\hat{\cal A}_{A}=\frac{qd\sin\theta}{4\pi}(d\theta)_{A}\int\limits_{-\infty}^ {\infty}du\ e^{i\omega u}F(\tau)\frac{d}{du}\left(\frac{1}{du/d\tau}\right). \tag{3.62}\]
Integrating by parts, we obtain
\[\hat{\cal A}_{A}(\omega,x^{A})= -\frac{qd\sin\theta}{4\pi}(d\theta)_{A}\bigg{[}i\omega\int\limits _{-\infty}^{\infty}du\ e^{i\omega u}\frac{F(\tau)}{du/d\tau}\] \[+\int\limits_{-\infty}^{\infty}du\ e^{i\omega u}\frac{F^{\prime} (\tau)}{(du/d\tau)^{2}}\bigg{]}. \tag{3.63}\]
The second term in this equation contributes only during the time intervals \((-T_{1},0)\) and \((T,T+T_{2})\) when Alice opens and closes the superposition. For large \(T\), its contribution can be shown to be negligible compared with the first term. Therefore, we have
\[\hat{\cal A}_{A}(\omega,x^{A})\approx-(d\theta)_{A}\frac{i\omega qd\sin\theta }{4\pi}I \tag{3.64}\]
where
\[I\equiv\int\limits_{-\infty}^{\infty}du\ e^{i\omega u}\frac{F(\tau)}{du/d \tau}. \tag{3.65}\]
To evaluate \(I\), we approximate \(F\) by a step function in the \(\tau\)-interval \([0,T]\). The corresponding interval, \([u_{0},u_{T}]\), in \(u\) is
\[u_{0} =-\frac{1}{a}\cos\theta\] \[u_{T} =\frac{1}{2a}\left[e^{aT}(1-\cos\theta)-e^{-aT}(1+\cos\theta) \right]. \tag{3.66}\]
Noting that
\[\frac{du}{d\tau}=\sqrt{a^{2}u^{2}+\sin^{2}\theta} \tag{3.67}\]
we obtain
\[I\approx\int\limits_{u_{0}}^{u_{T}}du\ \frac{e^{i\omega u}}{\sqrt{a^{2}u^{2}+ \sin^{2}\theta}}. \tag{3.68}\]
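The kinematic identities entering this integral are easy to verify symbolically; the following minimal sympy sketch (not part of the original derivation) checks eqs. (3.58), (3.59) and (3.67) starting from eq. (3.50):

```python
import sympy as sp

a, tau, theta = sp.symbols('a tau theta', positive=True)

# Bondi retarded time on the uniformly accelerated worldline, eq. (3.50)
u = (sp.sinh(a*tau) - sp.cos(theta)*sp.cosh(a*tau)) / a
du = sp.diff(u, tau)

# eq. (3.58): du/dtau = cosh(a tau) - cos(theta) sinh(a tau)
assert sp.simplify(du - (sp.cosh(a*tau) - sp.cos(theta)*sp.sinh(a*tau))) == 0
# eq. (3.59): d^2 u / dtau^2 = a^2 u
assert sp.simplify(sp.diff(u, tau, 2) - a**2*u) == 0
# eq. (3.67): (du/dtau)^2 = a^2 u^2 + sin^2(theta)
assert sp.simplify(du**2 - (a**2*u**2 + sp.sin(theta)**2)) == 0
```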
It can be seen that for large \(T\), the dominant contribution to \(I\) will come from small angles, \(\theta\ll 1\). For \(aT\gg 1\), the upper limit of the integral may then be approximated as
\[u_{T} \approx\frac{1}{4a}e^{aT}\theta^{2}-\frac{1}{a}e^{-aT}\quad \text{for }\theta\ll 1\] \[\sim\begin{cases}0&\text{for }\theta^{2}/4<e^{-aT}\\ \frac{1}{4a}\theta^{2}e^{aT}&\text{for }\theta^{2}/4\geq e^{-aT}\end{cases}. \tag{3.69}\]
For \(aT\gg 1\), the contribution to \(I\) from \(\theta^{2}/4<e^{-aT}\) can be shown to make a negligible contribution to \(\langle N\rangle\), eq. (3.56). Therefore, we may approximate \(I\) as
\[I\sim\Theta(\theta^{2}-4e^{-aT})\int\limits_{-1/a}^{\exp(aT)\theta^{2}/(4a)} du\ \frac{e^{i\omega u}}{\sqrt{a^{2}u^{2}+\sin^{2}\theta}} \tag{3.70}\]
where
\[\Theta(x)\equiv\begin{cases}0&\text{for }x<0\\ 1&\text{for }x\geq 0.\end{cases} \tag{3.71}\]
For \(0<\omega<4ae^{-aT}/\theta^{2}\), we may bound \(I\) by replacing \(e^{i\omega u}\) by \(1\). The integral can then be evaluated explicitly, and it can be shown that for \(aT\gg 1\), the contribution to \(\langle N\rangle\) from this frequency range is negligible. For \(\omega>4ae^{-aT}/\theta^{2}\), the integrand is oscillatory for \(u>\exp(aT)\theta^{2}/(4a)\), and, for \(aT\gg 1\), we will make negligible error in our estimate of \(\langle N\rangle\) if we replace the upper limit of eq. (3.70) by \(\infty\). We will also make a negligible error by replacing the lower limit by \(0\). Thus, for \(aT\gg 1\)
we may approximate \(I\) as
\[I\sim\Theta(\theta^{2}-4e^{-aT})\Theta(\omega-4ae^{-aT}/\theta^{2})\int\limits_{0 }^{\infty}du\;\frac{e^{i\omega u}}{\sqrt{a^{2}u^{2}+\sin^{2}\theta}}. \tag{3.72}\]
Evaluating the integral we obtain
\[I\sim\frac{1}{a}\Theta(\theta^{2}-4e^{-aT})\Theta(\omega-4ae^{-aT }/\theta^{2})\bigg{(}\frac{1}{2}i\pi I_{0}(\sin\theta\omega/a)\] \[\qquad\qquad+K_{0}(\sin\theta\omega/a)-\frac{1}{2}i\pi\mathbf{L}_{0}( \sin\theta\omega/a)\bigg{)} \tag{3.73}\]
where \(I_{0},K_{0}\) are Bessel functions and \(\mathbf{L}_{0}\) is a Struve function. This expression is highly suppressed for \(\omega>a/\theta\), so we can expand in \(\theta\omega/a\) and truncate the function above \(\omega=a/\theta\) to obtain,
\[I\sim-\frac{1}{a}\Theta(1-\theta\omega/a)\Theta(\theta^{2}-4e^{-aT})\Theta( \omega-4ae^{-aT}/\theta^{2})\ln\left(\theta\omega/a\right). \tag{3.74}\]
Note that the restrictions \(\omega<a/\theta\) and \(\theta>2e^{-aT/2}\) imply a frequency cutoff at \(\omega\sim ae^{aT/2}/2\). By eqs. (3.74) and (3.64), the frequency spectrum of \(\hat{\mathcal{A}}_{A}\) goes as \(\omega\ln(\omega/a)\) up to this cutoff, i.e., the spectrum is "hard" and becomes increasingly so for large \(T\). This contrasts with the increasingly "soft" spectrum on the Rindler horizon, which goes as \(1/\omega\) down to a low frequency cutoff \(\sim 1/V\propto e^{-aT}\). Thus, the "soft horizon photons" from the Rindler perspective are "hard" photons from the inertial perspective.
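As a consistency check, the closed form eq. (3.73) can be compared with a direct numerical evaluation of the integral in eq. (3.72); the sketch below sets \(a=1\) and uses illustrative values of \(\sin\theta\) and \(\omega\) (both assumptions):

```python
import numpy as np
from scipy.special import i0, k0, modstruve
from scipy.integrate import quad

def closed_form(s, w):
    """Right-hand side of eq. (3.73) with a = 1 and s = sin(theta)."""
    return 0.5j*np.pi*i0(s*w) + k0(s*w) - 0.5j*np.pi*modstruve(0, s*w)

def numeric(s, w):
    """Direct evaluation of I = int_0^inf du exp(i w u)/sqrt(u^2 + s^2)."""
    re = quad(lambda u: 1.0/np.hypot(u, s), 0, np.inf, weight='cos', wvar=w)[0]
    im = quad(lambda u: 1.0/np.hypot(u, s), 0, np.inf, weight='sin', wvar=w)[0]
    return re + 1j*im

s, w = 0.05, 0.3            # illustrative small angle and frequency (assumed)
print(closed_form(s, w))     # the two values agree to quadrature accuracy
print(numeric(s, w))
```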
From eq. (3.56) for \(\langle N\rangle\) together with our expression eq. (3.64) for \(\hat{\mathcal{A}}_{A}\) and the expression eq. (3.74) that we have just derived for \(I\), we obtain
\[\langle N\rangle\sim\left(\frac{qd}{a}\right)^{2}\int d\omega d\theta\;\theta ^{3}\omega^{3}\left(\ln\frac{\omega\theta}{a}\right)^{2} \tag{3.75}\]
where the region of \(\omega\)-\(\theta\) integration is determined by the \(\Theta\)-functions appearing in eq. (3.74) as well as the geometrical restriction \(\theta\lesssim 1\). We can break up this region into the portion with \(\omega\leq a\) and the portion with \(\omega>a\). Since the region with \(\omega\leq a\) and \(\theta\lesssim 1\) is bounded and the integrand of eq. (3.75) is bounded in this region, the contribution to \(\langle N\rangle\) from \(\omega\lesssim a\) is bounded by a constant that is independent of \(T\). We may therefore discard this contribution. In the region \(\omega>a\), the third \(\Theta\)-function in eq. (3.74) is redundant, and the integration region is
\[a\leq a\omega \leq ae^{aT/2}/2 \tag{3.76}\] \[2e^{-aT/2}\leq \theta \leq\frac{a}{\omega}. \tag{3.77}\]
For \(aT\gg 1\), we will make negligible error by replacing the lower limit of \(\theta\) by \(0\). We thereby obtain
\[\langle N\rangle\sim\left(\frac{qd}{a}\right)^{2}\int\limits_{a}^{a\exp(aT/2) /2}d\omega\int\limits_{0}^{a/\omega}d\theta\;\theta^{3}\omega^{3}\left(\ln \frac{\omega\theta}{a}\right)^{2}. \tag{3.78}\]
Making the change of variables from \(\theta\) to
\[x=\frac{\omega}{a}\theta \tag{3.79}\]
we find that the \(\theta\)-integral becomes
\[\int\limits_{0}^{a/\omega}d\theta\;\theta^{3}\omega^{3}\left(\ln\frac{\omega \theta}{a}\right)^{2}=\frac{a}{\omega}a^{3}\int\limits_{0}^{1}dx\;x^{3}(\ln x )^{2}\sim\frac{a^{4}}{\omega}. \tag{3.80}\]
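The remaining dimensionless integral has a simple closed form, obtained by differentiating \(\int_{0}^{1}x^{s}\,dx=1/(s+1)\) twice with respect to \(s\):
\[\int\limits_{0}^{1}dx\;x^{3}(\ln x)^{2}=\frac{d^{2}}{ds^{2}}\left.\frac{1}{s+1}\right|_{s=3}=\left.\frac{2}{(s+1)^{3}}\right|_{s=3}=\frac{1}{32},\]
which confirms the \(a^{4}/\omega\) scaling quoted above.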
Thus, we obtain
\[\langle N\rangle \sim\left(\frac{qd}{a}\right)^{2}a^{4}\int\limits_{a}^{a\exp(aT/2 )/2}\frac{d\omega}{\omega}\] \[\sim a^{2}q^{2}d^{2}\ln[\exp(aT/2)]\] \[\sim a^{3}q^{2}d^{2}T. \tag{3.81}\]
This estimate agrees with eq. (3.23).
Thus, we have succeeded--with considerable effort!--in our goal of deriving the decoherence of Alice's superposition by entirely conventional means. It is notable how much simpler the calculation of sec. 3.1 was compared to the calculation that we have just completed.
## 4 Cosmological horizons decohere quantum superpositions
In this section, we apply our analysis to de Sitter spacetime. The de Sitter metric in a static patch is given by
\[ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}q_{AB}dx^{A}dx^{B} \tag{4.1}\]
where, in this section, \(x^{A}\) are angular coordinates on the 2-sphere, \(q_{AB}\) is the unit round metric on the 2-sphere, and
\[f(r)=1-r^{2}/R_{H}^{2} \tag{4.2}\]
where \(R_{H}\) (the "Hubble radius") is a constant. The coordinate singularity at \(r=R_{H}\) corresponds to the "cosmological horizon," which is a Killing horizon of the static Killing field \((\partial/\partial t)^{a}\). The relation between "affine time," \(V\), and "Killing time," \(v\), on the future cosmological horizon is
\[V=e^{v/R_{H}}. \tag{4.3}\]
The general analysis of sec. 2 applies to the decoherence of a static superposition in de Sitter spacetime. The estimates of the decoherence due to emission of soft photons and gravitons through the cosmological horizon when Alice keeps the superposition present for a time \(T\) can be made in exact parallel with the analysis of sec. 3 in the Rindler case and [14] in the black hole case. The only noteworthy new ingredient in de Sitter spacetime is that
the worldline \(r=0\) is an orbit of the static Killing field that is inertial, i.e., non-accelerating. We now estimate the decoherence of a spatial superposition created in Alice's lab at \(r=0\) and thereby show that decoherence will occur even though Alice's lab is not accelerating.
By Gauss' law, a point charge placed at \(r=0\) will give rise to a radial electric field \(E_{U}\) on the future cosmological horizon given by
\[E_{U}\sim\frac{q}{R_{H}^{2}} \tag{4.4}\]
where \(E_{U}=F_{ab}\ell^{a}n^{b}\) on the horizon with \(n^{a}=(\partial/\partial V)^{a}\) tangent to the affinely parametrized null generators of the horizon and \(\ell^{a}=(\partial/\partial U)^{a}\) a radial null vector with \(n^{a}\ell_{a}=-1\). The change in the electric field on the horizon resulting from a displacement of the charge to \(r=d\ll R_{H}\) is
\[\Delta E_{U}\sim\frac{qd}{R_{H}^{3}}. \tag{4.5}\]
By paralleling the steps that led to eq. (3.18) above, we find that the change in the tangential components of the vector potential at the horizon is
\[\left|\Delta\mathcal{A}_{A}\right|\equiv\left(R_{H}^{-2}q^{AB}\Delta\mathcal{ A}_{A}\Delta\mathcal{A}_{B}\right)^{1/2}\sim\frac{qd}{R_{H}^{2}}. \tag{4.6}\]
By paralleling the steps that led to eq. (3.23)--assuming that the electromagnetic field is initially in the de Sitter invariant vacuum (see footnote 7)--we obtain the estimate
\[\left\langle N\right\rangle\sim\frac{q^{2}d^{2}}{R_{H}^{3}}T\qquad\text{(de Sitter, EM)}\,. \tag{4.7}\]
Thus, restoring constants, the decoherence time due to the presence of the cosmological horizon is
\[T_{\text{D}}\sim\frac{\hbar\epsilon_{0}R_{H}^{3}}{q^{2}d^{2}}\qquad\text{(de Sitter, EM)}\,. \tag{4.8}\]
Since \(d\ll R_{H}\), the decoherence time will be much larger than the Hubble time \(R_{H}/c\) unless \(q\) is extremely large relative to the Planck charge \(q_{P}\equiv\sqrt{\epsilon_{0}\hbar c}\). Nevertheless, we see that decoherence does occur despite the fact that Alice's lab is inertial.
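To illustrate the magnitudes, the following sketch evaluates eq. (4.8) numerically; the charge, separation, and Hubble radius below are assumed, illustrative values rather than numbers from the text:

```python
# Order-of-magnitude estimate of the de Sitter decoherence time, eq. (4.8).
hbar = 1.055e-34    # J s
eps0 = 8.854e-12    # C^2 J^-1 m^-1
c    = 2.998e8      # m s^-1
R_H  = 1.6e26       # m; rough Hubble radius ~ c/H_0 (assumed)
q    = 1.602e-19    # C; one elementary charge (assumed)
d    = 1.0e-6       # m; superposition separation (assumed)

T_D = hbar * eps0 * R_H**3 / (q**2 * d**2)   # eq. (4.8)
hubble_time = R_H / c

print(f"T_D ~ {T_D:.1e} s")                  # ~ 1e83 s
print(f"Hubble time ~ {hubble_time:.1e} s")  # ~ 5e17 s
```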
A similar analysis applies in the gravitational case for a spatial superposition of a massive particle in Alice's lab at \(r=0\). In parallel with the derivation given in sec. 3.1 above, we find
\[\left\langle N\right\rangle\sim\frac{m^{2}d^{4}}{R_{H}^{5}}T\qquad\text{(de Sitter, GR)} \tag{4.9}\]
which leads to a decoherence time
\[T_{\text{D}}^{\text{GR}}\sim\frac{\hbar R_{H}^{5}}{Gm^{2}d^{4}}\qquad\text{( de Sitter, GR)}\,. \tag{4.10}\]
###### Acknowledgements.
D.L.D. acknowledges support as a Fannie and John Hertz Foundation Fellow holding the Barbara Ann Canavan Fellowship and as an Eckhardt Graduate Scholar in the Physical Sciences Division at the University of Chicago. This research was supported in part by NSF Grant No. 21-05878 to the University of Chicago.
|
2309.12200 | A Variational Auto-Encoder Enabled Multi-Band Channel Prediction Scheme
for Indoor Localization | Indoor localization is in increasing demand for various cutting-edge
technologies, like virtual/augmented reality and smart homes. Traditional
model-based localization suffers from significant computational overhead, so
fingerprint localization is receiving increasing attention, as it needs a lower
computation cost once the fingerprint database is built. However, the accuracy
of indoor localization is limited by the complicated indoor environment, which
brings multipath signal refraction. In this paper, we provide a scheme to
improve the accuracy of indoor fingerprint localization in the frequency
domain by predicting the channel state information (CSI) values of another
transmitting channel and splicing the multi-band information together to get
more precise localization results. We tested our proposed scheme on COST 2100
simulation data and real-time orthogonal frequency division multiplexing (OFDM)
WiFi data collected from an office scenario. | Ruihao Yuan, Kaixuan Huang, Pan Yang, Shunqing Zhang | 2023-09-19T08:19:34Z | http://arxiv.org/abs/2309.12200v1 | # A Variational Auto-Encoder Enabled Multi-Band Channel Prediction Scheme for Indoor Localization
###### Abstract
Indoor localization is in increasing demand for various cutting-edge technologies, like virtual/augmented reality and smart homes. Traditional model-based localization suffers from significant computational overhead, so fingerprint localization is receiving increasing attention, as it needs a lower computation cost once the fingerprint database is built. However, the accuracy of indoor localization is limited by the complicated indoor environment, which brings multipath signal refraction. In this paper, we provide a scheme to improve the accuracy of indoor fingerprint localization in the frequency domain by predicting the channel state information (CSI) values of another transmitting channel and splicing the multi-band information together to get more precise localization results. We tested our proposed scheme on COST 2100 simulation data and real-time orthogonal frequency division multiplexing (OFDM) WiFi data collected from an office scenario.
Indoor Localization, Multi-band, WiFi, Fingerprint Localization
## I Introduction
Indoor localization has received growing attention recently. Different from outdoor localization and tracking tasks, the satellite signals that are useful in the outdoor environment are usually unreliable for many indoor applications due to signal blockage. Even with many localization infrastructures available, indoor localization tasks may suffer from complicated multi-path signal refraction, reflection and blocking effects, and the localization accuracy is limited in general [1].
In order to improve the localization accuracy, the existing literature focuses on extending the range-based [2] or fingerprint-based [3] methods. For instance, a majorization minimization method using hybrid range-based time-of-arrival (TOA) and received signal strength (RSS) information has been proposed in [4], where the proposed scheme can iteratively minimize the non-linear weighted least squares and the achievable localization accuracy can be improved to around 0.5 meter in terms of normalized mean square errors (NMSE). In the signal fingerprint based localization field, many augmentation and fusion frameworks have been developed as well [5], which cover a wide range of applications including WiFi [6], ultra-wideband (UWB) [7], and visual images [8]. As illustrated in [8], the indoor localization accuracy can be improved from 5.0 meters to around 0.3 meters by applying weighted access points (WAPs)-based WiFi matching and Gaussian weighted KNN (GW-KNN)-based image-level localization. An RSS-image threshold-based fusion and particle-filter fusion learning algorithm has been proposed to incorporate multi-modal sensing data, which enables about a 1 meter NMSE reduction [5].
Apart from the above fusion schemes, another effective method is to enlarge the observation windows, either in the time [9] or spatial [3] domain. In [3], the RMSE localization performance could be improved from 1.747 meters to 0.918 meters through multiple observations generated by dummy antennas. A natural extension is whether multi-band CSI samples are beneficial to improve the localization accuracy. Although the answer might be yes in a straightforward sense, there is quite limited literature discussing the above problem, due to the following reasons.
* _Non-linear Cross-Band Correlation Characterization:_ In conventional multi-band localization schemes, the achievable localization accuracy is in general limited when auto-regression based schemes are used. This is due to the underlying assumption of linear cross-band correlations, as reported in [10]. However, this assumption may not hold true in many practical systems [11], and a detailed non-linear characterization is thus required.
* _Backward Compatibility with a Plug-in Structure:_ In practical implementations, the number of available localization bands may differ across applications. Therefore, a preferable scheme should be backward compatible with the conventional multi-band localization framework, which makes a plug-in structure promising.
To address the above issues, we transform the original NMSE minimization problem into the equivalent evidence lower bound maximization problem by introducing some auxiliary variables. With this decomposed structure, we develop a variational auto-encoder (VAE) enabled multi-band channel prediction block to characterize the non-linear cross-band correlation, and plug it into the existing multi-band localization structure for high-precision indoor localization. Through numerical and prototype results, we show that our proposed scheme can achieve more than 20% MSE improvement compared with learning-based channel prediction or auto-regression based mechanisms.
The rest of the paper is organized as follows. In Section II, we introduce the system model and formulate the localization problem. The multi-band localization problem transformation and the deep learning based solution are given in Section III. We present the numerical and prototype experiment results in Section IV and conclude in Section V.
## II System Model & Problem Formulation
In this section, we first introduce the mathematical models adopted in multi-band localization systems and then formulate the indoor localization problem in what follows.
Consider a multi-band orthogonal frequency division multiplexing (OFDM) enabled transmission system as shown in Fig. 1, where a single-antenna localization entity receives WiFi signals from access points (APs) with \(N_{T}\) transmit antennas. For any given location \(\mathcal{L}\), the received signals \(\mathbf{y}\) from the \(i\)-th antenna and the \(n\)-th frequency band, i.e., \(\mathbf{y}^{i}(\mathcal{L},n)=[y_{1}^{i}(\mathcal{L},n),\ldots,y_{N_{sc}}^{i}( \mathcal{L},n)]\), are given by,
\[\mathbf{y}^{i}(\mathcal{L},n)=\mathbf{H}^{i}(\mathcal{L},n)\mathbf{x}^{i}( \mathcal{L},n)+\mathbf{n}^{i}(\mathcal{L},n), \tag{1}\]
where \(\mathbf{H}^{i}(\mathcal{L},n)\in\mathbb{C}^{N_{sc}\times N_{sc}}\) and \(\mathbf{x}^{i}(\mathcal{L},n),\mathbf{n}^{i}(\mathcal{L},n)\in\mathbb{C}^{N_{ sc}\times 1}\) denote the channel fading coefficients, the transmitted symbols, and the additive white Gaussian noise with zero mean and unit variance, respectively. \(N_{sc}\) is the number of subcarriers per frequency band, and \(N_{B}\) represents the total number of frequency bands. According to the COST2100 channel model [12], the channel fading coefficients \(\mathbf{H}^{i}(\mathcal{L},n)\) are given by,
\[\mathbf{H}^{i}(\mathcal{L},n) = \sum_{p=1}^{P}\alpha_{p}(\mathcal{L},n)\cdot e^{-j\cdot 2\pi \cdot n\cdot\tau_{p}(\mathcal{L})}, \tag{2}\]
where \(p\) is the index of fading paths, \(P\) denotes the total number of multi-paths, and \(\alpha_{p}(\mathcal{L},n)\) represents the path-loss coefficient of the \(p\)-th path. In practice, \(\alpha_{p}(\mathcal{L},n)\) can be affected by the locations of visible clusters and the direction-of-arrival and direction-of-departure of fading paths through the location \(\mathcal{L}\), as well as by different frequency responses through the band index \(n\).
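As a simple illustration of this multipath model, the following numpy sketch generates a toy frequency response in the spirit of eq. (2); the number of paths, the delay spread, and the random path gains are assumptions chosen for illustration, not COST2100 parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_csi(n_sc=64, n_paths=8, bw_hz=20e6, f0_hz=5.785e9, max_delay_s=200e-9):
    """Toy multipath frequency response H[k] = sum_p alpha_p exp(-2j pi f_k tau_p)."""
    alphas = (rng.normal(size=n_paths) + 1j*rng.normal(size=n_paths)) / np.sqrt(2*n_paths)
    taus = rng.uniform(0.0, max_delay_s, size=n_paths)
    f = f0_hz + (np.arange(n_sc) - n_sc/2) * bw_hz / n_sc   # subcarrier frequencies
    return np.sum(alphas[None, :] * np.exp(-2j*np.pi*f[:, None]*taus[None, :]), axis=1)

H = toy_csi()
print(H.shape)   # (64,): one complex CSI value per subcarrier
```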
By estimating and collecting channel fading coefficients from different frequency bands together, we construct the localization database according to the following format.
\[\mathcal{DB}=\left\{\left(\mathcal{L},\hat{\mathbf{H}}(\mathcal{L},1),\ldots, \hat{\mathbf{H}}(\mathcal{L},N_{B})\right)\right\}, \tag{3}\]
where \(\hat{\mathbf{H}}(\mathcal{L},n)=\{\hat{\mathbf{H}}^{i}(\mathcal{L},n)\}, \forall n\in[1,\ldots,N_{B}]\), denotes the measured channel responses of the \(n\)-th frequency band, after removing the random phase offset as explained in [13].
With the established database \(\mathcal{DB}\), our proposed localization system shall identify the location \(\mathcal{L}_{m}\) from the real-time measured channel responses \(\hat{\mathbf{H}}(\mathcal{L}_{m},n)\). Mathematically, the MSE minimization problem is given as follows.
**Problem 1** (_MSE Minimization_): _The localization MSE minimization problem for our proposed localization system is given as follows,_
\[\underset{\mathcal{F}(\cdot)}{\text{minimize}} \frac{1}{M}\sum_{m=1}^{M}\|\hat{\mathcal{L}}_{m}-\mathcal{L}_{m} \|_{2}^{2}, \tag{4}\] \[\text{subject to} \hat{\mathcal{L}}_{m}=\mathcal{F}\left(\mathcal{DB},\hat{ \mathbf{H}}(\mathcal{L}_{m},n)\right),\forall m,\] (5) \[\hat{\mathcal{L}}_{m},\mathcal{L}_{m}\in\mathcal{A}, \tag{6}\]
_where \(\mathcal{A}\) represents the feasible indoor localization areas, and \(M\) denotes the total number of localization tasks and \(\mathcal{F}(\cdot)\) denotes the localization function._
The above problem is in general difficult to solve, since the optimal localization function \(\mathcal{F}^{\star}(\cdot)\) can hardly be obtained by searching all the possible functions.
## III Proposed VAE Enabled Localization Scheme
In this section, we transform the above MSE minimization problem into the equivalent channel prediction error minimization problem, and propose the VAE enabled localization scheme in what follows.
By introducing the auxiliary variables, \(\{\tilde{\mathbf{H}}(\mathcal{L}_{m},n)\}\), the original MSE minimization problem can be transformed into the following format.
\[\underset{\mathcal{F}(\cdot),\{\mathcal{G}_{n^{\prime}}(\cdot)\}} \frac{1}{M}\sum_{m=1}^{M}\|\hat{\mathcal{L}}_{m}-\mathcal{L}_{m} \|_{2}^{2},\] (7) subject to \[\hat{\mathcal{L}}_{m}=\tilde{\mathcal{F}}\left(\mathcal{DB},\tilde {\mathbf{H}}(\mathcal{L}_{m},1),\ldots,\tilde{\mathbf{H}}(\mathcal{L}_{m},N_{B })\right), \tag{8}\] \[\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})=\mathcal{G}_{n^{ \prime}}\left(\hat{\mathbf{H}}(\mathcal{L}_{m},n)\right),\] \[\hat{\mathcal{L}}_{m},\mathcal{L}_{m}\in\mathcal{A},\forall m\in[1,M],\forall n^{\prime}\in[1,N_{B}], \tag{9}\]
where \(\{\mathcal{G}_{n^{\prime}}(\cdot)\}\) denote the channel prediction functions from the currently estimated band \(\hat{\mathbf{H}}(\mathcal{L}_{m},n)\) to all \(N_{B}\) frequency bands. With all the available predicted channel states of the \(N_{B}\) frequency bands, the optimal localization function \(\tilde{\mathcal{F}}^{\star}(\cdot)\) can be solved by standard machine learning techniques as elaborated in [9]. Specifically, we can apply a deep neural network architecture with three hidden layers to model this non-linear relationship. By adopting the above optimized localization function \(\tilde{\mathcal{F}}^{\star}(\cdot)\), the optimal channel prediction
Fig. 1: Structure of the whole system.
functions, \(\{\mathcal{G}_{n^{\prime}}^{*}(\cdot)\}\), can be solved through the following posterior probability maximization problem.
**Problem 2** (_Evidence Lower Bound (ELBO) Maximization_): _The ELBO maximization problem can be expressed as,_
\[\underset{\begin{subarray}{c}\{\mathcal{G}_{n^{\prime}}(\cdot)\} \end{subarray}}{\text{maximize}} \Psi(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}),\tilde{\mathbf{ H}}(\mathcal{L}_{m},n)), \tag{10}\] \[\text{subject to} \tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})=\mathcal{G}_{n^{ \prime}}\left(\hat{\mathbf{H}}(\mathcal{L}_{m},n)\right),\] (11) \[\forall n^{\prime}\in[1,N_{B}], \tag{12}\]
_where \(\Psi(\tilde{\mathbf{H}}_{1},\hat{\mathbf{H}}_{2})=\mathbb{E}[\log P_{r}( \tilde{\mathbf{H}}_{1}|e(\hat{\mathbf{H}}_{2}))]-\mathbb{D}_{KL}[P_{r}(e(\hat{\mathbf{H}}_{ 2})|\tilde{\mathbf{H}}_{1})\|P_{r}(e(\hat{\mathbf{H}}_{2}))]\) denotes the ELBO function as defined in [14]. In the above expression, \(\mathbb{E}(\cdot)\) represents the mathematical expectation operation, \(P_{r}(A|B)\) denotes the probability distribution of the random variable \(A\) conditioned on the distribution of \(B\), \(\mathbb{D}_{KL}[A\|B]\) denotes the KL-divergence between the probability distributions of \(A\) and \(B\), and \(e(\cdot)\) is a mapping function transforming the inner distribution into a lower dimension._
**Lemma 1**: _If \(e(\hat{\mathbf{H}}_{2})\) is a standard Gaussian distribution with zero mean and unit variance, and \(P_{r}(\tilde{\mathbf{H}}_{1}|e(\hat{\mathbf{H}}_{2}))\) and \(P_{r}(e(\hat{\mathbf{H}}_{2})|\tilde{\mathbf{H}}_{1})\) are assumed to be Gaussian, Problem 1 and Problem 2 are equivalent in terms of the Cramer-Rao Lower Bound (CRLB)._
Proof: Please refer to Appendix A for the proof.
### _Network Structure Design_
Following our previous work [15], we adopt a multi-layer perceptron (MLP) neural network architecture with three hidden layers to approximate the function \(\tilde{\mathcal{F}}^{*}(\cdot)\). Rectified linear unit (ReLU) is chosen to be the activation function, and the dropout technique is applied to address the over-fitting issue. Detailed network parameters are listed in Table II.
As illustrated in [16], the mapping function \(\mathcal{G}_{n^{\prime}}(\cdot)\) exists but cannot be easily characterized by any closed-form expression. To make it tractable, we adopt the VAE structure suggested in [17] to approximate the mapping function \(\mathcal{G}_{n^{\prime}}(\cdot)\), which projects the measured channel responses \(\hat{\mathbf{H}}(\mathcal{L}_{m},n)\) into a lower-dimensional space \(e(\hat{\mathbf{H}}(\mathcal{L}_{m},n))\) via the encoder and expands it to the predicted channel responses \(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})\) through the decoder. In the VAE, this lower-dimensional space is denoted by \(z\). We use fully connected (FC) layers to model the encoder and decoder, respectively, and the detailed network structure is depicted in Fig. 2. \(Re(\cdot)\) and \(Im(\cdot)\) are the real and imaginary parts of the data, respectively. In Table I, we list the network parameters with different sizes; all activation functions of the different layers are chosen to be LeakyReLU [18] in order to avoid the dying-ReLU problem of the plain ReLU function. In addition, the dimension of \(e(\hat{\mathbf{H}}(\mathcal{L}_{m},n))\) is selected to be 25 in the numerical evaluation.
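A minimal PyTorch sketch of such an FC encoder/decoder VAE is shown below; the 25-dimensional latent space follows the text, while the hidden-layer width and the input size (real and imaginary parts of 64 subcarriers stacked) are assumptions, since Table I is not reproduced here:

```python
import torch
import torch.nn as nn

class CrossBandVAE(nn.Module):
    """Sketch of a VAE mapping the CSI of band n to a predicted CSI of band n'."""
    def __init__(self, n_in=128, n_latent=25, n_hidden=256):
        super().__init__()
        act = nn.LeakyReLU(0.01)
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), act,
                                     nn.Linear(n_hidden, n_hidden), act)
        self.mu = nn.Linear(n_hidden, n_latent)
        self.logvar = nn.Linear(n_hidden, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, n_hidden), act,
                                     nn.Linear(n_hidden, n_hidden), act,
                                     nn.Linear(n_hidden, n_in))

    def forward(self, h_n):                  # h_n: [batch, 2*N_sc], Re/Im stacked
        e = self.encoder(h_n)
        mu, logvar = self.mu(e), self.logvar(e)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar   # predicted band n' and latent statistics
```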
### _Loss Function Design_
In order to maximize the \(ELBO\), we can intuitively maximize \(\mathbb{E}[\log P_{r}(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})|z)]\) and minimize \(\mathbb{D}_{KL}[P_{r}(z|\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}))\|P_{r}(z)]\) simultaneously. Without loss of generality, we assume that \(P_{r}(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})|z)\) and \(P_{r}(z|\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}))\) follow Gaussian distributions with means \(\mu\), \(\mu^{\prime}\) and variances \(\sigma\), \(\sigma^{\prime}\), respectively [14]. Therefore, the two terms in the \(ELBO\) are given by,
\[\log P_{r}(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})|z)\sim-\frac{1}{2} \|\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})-\hat{\mathbf{H}}(\mathcal{L}_ {m},n^{\prime})\|_{2}^{2} \tag{13}\]
\[\mathbb{D}_{KL}[P_{r}(z|\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{ \prime}))\|P_{r}(z)]\] \[=\mathbb{D}_{KL}(N(\mu^{\prime},\sigma^{\prime 2})\|N(0,1))\] \[=\frac{1}{2}(-\log\sigma^{\prime 2}+\mu^{\prime 2}+\sigma^{ \prime 2}-1) \tag{14}\]
Fig. 2: The structure of the proposed VAE model.
With the above understanding, we choose the loss function of VAE to be:
\[\ell_{\mathcal{G}_{n^{\prime}}(\cdot)}=\frac{1}{M}\sum_{m=1}^{M}\| \tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})-\hat{\mathbf{H}}(\mathcal{L}_{m}, n^{\prime})\|_{2}^{2}\] \[+\beta\times\frac{1}{2}(-\log\sigma^{\prime 2}+\mu^{\prime 2}+ \sigma^{\prime 2}-1) \tag{15}\]
To balance the contributions of the reconstruction loss and the KL-divergence in the \(ELBO\), the \(\beta\)-VAE was proposed in [19]; it balances the two parts of the loss function during training via a hyperparameter \(\beta\). The hyperparameters of the above networks in the training process are shown in Table II.
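In code, the loss of eq. (15) can be sketched as follows (consistent with the VAE sketch above; the default value of \(\beta\) is a placeholder, since the paper treats \(\beta\) as a tuned hyperparameter):

```python
import torch

def beta_vae_loss(h_pred, h_true, mu, logvar, beta=1e-3):
    """Reconstruction MSE of eq. (13) plus the beta-weighted KL term of eq. (14)."""
    recon = torch.mean(torch.sum((h_pred - h_true)**2, dim=-1))
    kl = 0.5 * torch.mean(torch.sum(mu**2 + logvar.exp() - logvar - 1.0, dim=-1))
    return recon + beta * kl
```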
## IV Numerical and Prototype Results
In this section, we compare the proposed VAE enabled multi-band indoor localization scheme with three baseline methods. _Baseline 1: Learning-based Channel Prediction [20, 21, 22]_, which applies an MLP to learn the cross-band correlations. _Baseline 2: Auto-regression with Extended Kalman Filter (EKF) [11]_, which relies on an EKF with an iterative detector decoder for channel estimation and prediction. _Baseline 3: Real-time Sampled Data_, where we directly collect the channel responses from all bands simultaneously and perform the localization using the conventional multi-band localization scheme. Both numerical and prototype experiments are performed in a 5 GHz WiFi scenario with \(N_{B}=3\) bands, where the corresponding center frequencies are 5.765 GHz, 5.785 GHz, and 5.805 GHz (band indices 153, 157, and 161), respectively. Each band contains 20 MHz with 64 OFDM sub-carriers. Before the prototype experiments, we tested our system in numerical simulations; the CSIs in the numerical evaluations are generated according to the COST2100 model [12], and the CSIs in the prototype evaluations are collected from two laptops equipped with Intel 5300 network interface cards, where one laptop runs in access point mode and the other serves as the localization entity, as shown in Fig. 1. Other simulation and experimental parameters are listed in Table III.
### _Numerical Results_
In the first experiment, we numerically plot the estimated channel responses and compare them with the values predicted by the VAE network in Fig. 3, where black and red lines denote the amplitude and phase information, respectively. As shown in Fig. 3(a), the VAE-predicted results (dashed lines) and the estimated channel responses (solid lines) match quite well.
In the second experiment, we compare the channel prediction accuracy of different channel prediction schemes using the channel coefficient normalized error (CCNE) performance [16] defined as,
\[CCNE=10\lg\Bigg{(}\frac{\|\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})- \hat{\mathbf{H}}(\mathcal{L}_{m},n^{\prime})\|^{2}}{\|\hat{\mathbf{H}}( \mathcal{L}_{m},n^{\prime})\|^{2}}\Bigg{)}. \tag{16}\]
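Equation (16) translates directly into a short metric function; a minimal sketch:

```python
import numpy as np

def ccne_db(h_pred, h_true):
    """Channel coefficient normalized error of eq. (16), in dB."""
    return 10.0 * np.log10(np.sum(np.abs(h_pred - h_true)**2)
                           / np.sum(np.abs(h_true)**2))
```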
As shown in Fig. 4, the proposed VAE based channel prediction scheme can achieve about 54% and 41% CCNE improvement\({}^{1}\) at \(SNR=30\) dB compared with _Baseline 1_ and _Baseline 2_, respectively.
Footnote 1: Since _Baseline 3_ is the real-time sampled channel responses, we do not plot the CCNE result for this case.
In the third experiment, we compare the MSE performance of different localization schemes, with the simulation results shown in Fig. 5. From this figure, we observe that the proposed VAE enabled multi-band cooperative localization scheme significantly outperforms _Baseline 2_ by at least a 24% MSE improvement and approaches the upper bound of the multi-band cooperative localization scheme with real-time measured responses (_Baseline 3_).
Fig. 3: The amplitude and phase values of predicted channel in both numerical and prototype experiments.
### _Prototype Results_
In this part, we redo the above three experiments using our prototype localization system, whose topology is shown in Fig. 1. Before passing the data to the multi-band localization system, we eliminate the carrier frequency offset (CFO) and sampling frequency offset (SFO) as introduced in [13] to obtain more reliable channel responses. As expected, we observe a close match between the estimated and predicted channel responses in the first experiment. In the second experiment, the CCNE values for _Baseline 1_, _Baseline 2_, and the proposed scheme are -1.6417 dB, 0.5641 dB, and -2.2234 dB, respectively.
In the third experiment, the achieved MSEs of the different multi-band localization schemes are shown in Fig. 6. Compared with _Baseline 1_ and _Baseline 2_, our proposed VAE enabled multi-band localization scheme achieves 47% and 13% improvement, respectively. Although the achievable localization performance improvement is slightly reduced compared with the numerical simulations, we can still show the effectiveness of the proposed VAE enabled multi-band localization scheme.
## V Conclusion
In this paper, we propose a novel VAE enabled multi-band indoor localization scheme. Different from conventional approaches, we apply a VAE structure to describe the non-linear cross-band correlations and expand the single-band measured channel response to multiple bands via channel prediction. By incorporating measured and predicted channel responses of multiple bands, we achieve 13% to 54% MSE improvement in the numerical and prototype experiments compared with traditional multi-band localization schemes.
## Appendix A Proof of Lemma 1
The equivalent channel prediction posterior probability maximization problem for indoor localization is given as follows,
\[\text{maximize}\qquad\sum_{m=1}^{M}\log P_{r}(\tilde{\mathbf{H}}(\mathcal{L }_{m},n^{\prime})) \tag{17}\]
However, this problem is intractable, so we transform the posterior probability maximization into the following evidence lower bound (ELBO) maximization problem.
Fig. 4: The channel coefficient normalized error of different methods of channel prediction.
Fig. 5: The localization results for numerical experiments
Fig. 6: The localization results for prototype experiments
\[\log P_{r}(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}))\] \[=\mathbb{E}\left[\log\frac{P_{r}(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}),e(\hat{\mathbf{H}}(\mathcal{L}_{m},n)))}{P_{r}(e(\hat{\mathbf{H}}(\mathcal{L}_{m},n)))}\right]\] \[+\mathbb{D}_{KL}\big[P_{r}(e(\hat{\mathbf{H}}(\mathcal{L}_{m},n)))\,\|\,P_{r}(e(\hat{\mathbf{H}}(\mathcal{L}_{m},n))|\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}))\big]\] \[=ELBO+\mathbb{D}_{KL}[\,\cdot\,\|\,\cdot\,]\] \[\geq ELBO \tag{18}\]
In order to reflect the relationship between the independent variables, we denote the loss function \(ELBO\) as \(\Psi(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime}),\hat{\mathbf{H}}(\mathcal{ L}_{m},n))\). The maximization of the posterior probability can thus be converted into the maximization of the \(ELBO\).
According to [2], the Cramer-Rao Lower Bound (CRLB) of the CSI ranging error is formulated as:
\[CRLB=\frac{c^{2}}{8\pi^{2}\beta^{2}SNR} \tag{19}\]
Here \(c\) represents the speed of light, \(SNR=E_{p}/N_{0}\), where \(E_{p}\) is the average received energy and \(N_{0}\) is the energy of the received noise, and \(\beta\) is the effective transmit bandwidth. The CRLB of the localization error therefore decreases as the transmission bandwidth increases; for instance, doubling the effective bandwidth reduces the ranging-error bound by a factor of four. This is why we choose to splice CSI data from two channel bands to improve the localization accuracy.
The effective bandwidth on the predicted channel and the CRLB for the spliced channel is given by:
\[\beta^{\prime}=P_{r}(\tilde{\mathbf{H}}(\mathcal{L}_{m},n^{\prime }))\beta \tag{20}\] \[\hat{\beta}=\beta+\beta^{\prime} \tag{21}\] \[\widehat{CRLB}=\frac{c^{2}}{8\pi^{2}\hat{\beta}^{2}SNR}\leq CRLB \tag{22}\]
Here \(\beta^{\prime}\) is the effective bandwidth of the predicted channel and \(\hat{\beta}\) is the total effective bandwidth of the spliced channel. \(\widehat{CRLB}\) is the ranging-error lower bound of the spliced channel, which is smaller than the CRLB of a single channel. A more accurate prediction extends the effective bandwidth, which is the reason why Problem 1 and Problem 2 are equivalent.
|
2309.04304 | Hubbard Bands and Exotic States in Doped and Undoped Mott Systems: The
Kotliar-Ruckenstein Representation | The slave-particle representation is a promising method to treat the
properties of exotic strongly correlated systems. We develop a unified approach
to describe both the paramagnetic state with possible spin-liquid features and
states with strong long-range or short-range magnetic order. Combining the
Kotliar-Ruckenstein representation and fractionalized spin-liquid deconfinement
picture, the Mott transition and Hubbard subbands are considered. The spectrum
in the insulating state is significantly affected by the presence of the spinon
spin-liquid spectrum and a hidden Fermi surface. Presenting a modification of
the Kotliar-Ruckenstein representation in the spin-wave region, we treat the
case of magnetic order, with special attention being paid to the half-metallic
ferromagnetic state. The formation of small and large Fermi surfaces for doped
current carriers in the antiferromagnetic state is also discussed. | Valentin Yu. Irkhin | 2023-09-08T13:05:58Z | http://arxiv.org/abs/2309.04304v1 | Hubbard bands and exotic states in doped and undoped Mott systems: the Kotliar-Ruckenstein representation
###### Abstract
The slave-particle representation is a promising method to treat the properties of exotic strongly correlated systems. We develop a unified approach to describe both the paramagnetic state with possible spin-liquid features and states with strong long-range or short-range magnetic order. Combining the Kotliar-Ruckenstein representation and the fractionalized spin-liquid deconfinement picture, the Mott transition and Hubbard subbands are considered. The spectrum in the insulating state is significantly affected by the presence of the spinon spin-liquid spectrum and a hidden Fermi surface. Presenting a modification of the Kotliar-Ruckenstein representation in the spin-wave region, we treat the case of magnetic order, with special attention paid to the half-metallic ferromagnetic state. The formation of small and large Fermi surfaces for doped current carriers in the antiferromagnetic state is also discussed.
## 1 Introduction
The problem of describing strongly correlated states has been a topic of interest and significance for a long time. In particular, this includes aspects of the Mott transition, i.e., the correlation-driven transition from a metallic state to an insulating state [1]. The related physical phenomena occur in a number of doped and undoped Mott systems, including insulators and metals with exotic properties [2].
The physics of the Mott systems originates from competition of magnetism, Coulomb correlations, frustration and topology. Typically (in most d-metal compounds), the Mott transition occurs according to the Slater mechanism, i.e., involves the insulating phase with antiferromagnetic band splitting (see, e.g., Ref. [3]). However, the situation changes when dealing with frustrated systems which do not demonstrate antiferromagnetic ordering, so that only the paramagnetic metallic and insulator states (possibly, with unusual characteristics) are present, leading to the formation of a spin-liquid-type state [4; 5].
Such a transition into the insulator state, known as the Mott scenario, is associated with the correlation Hubbard splitting. In the Mott state, the spectrum exhibits a significant charge gap that is determined by bosonic excitation branches. Consequently, the electrons become composite particles and undergo fractionalization, where the spin characteristics are controlled by neutral fermions called spinons, and the charge ones by bosons [6; 7]. This concept can be formalized by using the slave-boson representations [8; 6; 7].
The interaction between bosons and fermions mediated by a gauge field plays a significant role as it gives rise to confinement [7]. This leads to a transition towards a confinement metallic state, which is marked by the occurrence of Bose condensation and a non-zero residue in the electron Green's function. Conversely, in the insulator state, the bosons have a gap in their energy spectrum, leading to an incoherent overall spectrum that encompasses Hubbard's bands. In this case, the electron Green's function is a combination of the boson and fermion Green's functions through convolution.
Recent theoretical advancements have offered a fresh perspective on the Mott transition by introducing a topological framework. This is particularly relevant because spin liquids, known for their topological order, are involved in this transition. In the study of phase transitions in magnetically frustrated systems, the consideration of topological excitations becomes essential as they play a significant role in confinement. These ideas have been extensively reviewed, e.g., in Refs. [9; 10].
As for doped Mott systems, copper-oxide materials, which are basic for high-\(T_{c}\) superconductors, should be mentioned in the first place. In the overdoped case, the normal (non-superconducting) ground state is characterized as a Fermi liquid with a "large" Fermi surface (including both localized and itinerant states), where Luttinger's theorem holds. At the same time, in the underdoped case the ground state is more complicated and may possess small hole pockets of the Fermi surface [7; 11]. The description of this state depends again on the presence or absence of antiferromagnetic ordering. The small Fermi surface can occur not only in the case of long-range order, but also in the situation of strong short-range order [12; 13; 14].
In this paper, we examine the metal-insulator transition through a topological perspective, specifically focusing on spin-charge separation within the framework of the Kotliar-Ruckenstein slave-boson representation. We employ the deconfinement concept to investigate the Hubbard subbands' spectrum. Our treatment aims to understand the Mott transition leading to a spin-liquid state, while also establishing the connection between the charge gap in the boson spectrum and the Hubbard splitting.
The idea of preserving the Fermi surface during a quantum phase transition is supported by the presence of a spinon Fermi
surface in the paramagnetic phase of a Mott insulator [5]. In a gapped phase like the Mott state, the traditional Fermi surface does not exist and instead transforms into a hidden or ghost Fermi surface. However, the volume enclosed by the Fermi surface, as described by the Luttinger theorem, remains conserved [15]. This concept has also been applied to half-metallic ferromagnets [16; 17]. In this study, we expand upon this approach and demonstrate how to combine the concept of composite particles with spin-liquid states and magnetic ordering in various cases.
In Sec. 2 we review various versions of the slave-boson representations. In Sec. 3 we treat the problem of the metal-insulator transition in the paramagnetic case. Although we apply the standard Kotliar-Ruckenstein representation used in previous works [18; 19], we provide a new interpretation which takes into account spin-charge separation in terms of exotic quasiparticles: spinons and holons. In Sec. 4 we derive a new form of the Kotliar-Ruckenstein representation, which is compatible with the approach of many-electron Hubbard's operators [20] and is convenient in the magnetic state. We apply this form to treat conducting ferromagnets and antiferromagnets. In Sec. 5, a discussion is presented.
## 2 Slave-particle representations of the Hubbard model
The Hamiltonian of the Hubbard model reads
\[{\cal H}=-\sum_{ij\sigma}t_{ij}c^{\dagger}_{i\sigma}c_{j\sigma}+U\sum_{i}n_{i \uparrow}n_{i\downarrow}+{\cal H}_{d}, \tag{1}\]
where \(c^{\dagger}_{i\sigma}\) are electron creation operators. The Heisenberg interaction
\[{\cal H}_{d}=\sum_{ij}J_{ij}{\bf S}_{i}{\bf S}_{j}, \tag{2}\]
which can arise as an effective superexchange interaction in the second order of perturbation theory in the Hubbard model, is explicitly incorporated for further ease of representation. Such a mixed representation is known as \(t-J-U\) model which reduces in the large-\(U\) limit to the well-known \(t-J\) model (see, e.g., the review [21]). The Hamiltonian of the latter model for the hole doping can be represented in the form
\[{\cal H}=\sum_{ij\sigma}t_{ij}\tilde{c}^{\dagger}_{i\sigma}\tilde{c}_{j\sigma} +{\cal H}_{d} \tag{3}\]
where \(\tilde{c}^{\dagger}_{i\sigma}=X_{i}(0\sigma)=|i0\rangle\langle i\sigma|=c_{i \sigma}(1-n_{i-\sigma})\) are the Hubbard projection operators creating empty on-site states.
In situations where strong correlation effects are dominant, it is often useful to employ auxiliary or "slave" boson and fermion representations. The slave-boson representation was proposed in the pioneering works by Barnes [22] and Coleman [23] for the Anderson models and developed by many authors.
Anderson [6] proposed a physical interpretation of slave-boson representation for the Hubbard model based on the concept of separating the spin and charge degrees of freedom of an electron,
\[c_{i\sigma}=X_{i}(0,\sigma)+\sigma X_{i}(-\sigma,2)=e^{\dagger}_{i}f_{i\sigma }+\sigma d_{i}f^{\dagger}_{i-\sigma}. \tag{4}\]
where \(\sigma=\pm 1\), \(f_{i\sigma}\) are the annihilation operators for neutral fermions (spinons), and \(e_{i}\), \(d_{i}\) for charged spinless bosons (holons and doublons). In the large-\(U\) limit we have to retain in (4) only the first (second) term for the hole (electron) doping.
Alternatively, the slave-fermion representation which uses the Schwinger boson operators \(b_{i\sigma}\) can be used (see, e.g., Ref. [24]),
\[X_{i}(0,\sigma)=f^{\dagger}_{i}b_{i\sigma},\,X_{i}(+,-)=b^{\dagger}_{i\uparrow }b_{i\downarrow}, \tag{5}\]
so that
\[\sum_{\sigma}b^{\dagger}_{i\sigma}b_{i\sigma}+f^{\dagger}_{i}f_{i}=1. \tag{6}\]
This representation is more suitable in the case of magnetic ordering. Such uncertainty in the statistics of excitations leads to difficulties in constructing a unified picture and requires more advanced approaches.
A more complicated representation was proposed by Kotliar and Ruckenstein [8]. This uses the Bose operators \(e_{i}\), \(p_{i\sigma}\), \(d_{i}\) and Fermi operators \(f_{i\sigma}\):
\[c^{\dagger}_{i\sigma}\to f^{\dagger}_{i\sigma}z^{\dagger}_{i\sigma},\,z^{ \dagger}_{i\sigma}=g_{2i\sigma}(p^{\dagger}_{i\sigma}e_{i}+d^{\dagger}_{i}p_ {i-\sigma})g_{1i\sigma}, \tag{7}\]
with the constraints
\[\sum_{\sigma}p^{\dagger}_{i\sigma}p_{i\sigma}+e^{\dagger}_{i}e_{i}+d^{\dagger }_{i}d_{i}=1,\,\,f^{\dagger}_{i\sigma}f_{i\sigma}=p^{\dagger}_{i\sigma}p_{i \sigma}+d^{\dagger}_{i}d_{i}, \tag{8}\]
which can be used to introduce gauge fields [7].
According to Kotliar and Ruckenstein, the representation of many-electron operators is not fixed and can include additional operator factors as long as they have eigenvalues of 1 in the physical subspace. While all these forms yield accurate results in exact treatments, they may differ in approximate calculations. This is particularly significant when constructing mean-field approximations as it allows for agreement with limiting cases. Thus the factors \(g_{1,2i\sigma}\) are somewhat arbitrary, but to obtain an agreement with the Hartree-Fock limit one uses the values
\[g_{1i\sigma}=(1-d^{\dagger}_{i}d_{i}-p^{\dagger}_{i\sigma}p_{i \sigma})^{-1/2},\] \[g_{2i\sigma}=(1-e^{\dagger}_{i}e_{i}-p^{\dagger}_{i-\sigma}p_{i -\sigma})^{-1/2}. \tag{9}\]
In the mean-field approximation for a non-doped case and a non-magnetic state we can put \(g^{2}_{1,2\sigma}=2\). It should be noted that such a choice results in some difficulties; in particular, it leads to an inconsistency in the atomic limit [19]. Also, we will see below that this choice is inadequate in a magnetic state.
In the framework of various slave-boson approaches, a number of mean-field theories were developed [7]. In particular, treatments within the Kotliar-Ruckenstein representation at the saddle-point level became popular because of their good agreement with numerical simulations. However, such treatments are not free of difficulties [29; 30]. Generally speaking, they suffer from drawbacks connected with spurious Bose condensation. To overcome this difficulty and develop more advanced theories, one can use the \(1/N\)-expansion [23] or gauge-field theories, which are extensively discussed in the review [7].
In this connection, treatments of the limiting cases, where the slave-boson approach is exact or controlled [25; 26], can be useful.
To take spin-flip processes into account properly, it is suitable to use the rotationally invariant version [27; 28]. Here the projected electron is represented as a composite of a Fermi spinon with scalar and vector bosons \(p_{i0}\) and \({\bf p}_{i}\). Using the coupling rule of momenta 1 and 1/2 one obtains
\[c_{i\sigma}=\sum_{\sigma^{\prime}}(e_{i}^{\dagger}p_{i\sigma^{\prime}\sigma}+ \sigma p_{i-\sigma-\sigma^{\prime}}^{\dagger}d_{i})f_{i\sigma^{\prime}} \tag{10}\]
with
\[\hat{p}_{i}=\frac{1}{2}(p_{i0}\sigma_{0}+{\bf p}_{i}\mathbf{\sigma}) \tag{11}\]
and the constraints
\[e_{i}^{\dagger}e_{i}+\sum_{\mu=0}^{3}p_{i\mu}^{\dagger}p_{i\mu}+d_{i}^{\dagger} d_{i}=1, \tag{12}\]
\[\sum_{\sigma}f_{i\sigma}^{\dagger}f_{i\sigma}=\sum_{\mu=0}^{3}p_{i\mu}^{ \dagger}p_{i\mu}+2d_{i}^{\dagger}d_{i}. \tag{13}\]
Introducing proper factors one has [28]
\[c_{i\sigma}=\sum_{\sigma^{\prime}}f_{i\sigma^{\prime}}z_{i\sigma^{\prime} \sigma},\ \hat{z}_{i}=e_{i}^{\dagger}\hat{L}_{i}M_{i}\hat{R}_{i}\hat{p}_{i}+\hat{\bar{p}}_{i}^{\dagger}\hat{L}_{i}M_{i}\hat{R}_{i}d_{i} \tag{14}\]
where
\[\hat{L}_{i} = [(1-d_{i}^{\dagger}d_{i})\sigma_{0}-2\widehat{p}_{i}^{\dagger} \widehat{p}_{i}]^{-1/2} \tag{15}\] \[\hat{R}_{i} = [(1-e_{i}^{\dagger}e_{i})\sigma_{0}-2\widehat{p}_{i}^{\dagger} \widehat{p}_{i}]^{-1/2}\] (16) \[M_{i} = (1+e_{i}^{\dagger}e_{i}+\sum_{\mu=0}^{3}p_{i\mu}^{\dagger}p_{i\mu }+d_{i}^{\dagger}d_{i})^{1/2}. \tag{17}\]
The additional square-root factors in (15)-(17) can be treated in the spirit of a mean-field approximation. In particular, the factor \(M\) is equal to \(\sqrt{2}\) due to the sum rule (12) and enables one to obtain agreement with the small-\(U\) limit and with the saturated ferromagnetic case. The scalar and vector bosons \(p_{i0}\) and \({\bf p}_{i}\) are introduced as
\[\hat{p}_{i}=\frac{1}{2}(p_{i0}\sigma_{0}+{\bf p}_{i}\mathbf{\sigma}) \tag{18}\]
with \(\sigma\) being Pauli matrices and \(\hat{\bar{p}}_{i}=(1/2)(p_{i0}\sigma_{0}-{\bf p}_{i}\mathbf{\sigma})\) the time reverse of the operator \(\hat{p}_{i}\).
In Sec. 4, we will extensively employ the rotationally invariant representation to treat in detail the magnetically ordered case. We will perform the corresponding analytical transformations and demonstrate that the full form of the radicals plays an important role. In particular, this is crucial to describe incoherent states in a ferromagnet.
## 3 Mott transition and Hubbard bands in the paramagnetic and spin-liquid state
In order to treat the Mott transition in frustrated systems within the paramagnetic phase, several studies [5; 31; 32] utilized the rotor representation. While this representation is straightforward, it is not ideal as it does not explicitly incorporate the spectrum of both Hubbard bands. An alternative description of the Mott transition and Hubbard bands can be obtained within the Kotliar-Ruckenstein representation [18; 19]. These works use a Gutzwiller-type approach for a structureless paramagnetic state. Here we perform a more advanced treatment with account of a possible spin-liquid picture. To take into account spin frustrations, we include the Heisenberg interaction explicitly in the model. Then the Lagrangian of the Hubbard-Heisenberg model has the form
\[{\cal L} = -\sum_{ij\sigma}t_{ij}z_{i\sigma}^{\dagger}z_{j\sigma}f_{i\sigma}^ {\dagger}f_{j\sigma}+\sum_{i\sigma}f_{i\sigma}^{\dagger}(\partial_{\tau}-\mu+ \lambda_{2\sigma})f_{i\sigma} \tag{19}\] \[+ \sum_{i\sigma}p_{i\sigma}^{\dagger}(\partial_{\tau}+\lambda_{1}- \lambda_{2\sigma})p_{i\sigma}+\sum_{i}e_{i}^{\dagger}(\partial_{\tau}+ \lambda_{1})e_{i}\] \[+ \sum_{i}d_{i}^{\dagger}(\partial_{\tau}+\lambda_{1}-\sum_{\sigma} \lambda_{2\sigma}+U)d_{i}+{\cal H}_{d}.\]
By employing the Heisenberg Hamiltonian in the \(f\)-pseudofermion representation, it is possible to analyze spin degrees of freedom independently. In certain circumstances, it is anticipated that a spin-liquid state may emerge, characterized by excitations primarily consisting of spinons, which are neutral fermions.
In the mean-field approximation, the Lagrange factors \(\lambda_{1,2}\) associated with (8) do not depend on the specific sites. In the insulator phase, it was established by Lavagna [18] that \(\lambda_{1}=\lambda_{2\sigma}=U(1\pm\zeta)/2\), which equals the chemical potential for infinitesimally small electron or hole doping (the addition or removal of an electron), where \(\zeta=(1-1/u)^{1/2}\) and \(u=U/U_{c}\). Here
\[U_{c}=4p^{2}g_{1}^{2}g_{2}^{2}\varepsilon=8\varepsilon\]
is the critical value for the Mott transition in the Brinkman-Rice approximation (see Ref. [33]), where \(\varepsilon=2\left|\int_{-\infty}^{\mu}d\omega\,\omega\rho(\omega)\right|\) is the average energy of the non-interacting electron system and \(\rho(\omega)\) is the bare density of states.
Following Refs. [33; 18] we can introduce the variable \(x=e+d\). Then we obtain for \(y=1/x^{2}\) the cubic equation
\[y^{3}-(u-1)y/u\delta^{2}=1/u\delta^{2}. \tag{20}\]
Earlier, the solution of this equation was discussed in Refs. [18; 28]. Here we present the solution in a more convenient form. Passing to the variable \(1/y\) and using the trigonometric solution of the cubic equation, we derive for \(u<1\) (correlated metal phase)
\[y=2\left(\frac{1-u}{3u\delta^{2}}\right)^{1/2}\sinh\left(\frac{1}{3}\mbox{ arcsinh}\frac{\delta}{\delta_{0}}\right) \tag{21}\]
For \(u>1\) one has
\[y=2\left(\frac{u-1}{3u\delta^{2}}\right)^{1/2}\!\!\times\!\left\{\begin{array} []{cc}\cos\left(\frac{1}{3}\mbox{arccos}(\delta/\delta_{0})\right),&\delta< \delta_{0}\\ \cosh\left(\frac{1}{3}\mbox{arccosh}(\delta/\delta_{0})\right),&\delta>\delta _{0}\end{array}\right. \tag{22}\]
where
\[\delta_{0}=2|u-1|^{3/2}/(27u)^{1/2} \tag{23}\]
This solution is a smooth and analytic function of doping \(\delta\) in the whole region \(\delta<1\). For small \(\delta\ll\delta_{0}\) we have
\[x^{2}=1/y=\left\{\begin{array}{cc}1-u+O(\delta^{2}),&u<1\\ \delta/\sqrt{1-1/u},&u>1\end{array}\right. \tag{24}\]
Generally, a considerable \(U\)-dependence takes place at any \(\delta\). For \(\delta\gg\delta_{0}\) (close to the Mott transition) we have \(x^{2}\simeq\delta^{2/3}\).
The behavior (24) can be considerably changed when taking into account gauge fluctuations [5; 34], especially in the two-dimensional case, where intermediate energy and temperature scales can occur beyond the mean-field picture.
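The closed-form branches (21) and (22) can be verified against eq. (20) directly; the sketch below (with illustrative values of \(u\) and \(\delta\)) checks that the selected root makes the cubic residual vanish:

```python
import numpy as np

def y_closed_form(u, delta):
    """Physical root of y^3 - (u-1) y/(u delta^2) = 1/(u delta^2), eqs. (21)-(23)."""
    d0 = 2.0 * abs(u - 1.0)**1.5 / np.sqrt(27.0 * u)
    pref = 2.0 * np.sqrt(abs(u - 1.0) / (3.0 * u * delta**2))
    if u < 1.0:
        return pref * np.sinh(np.arcsinh(delta / d0) / 3.0)
    if delta < d0:
        return pref * np.cos(np.arccos(delta / d0) / 3.0)
    return pref * np.cosh(np.arccosh(delta / d0) / 3.0)

u, delta = 1.5, 0.05                 # illustrative values (assumed)
y = y_closed_form(u, delta)
residual = y**3 - (u - 1.0)*y/(u*delta**2) - 1.0/(u*delta**2)
print(y, residual)                    # residual vanishes to machine precision
```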
It is convenient to introduce the boson combination \(b_{i}^{\dagger}=e_{i}^{\dagger}+d_{i}\) (cf. Ref. [19]). The corresponding Green's function takes the form
\[D({\bf q},\omega) = \langle\langle b_{\bf q}|b_{\bf q}^{\dagger}\rangle\rangle_{ \omega}=\sum_{a=1,2}\frac{Z_{a{\bf q}}}{\omega-\omega_{a{\bf q}}}, \tag{25}\] \[Z_{a{\bf q}} = (-1)^{a}U/\sqrt{U^{2}\zeta^{2}+U(U_{c}-4\Sigma({\bf q}))} \tag{26}\]
with the spectrum of boson subsystem
\[\omega_{a{\bf q}} = \frac{1}{2}[\pm U\zeta-(-1)^{a}\sqrt{U^{2}\zeta^{2}+U(U_{c}-4 \Sigma({\bf q}))}] \tag{27}\]
One of the two boson branches becomes gapless and provides the formation of the boson condensate at the Mott transition.
To obtain the boson self-energy we perform a decoupling of the first term in (19), which yields essentially the correlation correction first introduced in Ref.[35]. The result reads
\[\Sigma({\bf q})=-p^{2}g_{1}^{2}g_{2}^{2}\sum_{{\bf k}\sigma}t_{{\bf k}-{\bf q }}n_{{\bf k}\sigma},\,n_{{\bf k}\sigma}=\langle f_{{\bf k}\sigma}^{\dagger}f_ {{\bf k}\sigma}\rangle. \tag{28}\]
In Ref. [19], the limit of vanishing renormalized electron bandwidth (i.e., bearing in mind the Mott phase where the averages \(e\), \(d\to 0\)) was treated in a Gutzwiller-type approach. Here we use a more straightforward approach: a finite bandwidth of holons occurs in a natural way by taking into account the spinon dispersion. Note that earlier a similar consideration was performed for the \(t-J\) model [7].
The presence of a small (as compared to electron energies) characteristic scale of spinon energies is crucial. As a result, the temperature dependence of the spinon Fermi surface becomes significant. This scenario shares similarities with the situation observed in magnetic order (e.g., band splitting owing to long- or short-range antiferromagnetic ordering). The dispersion of bosons is affected by the specific characteristics of the fermion spectrum, which are determined by the state of the \(f\)-system.
The spinon spectrum \(E_{\bf k}\) can be stabilized in the mean-field scenario through either a non-compact gauge field or by having gapless Fermi excitations [36; 5; 37]. In the insulator state, this spectrum remains unaffected by bosons, leading to the emergence of various spin-liquid phases [7].
When \(n_{{\bf k}\sigma}\) depends only weakly on \({\bf k}\) (indicating a localized spin phase without fermion hopping), the value of \(\Sigma\) approaches zero. However, in the case of a spin liquid, a distinct Fermi surface is present. Although the spectrum of spinons can differ from that of bare electrons, putting \(q=0\) we still obtain \(\Sigma(0)=U_{c}/4\), since the spinon band is half-filled and the position of the Fermi energy (the chemical potential) remains fixed.
In the nearest-neighbor approximation, when converting equation (28) into the real-space representation, it becomes evident that the spinon spectrum and the correction to the holon spectrum differ only by replacing the parameter \(J\) with \(t\) (\(\Sigma({\bf q})\propto E({\bf q})\), as described in Ref. [7] for the \(t-J\) model). Specifically, we can observe that
\[\Sigma({\bf q})=U_{c}(\cos q_{x}+\cos q_{y})/8,\] \[\Sigma({\bf q})=\pm U_{c}\sqrt{\cos^{2}q_{x}+\cos^{2}q_{y}}/(4 \sqrt{2}) \tag{29}\]
for Anderson's uniform resonating valence bond (uRVB) and \(\pi\)-flux (\(\pi\)Fl) phases, respectively. Thus in the case of the uRVB state the quasimomentum dependences of the electron and spinon spectra coincide: \(E_{\bf k}\sim J(\cos k_{x}+\cos k_{y})\). At the same time, our method enables one to treat a more general situation. Thus, in the \(\pi\)Fl phase (which includes Dirac points) \(E_{\bf k}\sim\pm J\sqrt{\cos^{2}k_{x}+\cos^{2}k_{y}}\). For the gapped Z\({}_{2}\) phase, which can occur in the presence of next-nearest-neighbor interactions, the mapping of the spectra is violated and the consideration is more difficult.
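The statement above that \(\Sigma(0)=U_{c}/4\) holds for both phases, so that the splitting of the boson branches in (27) closes at \(U=U_{c}\) regardless of the spinon state, can be checked directly (an illustrative fragment of ours; \(U_{c}\) is set to unity):

```python
import numpy as np

Uc = 1.0
sigma_urvb = lambda qx, qy: Uc * (np.cos(qx) + np.cos(qy)) / 8.0            # Eq. (29)
sigma_pifl = lambda qx, qy: Uc * np.sqrt(np.cos(qx)**2 + np.cos(qy)**2) / (4.0 * np.sqrt(2.0))

print(sigma_urvb(0.0, 0.0), sigma_pifl(0.0, 0.0))   # both give 0.25 = Uc/4

# With Sigma(0) = Uc/4, the square root in Eq. (27) reduces to U*zeta at q = 0,
# so the two boson branches are split by U*zeta there; since zeta = (1 - 1/u)^(1/2)
# vanishes as U -> Uc, the splitting closes exactly at the Mott transition.
```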
In the case of large \(U\) we have two well-separated bands
\[\omega_{a{\bf q}}={\rm const}-(-1)^{a}\Sigma({\bf q})/\zeta.\]
The observable electron Green's function is obtained as a convolution of the boson and spinon Green's functions [19; 7; 37]. For \(J\ll|t|\), this spinon smearing does not strongly influence the shape of the density of states. Then we can put \({\rm Im}\langle\langle f_{{\bf k}\sigma}|f_{{\bf k}\sigma}^{\dagger}\rangle \rangle_{E}\sim\delta(E-\lambda_{2})\) to obtain the Hubbard bands with the energies \(\lambda_{2}-\omega_{1,2{\bf q}}\) for vanishing electron (hole) doping, with energies near \(0\) and \(U\), respectively, \(\lambda_{2}\) being the corresponding chemical potential [19]. This energy spectrum consists of upper and lower Hubbard subbands, each with a width of the order of the bare bandwidth. At the transition point, where the interaction strength approaches the critical value \(U_{c}\), the energy gap between these subbands diminishes and eventually closes. A further analysis of collective modes arising from the Hubbard bands with account of doping was performed in Ref. [38].
## 4 Magnetic states of the doped Mott insulator
### Derivation of the Hamiltonian in the spin-wave region
For a magnetically ordered phase with strong long-range or short-range order the approximations of the previous section do not work, since the above approximation for the factors \(g\) is not valid [16]. The simplest case is that of ferromagnetic ordering, which was investigated earlier in terms of Hubbard's operators [39; 40]. According to Nagaoka [41], the ground state in the large-\(U\) limit is a saturated ferromagnet for one excess hole (or doublon); this conclusion can be extended to the case of finite doping, as demonstrated by an analysis of the instabilities of this
state, which can be characterized as a half-metallic ferromagnet with an energy gap for one of the spin projections [39; 40].
The original version of the Kotliar-Ruckenstein representation (7) provides a mean-field description, but turns out to be insufficient, since it does not describe spin-flip processes, which are crucial for describing incoherent states. Therefore we use the rotationally invariant representation (10) and carry out its further transformations.
The square-root factors in (17) can be treated in the spirit of the mean-field approximation. Correspondingly, the factor \(M\) is set to \(\sqrt{2}\) owing to the sum rule (12); this permits one to obtain agreement with the free-electron limit and with the ferromagnetic case.
According to Ref. [28], we have
\[{\bf S}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}\sigma_{1}}{\boldsymbol\sigma}_{\sigma\sigma^{\prime}}\,p^{\dagger}_{\sigma\sigma_{1}}p_{\sigma^{\prime}\sigma_{1}}=\frac{1}{2}(p^{\dagger}_{0}{\bf\overline{p}}+{\bf\overline{p}}^{\dagger}p_{0}-i[{\bf\overline{p}}^{\dagger}\times{\bf\overline{p}}]) \tag{30}\]
with \({\bf\overline{p}}=(p^{x},-p^{y},p^{z})\). Then we derive
\[S^{z} = \frac{1}{2}(p^{\dagger}_{0}p_{z}+p^{\dagger}_{z}p_{0}+i(p^{ \dagger}_{x}p_{y}-p^{\dagger}_{y}p_{x})) \tag{31}\] \[= \frac{1}{2}(p^{\dagger}_{0}p_{z}+p^{\dagger}_{z}p_{0}+{p^{+}}^{ \dagger}p^{+}-p^{-\dagger}p^{-})\] \[= \frac{1}{2}(1-(p^{\dagger}_{0}-p^{\dagger}_{z})(p_{0}-p_{z}))-p^{-\dagger}p^{-}\] \[S^{+} = \frac{1}{\sqrt{2}}((p^{\dagger}_{0}+p^{\dagger}_{z})p^{-}+p^{+ \dagger}(p_{0}-p_{z})) \tag{32}\]
where \(p^{\pm}=(p_{x}\pm ip_{y})/\sqrt{2}\) and we have taken into account (12). One can see that commutation relations for spin operators are exactly satisfied, unlike the linearized Holstein-Primakoff representation.
For a Heisenberg ferromagnet (\(p_{0}\simeq p^{z}\simeq 2^{-1/2}\)) we obtain \(S^{+}_{i}\simeq p^{-}_{i}\) to lowest-order approximation, so that the Heisenberg Hamiltonian takes the usual spin-wave form
\[{\cal H}_{d}=\sum_{\bf q}\omega_{\bf q}p^{-\dagger}_{\bf q}p^{-}_{\bf q}+{\rm const },\;\omega_{\bf q}=J_{\bf q}-J_{0} \tag{33}\]
It is crucial to highlight that, in order to achieve this outcome, it is essential to retain the vector product in equation (30) to prevent mixing of the bosons \(p\) and \(p^{\dagger}\). This retention differs from the approach employed in Ref. [28] for the paramagnetic phase. Note that in the magnetic ordering case \(p^{+}_{i}\) is not related to spin operators, see (31), (32).
Eq.(14) can be simplified in the case of half-metallic ferromagnetism near half-filling (small doping, band filling \(n\lesssim 1\)) where, in the mean-field approach, \(p_{0}=p_{z}=p\simeq 1/\sqrt{2}\), \(e\simeq\langle e\rangle=(1-n)^{1/2}\). Taking into account the relation
\[2\overline{p}^{\dagger}_{i}\overline{p}_{i}=\frac{1}{2}(p^{2}_{i0}\sigma_{0}+({\bf S }_{i}{\bf\sigma})) \tag{34}\]
we obtain
\[L = \left(\begin{array}{cc}1-p^{2}_{0}-S^{z}&-S^{+}\\ -S^{-}&1-p^{2}_{0}+S^{z}\end{array}\right)^{-1/2}, \tag{35}\] \[R = \left(\begin{array}{cc}1-p^{2}_{0}+S^{z}-e^{\dagger}_{i}e_{i}&-S ^{-}\\ -S^{+}&1-p^{2}_{0}-S^{z}-e^{\dagger}_{i}e_{i}\end{array}\right)^{-1/2}\] (36) \[p = \left(\begin{array}{cc}p_{0}+p_{z}&p^{-}\\ p^{+}&p_{0}-p_{z}\end{array}\right) \tag{37}\]
Using the sum rule (12) and retaining only diagonal terms we obtain \(L_{++}\sim 1/|e|\), \(R_{--}\sim 1/|p^{\pm}|\). Neglecting the terms proportional to holon operators, the factor \(e^{\dagger}\) in the numerator of (14) is canceled, and we derive in the large-\(U\) limit for the projected operator of hole creation
\[\tilde{c}^{\dagger}_{i\sigma}=\sqrt{2}\sum_{\sigma^{\prime}}\tilde{p}_{i \sigma\sigma^{\prime}}f_{i\sigma^{\prime}}=\frac{1}{\sqrt{2}}\sum_{\sigma^{\prime}}f_{i \sigma^{\prime}}[\delta_{\sigma\sigma^{\prime}}p_{i0}+({\bf p}_{i}{\boldsymbol\sigma}_{ \sigma^{\prime}\sigma})] \tag{38}\]
or
\[\tilde{c}^{\dagger}_{i\uparrow} = \frac{1}{\sqrt{2}}f_{i\uparrow}(p_{0}+p_{iz})+f_{i\downarrow}p^{+}_ {i}\] \[\tilde{c}^{\dagger}_{i\downarrow} = \frac{1}{\sqrt{2}}f_{i\downarrow}(p_{0}-p_{iz})+f_{i\uparrow}p^{-}_ {i}. \tag{39}\]
In particular, this representation satisfies exactly commutation relations for Hubbard's operators. However, the multiplication rule, which is crucial for calculations,
\[X_{i}(0,-)=X_{i}(0,+)X_{i}(+,-) \tag{40}\]
is satisfied only approximately (\(\frac{1}{\sqrt{2}}(p_{i0}+p_{iz})\simeq 1,X_{i}(+,-)\simeq p^{-}_{i}\)).
In the derivation of (39), which was first performed in Ref. [16], spin-wave corrections in the matrices \(L\) and \(R\) were neglected, and the operator \(S^{z}\) was replaced by 1/2. We can make the next step by noting that, according to (31), (32), (8),
\[\sqrt{1/2\pm S^{z}}\simeq\sqrt{1/2\pm 2p_{0}p_{z}/2}\simeq(p_{0}\pm p_{z})/\sqrt{2} \tag{41}\]
(a more rigorous derivation may be performed within the path-integral approach). Then we can write the representation in terms of spin operators,
\[\tilde{c}_{i\sigma}=\sum_{\sigma^{\prime}}f^{\dagger}_{i\sigma^{\prime}}\{ \frac{1}{2}\delta_{\sigma\sigma^{\prime}}+({\bf S}_{i}\sigma_{\sigma^{\prime} \sigma})\} \tag{42}\]
or
\[\tilde{c}_{i\uparrow} = f^{\dagger}_{i\uparrow}(\frac{1}{2}+S^{z}_{i})+f^{\dagger}_{i \downarrow}S^{+}_{i}\] \[\tilde{c}_{i\downarrow} = f^{\dagger}_{i\downarrow}(\frac{1}{2}-S^{z}_{i})+f^{\dagger}_{i \uparrow}S^{-}_{i}. \tag{43}\]
Although it was justified above for small doping and magnetic states only, it seems to be reasonable in a more general situation, as will be discussed below.
### Electron states and spin waves in the strongly correlated Hubbard model
Now we can consider the electron and spin Green's functions for a saturated ferromagnet with the use of the Hamiltonian (3). In such a situation of small hole doping, the spin-up spinon states propagate freely and their band is almost half-filled, so that
\[\tilde{n}_{\bf k}=\langle f_{{\bf k}\uparrow}f^{\dagger}_{{\bf k}\uparrow}\rangle=n(t_{\bf k})\]
with \(n(E)\) being the Fermi function.
The representation (43) retains commutation relations for Hubbard's X-operators and even the multiplication rule (40), so that the calculations of electron and spin-wave spectra can
be performed step-by-step in analogy with Refs. [39; 40], using an expansion in the occupation numbers of holes and magnons.
Although the correlated electrons (holes), described by the operators (39), (43), are composite particles, the spin-up states propagate freely on the background of the ferromagnetic ordering, the temperature correction being proportional to \(T^{5/2}\) owing to rotational invariance (however, the residue of the electron Green's function has a stronger \(T^{3/2}\) dependence). Physically, this free motion is due to the condensation of \(p_{z}\)-bosons.
On the other hand, the situation is quite non-trivial for spin-down states. Such a state is a complex of spinon \(f_{\uparrow}^{\dagger}\) and boson \((p^{-})^{\dagger}\) so that in the simplest approximation we can write down the convolution of the spinon and magnon Green's functions to obtain
\[G_{\mathbf{k}\downarrow}^{0}(E)=\sum_{\mathbf{q}}\frac{\tilde{n}_{\mathbf{k} +\mathbf{q}}+N(\omega_{\mathbf{q}})}{E-t_{\mathbf{k}+\mathbf{q}}+\omega_{ \mathbf{q}}} \tag{44}\]
with \(N(\omega)\) the Bose function. It should be noted that this result can also be reproduced starting from the boson representation (39), Ref. [16], and even in the simpler Schwinger boson representation (5), see Ref. [17]. The instability of the saturated (half-metallic) state is described as a condensation of these bosons.
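The incoherent character of (44) can be visualized by a direct lattice sum. The sketch below is purely illustrative: the square-lattice spinon dispersion, the quadratic ferromagnetic magnon dispersion, and all parameter values are assumptions of ours, not taken from the text.

```python
import numpy as np

t, J, delta, n, eta = 1.0, 0.2, 0.05, 48, 0.05        # assumed illustrative values
k = 2.0 * np.pi * np.arange(n) / n
KX, KY = np.meshgrid(k, k, indexing="ij")
t_k = -2.0 * t * (np.cos(KX) + np.cos(KY))            # assumed spinon band
w_q = 2.0 * J * (2.0 - np.cos(KX) - np.cos(KY))       # assumed magnon band
n_t = (t_k <= np.quantile(t_k, delta)).astype(float)  # \tilde{n}_k at T = 0 (N(w) = 0)

def G0_down(ikx, iky, E):
    """Eq. (44) at T = 0 with a small broadening eta."""
    t_kq = np.roll(t_k, (-ikx, -iky), axis=(0, 1))    # t_{k+q} on the q-grid
    n_kq = np.roll(n_t, (-ikx, -iky), axis=(0, 1))
    return np.mean(n_kq / (E + 1j * eta - t_kq + w_q))

E_grid = np.linspace(-6.0, 2.0, 161)
A = np.array([-G0_down(0, 0, E).imag / np.pi for E in E_grid])
print(f"peak A(E) = {A.max():.3f}")   # a broad continuum, no sharp quasiparticle pole
```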
To improve the approximation and describe the instability we write down the equation of motion for the Green's function
\[G_{\mathbf{k}\downarrow}(E)=\langle\langle\tilde{c}_{\mathbf{k} \downarrow}|\tilde{c}_{\mathbf{k}\downarrow}^{\dagger}\rangle\rangle_{E}=\sum _{\mathbf{q}}\Gamma_{\mathbf{k}\mathbf{q}}(E), \tag{45}\] \[\Gamma_{\mathbf{k}\mathbf{q}}(E)=\sum_{\mathbf{q}^{\prime}}\langle \langle S_{\mathbf{q}}^{-}f_{\mathbf{q}-\mathbf{k}\uparrow}^{\dagger}|f_{ \mathbf{q}^{\prime}-\mathbf{k}\uparrow}S_{-\mathbf{q}^{\prime}}^{+}\rangle \rangle_{E} \tag{46}\]
(note that the terms with \(f_{\downarrow}\) do not work in low orders). Commuting the operator \(S_{\mathbf{q}}^{-}\) with the Hamiltonian (3) and performing decoupling we obtain for \(T=0\) the equation for the Green's function in the right-hand side of (46)
\[(E - t_{\mathbf{k}-\mathbf{q}}+\omega_{\mathbf{q}})\Gamma_{\mathbf{ k}\mathbf{q}}(E) \tag{47}\] \[= \tilde{n}_{\mathbf{k}-\mathbf{q}}\Big[1-\sum_{\mathbf{p}}(t_{\mathbf{k}-\mathbf{p}}- t_{\mathbf{k}})\Gamma_{\mathbf{k}\mathbf{p}}(E)\Big].\]
The solution of this integral equation yields the result
\[G_{\mathbf{k}\downarrow}(E)=\left\{E-t_{\mathbf{k}}+\left[G_{\mathbf{k} \downarrow}^{0}(E)\right]^{-1}\right\}^{-1} \tag{48}\]
The expressions (44) and (48) were previously derived using the many-electron approach of Hubbard's operators, as described in Refs. [39; 40]. It was noted that these results bear resemblance to Anderson's spinons, which also exhibit zero residue in their Green's function. The Green's function (44) represents a purely non-quasiparticle state, indicating its unconventional nature. Due to the weak dependence on momentum \(\mathbf{k}\), these non-quasiparticle (incoherent) states have low mobility and cannot carry electrical current.
Regarding the Green's function (48), when the doping level \(1-n\) is small, it does not exhibit any poles below the Fermi level (for holes), confirming the previous conclusions. However, as the doping increases, a spin-polaron pole emerges near the Fermi level \(E_{F}\), resulting in the destruction of half-metallic ferromagnetism.
The description of the transition to the saturated state, where the spin-down quasiparticle residue diminishes, resembles that of the Mott transition in the paramagnetic Hubbard model [5]. In this sense, the situation is somewhat comparable to a partial Mott transition occurring in the spin-down subband. For a more detailed discussion on this matter, cf. the review [4] where the orbital-selective Mott transition is explored.
Now we calculate the correction to the magnon frequency. The equation of motion for the spin Green's function reads
\[(\omega-\omega_{\mathbf{q}})\langle\langle S_{\mathbf{q}}^{+}|S_{-\mathbf{q} }^{-}\rangle\rangle_{\omega}=2\langle S^{z}\rangle+\sum_{\mathbf{k}\mathbf{p} }(t_{\mathbf{k}-\mathbf{q}}-t_{\mathbf{k}})\Lambda_{\mathbf{k}\mathbf{q} \mathbf{p}}(\omega), \tag{49}\]
\[\Lambda_{\mathbf{k}\mathbf{q}\mathbf{p}}(\omega)=\langle\langle S_{\mathbf{k}+ \mathbf{q}-\mathbf{p}}^{+}f_{\mathbf{p}\uparrow}^{\dagger}f_{\mathbf{k} \uparrow}|S_{-\mathbf{q}}^{-}\rangle\rangle_{\omega} \tag{50}\]
In the same manner, we derive the integral equation
\[(\omega-t_{\mathbf{k}}+t_{\mathbf{p}}-\omega_{\mathbf{k}+\mathbf{q} -\mathbf{p}})\Lambda_{\mathbf{k}\mathbf{q}\mathbf{p}}(\omega) \tag{51}\] \[=\delta_{\mathbf{k}\mathbf{p}}\tilde{n}_{\mathbf{k}}+\sum_{ \mathbf{r}}(t_{\mathbf{k}+\mathbf{r}+\mathbf{q}-\mathbf{p}}-t_{\mathbf{r}}) \Lambda_{\mathbf{r}\mathbf{q}\mathbf{p}}(\omega).\]
Neglecting the integral term (which is possible to leading order in the inverse nearest-neighbor number) we obtain from the expansion of the Dyson equation the renormalized magnon frequency
\[\Omega_{\mathbf{q}}=\omega_{\mathbf{q}}+\sum_{\mathbf{k}}(t_{\mathbf{k}- \mathbf{q}}-t_{\mathbf{k}})\tilde{n}_{\mathbf{k}}. \tag{52}\]
The exact solution of Eq. (51) provides accurate results to leading order in doping, in agreement with the consideration by Nagaoka [41] (see also [39]).
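The doping correction in (52) is also easy to evaluate on a lattice. The sketch below is illustrative only: the square-lattice dispersion and the assumption that \(\tilde{n}_{\bf k}\) describes a small pocket of density \(\delta\) at the band bottom are ours. The correction grows as \(q^{2}\) and scales with \(\delta\), i.e., the hole motion contributes a positive magnon stiffness, in line with the Nagaoka argument:

```python
import numpy as np

t, delta, n = 1.0, 0.05, 64                     # assumed illustrative values
k = 2.0 * np.pi * np.arange(n) / n
KX, KY = np.meshgrid(k, k, indexing="ij")
t_k = -2.0 * t * (np.cos(KX) + np.cos(KY))      # assumed square-lattice band
n_t = (t_k <= np.quantile(t_k, delta)).astype(float)   # pocket of density delta

def magnon_shift(iqx, iqy):
    """Correction sum_k (t_{k-q} - t_k) n_k of Eq. (52), per lattice site."""
    t_kq = np.roll(t_k, (iqx, iqy), axis=(0, 1))       # t_{k-q} on the grid
    return np.sum((t_kq - t_k) * n_t) / n ** 2

for iq in (1, 2, 4):
    print(f"q = {k[iq]:.3f}: shift = {magnon_shift(iq, 0):+.6f}")
# Doubling q roughly quadruples the positive shift (~ q^2), and the shift
# is proportional to delta, renormalizing the magnon stiffness.
```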
### Antiferromagnetic case: small and large Fermi surfaces
With increasing doping, the Nagaoka ferromagnetic state becomes unstable. The instabilities can also be treated within the Kotliar-Ruckenstein representation, as was performed numerically in Refs. [42; 43]. This representation, adopted above for the ferromagnetic phase (43), is expected to hold also in the antiferromagnetic state when written down in the local (rotating) coordinate system. Moreover, it will work also in systems with strong spin fluctuations and short-range order (e.g., in the singlet RVB state), but not in the usual structureless paramagnetic state.
The representation (43) is formally very similar to the representation of the Fermi dopons \(d_{i\sigma}^{\dagger}\)[44; 13] introduced to describe formation of small and large Fermi surfaces in doped two-dimensional cuprates. This has the form
\[\tilde{c}_{i-\sigma}^{\dagger}=-\frac{\sigma}{\sqrt{2}}\sum_{\sigma^{\prime}}d_ {i\sigma^{\prime}}^{\dagger}(1-n_{i-\sigma^{\prime}})[S\delta_{\sigma\sigma^{ \prime}}-(\mathbf{S}_{i}\sigma_{\sigma^{\prime}\sigma})]. \tag{53}\]
where \(\sigma=\pm\), \(n_{i\sigma}=d_{i\sigma}^{\dagger}d_{i\sigma}\), and both Fermi spinon (Abrikosov) and Schwinger boson representations can be used for localized \(S=1/2\) spins. The latter representation has the advantage that hybridization of spinons with dopons can describe formation of the large Fermi surface including the localized states.
On the other hand, the Bose version [13] can successfully describe the small Fermi surface. The presence of strong antiferromagnetic correlations strongly suppresses the hopping of dopons between nearest neighbors owing to the local antiferromagnetic order [44; 13]. Then small hole pockets of the Fermi surface, characteristic for the cuprates, are formed, which tend to the \((\pi/2,\pi/2)\) point of the Brillouin zone with increasing short-range order [13].
Thus we can apply our representation (43) to the same problem. Note that the description in terms of the bosons \(p\) (representation (39)) turns out to be oversimplified and incomplete, unlike the approach based on (43), which provides a description in terms of true spin degrees of freedom.
At first sight, the dopon representation may seem quite different from standard slave-boson representations. However, the connection can be established by using the constraint \(\sum_{\sigma}f_{i\sigma}^{\dagger}f_{i\sigma}\simeq 1\) (which holds at small doping) and the Abrikosov representation for spin operators
\[S_{i}^{z}=\frac{1}{2}(f_{i\uparrow}^{\dagger}f_{i\uparrow}-f_{i\downarrow}^{ \dagger}f_{i\downarrow}),\ S_{i}^{\sigma}=f_{i\sigma}^{\dagger}f_{i-\sigma}. \tag{54}\]
We rewrite (53) as
\[\tilde{c}_{i\sigma}=\frac{1}{2}(d_{i\downarrow}^{\dagger}f_{i\uparrow}^{ \dagger}-d_{i\uparrow}^{\dagger}f_{i\downarrow}^{\dagger})f_{i\sigma}. \tag{55}\]
Then, we can introduce Anderson's Bose holon operator as a singlet combination of Fermi spinon and new dopon operators [44; 16],
\[e_{i}=f_{i\uparrow}d_{i\downarrow}-f_{i\downarrow}d_{i\uparrow}. \tag{56}\]
Thus, we return to Anderson's representation (4), except for the difference in the factor of \(1/\sqrt{2}\). This problem with the factor does not arise in our version of the Kotliar-Ruckenstein representation (43) due to the factor of \(M\) in (17). Note that the dopon representation can also be derived in the many-electron approach of Hubbard's operators using the analogy with the equivalent narrow-band \(s-d\) exchange model [20; 45].
## 5 Discussion
We have demonstrated that the Kotliar-Ruckenstein representation [8] provides a unified description of paramagnetic and magnetic phases. In the paramagnetic phase we present a new interpretation in terms of spin-charge separation and conservation of the Fermi surface in the insulator state. We have also performed the derivation of the Hamiltonian in the magnetically ordered phase in the spin-wave region, which enables one to obtain agreement with well-established results for the ferromagnetic case.
The constructed approach is somewhat similar to the Holstein-Primakoff representation for Heisenberg systems. The Kotliar-Ruckenstein representation includes both Fermi and Bose (or spin) operators and has a rather complicated structure with radicals. Therefore, it in a sense solves the problem of describing the transmutation of statistics of the auxiliary particles when passing from the spin-liquid to the magnetic phase, which was discussed in Sec. 2 and formulated earlier as an important issue (see, e.g., Ref. [12]).
Under deconfinement conditions, the characteristics of the energy spectrum are significantly affected by the presence of spinon excitations, and this should result in their pronounced dependence on temperature on the scale of the Heisenberg interaction, which can be small in comparison with bare electron energies. The corresponding expressions for the Green's functions can be applied to write down the optical conductivity and describe the optical transitions between the Hubbard's subbands, as demonstrated in Ref.[19].
Anderson [6] applied the concept of spinons to explain the linear specific heat in copper-oxide systems by the contribution of gapless spinons forming the Fermi surface in the spin-liquid-like uniform resonating valence bond (RVB) state. Although for the cuprates this point remains highly debatable, there exists experimental evidence for contributions of spinons (gapless magnetic excitations) to the specific heat, thermal conductivity, etc., in some compounds with frustrated lattices (see, e.g., Refs. [31; 46; 47]).
At the same time, in magnetically ordered phase we have usual spin-wave excitations. These phases are also successfully described by the Kotliar-Ruckenstein representation with account of incoherent states. Exotic phases including both antiferromagnetic order and fractionalized excitations (so-called AFM\({}^{*}\) or SDW\({}^{*}\) phase [4; 48]) can be considered too. In systems with magnetic or superconducting ground state, there is still a possibility for a spin-liquid-like state to emerge at intermediate temperatures, particularly in systems with frustration [48].
As we have demonstrated, topological transitions of a different nature with a reconstruction of the Fermi surface occur in antiferromagnetic and ferromagnetic [17] phases. It is evident now that the Mott transition leading to a non-magnetic ground state is closely linked to topological characteristics. This transition involves a deconfined spin-liquid state that exhibits fractionalization and extensive quantum entanglement [10]. Understanding the exotic correlated paramagnetic phase, which can possess intricate structures, is a significant challenge in this context.
The author is grateful to Yu. N. Skryabin and M. I. Katsnelson for useful discussions. The research funding from the Ministry of Science and Higher Education of the Russian Federation (the state assignment, theme "Quantum" No. 122021000038-7) is acknowledged. The treatment of half-metallic ferromagnetic state is supported by the grant of the Russian Science Foundation 23-42-00069.
|
2309.14117 | Small Objects Matters in Weakly-supervised Semantic Segmentation | Weakly-supervised semantic segmentation (WSSS) performs pixel-wise
classification given only image-level labels for training. Despite the
difficulty of this task, the research community has achieved promising results
over the last five years. Still, current WSSS literature misses the detailed
sense of how well the methods perform on different sizes of objects. Thus we
propose a novel evaluation metric to provide a comprehensive assessment across
different object sizes and collect a size-balanced evaluation set to complement
PASCAL VOC. With these two gadgets, we reveal that the existing WSSS methods
struggle in capturing small objects. Furthermore, we propose a size-balanced
cross-entropy loss coupled with a proper training strategy. It generally
improves existing WSSS methods as validated upon ten baselines on three
different datasets. | Cheolhyun Mun, Sanghuk Lee, Youngjung Uh, Junsuk Choe, Hyeran Byun | 2023-09-25T13:15:57Z | http://arxiv.org/abs/2309.14117v1 | # Small Objects Matters in Weakly-supervised Semantic Segmentation
###### Abstract
Weakly-supervised semantic segmentation (WSSS) performs pixel-wise classification given only image-level labels for training. Despite the difficulty of this task, the research community has achieved promising results over the last five years. Still, current WSSS literature misses the detailed sense of how well the methods perform on different sizes of objects. Thus we propose a novel evaluation metric to provide a comprehensive assessment across different object sizes and collect a size-balanced evaluation set to complement PASCAL VOC. With these two gadgets, we reveal that the existing WSSS methods struggle in capturing small objects. Furthermore, we propose a size-balanced cross-entropy loss coupled with a proper training strategy. It generally improves existing WSSS methods as validated upon ten baselines on three different datasets.
## 1 Introduction
Recently, weakly-supervised learning (WSL) has been attracting attention because of its low-cost annotation. Among many tasks, weakly-supervised semantic segmentation (WSSS) methods learn to predict semantic segmentation masks given only weak labels such as image-level class labels for training.
To solve this problem, existing WSSS techniques generate pseudo segmentation masks from a classification network and then train a fully-supervised semantic segmentation model such as DeepLabV2 [4]. To improve WSSS performances, most existing methods have focused on producing more accurate pseudo labels. With this strategy, WSSS performances have been greatly improved in the last five years [1, 26, 29, 30, 35, 42, 43, 50, 38].
However, we lack a detailed sense of performance: do methods with high mIoU always better capture all the details? Interestingly, we observe that some methods with lower mIoU better capture small objects than others. Although it is undoubtedly important that the segmentation model also correctly captures small objects, this limitation has not been well studied yet in the WSSS literature. How does each method behave in different types of environments? To answer this question, we address the limitations of the conventional metric, the dataset, and the training objective, and propose complements, whereby we anticipate WSSS techniques will become more complete and applicable to different needs.
**Conventional metric (mIoU) and its pitfall.** mIoU is the mean of per-class IoUs, where IoU is the intersection-over-union of the segmented objects. Although IoU is usually depicted with one predicted segment and one ground-truth segment, it in fact pre-accumulates _all_ predicted pixels and _all_ ground-truth pixels in the entire dataset (Fig. 2 (a)). mIoU has widely been used to measure the performance of different models in semantic segmentation.
Despite its usefulness in measuring the overall accuracy of segmentation predictions, mIoU does not account for the comprehensiveness of the predictions. As illustrated in Fig. 1 (a), _Prediction 1_ and _Prediction 2_ have the same IoU score since they miss the same number of pixels. However, in _Prediction 1_, the red cross marks indicate a complete failure in object segmentation, while the errors in _Prediction 2_ can be considered minor.
**Conventional dataset.** The PASCAL VOC 2012 [13] is the representative benchmark for WSSS. The problem, however, is that the evaluation set of VOC has an imbalanced distribution in terms of object size. Fig. 1 (b) shows the overall
distribution for 20 classes of the VOC validation set per each size†. Many classes fall short in the number of small objects. Even with an ideal metric, we will never know how methods perform on small objects with few samples, such as small birds. Besides, we note that MS COCO [33], another popular benchmark with 80 classes for WSSS, also suffers from an imbalanced distribution. More information on the dataset distribution is in the supplementary material.
Footnote †: Following MS COCO, we regard an instance as small if total number of pixels\(<32\times 32\), medium if the total number of pixels\(<96\times 96\), and large for the rest.
**Training objective.** Pixel-wise cross-entropy considers all individual pixels equally important by averaging. Thus the networks will consider small objects less important and lean toward large objects with many pixels. While fully-supervised semantic segmentation methods have some remedies [12, 34], the WSSS literature has paid less attention to this problem. Existing works mostly focus on producing better pseudo masks to train the main segmentation network with the same pixel-wise cross-entropy.
**Our solutions.** In this paper, we suggest a way to address the above three limitations. First, we introduce a new evaluation metric for semantic segmentation, instance-aware mean intersection-over-union (IA-mIoU). It is important to accurately capture objects of all sizes to improve IA-mIoU. Next, we propose an evaluation dataset balanced in terms of object size, PASCAL-B, which contains almost the same number of instances for each size, namely, large, medium, and small. With our new benchmark and evaluation metric, we can correctly measure the performances of existing WSSS models in terms of object size. Specifically, we re-evaluate ten state-of-the-art methods [1, 26, 29, 30, 35, 38, 42, 43, 45, 50] and observe interesting results; all evaluated methods struggle in capturing small objects. Lastly, we propose a new loss function paired with a training strategy for segmentation models to balance the objective. Thorough experiments on three datasets demonstrate that our method achieves a comprehensive performance boost on ten existing WSSS methods. We believe that it will serve as a strong baseline toward more comprehensive performance. The code and the dataset will be publicly available for the research community.
## 2 Instance-aware mIoU
In this section, we explain how our metric addresses the limitations of mIoU. Then, we compare mIoU and our instance-aware mIoU (IA-mIoU) using several corner cases.
### Definition of IA-mIoU.
In Fig. 2 (a), we visualize the way of calculating IoU\({}_{c}\) of a class \(c\) for mIoU. First, IoU\({}_{c}\) unions all pixels of the ground-truth (GT\({}_{c}\)) and the prediction (Pr\({}_{c}\)) separately, and then calculates the intersection of the two. During the process, it does not consider which instance each pixel belongs to. As a result, mIoU inherently does not provide a detailed sense of performance but provides coarse judgment.
To reflect the different importance of pixels, we suggest measuring the performance of each instance individually.
Figure 1: Problems of conventional metric and dataset. (a) Prediction 1 and 2 show the prediction for different cases which result in the same IoU scores. (b) Some classes of PASCAL VOC validation set suffer from a lack of small-sized objects. We sort the number of instances in descending order for each class per each size.
Figure 2: Visual comparison of the computing process of IoU\({}_{c}\) for mIoU and IoU\({}_{c}\) for IA-mIoU regarding a class \(c\)
We first split predictions and ground-truths of class \(c\) into different instances, _i.e._, \(\text{Pr}_{c,1}\), \(\text{Pr}_{c,2}\), \(\text{GT}_{c,1}\) and \(\text{GT}_{c,2}\), as shown in Fig. 2 (b). Then, we compute IoU scores IoU\({}_{c,i}\) for each instance \(i\) and average them to obtain IoU\({}_{c}\), the instance-aware IoU score of the class \(c\):
\[\text{IoU}_{c,i}=\frac{\text{Pr}_{c,i}\cap\text{GT}_{c,i}}{\text{Pr}_{c,i} \cup\text{GT}_{c,i}},\ \ \ \text{IoU}_{c}=\frac{\sum_{i=1}^{T}\text{IoU}_{c,i}}{T}, \tag{1}\]
where \(T\) is the total number of instances of the class \(c\). Finally, we average the per-class scores IoU\({}_{c}\) to compute the instance-aware mIoU (IA-mIoU):
\[\text{IA-mIoU}=\frac{\sum_{c=1}^{N}\text{IoU}_{c}}{N}. \tag{2}\]
The following subsection describes how to split the predictions and ground-truths, and how to assign prediction instances to ground-truth instances.
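Before turning to the splitting procedure, a minimal sketch of the aggregation in Eqs. (1)-(2) may be useful. The fragment below assumes that predictions have already been split and assigned to ground-truth instances (the subject of the next subsection); the function and variable names are ours, not the paper's.

```python
import numpy as np

def instance_iou(pred_inst, gt_inst):
    """IoU_{c,i} of Eq. (1) for one assigned prediction/ground-truth pair."""
    inter = np.logical_and(pred_inst, gt_inst).sum()
    union = np.logical_or(pred_inst, gt_inst).sum()
    return inter / union if union > 0 else 0.0

def ia_miou(per_class_pairs):
    """Eqs. (1)-(2): average IoU over instances within a class, then over classes.

    per_class_pairs maps class -> list of (pred_mask, gt_mask) boolean arrays,
    one pair per ground-truth instance.
    """
    class_scores = [np.mean([instance_iou(p, g) for p, g in pairs])
                    for pairs in per_class_pairs.values()]
    return float(np.mean(class_scores))
```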
### Splitting and assigning instances
Although we introduced the concept of an instance, instance labels do not exist in the semantic segmentation task. Hence, we assume that the ground-truth segmentation masks can be either split into connected components (blobs) or split by additional instance annotations when available for evaluation. Please note that we introduce the instance labels only for more precise evaluation, not for training.
To fully utilize the instance masks for evaluation, we also have to split the predicted segments into blobs and assign them to overlapping ground-truth instances. There are three types of predictions for the model: 1) one prediction covers one object, 2) one prediction covers multiple objects simultaneously, and 3) the prediction fails to cover any target instances. We consider only the first two cases because the last case has no overlapping region between prediction and ground-truth†.
Footnote †: False positives in a class \(c\) do not contribute to IoU\({}_{c}\) but they decrease IoU\({}_{\text{background}}\).
The procedure is illustrated in Fig. 3. Both cases start by drawing contour lines from the prediction for class \(c\) (\(\text{Pr}_{c}\)) to get connected components (\(\text{Pr}_{c,i}\)). The next step, however, is different for _case 1_ and _case 2_, since the former is a one-to-one correspondence between \(\text{Pr}_{c,i}\) and \(\text{GT}_{c,i}\) and the latter is one-to-many.
For _case 1_, each connected component is assigned to the overlapping target instance in the second step (\(\text{Pr}_{c,1}\rightarrow\text{GT}_{c,1}\) and \(\text{Pr}_{c,2}\rightarrow\text{GT}_{c,2}\)). Then, we can calculate the IoU per instance. On the other hand, for _case 2_, we have to split the connected component into multiple parts since it overlaps with multiple target instances. In other words, we have to distribute the non-overlapping area to the individual instances. To do this, we apply a weighted clustering algorithm: if a cluster (\(i.e.\), target instance) has more overlapping pixels than the others, it takes a larger share of the unassigned regions. It has the following advantages: 1) it does not favor or damage particular instances, 2) it is invariant to the locations of the chosen pixels, and 3) it is less biased by the object size.
This algorithm is implemented by adding two steps. We first assign the intersecting regions to the corresponding target instances and compute the ratio of the overlapping areas (\(i.e.\), \(\text{GT}_{c,1}\cap\text{Pr}_{c,1}:\text{GT}_{c,2}\cap\text{Pr}_{c,1}=16:8\)) in the second step. In the final step, we distribute the remaining unassigned area to each target instance according to the ratio; a sketch is given below. The distribution of pixels is not unique, but we focus on reasonable distributions based on instance size. For multiple predictions and ground-truths, we perform the same assignment process for each prediction and its corresponding ground-truth instances. This approach enables an instance-aware metric in semantic segmentation tasks, even when the model does not provide instance-level predictions. In the next subsection, we design corner cases to compare the tendencies of mIoU and IA-mIoU clearly.
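The following count-based sketch of the two-step assignment is our illustration, not the authors' code. Since only pixel counts enter the IoU, the non-unique pixel-level distribution can be replaced by distributing the count of the unassigned region in proportion to the overlaps:

```python
import numpy as np

def assign_blob(pred_blob, gt_instances):
    """Per-instance IoU_{c,i} for one predicted connected component (Fig. 3).

    pred_blob: boolean HxW mask of a single predicted blob of class c.
    gt_instances: list of boolean HxW masks, one per ground-truth instance.
    """
    inters = np.array([np.logical_and(pred_blob, g).sum() for g in gt_instances],
                      dtype=float)
    if inters.sum() == 0:
        return [0.0] * len(gt_instances)          # case 3: no overlap at all
    unassigned = pred_blob.sum() - inters.sum()   # prediction outside every GT
    shares = unassigned * inters / inters.sum()   # ratio-based distribution
    gts = np.array([g.sum() for g in gt_instances], dtype=float)
    return list(inters / (gts + shares))          # IoU_i = I_i / (|GT_i| + share_i)

# Blobs of a class prediction can be extracted beforehand, e.g. with
# scipy.ndimage.label(pred_class_mask), and processed one by one.
```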
### Sensitivity analysis on corner cases.
We design four corner cases in Fig. 4. We first set up small and large instances in an image, and then gradually expand the predictions to cover the assigned ground-truth instances. The outcomes show the limitation of mIoU more clearly: the prediction on a large object dominates the overall performance.
Figure 3: Two cases for assigning predictions to the corresponding ground-truth instances. Pixels in color are the prediction and boxes with red lines are ground-truth instances. (a) When there is a one-to-one correspondence between prediction and ground-truth instance, each prediction is assigned to the corresponding ground-truth instance. (b) When there is a one-to-many correspondence between prediction and ground-truth instances, non-overlapping regions in step 2 (orange pixels with check pattern) distribute to each instance based on the ratio of blue and yellow pixels with dot pattern.
The mIoU scores of _case A_ and \(C\) increase rapidly with the improvement of the prediction on the large object. On the contrary, the performances for _case B_ and \(D\) barely change even though the predictions on the small objects improve. Unlike mIoU, our metric IA-mIoU steadily increases as the predictions fill the target instances regardless of the instance size. Furthermore, since we split the instances, we acquire a more detailed sense of the performance according to their sizes (\(i.e.,\) measuring only a specific size of objects), as illustrated below.
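This behavior is easy to reproduce with a toy example (our illustration; the object sizes are arbitrary). Fully covering the small object changes the pooled class-level IoU by only a few points, while the per-instance average moves from 0.5 to 1:

```python
import numpy as np

gt_large = np.zeros((20, 20), bool); gt_large[2:14, 2:14] = True     # 144 px
gt_small = np.zeros((20, 20), bool); gt_small[16:18, 16:18] = True   # 4 px

def class_iou(preds, gts):                 # mIoU-style: pool all pixels first
    P, G = np.logical_or.reduce(preds), np.logical_or.reduce(gts)
    return np.logical_and(P, G).sum() / np.logical_or(P, G).sum()

def inst_iou(preds, gts):                  # IA-style: per-instance average
    return np.mean([np.logical_and(p, g).sum() / np.logical_or(p, g).sum()
                    for p, g in zip(preds, gts)])

for frac in (0.0, 0.5, 1.0):               # case B: grow prediction on small object
    pred_small = np.zeros_like(gt_small)
    pred_small[16:18, 16:16 + int(2 * frac)] = True
    preds, gts = [gt_large, pred_small], [gt_large, gt_small]
    print(f"small {frac:.0%} covered: class IoU={class_iou(preds, gts):.3f}, "
          f"instance-aware={inst_iou(preds, gts):.3f}")
```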
In addition, Fig. 5 plots the behavior of mIoU and IA-mIoU in _dog_ class of the PASCAL VOC 2012 dataset. Starting from the perfect score, \(i.e.,\) the prediction equals the ground-truth, we remove one instance at a time from the prediction starting with the smallest and progressing to the largest. IA-mIoU drops consistently, while mIoU barely decreases for small instances and rapidly decreases for large instances. We draw the red dashes in Fig. 5 to distinguish the size of instances more clearly.
We hope that this new comprehensive evaluation metric, which can accurately measure semantic segmentation performance on small objects, will be beneficial for the community.
## 3 Dataset analysis and construction
An imbalanced evaluation dataset may cripple the reliability of an evaluation protocol because the performance estimate will vary due to the lack of samples. We believe that objects of any size should not be undervalued because of their small number.
To tackle the imbalanced dataset issue, we suggest a new balanced benchmark dataset for evaluation. We construct PASCAL-B by collecting images and annotations from the LVIS [17] and MS COCO [33] datasets which include at least one of the 20 categories of the PASCAL VOC classes. Then, we converted the annotations which do not
Figure 4: Sensitivity to the size of instances on corner cases. We plot the behavior of mIoU and IA-mIoU as the prediction gradually grow to fill the ground-truth instance \(L\) (or \(S_{1,2,3}\)). Empty squares are uncovered ground-truth instances and sky blue squares are predictions. Gradual increase of the predictions is marked with orange dashed arrows.
Figure 5: Corner case with real data. mIoU declines quickly as the size of instances gets larger while IA-mIoU drops consistently.
belong to the 20 categories of the PASCAL VOC dataset into the background class. Among the remaining images, a few images have wrong annotations. Therefore, two computer vision experts (authors of this paper) manually filtered out such images over two weeks. Then, we randomly sampled images to ensure balance over classes and the object-size distribution. In the end, PASCAL-B consists of 1,137 images with 20 classes. We give some representative images of the PASCAL-B dataset in the supplementary material.
As illustrated in Fig. 6 (b), our dataset is much more balanced in terms of classes and object-size distribution. Compared to PASCAL VOC, our PASCAL-B has fewer outliers, \(i.e.,\) points in gray, and they do not have extremely large values. Also, PASCAL-B keeps a similar number of instances for each size, while PASCAL VOC has disproportionately many large or small instances. In summary, a primary motivation for creating PASCAL-B was to address the issue of imbalanced evaluation datasets commonly encountered in the semantic segmentation task. Existing benchmarks suffer from disparities in class or object-size distributions, leading to skewed performance evaluations. PASCAL-B addresses this concern by meticulously constructing a dataset that features balanced classes and object sizes. Instead of replacing established benchmarks such as ADE20K [49], COCO [33], or Cityscapes [7], PASCAL-B complements them by offering an alternative approach to assessment. For more details regarding the dataset, please refer to the supplementary material.
## 4 Methods
### Evaluated WSSS methods
We evaluate ten existing methods under various levels of weak supervision: bounding box supervision (\(i.e.,\) BANA [29] and BBAM [35]), saliency supervision (\(i.e.,\) RCA [50], EDAM [42] and NS-ROM [45]), natural language supervision (\(i.e.,\) CLIM [43]), and image-level supervision (\(i.e.,\) AMN [30], RIB [26], CDA [38], and IRN [1]). These methods follow the two-stage training pipeline of WSSS. In the first stage, they generate the pseudo masks by their methods. Then, they train a semantic segmentation network with the pseudo masks from the first stage. All the above methods except BANA [29] focus only on stage 1, producing high-quality masks by refining the initial seed to improve the performance. A more detailed explanation of the above methods is in the supplementary material.
### Proposed loss function and training strategy
To address the limitation of the pixel-wise cross-entropy (CE) loss discussed in Sec. 1, we propose a new loss function that gives the model the capacity to capture small objects. We first weight each pixel according to the size of the object it belongs to when computing the loss. Since instance-level ground-truth masks are not available for training, we find all connected components for each class from the pseudo ground-truth masks, as in Fig. 7. Then, we define the weight \(w_{x,y}\) corresponding to a pixel \((x,y)\) as follows:
\[w_{x,y}=\begin{cases}1,&\text{if }(x,y)\in background,\\ \min(\tau,\frac{\sum_{k=1}^{K}\mathbf{S}_{c,k}}{\mathbf{S}_{c,n}}),&\text{ otherwise}\end{cases} \tag{3}\]
where \(\mathbf{S}_{c,k}\) is the number of pixels in the connected component \(I_{c,k}\), and \(n\) is the index of the connected component that contains the pixel \((x,y)\). \(K\) is the number of instances of the \(c\)-th class in an image. Through Eq. 3, we assign a larger weight to the pixels of relatively small instances, while preventing the weight from becoming excessively large by imposing the upper limit \(\tau\). Finally, we multiply the weights with the cross-entropy loss as in Eq. 4, and we call this loss function \(L_{sw}\) the size-weighted cross-entropy loss.
\[L_{sw}=-\frac{1}{H\times W}\sum_{c=1}^{C}\sum_{x=1}^{H}\sum_{y=1}^{W}Y_{c,x,y} w_{x,y}log(p_{c,x,y}), \tag{4}\]
Figure 6: The distribution of validation set for each dataset: (a) PASCAL VOC and (b) PASCAL-B. We draw the mean (\(i.e.,\) the triangle in yellow) and the variance over classes for each size of instances (\(i.e.,\)_small_, _medium_, and _large_). The point in gray indicates the number of instances for each class. On the top of each figure, we report the ratio of each size of instances to the total number of instances.
Figure 7: Example connected components for the loss function. \(I_{c,k}\) is the _k_-th connected components for \(c\)-th class in an image.
where \(H\) and \(W\) are the height and width of the images, respectively, and \(p_{c,x,y}\) is the probability of predicting the class of the pixel \((x,y)\) as \(c\). A minimal implementation sketch is given below.
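The sketch is our illustration of Eqs. (3)-(4), not the authors' released code: the choice of label 0 as background, the value of \(\tau\), and the use of `scipy.ndimage` for connected components are assumptions.

```python
import torch
import torch.nn.functional as F
from scipy import ndimage

def size_weights(mask, num_classes, tau=5.0):
    """Per-pixel weights w_{x,y} of Eq. (3) from a pseudo mask of shape (H, W)."""
    w = torch.ones(mask.shape, dtype=torch.float32)
    mask_np = mask.cpu().numpy()
    for c in range(1, num_classes):              # label 0 = background (assumed)
        comp, n_comp = ndimage.label(mask_np == c)
        if n_comp == 0:
            continue
        sizes = ndimage.sum(mask_np == c, comp, index=range(1, n_comp + 1))
        total = sizes.sum()
        for j in range(n_comp):                  # weight = min(tau, total / own size)
            w[torch.from_numpy(comp == j + 1)] = min(tau, total / sizes[j])
    return w

def size_weighted_ce(logits, pseudo_mask, tau=5.0):
    """L_sw of Eq. (4): per-pixel CE multiplied by the size weights."""
    ce = F.cross_entropy(logits, pseudo_mask, reduction="none")        # (B, H, W)
    w = torch.stack([size_weights(m, logits.shape[1], tau) for m in pseudo_mask])
    return (w.to(ce.device) * ce).mean()
```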
Even though \(L_{sw}\) can improve the ability of the model to catch small objects, there is a side effect: the model fails to learn extremely large instances when trained with \(L_{sw}\) during the whole training process. Therefore, we apply a new training strategy that adds a regularization term to Eq. 4 by introducing elastic weight consolidation (EWC) [11]. EWC helps a model learn new tasks continually while preserving the information of previous tasks. Following the strategy of EWC, we also divide the training into two tasks. We first train a model using the pixel-wise cross-entropy loss, which, as analyzed in Sec. 1, is more beneficial for learning large objects; we call this task _task A_. During the training for _task A_, the model accumulates the importance of its parameters in the Fisher information matrix. Then, for the new task, the model is fine-tuned with \(L_{sw}\), and EWC helps to regularize the parameters that are important for the previous _task A_ based on the matrix. Thus, our final loss function \(L_{sb}\), the size-balanced cross-entropy loss, is defined as:
\[L_{sb}=L_{sw}+\sum_{i}\frac{\lambda}{2}F_{i}(\theta_{i}-\theta_{A,i}^{*})^{2}, \tag{5}\]
where \(\theta_{i}\) and \(\theta_{A,i}^{*}\) are the \(i\)-th parameters for the present task and _task A_, respectively. \(\lambda\) controls the importance of the regularization, and \(F_{i}\) is the importance of parameter \(i\) in the Fisher information matrix. With \(L_{sb}\), a model can learn the new information for _task B_ (_i.e.,_ learning small objects) while maintaining the previous information from _task A_ (_i.e.,_ learning large objects); a sketch of the penalty term follows.
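The fragment below is again our illustration: a diagonal empirical Fisher approximation is used, and the exact recipe of [11] or of the paper may differ in details.

```python
import torch

def fisher_diagonal(model, loader, loss_fn):
    """Accumulate the diagonal Fisher information F_i over task A."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for images, masks in loader:
        model.zero_grad()
        loss_fn(model(images), masks).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, theta_star, lam):
    """(lam / 2) * sum_i F_i (theta_i - theta*_{A,i})^2, the penalty of Eq. (5)."""
    penalty = sum((fisher[n] * (p - theta_star[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return 0.5 * lam * penalty

# Fine-tuning on task B:  loss = size_weighted_ce(logits, mask) + ewc_penalty(...)
```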
## 5 Experiments
### Experimental setting
**Dataset.** We evaluate each method on three datasets: PASCAL VOC [13], PASCAL-B, and MS COCO [33]. PASCAL VOC and PASCAL-B share the same training set, as PASCAL-B is designed only for validation rather than training. PASCAL VOC and PASCAL-B consist of a similar number of images, 1,449 and 1,137, respectively.
**Evaluation metric.** We use mIoU and IA-mIoU to compare the performance of the methods. Since IA-mIoU can be restricted to small-sized instances only, we additionally report IA\({}_{S}\) for the detailed performance on small objects.
**Implementation detail.** We generate pseudo masks for the segmentation networks using the official codes and strictly follow the settings provided in each paper [1, 26, 29, 30, 35, 42, 43, 45, 50]. Then, we use DeepLabV2 with ResNet-101 [4] as the segmentation network. For more detail, please see the supplementary material. All the experiments were done on one GeForce RTX 3090 GPU for PASCAL VOC and two RTX 3090 GPUs for MS COCO.
It is noteworthy that all WSSS methods obtain considerably lower scores for small objects (IA\({}_{S}\)) compared to their overall scores. This indicates that WSSS methods struggle to capture small instances accurately, as mentioned in Sec. 1.
In particular, state-of-the-art techniques in terms of mIoU encounter more difficulty in capturing small objects than other methods. Consequently, they get lower IA-mIoU while getting the highest mIoU, since IA-mIoU reflects the score of each instance equally, whereas mIoU relatively neglects the small objects. This indicates that mIoU fails to provide a detailed sense of performance on different sizes of objects.
We perform the same experiments on the MS COCO dataset in Table 2. The results further demonstrate that existing WSSS methods struggle with small objects and that this has been overlooked with mIoU.
PASCAL VOC vs. PASCAL-B.Table 3 compares the performances of models on our newly proposed benchmark, PASCAL-B. The models in Table 3 use the same checkpoints as in Table 1, which are trained on the PASCAL VOC training set.
We argue that evaluating methods using imbalanced datasets can lead to biased scores, even with our proposed metric. To better evaluate the ability of models, it is essential to have a sufficient number of samples for evaluation per object-size and per class. However, the imbalance in the PASCAL VOC dataset makes it difficult to validate models since some classes have no small-sized objects, or there are only a few samples available. This lack of data for certain classes limits the opportunities for models to be evaluated on their performance, leading to potential biases in the evaluation process. On the other hand, we address this issue by constructing PASCAL-B which includes a sufficient number of samples for each object-size while keeping a balanced distribution across classes.
In this manner, the results in Table 3 with PASCAL-B provide a more comprehensive assessment of WSSS methods compared to the scores in Table 1. When comparing the results of both tables, we observe that the ranking order of the WSSS methods barely changes between mIoU and IA-mIoU in Table 1 (Spearman's rho: 0.79). On the other hand, it changes substantially in Table 3 with the PASCAL-B dataset (Spearman's rho: 0.38), which indicates that IA-mIoU scores with PASCAL-B evaluate the performance of the models differently. We believe that the fundamental reason for this phenomenon lies in the discrepancy between the two datasets in terms of instance-size distributions. This suggests that IA-mIoU and PASCAL-B are both necessary to properly evaluate per-size performances.
Applying only the size-weighted cross-entropy loss \(L_{sw}\) is already powerful enough to gain notable improvements on small instances (IA\({}_{S}\)), and IA-mIoU increases by 2.9 points. However, mIoU becomes slightly worse than the baseline. In other words, \(L_{sw}\) alone does not ensure the same performance on the largest instances. \(L_{sb}\) further boosts performance in all aspects by facilitating the additional objective, covering small instances, while maintaining the previous objective, covering relatively large instances. Again, IA-mIoU enables detailed analyses by splitting the instances. In short, introducing the size-weighted cross-entropy loss improves the performance on small instances, and pairing it with the EWC training strategy preserves the performance on large instances, resulting in overall improvement in both mIoU and IA-mIoU.
## 6 Related Work
### Weakly-supervised semantic segmentation
Weakly-supervised semantic segmentation mainly adopts a two-stage pipeline: pseudo mask generation and training segmentation network. Most recent methods utilize Class Activation Maps (CAMs) [48] to generate a pseudo mask. However, CAMs have limitations in focusing on the most discriminative regions of the object or capturing frequently co-occurring background components. To solve this problem, lots of techniques have been proposed: adversarial erasing [18, 40, 41, 25, 32, 6, 19], seed growing [19, 23, 46], natural language supervision [43], context decoupling [38] and so on [2, 3, 26, 28, 47]. Also, many methods [14, 15, 16, 20, 27, 39, 42, 44, 45] adopt a saliency supervision to refine the prediction map. It is usually utilized to enhance the result in a post-processing step by distinguishing the foreground and background of the object. Recently, Lee et al. [31] try to make use of a saliency map during the training phase to maximize its potential. Besides, there are also some studies using a bounding box as a supervisory signal [29, 35, 36, 21, 37, 24] which is still cheaper than mask annotation. They achieve notable performance in WSSS since a bounding box label provides the exact location of all objects additionally. Our research, however, is interested in getting the better performance of models by improving the segmentation network in the second stage. Though few studies propose methods for segmentation networks, we suggest balanced training considering the size of instances in WSSS.
### Segmentation metrics
Here we briefly review the metrics for semantic segmentation. Pixel accuracy is the most basic metric for the task. It measures the accuracy for each class by computing the ratio of correctly predicted pixels of the class to all pixels of that class. The weakness of this metric is that it does not consider false positives. Therefore, mean intersection-over-union (mIoU) has replaced pixel accuracy as the standard semantic segmentation measure. It assesses the performance of models by computing the intersection of the prediction and ground-truth masks over their union. mIoU compensates for the shortcoming of pixel accuracy by taking false positives into account. Nonetheless, as analyzed in Sec. 2, it still suffers from a size-imbalance problem. Besides, various other metrics [22, 5, 8, 9] have also been investigated. Cordts et al. [8] point out the inherent bias of the traditional IoU measure towards larger instances. They propose an instance-level IoU which adjusts pixel contributions based on instance sizes and class-averaged instance sizes, aiming to refine mIoU. However, our metric IA-mIoU evaluates each instance individually by segmenting predictions into instances, providing a comprehensive assessment that is not influenced by instance size.
## 7 Conclusion
### Contributions
In this paper, we focus on the comprehensive assessment and improvement of weakly-supervised semantic segmentation (WSSS) by proposing a novel metric, a dataset, and a loss function with an appropriate training strategy. First, we uncover the overlooked issue related to small-sized instances hidden by the conventional metric (mIoU). To address this, we design the instance-aware mIoU (IA-mIoU) to measure the performance of models more precisely regardless of object size. Moreover, we point out the imbalance problem in the benchmarks of WSSS and introduce a well-balanced dataset for evaluation, PASCAL-B. Lastly, we propose the size-balanced cross-entropy loss to compensate for the imbalance problem of the pixel-wise cross-entropy loss. We show the effectiveness of our loss function on ten WSSS methods over three datasets, measured by mIoU and IA-mIoU.
### Limitations
Our findings can be applied to fully-supervised semantic segmentation methods. However, due to limited computing power, we were unable to utilize more recent FSS models and evaluate them with datasets such as ADE20K [49] and Cityscapes [7]. Nevertheless, we hope that our study can serve as inspiration for other researchers who have the necessary resources to explore these avenues further.
## 8 Acknowledgment
This research was supported by the National Research Foundation of Korea grant funded by the Korean government (MSIT) (No. 2022R1A2B5B02001467) |
2309.09620 | Turbo Coded OFDM-OQAM Using Hilbert Transform | Orthogonal frequency division multiplexing (OFDM) with offset quadrature
amplitude modulation (OQAM) has been widely discussed in the literature and is
considered a popular waveform for 5th generation (5G) wireless
telecommunications and beyond. In this work, we show that OFDM-OQAM can be
generated using the Hilbert transform and is equivalent to single sideband
modulation (SSB), that has roots in analog telecommunications. The transmit
filter for OFDM-OQAM is complex valued whose real part is given by the pulse
corresponding to the root raised cosine spectrum and the imaginary part is the
Hilbert transform of the real part. The real-valued digital information
(message) are passed through the transmit filter and frequency division
multiplexed on orthogonal subcarriers. The message bandwidth corresponding to
each subcarrier is assumed to be narrow enough so that the channel can be
considered ideal. Therefore, at the receiver, a matched filter can be used to
recover the message. Turbo coding is used to achieve bit-error-rate (BER) as
low as $10^{-5}$ at an average signal-to-noise ratio (SNR) per bit close to 0
dB. The system has been simulated in discrete time. | Kasturi Vasudevan, Surendra Kota, Lov Kumar, Himanshu Bhusan Mishra | 2023-09-18T09:44:35Z | http://arxiv.org/abs/2309.09620v1 | # Turbo Coded OFDM-OQAM Using Hilbert Transform
###### Abstract
Orthogonal frequency division multiplexing (OFDM) with offset quadrature amplitude modulation (OQAM) has been widely discussed in the literature and is considered a popular waveform for 5th generation (5G) wireless telecommunications and beyond. In this work, we show that OFDM-OQAM can be generated using the Hilbert transform and is equivalent to single sideband modulation (SSB), which has roots in analog telecommunications. The transmit filter for OFDM-OQAM is complex-valued, whose real part is given by the pulse corresponding to the root raised cosine spectrum and whose imaginary part is the Hilbert transform of the real part. The real-valued digital information (message) is passed through the transmit filter and frequency division multiplexed on orthogonal subcarriers. The message bandwidth corresponding to each subcarrier is assumed to be narrow enough so that the channel can be considered ideal. Therefore, at the receiver, a matched filter can be used to recover the message. Turbo coding is used to achieve a bit-error-rate (BER) as low as \(10^{-5}\) at an average signal-to-noise ratio (SNR) per bit close to 0 dB. The system has been simulated in discrete time.
Keywords: OFDM-OQAM, FBMC, GFDM, OFDM, SSB, frequency offset, average SNR per bit, BER, matched filter, Hilbert transform, turbo code.
## Introduction
Orthogonal frequency division multiplexing (OFDM) [1, 2, 3] and OFDM offset quadrature amplitude modulation (OFDM-OQAM) [4] are the preferred modulation techniques for transmission of digital information over frequency selective channels, both fading and non-fading. The variants of OFDM-OQAM [5, 6, 7, 8, 9] are known in the literature as filter bank multicarrier (FBMC) [4, 5, 6, 7], universal filtered multicarrier (UFMC) [10, 11], and generalized frequency division multiplexing (GFDM) [12, 13, 14].
One of the key advantages of OFDM-OQAM/FBMC/UFMC over OFDM is its immunity against carrier frequency offsets (CFO). In other words, it may not be necessary for an OFDM-OQAM/FBMC/UFMC system to estimate and cancel the CFO, unlike OFDM. However, it has been shown in [1, 2, 3] that it is possible to estimate and cancel the CFO very accurately with large scope for parallel processing. The other important feature of OFDM-OQAM is the spectral containment of each subcarrier using a transmit filter, which
is absent in OFDM. However, OFDM is more attractive than OFDM-OQAM in terms of implementation simplicity.
In this work, we demonstrate that OFDM-OQAM/FBMC/UFMC can be efficiently implemented using the Hilbert transform at the transmitter and a matched filter at the receiver [15]. It must be noted that using OQAM would improve the symbol density in time-frequency space, at the cost of introducing intersymbol interference (ISI) at the matched filter output and increasing the receiver complexity. Therefore, in this work, we do not use OQAM and yet obtain a symbol density in time-frequency space greater than unity. We nevertheless retain the nomenclature "OFDM-OQAM/FBMC/UFMC", since this work deals with multicarrier communications.
We use the following notation. Complex quantities are denoted by a tilde, e.g., \(\tilde{p}(t)\); estimates are denoted by a hat, e.g., \(\hat{b}\); and convolution is denoted by a star, e.g., \(p(t)\star g(t)\). This article is organized as follows. We first discuss the theory of OFDM-OQAM, followed by the derivation of the Hilbert transform of the pulse corresponding to the root raised cosine spectrum. The discrete-time implementation issues are discussed next, motivating the need for a modified Hilbert transform. The discrete-time system model used for computer simulations is presented, followed by results and conclusions.
## Theory
At the outset, we note that multicarrier communication is equivalent to frequency division multiplexing. The time-frequency representation of a multicarrier communication system is shown in Figure 1 [4]. The symbol-rate is \(1/T\) baud and the subcarrier spacing is \(\mathscr{B}\). The red dots denote symbols. The subcarrier frequency can be positive or negative, as will become apparent later.
The complex envelope of a linearly modulated digital signal is given by [16]
\[\tilde{s}(t) =\sum_{k=-\infty}^{\infty}S_{k}\,\tilde{p}(t-kT)\] \[=s_{I}(t)+\mathrm{j}\,s_{Q}(t)\qquad\text{(say)} \tag{1}\]
Figure 1: Symbol density of multicarrier communication system in time-frequency space.
where \(S_{k}\) denotes complex-valued symbols drawn from an \(M\)-ary constellation, \(\tilde{p}(t)\) denotes the (possibly complex-valued) impulse response of the transmit filter, the subscripts "\(I\), \(Q\)" denote in-phase (real) and quadrature (imaginary) components respectively. When the symbols \(S_{k}\) are uncorrelated, the power spectral density of \(\tilde{s}(t)\) in (1) is [16]
\[S_{\tilde{s}}(F)=\frac{P_{\mathrm{av}}}{2T}\left|\tilde{P}(F)\right|^{2} \tag{2}\]
where \(\tilde{P}(F)\) is the Fourier transform of \(\tilde{p}(t)\) and \(P_{\mathrm{av}}\) denotes the average power of the \(M\)-ary constellation. If the transmit filter \(\tilde{p}(t)=p(t)\) is real-valued [16] having a root raised cosine (RRC) spectrum with roll-off factor \(\rho\), \(0<\rho\leq 1\), then the minimum spacing between subcarriers for no aliasing would be
\[\mathscr{B}=\frac{2(1+\rho)}{2T} \tag{3}\]
which is essentially the two-sided bandwidth of \(\tilde{P}(F)\). Therefore, the symbol density in time-frequency space is the inverse of the area of each rectangle in Figure 1 [4]:
\[\frac{1}{\mathscr{B}T}=\frac{1}{(1+\rho)}<1. \tag{4}\]
In this article, we propose a complex-valued transmit filter given by
\[\tilde{p}(t)=p(t)+\mathrm{j}\,\hat{p}(t) \tag{5}\]
where \(p(t)\) has an RRC spectrum and \(\hat{p}(t)\) is the Hilbert transform of \(p(t)\). The Fourier transform of \(\tilde{p}(t)\) in (5) is
\[\tilde{P}(F) =P(F)+\mathrm{j}\,(-\mathrm{jsgn}(F))P(F)\] \[=\left\{\begin{array}{ll}2P(F)&\mbox{for }F>0\\ P(0)&\mbox{for }F=0\\ 0&\mbox{for }F<0\end{array}\right. \tag{6}\]
where \(P(F)\) is the Fourier transform of \(p(t)\) and \(\mathrm{sgn}(\cdot)\) is the signum function [17, 18]. The minimum spacing between subcarriers in this case is
\[\mathscr{B}=\frac{1+\rho}{2T} \tag{7}\]
resulting in symbol density in time-frequency space equal to
\[\frac{1}{\mathscr{B}T}=\frac{2}{(1+\rho)}\geq 1. \tag{8}\]
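For illustration, the following Python sketch (with assumed values \(T=1\) sec, \(\rho=0.161\), \(F_{s}=5\) Hz) constructs \(\tilde{p}=p+\mathrm{j}\,\hat{p}\) numerically from the root raised cosine spectrum given in the sequel, and verifies the one-sided spectrum (6) as well as the density figures (4) and (8):

```python
import numpy as np
from scipy.signal import hilbert

T, rho, Fs, N = 1.0, 0.161, 5.0, 2049
F = np.fft.fftfreq(N, d=1/Fs)
B, F1 = 1/(2*T), (1 - rho)/(2*T)

P = np.zeros(N)                                   # root raised cosine spectrum
P[np.abs(F) <= F1] = 1/np.sqrt(2*B)
roll = (np.abs(F) > F1) & (np.abs(F) <= 2*B - F1)
P[roll] = np.cos(np.pi*(np.abs(F[roll]) - F1)/(4*B - 4*F1))/np.sqrt(2*B)

p = np.fft.fftshift(np.fft.ifft(P).real)          # real pulse p(t)
p_tilde = hilbert(p)                              # p + j p-hat, eq. (5)

Pt = np.fft.fft(np.fft.ifftshift(p_tilde))
neg = np.sum(np.abs(Pt[F < 0])**2)/np.sum(np.abs(Pt)**2)
print("fraction of energy at F < 0:", neg)        # ~0: one-sided, as in (6)
print("density, real filter   :", 1/(1 + rho))    # eq. (4)
print("density, complex filter:", 2/(1 + rho))    # eq. (8)
```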
It is emphasized here that when \(\tilde{p}(t)\) is given by (5), the symbols \(S_{k}\) have to be real-valued. The reason is as follows. The complex envelope in (1) can be written as
\[\tilde{s}(t)=\tilde{s}_{1}(t)\star\tilde{p}(t) \tag{9}\]
where "\(\star\)" denotes convolution and
\[\tilde{s}_{1}(t)=\sum_{k=-\infty}^{\infty}S_{k}\,\delta_{D}(t-kT)\]
\[=s_{1,\,I}(t)+\mathrm{j}\,s_{1,\,Q}(t)\qquad\text{(say)} \tag{10}\]
where \(\delta_{D}(\cdot)\) denotes the Dirac-delta function [17, 18]. At the receiver, we use a matched filter given by \(\tilde{p}^{*}(-t)\). The matched filter output would be given by [16]
\[\tilde{y}(t)=\tilde{s}_{1}(t)\star\tilde{p}(t)\star\tilde{p}^{*}(-t). \tag{11}\]
Now
\[\tilde{p}(t)\star\tilde{p}^{*}(-t) =[p(t)+\mathrm{j}\,\hat{p}(t)]\star[p(-t)-\mathrm{j}\,\hat{p}(-t)]\] \[=p(t)\star p(-t)+\hat{p}(t)\star\hat{p}(-t)+\mathrm{j}\,\left[ \hat{p}(t)\star p(-t)-p(t)\star\hat{p}(-t)\right]. \tag{12}\]
It can be shown that for real-valued \(p(t)\)[17, 18]
\[\hat{p}(t)\star\hat{p}(-t) =p(t)\star p(-t)\] \[=R_{pp}(t)\] \[\hat{p}(t)\star p(-t) =R_{\hat{p}p}(t)\] \[=-p(t)\star\hat{p}(-t)\] \[=-R_{p\hat{p}}(t). \tag{13}\]
Substituting (12) and (13) in (11) we obtain the matched filter output as
\[\tilde{y}(t)=\sum_{k=-\infty}^{\infty}2S_{k}\left[R_{pp}(t-kT)+\mathrm{j}\,R_ {\hat{p}p}(t-kT)\right]. \tag{14}\]
The condition for zero ISI is
\[R_{pp}(mT)=\delta_{K}(mT) \tag{15}\]
where \(\delta_{K}(\cdot)\) is the Kronecker delta function [16]. However, \(R_{\hat{p}p}(t)\) does not satisfy the zero ISI condition. Hence using (15), the symbol-rate sampler at the matched filter output would yield
\[\tilde{y}(nT) =\sum_{k=-\infty}^{\infty}2S_{k}\left(R_{pp}(nT-kT)+\mathrm{j}\, R_{\hat{p}p}(nT-kT)\right)\] \[=2S_{n}+2\mathrm{j}\,\sum_{k=-\infty}^{\infty}S_{k}R_{\hat{p}p}( nT-kT). \tag{16}\]
It is clear from (16) that implementing the matched filter as given in (11) and (12) with complex-valued symbols \(S_{k}\) is not possible, due to crosstalk (interference between in-phase and quadrature components). Therefore, the symbols \(S_{k}\) have to be real-valued. One possible implementation of the proposed OFDM-OQAM/FBMC/UFMC transmitter is shown in Figure 2.
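A short simulation sketch of (9)-(16), with assumed parameters, is given below: real symbols are recovered (approximately, given the filter truncation) from the real part of the sampled matched filter output, while the imaginary part carries the \(R_{\hat{p}p}\) crosstalk that rules out complex-valued symbols.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
T, rho, Fs, N = 1.0, 0.161, 5.0, 2049
I = int(Fs*T)                                     # samples per symbol
F = np.fft.fftfreq(N, d=1/Fs)
B, F1 = 1/(2*T), (1 - rho)/(2*T)
P = np.zeros(N)
P[np.abs(F) <= F1] = 1.0                          # scale fixed by normalization below
roll = (np.abs(F) > F1) & (np.abs(F) <= 2*B - F1)
P[roll] = np.cos(np.pi*(np.abs(F[roll]) - F1)/(4*B - 4*F1))
p = np.fft.fftshift(np.fft.ifft(P).real)
p /= np.sqrt(np.sum(p**2))                        # unit-energy p
pt = hilbert(p)                                   # complex transmit filter (5)

S = rng.choice([-1.0, 1.0], size=64)              # real symbols
up = np.zeros(64*I); up[::I] = S                  # impulse train, eq. (10)
tx = np.convolve(up, pt)                          # transmit signal, eq. (9)
y = np.convolve(tx, np.conj(pt[::-1]))            # matched filter, eq. (11)
samp = y[np.arange(64)*I + (N - 1)]               # symbol-rate sampler, eq. (16)
print("real part recovers 2 S_n (max error):", np.max(np.abs(samp.real/2 - S)))
print("imaginary crosstalk power:", np.mean(samp.imag**2))  # the R_{p-hat p} term
```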
The corresponding receiver is shown in Figure 3. In the next section, we derive \(\hat{p}(t)\) when \(p(t)\) has an RRC spectrum.
## Hilbert Transform of the RRC Spectrum
The RRC spectrum which is the Fourier transform of \(p(t)\) is given by [16]
\[P(F)=\left\{\begin{array}{ll}\frac{1}{\sqrt{2B}}&\text{for }-F_{1}\leq F \leq F_{1}\\ \frac{1}{\sqrt{2B}}\cos\left(\frac{\pi(|F|-F_{1})}{4B-4F_{1}}\right)&\text{for }F_{1}\leq|F|\leq 2B-F_{1}\\ 0&\text{elsewhere}\end{array}\right. \tag{17}\]
where
\[2B \stackrel{{\Delta}}{{=}}\frac{1}{T}\] \[\rho \stackrel{{\Delta}}{{=}}1-\frac{F_{1}}{B}. \tag{18}\]
Therefore
\[\hat{p}(t)=\int_{F=-\infty}^{\infty}-\mathrm{j}\,\mathrm{sgn}(F)P(F)\mathrm{e}^{ \mathrm{j}\,2\pi Ft}\,dF \tag{19}\]
where \(P(F)\) is given by (17) and (18). We have
\[I_{1} =\int_{F=-F_{1}}^{F_{1}}-\mathrm{j}\,\mathrm{sgn}(F)\frac{1}{ \sqrt{2B}}\mathrm{e}^{\mathrm{j}\,2\pi Ft}\,dF\] \[=\frac{1}{\pi t\sqrt{2B}}\left[1-\cos(2\pi F_{1}t)\right]. \tag{20}\]
Similarly
\[I_{2} =\int_{F=F_{1}}^{2B-F_{1}}-\mathrm{j}\,\frac{1}{\sqrt{2B}}\cos \left(\frac{\pi(F-F_{1})}{4B-4F_{1}}\right)\mathrm{e}^{\mathrm{j}\,2\pi Ft}\,dF\] \[\qquad+\int_{F=-(2B-F_{1})}^{-F_{1}}\mathrm{j}\,\frac{1}{\sqrt{2 B}}\cos\left(\frac{\pi(-F-F_{1})}{4B-4F_{1}}\right)\mathrm{e}^{\mathrm{j}\,2\pi Ft }\,dF\] \[=\frac{1}{\sqrt{2B}}\int_{F=B(1-\rho)}^{B(1+\rho)}\left\{\sin \left[2\pi Ft+\frac{\pi(F-F_{1})}{4B-4F_{1}}\right]+\sin\left[2\pi Ft-\frac{ \pi(F-F_{1})}{4B-4F_{1}}\right]\right\}\,dF\]
Figure 3: Proposed OFDM-OQAM receiver for each subcarrier.
Figure 2: Proposed OFDM-OQAM transmitter for each subcarrier.
Evaluating the two sine integrals in closed form, with \(\alpha=\frac{\pi}{4B\rho}+2\pi t\) and \(\gamma=\frac{\pi}{4B\rho}-2\pi t\) denoting the coefficients of \(F\) in the two sine arguments, yields

\[I_{2}=\frac{1}{\sqrt{2B}}\sin(2\pi Bt(1+\rho))\left\{\frac{1}{ \gamma}+\frac{1}{\alpha}\right\}\] \[\qquad-\frac{1}{\sqrt{2B}}\cos(2\pi Bt(1-\rho))\left\{\frac{1}{ \gamma}-\frac{1}{\alpha}\right\}. \tag{24}\]
Now
\[\frac{1}{\gamma}-\frac{1}{\alpha} =\frac{64B^{2}\rho^{2}t}{\pi(1-64B^{2}t^{2}\rho^{2})}\] \[\frac{1}{\gamma}+\frac{1}{\alpha} =\frac{8B\rho}{\pi(1-64B^{2}t^{2}\rho^{2})}. \tag{25}\]
Substituting (25) in (24) we get
\[I_{2} =\frac{1}{\sqrt{2B}}\sin(2\pi Bt(1+\rho))\left\{\frac{8B\rho}{ \pi(1-64B^{2}t^{2}\rho^{2})}\right\}\] \[\qquad-\frac{1}{\sqrt{2B}}\cos(2\pi Bt(1-\rho))\left\{\frac{64B^ {2}\rho^{2}t}{\pi(1-64B^{2}t^{2}\rho^{2})}\right\}. \tag{26}\]
Finally \(\hat{p}(t)\) in (19) is given by
\[\hat{p}(t)=I_{1}+I_{2} \tag{27}\]
where \(I_{1}\), \(I_{2}\) are given by (20) and (26) respectively.
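The closed-form expressions (20), (26), (27) can be checked numerically against an FFT-based Hilbert transform of the RRC pulse; a sketch with assumed discretization parameters:

```python
import numpy as np
from scipy.signal import hilbert

T, rho, Fs, N = 1.0, 0.161, 5.0, 2049
B, F1 = 1/(2*T), (1 - rho)/(2*T)
t = (np.arange(N) - (N - 1)//2)/Fs
t = np.where(t == 0.0, 1e-12, t)             # (20) and (26) both vanish at t = 0

I1 = (1 - np.cos(2*np.pi*F1*t))/(np.pi*t*np.sqrt(2*B))               # eq. (20)
den = np.pi*(1 - 64*B**2*t**2*rho**2)
I2 = (np.sin(2*np.pi*B*t*(1 + rho))*(8*B*rho)/den
      - np.cos(2*np.pi*B*t*(1 - rho))*(64*B**2*rho**2*t)/den)/np.sqrt(2*B)  # (26)
p_hat = I1 + I2                                                       # eq. (27)

# numerical cross-check: p from the RRC spectrum (17), then an FFT Hilbert transform
F = np.fft.fftfreq(N, d=1/Fs)
P = np.zeros(N)
P[np.abs(F) <= F1] = 1/np.sqrt(2*B)
roll = (np.abs(F) > F1) & (np.abs(F) <= 2*B - F1)
P[roll] = np.cos(np.pi*(np.abs(F[roll]) - F1)/(4*B - 4*F1))/np.sqrt(2*B)
p = Fs*np.fft.fftshift(np.fft.ifft(P).real)   # Fs: Riemann sum of the inverse FT
err = np.abs(p_hat - hilbert(p).imag)
print("max deviation away from the window edges:", err[N//4:3*N//4].max())
```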
## Discrete time implementation
Theoretically when \(p(t)\) has an RRC spectrum, both \(p(t)\), \(\hat{p}(t)\) have an infinite time span. In practice they have to be truncated, in which case (15) is only approximately valid. Moreover, the transmitter and receiver in Figures 2 and 3 have to be implemented in discrete time. In this section, we explore these issues.
The magnitude response of \(p(mT_{s})\) (RRC) and \(\tilde{p}(mT_{s})\) (HT-RRC) given by
\[\tilde{p}(mT_{s})=p(mT_{s})+\mathrm{j}\,\hat{p}(mT_{s}) \tag{28}\]
is shown in Figure 4 for \(T=1\) sec, \(\rho=0.161\) and sampling frequency \(F_{s}=1/T_{s}=5\) Hz. Both \(p(mT_{s})\) and \(\hat{p}(mT_{s})\) lie in the range \(-M\leq m\leq M\) for \(M=100\). The parameter \(M\) is referred to as the one-sided window length. Note that in Figure 4, \(\tilde{P}(\cdot)\) is the discrete Fourier transform of \(\tilde{p}(\cdot)\) given in (28).
The ratio of signal-to-interference (SIR) power is defined as
\[\mathrm{SIR}=10\log_{10}\left[\frac{R_{gg}^{2}(0)}{\sum_{n\neq 0}R_{gg}^{2}(nT)}\right]\quad\mathrm{dB} \tag{29}\]
where \(g(\cdot)\) is \(p(\cdot)\) or \(\hat{p}(\cdot)\). Note that
\[R_{gg}(mT_{s}) =g(mT_{s})\star g(-mT_{s})\] \[R_{gg}(nT) =R_{gg}(nIT_{s})\] \[I =\frac{T}{T_{s}} \tag{30}\]
where "\(\star\)" denotes (discrete-time) linear convolution and \(I\) is the interpolation factor. The SIR in decibels is shown in Table 1 for different values of the one-sided window length \(M\) when \(T=1\) sec, \(\rho=0.161\), \(F_{s}=5=1/T_{s}\) Hz. We find from Table 1 that the SIR of \(R_{\hat{p}\hat{p}}(t)\) is much lower than \(R_{pp}(t)\) for the same values of \(M\). This is probably because the signum function in the Hilbert transform (see (19)) has a discontinuity at \(F=0\). In the next section, we discuss the modified Hilbert transform that avoids this discontinuity.
## Modified Hilbert transform

The discontinuity of \(-\mathrm{j}\,\mathrm{sgn}(F)\) at \(F=0\) can be avoided by a modified Hilbert transformer whose phase response is linear over the band \(|F|\leq aF_{1}\), for some \(0<a<1\):

\[\tilde{H}(F)=\left\{\begin{array}{ll}-\mathrm{j}&\text{for }F>aF_{1}\\ -\mathrm{e}^{\mathrm{j}\,\pi F/(2aF_{1})}&\text{for }|F|\leq aF_{1}\\ \mathrm{j}&\text{for }F<-aF_{1}.\end{array}\right. \tag{31}\]

The modified Hilbert transform of \(p(t)\), denoted \(\hat{p}_{\rm m}(t)\), can be written as
\[\hat{p}_{\rm m}(t) =\int_{F=-\infty}^{\infty}\tilde{H}(F)P(F){\rm e}^{{\rm j}\,2\pi Ft }\,dF\] \[=I_{11}+I_{12}+I_{2} \tag{32}\]
where \(I_{2}\) is given by (26) and
\[I_{11} =\int_{F=-aF_{1}}^{aF_{1}}\tilde{H}(F)P(F){\rm e}^{{\rm j}\,2\pi Ft }\,dF\] \[=\frac{-1}{\sqrt{2B}}\int_{F=-aF_{1}}^{aF_{1}}{\rm e}^{{\rm j}\, \pi F/(2aF_{1})}{\rm e}^{{\rm j}\,2\pi Ft}\,dF\] \[=\frac{-2aF_{1}}{\sqrt{2B}}{\rm sinc}\,(aF_{1}A_{1}) \tag{33}\]
where
\[A_{1}=2t+\frac{1}{2aF_{1}}\]
Figure 5: Frequency response of the modified Hilbert transformer for \(a=0.25\), \(\rho=0.161\), \(T=1\) sec.
\[\mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x}. \tag{34}\]
Similarly \(I_{12}\) in (32) is equal to
\[I_{12} =\frac{-\mathrm{j}}{\sqrt{2B}}\int_{aF_{1}}^{F_{1}}\mathrm{e}^{ \mathrm{j}2\pi Ft}\,dF+\frac{\mathrm{j}}{\sqrt{2B}}\int_{-F_{1}}^{-aF_{1}} \mathrm{e}^{\mathrm{j}2\pi Ft}\,dF\] \[=\frac{2F_{1}(1+a)}{\sqrt{2B}}\mathrm{sinc}(F_{1}t(1+a))\sin(\pi F _{1}t(1-a)). \tag{35}\]
The magnitude response of \(p(mT_{s})\) (RRC) and \(\tilde{p}_{\mathrm{m}}(mT_{s})\) (mHT-RRC) given by
\[\tilde{p}_{\mathrm{m}}(mT_{s})=p(mT_{s})+\mathrm{j}\,\hat{p}_{\mathrm{m}}(mT_{ s}) \tag{36}\]
is shown in Figure 6 for \(T=1\) sec, \(\rho=0.161\) and sampling frequency \(F_{s}=1/T_{s}=5\) Hz. Both \(p(mT_{s})\) and \(\hat{p}_{\mathrm{m}}(mT_{s})\) lie in the range \(-M\leq m\leq M\) for \(M=100\). Note that in Figure 6, \(\tilde{P}_{\mathrm{m}}(\cdot)\) is the discrete Fourier transform of \(\tilde{p}_{\mathrm{m}}(\cdot)\) given in (36).
We find from Table 2 that the SIR of \(R_{\hat{p}_{\mathrm{m}}\hat{p}_{\mathrm{m}}}(nT)\) is comparable to \(R_{pp}(nT)\).
## Discrete time system model
The discrete time system model used for computer simulations is shown in Figure 7. The input \(b_{k}\) is arranged into frames of length \(L_{d1}\) bits and given to the rate-1/2 turbo code [1, 2, 3]. The output of the turbo code of length \(L_{d}=2L_{d1}\) bits is mapped to binary phase shift
| \(M\) | SIR of \(R_{\hat{p}_{\mathrm{m}}\hat{p}_{\mathrm{m}}}(nT)\) (dB) | SIR of \(R_{pp}(nT)\) (dB) |
| --- | --- | --- |
| 100 | 52.787 | 56.921 |
| 80 | 45.168 | 49.426 |
| 40 | 33.298 | 40.602 |
| 20 | 17.93 | 25.212 |

Table 2: SIR of \(R_{\hat{p}_{\mathrm{m}}\hat{p}_{\mathrm{m}}}(nT)\) and \(R_{pp}(nT)\).
keyed (BPSK) symbols \(S_{i,\,k}=\pm 1\), demultiplexed and transmitted simultaneously over \(N\) subcarriers. The frequency of the \(i^{th}\) subcarrier is given by
\[\omega_{i}=2\pi i/N\quad\text{radians}\quad\text{for }0\leq i\leq N-1 \tag{37}\]
where \(N=I\) is the total number of subcarriers and \(I\) is given by (30). The overall OFDM-OQAM signal is given by
\[\tilde{s}_{o}(mT_{s})=\sum_{i=0}^{N-1}\tilde{s}_{i}(mT_{s})\mathrm{e}^{ \mathrm{j}\,\omega_{i}m} \tag{38}\]
where
\[\tilde{s}_{i}(mT_{s})=\sum_{k}S_{i,\,k}\tilde{p}_{\mathrm{m}}(mT_{s}-kT) \tag{39}\]
where \(\tilde{p}_{\mathrm{m}}(\cdot)\) is given by (36). The spectrum of the complex-valued overall OFDM-OQAM signal \(\tilde{s}_{o}(mT_{s})\) is shown in Figure 8, where
\[\omega=2\pi FT_{s}\quad\text{radians} \tag{40}\]
where \(F\) is the frequency in Hz. Recall that the sampling frequency \(1/T_{s}\) Hz maps to \(2\pi\) in the digital frequency domain. The variance per dimension of complex-valued additive white
Figure 8: Spectrum of the overall OFDM-OQAM signal \(\tilde{s}_{o}(mT_{s})\).
Figure 7: Discrete time system model for OFDM-OQAM.
Gaussian noise (AWGN) is
\[\frac{1}{2}E\left[\left|\tilde{w}(mT_{s})\right|^{2}\right]=\sigma_{w}^{2}. \tag{41}\]
The in-phase and quadrature components of \(\tilde{w}(mT_{s})\) are assumed to be independent. At the receiver we have for the \(i^{th}\) subcarrier
\[x_{i,\,k}=S_{i,\,k}+z_{i,\,k} \tag{42}\]
where \(z_{i,\,k}\) denotes real-valued samples of AWGN with variance \(\sigma_{w}^{2}/2\)[16]. We assume that both \(p(\cdot)\) and \(\hat{p}_{\text{m}}(\cdot)\) have unit energy, that is
\[\sum_{m=-M}^{M}p^{2}(mT_{s}) =1\] \[\sum_{m=-M}^{M}\hat{p}_{\text{m}}^{2}(mT_{s}) =1. \tag{43}\]
Note that from (12), (13) and (16), \(x_{i,\,k,\,I}\), \(x_{i,\,k,\,Q}\) in Figure 7 have to be summed (or averaged). Since \(S_{i,\,k}\) carries half a bit of information, the average signal-to-noise ratio (SNR) per bit is defined as [16]
\[\text{SNR}_{\text{av},\,b} =\frac{2E\left[S_{i,\,k}^{2}\right]}{\text{2D noise variance}}\] \[=\frac{2\times 1}{2\times\sigma_{w}^{2}/2}\] \[=\frac{2}{\sigma_{w}^{2}}. \tag{44}\]
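As a sanity check of the per-subcarrier model (42) and the SNR bookkeeping (44), a minimal Monte Carlo of the uncoded baseline follows (the turbo code is omitted here, so the BER is simply the uncoded BPSK value):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
snr_db = 4.0
snr = 10**(snr_db/10)                         # SNR_av,b of eq. (44)
sigma_w2 = 2.0/snr                            # eq. (44) solved for sigma_w^2
S = rng.choice([-1.0, 1.0], size=1_000_000)   # real BPSK symbols
z = rng.normal(0.0, np.sqrt(sigma_w2/2), S.size)   # noise of eq. (42)
ber = np.mean(np.sign(S + z) != S)
print("uncoded BER:", ber, " theory:", 0.5*erfc(sqrt(snr/2)))
```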
The discrete-time simulation parameters are given in Table 3. The transmit filters are given by \(p(mT_{s}-0.5T_{s}),\,\hat{p}_{\text{m}}(mT_{s}-0.5T_{s})\), for \(-M\leq m\leq M\), for integer \(m\) and \(M=8N,\,16N\). This is because we require the filter length to be an integer multiple of \(N\), for the sake of implementation simplicity. The computer simulation results for the bit-error-rate
| Parameter | Value |
| --- | --- |
| \(N=I\) (number of subcarriers = interpolation factor) | 256 |
| \(L_{d1}\) | 1024 |
| \(L_{d}\) | 2048 |
| \(a\) | 0.25 |
| \(\rho\) | 0.161 |
| \(M\) | \(8N\), \(16N\) |
| \(T\) | 1 sec |
| \(T_{s}\) | \(T/N\) sec |

Table 3: Simulation parameters.
(BER) vs the average SNR per bit (\(\text{SNR}_{\text{av},\,b}\)) are presented in Figure 9. The theoretical BER is obtained from [19]. The BER results for \(M=8N\) are denoted by "sim8" and "theory8". The BER results for \(M=16N\) are denoted by "sim16" and "theory16". Observe the close match between theory and simulations.
## Conclusions
This article discusses the implementation of an OFDM-OQAM/FBMC/UFMC system in discrete time, using the Hilbert transform. A simple matched filter receiver is sufficient for detecting the symbols. In the present work, an AWGN channel is considered. Frequency selective fading channels, along with carrier frequency offsets, can be considered in future work.
|
2302.14265 | Neural Operators for Bypassing Gain and Control Computations in PDE
Backstepping | We introduce a framework for eliminating the computation of controller gain
functions in PDE control. We learn the nonlinear operator from the plant
parameters to the control gains with a (deep) neural network. We provide
closed-loop stability guarantees (global exponential) under an NN-approximation
of the feedback gains. While, in the existing PDE backstepping, finding the
gain kernel requires (one offline) solution to an integral equation, the neural
operator (NO) approach we propose learns the mapping from the functional
coefficients of the plant PDE to the kernel function by employing a
sufficiently high number of offline numerical solutions to the kernel integral
equation, for a large enough number of the PDE model's different functional
coefficients. We prove the existence of a DeepONet approximation, with
arbitrarily high accuracy, of the exact nonlinear continuous operator mapping
PDE coefficient functions into gain functions. Once proven to exist, learning
of the NO is standard, completed "once and for all" (never online) and the
kernel integral equation doesn't need to be solved ever again, for any new
functional coefficient not exceeding the magnitude of the functional
coefficients used for training. We also present an extension from approximating
the gain kernel operator to approximating the full feedback law mapping, from
plant parameter functions and state measurement functions to the control input,
with semiglobal practical stability guarantees. Simulation illustrations are
provided and code is available on github. This framework, eliminating real-time
recomputation of gains, has the potential to be game changing for adaptive
control of PDEs and gain scheduling control of nonlinear PDEs.
The paper requires no prior background in machine learning or neural
networks. | Luke Bhan, Yuanyuan Shi, Miroslav Krstic | 2023-02-28T02:56:47Z | http://arxiv.org/abs/2302.14265v1 | # Neural Operators for Bypassing Gain and Control Computations in PDE Backstepping
###### Abstract
We introduce a framework for eliminating the computation of controller gain functions in PDE control. We learn the nonlinear operator from the plant parameters to the control gains with a (deep) neural network. We provide closed-loop stability guarantees (global exponential) under an NN-approximation of the feedback gains. While, in the existing PDE backstepping, finding the gain kernel requires (one offline) solution to an integral equation, the neural operator (NO) approach we propose learns the mapping from the functional coefficients of the plant PDE to the kernel function by employing a sufficiently high number of offline numerical solutions to the kernel integral equation, for a large enough number of the PDE model's different functional coefficients. We prove the existence of a DeepONet approximation, with arbitrarily high accuracy, of the exact nonlinear continuous operator mapping PDE coefficient functions into gain functions. Once proven to exist, learning of the NO is standard, completed "once and for all" (never online) and the kernel integral equation doesn't need to be solved ever again, for any new functional coefficient not exceeding the magnitude of the functional coefficients used for training. We also present an extension from approximating the gain kernel operator to approximating the full feedback law mapping, from plant parameter functions and state measurement functions to the control input, with semiglobal practical stability guarantees. Simulation illustrations are provided and code is available on github. This framework, eliminating real-time recomputation of gains, has the potential to be game changing for adaptive control of PDEs and gain scheduling control of nonlinear PDEs.
The paper requires no prior background in machine learning or neural networks.
## I Introduction
ML/AI is often (not entirely unjustifiably) thought of as an "existential threat" to model-based sciences, from physics to conventional control theory. In recent years, a framework has emerged [47, 48, 51, 52], initiated by George Karniadakis, his coauthors, and teams led by Anima Anandkumar and Andrew Stuart, which promises to unite the goals of physics and learning, rather than presenting learning as an alternative or substitute to first-principles physics. In this framework, often referred to as neural operators (NO), which is formulated as learning of mappings from function spaces into function spaces, and is particularly suitable for PDEs, solution/"flow" maps can be learned after a sufficiently large number of simulations for different initial conditions. (In some cases, parameters of models can also be identified from experiments.)
_Dynamics of plant parameters to control gains, and learning of those maps._ One can't but ask what the neural operator reasoning can offer to control theory, namely, to the design of controllers, observers, and online parameter estimators. This paper is the first venture in this direction, a breakthrough with further possibilities, and a blueprint (of a long series of steps) to learn PDE control designs and prove their stability.
In control systems (feedback controllers, observers, identifiers), various kinds of nonlinear maps arise, some from vector into vector spaces, others from vector or function spaces into function spaces. Some of the maps have time as an argument (making the domain infinite) and others are mappings from compact domains into compact image sets, such as mappings converting system coefficients into controller coefficients, such as the mapping \(K(A,B)\) for the closed-loop system \(\dot{x}=Ax+Bu,\ u=Kx\) (under either pole placement or LQR).
While learning nonlinear maps for various design problems for nonlinear ODEs would be worth a study, we focus in this initial work one step beyond, on a benchmark PDE control class. Our focus on an uncomplicated--but unstable--PDE control class is for pedagogical reasons. Combining the operator learning with _PDE backstepping_ is complex enough even for the simplest-looking among PDE stabilization problems.
_PDE backstepping control with the gain computation obviated using neural operators._ Consider 1D hyperbolic partial integro-differential equation systems of the general form \(v_{t}(x,t)=v_{x}(x,t)+\lambda(x)v(x,t)+g(x)v(0,t)+\int_{0}^{x}f(x,y)v(y,t)dy\) on the unit interval \(x\in[0,1]\), which are transformable, using an invertible backstepping "pre-transformation" introduced in [6] into the simple PDE
\[u_{t}(x,t) = u_{x}(x,t)+\beta(x)u(0,t) \tag{1}\] \[u(1,t) = U(t). \tag{2}\]
Our goal is the design of a PDE backstepping boundary control
\[U(t)=\int_{0}^{1}k(1-y)u(y,t)dy. \tag{3}\]
Physically, (1) is a "transport process (from \(x=1\) towards \(x=0\)) with recirculation" of the outlet variable \(u(0,t)\). Recirculation causes instability when the coefficient \(\beta(x)\) is positive and large. This instability is prevented by the backstepping boundary feedback (3) with the gain function \(k(\cdot)\) as a kernel in the spatial integration of the measured state \(u(y,t)\). (The full state does not need to be measured, as explained in Remark 1 at the end of Section IV.)
Backstepping produces the gain kernel \(k\) for a given \(\beta\). The mapping \(\mathcal{K}:\beta\mapsto k\) is nonlinear, continuous, and we learn it.
Why do we care to learn \(\mathcal{K}\)? The kernel _function_\(k\) can always be computed for a particular \(\beta\), so what is the interest in learning the _functional mapping/operator?_ Once \(\mathcal{K}\) is learned, \(k\) no longer needs to be sought, for a new \(\beta\), as a solution to a partial differential or integral equation. For the next/new
\(\beta\), finding \(k\) is simply a "function evaluation" of the learned mapping \(\mathcal{K}\). This provides benefits in both adaptive control where, at each time step, the gain estimate \(\hat{k}\) has to be computed for a new parameter update \(\hat{\beta}\), and in gain scheduling for nonlinear PDEs where the gain has to be recomputed at each current value of the state.
As well known, learning (ML, in general, and its operator learning varieties: DeepONet, FNO, LOCA, NOMAD, etc.) comes with an upfront price. Large data sets need to be first produced, and then large (possibly "deep") neural networks need to be trained. There is no exception to this in the approach we propose. For a large sample set of recirculation functions \(\beta_{i}\), we need to first solve for the corresponding backstepping kernels \(k_{i}\). After that, a NN approximation of \(\mathcal{K}\) needs to be trained on that data set of the \((\beta_{i},k_{i})\) pairs.
One can stop at producing the NN approximation of the mapping \(\mathcal{K}\) and proceed with a heuristic use of the approximated gains \(\hat{k}\). But we don't stop there. We ask whether the PDE system will be still stabilized with the NN-approximated gain kernel \(\hat{k}\). Our main theoretical result is affirmative. With a large enough data set of solved pairs \((\beta_{i},k_{i})\), and a large enough trained (deep) NN, closed-loop stability is guaranteed for a new \(\beta\), not in the training set.
When ML is applied in the control context (as RL or other approaches), it is usually regarded as a model-free design. Our design, summarized in Figure 1, is not model-free; it is model-based. It is only that the computational portion of this model-based (PDE backstepping) design is obviated through ML.
Our learning is offline; not as in adaptive control [1; 6].
_Neural operator literature -- a brief summary._ Neural operators are NN-parameterized maps for learning relationships between function spaces. They originally gained popularity due to their success in mapping PDE solutions while remaining discretization-invariant. Generally, nonlinear operators consist of three components: an encoder, an approximator, and a reconstructor [44]. The encoder is an interpolation from an infinite-dimensional function space to a finite-dimensional vector representation. The approximator aims to mimic the infinite map using a finite-dimensional representation of both the domain function space and the target function space. The reconstructor then transforms the approximation output into the infinite-dimensional target function space. The implementations of the approximator and the reconstructor are generally coupled and can take many forms. For example, the original DeepONet [52] contains a "branch" net that represents the approximation network and a "trunk" net that builds a basis for the target function space. The outputs of the two networks are then taken in linear combination with each other to form the operator. FNO [48] utilizes the approximation network in a Fourier domain where the reconstruction is done on a basis of the trigonometric polynomials. LOCA [37] integrates the approximation network and reconstruction step with a unified attention mechanism. NOMAD [68] extends the linear reconstructor map in DeepONet to a nonlinear map that is capable of learning on nonlinear submanifolds in function spaces. Many further extensions of neural operator architectures, usually designed around domain-specific enhancements [83, 49, 63], are omitted here. Another line of work, physics-informed neural networks (PINNs) [36, 66], uses neural networks as generic solvers of PDEs by adding a physics-constraint loss. However, PINNs need to be re-trained for each new recirculation function \(\beta\), and thus do not provide as much acceleration for the computation of the backstepping kernels as neural operators do.
_Advances in learning-based control._ Among the first in demonstrating the stability of learning-based model predictive controllers (MPC) were the papers [2; 67], followed in several directions. First, for nonlinear systems, deep learning-based approaches consist of jointly learning the controller and/or Lyapunov functions via NNs [10; 11; 12; 13; 14; 20; 21]. [10] proposed a method for learning control policies and NN Lyapunov functions using an empirical Lyapunov loss and then validating using formal verification. [11; 12] generalize the method to learning Lyapunov functions for piecewise linear
Figure 1: An algorithmic representation of our design paradigm of employing neural operators in boundary control of PDEs. Three major step clusters are performed: (1) derivation of the integral equations for the backstepping kernels, performed only once; (2) learning of the mapping from the plant parameter functions into the backstepping kernel functions, also performed only once; and (3) implementation of the controller for specific plant parameters. The task in the top box has been completed in [40]. In this paper, the task in the middle box is introduced and stability guarantees for the task in the bottom box are provided.
and hybrid systems, and [13] for learning regions of attraction of nonlinear systems. In addition, [59; 76] have explored how learning-based control will affect nominal systems with known Lyapunov functions, and [22; 62] studied the problem of learning stability certificates and stable controllers directly from data. In a similar vein, [4] has developed a provable stable data-driven algorithm based on system measurements and prior knowledge for linear time-invariant systems.
In a separate, but related direction, many reinforcement learning (RL) [7; 74] control approaches have been developed over the past few years. On the one side, model-based RL has been studied due to its superior sample efficiency and interpretable guarantees. The main focus has been on learning the system dynamics and providing closed-loop guarantees in _finite-time_ for both linear systems [15; 23; 29; 42; 77] (and references within), and nonlinear systems [5; 35; 43; 71]. For model-free RL methods, [30; 56; 60; 90] proved the convergence of policy optimization, a popular model-free RL method, to the optimal controller for linear time-invariant systems, [58; 61] for linear time-varying systems, [75] for partially observed linear systems. See [32] for a recent review of policy optimization methods for continuous control problems such as the LQR, \(H_{\infty}\) control, risk-sensitive control, LQG, and output feedback synthesis. For nonlinear systems, [16; 17; 19; 70] investigated policy optimization with stability guarantees in which the stability constraints are derived from control Lyapunov functions. In addition to policy optimization methods, [8; 78; 46; 79] have studied and proved the stability and asymptotic convergence of other model-free RL algorithms such as actor-critic methods [46; 79] and Q-learning [78] in control affine systems. In the domain of cyber-physical systems (CPS), a theoretical framework has been developed for learning-based control to handle partially observable systems [53].
Many advances have been made in learning-based control in games and multi-agent systems [31; 50; 54; 55; 57; 64; 65; 80; 88; 89]. Convergence is characterized for various learning-based methods to Nash equilibria in zero-sum linear quadratic games [88], continuous games [55], Stackelberg games [31; 57], Markov games [54; 87], and multi-agent learning over networked systems [50; 64; 65]. A recent review of learning-based control in games is in [89].
We focus on learning-based control for PDE systems. In our previous work [69], we demonstrate the empirical success of using NOs for accelerating PDE backstepping observers, without theoretical guarantees. This work represents the first step towards using NOs for provably bypassing gain computations (with exponential stability guarantees) or directly learning the controller (with practical stability) in PDE backstepping.
_Backstepping control of first-order hyperbolic PDEs._ The PDE system (1), (2) is the simplest open-loop unstable PDE of any kind which can be of interest to the researcher working on PDE stabilization by boundary control. This system is treated here as a technical benchmark, as was done as well in [6] and a number of other references offering methodological advances in PDE stabilization. System (1), (2) is a particular case of a single-PDE hyperbolic class in [40] for which PDE backstepping was first introduced in the hyperbolic setting. Coupled systems of first-order hyperbolic PDEs are of greater interest because they arise in fluid flows, traffic flows, elastic structures, and other applications. The first result on backstepping for a _pair_ of coupled hyperbolic PDEs was in [18]. The extension from two to \(n+1\) hyperbolic PDEs, with actuation of only one and with counterconvection of \(n\) other PDEs was introduced in [27]. An extension from \(n+1\) to \(n+m\) coupled PDEs, with actuation on \(m\) "homodirectional" PDEs, was provided in [33; 34]. Redesigns that are robust to delays were provided in [3]. An extension from coupled hyperbolic PDEs to cascades with ODEs was presented in [28]. An extension from hyperbolic PDE-ODE cascades to "sandwiched" ODE-PDE-ODE systems was presented in [81] and an event-triggered design for such systems was given in [82]. The extension of PDE backstepping to output-feedback regulation with disturbances is proposed in [25; 26]. For coupled hyperbolic PDEs with unknown parameters, a comprehensive collection of adaptive control designs was provided in the book [1]. Applications of backstepping to coupled hyperbolic PDE models of traffic are introduced in [84; 85].
_Paper outline and contributions._ After a brief introduction to the backstepping design in Section II, for system (1), (2), in Section III we prove that the backstepping kernel operator is locally Lipschitz, between the spaces of continuous functions, with which we satisfy a sufficient condition for the existence of a neural operator approximation of a nonlinear operator to arbitrarily high accuracy--stated at the section's end in a formal result and illustrated with an example of approximating the operator \(k=\mathcal{K}(\beta)\). In Section IV we present the first of our main results: the closed-loop stabilization (not merely practical but exponential) with a DeepONet-approximated backstepping gain kernel function. In Section V we present simulation results that illustrate stabilization under DeepONet-approximated gains. Then, in Section VI we pose the question of whether we can not only approximate the gain kernel mapping \(\beta(x)\mapsto k(x)\), as in Sections III and IV, but the entire feedback law mapping \((\beta(x),u(x,t))\mapsto\int_{0}^{1}k(1-y)u(y,t)dy\) at each time instant \(t\); we provide an affirmative answer and a guarantee of semiglobal practical exponential stability under such a DeepONet approximation. In Section VII we illustrate this feedback law approximation with a theory-confirming simulation. Then, in Section VIII, we present the paper's most general result, which we leave for the end for pedagogical reasons, since it deals with Volterra operator kernel functions of two variables, \((x,y)\), on a triangular domain, and requires continuity of mappings between spaces of functions that are not just continuous but continuously differentiable, so that not only the backstepping kernel is accurately approximable but also the kernel's spatial derivatives, as required for closed-loop stability. We close with a numerical illustration for this general case in Section IX.
In summary, the paper's contributions are the PDE stabilization under DeepONet approximations of backstepping gain kernels (Theorems 2 and 4) and under the approximation of backstepping feedback laws (Theorem 3). Our stabilization results also hold for any other neural operators with a universal approximation property (shown for LOCA [37] and for FNO on the periodic domain [38]).
_Notation._ We denote convolution operations as
\[(a*b)(x)=\int_{0}^{x}a(x-y)b(y)dy \tag{4}\]
In the sequel, we suppress the arguments \(x\) and \(t\) wherever clear from the context. For instance, we write (1), (2) compactly as \(u_{t}=u_{x}+\beta u(0)\) and \(u(1)=U\), where, from the context, the boundary values \(u(0),u(1)\) depend on \(t\) as well.
## II Backstepping Design for a Transport PDE with 'Recirculation'
Consider the PDE system (1), (2). We employ the following backstepping transformation:
\[w=u-k*u, \tag{5}\]
i.e., \(w(x,t)=u(x,t)-\int_{0}^{x}k(x-y)u(y,t)dy\), to convert the plant into the target system
\[w_{t} = w_{x} \tag{6}\] \[w(1) = 0 \tag{7}\]
with the help of feedback
\[U=(k*u)(1), \tag{8}\]
namely, \(U(t)=\int_{0}^{1}k(1-y)u(y,t)dy\). To yield the target system, \(k\) must satisfy the integral/convolution equation
\[k(x)=-\beta(x)+\int_{0}^{x}\beta(x-y)k(y)dy \tag{9}\]
for \(x\in[0,1]\). Note that, while this integral equation is linear in \(k\) for a given \(\beta\), the mapping from \(\beta\) to \(k\) is actually nonlinear, due to the product in the convolution of \(\beta\) with \(k\).
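Since (9) is a Volterra integral equation of the second kind, it can be solved numerically by successive approximations; a minimal Python sketch on a uniform grid (the grid size and iteration count below are our choices):

```python
import numpy as np

def backstepping_kernel(beta, h, iters=60):
    """Successive approximations for k = -beta + beta * k, eq. (9),
    with the Volterra convolution (4) discretized by a rectangle rule."""
    k = -beta.copy()
    for _ in range(iters):
        k = -beta + h*np.convolve(beta, k)[:len(beta)]
    return k

x = np.linspace(0, 1, 501)
h = x[1] - x[0]
beta = 6*np.cos(3*np.arccos(x))        # Chebyshev example with gamma = 3
k = backstepping_kernel(beta, h)
res = k + beta - h*np.convolve(beta, k)[:len(x)]   # residual of eq. (9)
print("max residual:", np.max(np.abs(res)))
```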
## III Accuracy of Approximation of Backstepping Kernel Operator with DeepONet
An \(n\)-layer NN \(f^{\mathcal{N}}:\mathbb{R}^{d_{1}}\to\mathbb{R}^{d_{n}}\) is given by
\[f^{\mathcal{N}}(x,\theta):=(l_{n}\circ l_{n-1}\circ...\circ l_{2}\circ l_{1}) (x,\theta) \tag{10}\]
where layers \(l_{i}\) start with \(l_{0}=x\in\mathbb{R}^{d_{1}}\) and continue as
\[l_{i+1}(l_{i},\theta_{i+1}):=\sigma(W_{i+1}l_{i}+b_{i+1}),\quad i=0,\ldots,n-1 \tag{11}\]
\(\sigma\) is a nonlinear activation function, and the weights \(W_{i+1}\in\mathbb{R}^{d_{i+1}\times d_{i}}\) and biases \(b_{i+1}\in\mathbb{R}^{d_{i+1}}\) are parameters to be learned, collected into \(\theta_{i+1}\in\mathbb{R}^{d_{i+1}(d_{i}+1)}\), and then into \(\theta=[\theta_{1}^{\mathrm{T}},\ldots,\theta_{n}^{\mathrm{T}}]^{\mathrm{T}}\in \mathbb{R}^{\sum_{i=1}^{n}d_{i+1}(d_{i}+1)}\). Let \(\vartheta^{(k)},\theta^{(k)}\in\mathbb{R}^{\sum_{i=1}^{n}d_{k,i+1}(d_{k,i}+1)}\) denote a sequence of NN weights.
An neural operator (NO) for approximating a nonlinear operator \(\mathcal{G}:\mathcal{U}\mapsto\mathcal{V}\) is defined as
\[\mathcal{G}_{\mathbb{N}}(\mathbf{u}_{m})(y)=\sum_{k=1}^{p}g^{\mathcal{N}}( \mathbf{u}_{m};\vartheta^{(k)})f^{\mathcal{N}}(y;\theta^{(k)}) \tag{12}\]
where \(\mathcal{U},\mathcal{V}\) are function spaces of continuous functions \(u\in\mathcal{U},v\in\mathcal{V}\), \(\mathbf{u}_{m}\) is the evaluation of the function \(u\) at the points \(x_{1},\ldots,x_{m}\), \(p\) is the number of chosen basis components in the target space, \(y\in Y\) is the location at which the output function \(v(y)\) is evaluated, and \(g^{\mathcal{N}}\), \(f^{\mathcal{N}}\) are NNs termed the branch and trunk networks. Note that \(g^{\mathcal{N}}\) and \(f^{\mathcal{N}}\) are not limited to feedforward NNs of the form (10), but can also be convolutional or recurrent.
**Theorem 1**: _(DeepONet universal approximation theorem [24, Theorem 2.1]). Let \(X\subset\mathbb{R}^{d_{x}}\) and \(Y\subset\mathbb{R}^{d_{y}}\) be compact sets of vectors \(x\in X\) and \(y\in Y\), respectively. Let \(\mathcal{U}:X\to U\subset\mathbb{R}^{d_{u}}\) and \(\mathcal{V}:Y\to V\subset\mathbb{R}^{d_{v}}\) be sets of continuous functions \(u(x)\) and \(v(y)\), respectively. Let \(\mathcal{U}\) be also compact. Assume the operator \(\mathcal{G}:\mathcal{U}\to\mathcal{V}\) is continuous. Then, for all \(\epsilon>0\), there exist \(m^{*},p^{*}\in\mathbb{N}\) such that for each \(m\geq m^{*}\), \(p\geq p^{*}\), there exist \(\theta^{(k)},\vartheta^{(k)}\), neural networks \(f^{\mathcal{N}}(\cdot;\theta^{(k)}),g^{\mathcal{N}}(\cdot;\vartheta^{(k)}),k=1,\ldots,p\), and \(x_{j}\in X,j=1,\ldots,m\), with corresponding \(\mathbf{u}_{m}=(u(x_{1}),u(x_{2}),\cdots,u(x_{m}))^{\mathrm{T}}\), such that_
\[|\mathcal{G}(u)(y)-\mathcal{G}_{\mathbb{N}}(\mathbf{u}_{m})(y)|<\epsilon \tag{13}\]
_for all functions \(u\in\mathcal{U}\) and all values \(y\in Y\) of \(\mathcal{G}(u)\in\mathcal{V}\)._
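For concreteness, a minimal (untrained) numpy sketch of the DeepONet structure (10)-(12) follows; in practice the weights would be fit to the \((\beta_{i},k_{i})\) training pairs, and the widths below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, width = 50, 32, 64          # sensor count, basis size, width (assumed)

def mlp(sizes):
    return [(rng.normal(0, np.sqrt(2.0/a), (b, a)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, v):
    for i, (W, bias) in enumerate(params):
        v = W @ v + bias
        if i < len(params) - 1:
            v = np.tanh(v)        # the activation sigma of eq. (11)
    return v

branch = mlp([m, width, p])       # g^N: sampled beta -> p coefficients
trunk = mlp([1, width, p])        # f^N: query point y -> p basis values

def deeponet(beta_m, y):
    """Eq. (12): sum_k g_k^N(beta_m) f_k^N(y)."""
    return forward(branch, beta_m) @ forward(trunk, np.array([y]))

xs = np.linspace(0, 1, m)
beta_m = 6*np.cos(3*np.arccos(xs))
print(deeponet(beta_m, 0.5))      # untrained output; would approximate k(0.5)
```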
**Definition 1**: _(backstepping kernel operator). A mapping \(\mathcal{K}:\beta\mapsto k\) of \(C^{0}[0,1]\) into itself, where \(k=\mathcal{K}(\beta)\) satisfies_
\[\mathcal{K}(\beta)=-\beta+\beta*\mathcal{K}(\beta), \tag{14}\]
_namely, in the Laplace transform notation,_
\[k=\mathcal{K}(\beta):=\mathscr{L}^{-1}\left\{\frac{\mathscr{L}\{\beta\}}{ \mathscr{L}\{\beta\}-1}\right\} \tag{15}\]
_is referred to as the backstepping kernel operator._
**Lemma 1**: _(Lipschitzness of backstepping kernel operator \(\mathcal{K}\)). The kernel operator \(\mathcal{K}:\beta\mapsto k\) in Definition 1 is Lipschitz. Specifically, for any \(B>0\) the operator \(\mathcal{K}\) satisfies_
\[||\mathcal{K}(\beta_{1})-\mathcal{K}(\beta_{2})||_{\infty}\leq C||\beta_{1}- \beta_{2}||_{\infty} \tag{16}\]
_with the Lipschitz constant_
\[C=\mathrm{e}^{3B} \tag{17}\]
_for any pair of functions \((\beta_{1},\beta_{2})\) such that \(\|\beta_{1}\|_{\infty},\|\beta_{2}\|_{\infty}\leq B\), where \(\|\cdot\|_{\infty}\) is the supremum norm over the argument of \(\beta\) and \(k\)._
Proof.: Start with the iteration \(k^{0}=-\beta,k^{n+1}=k^{0}+\beta*k^{n},\ n\geq 0\) and consider the iteration
\[\Delta k^{n+1}=\beta*\Delta k^{n},\qquad\Delta k^{0}=k^{0}=-\beta \tag{18}\]
for the difference \(\Delta k^{n}=k^{n}-k^{n-1}\), which sums to
\[k=\sum_{n=1}^{\infty}\Delta k^{n}. \tag{19}\]
Next, for \(\bar{\beta}=\|\beta\|_{\infty}\) and all \(x\in[0,1]\),
\[|\Delta k^{n}(x)|\leq\frac{\bar{\beta}^{n+1}x^{n}}{n!}, \tag{20}\]
which is established by induction by postulating \(\left|\Delta k^{n-1}(x)\right|\leq\frac{\bar{\beta}^{n}x^{n-1}}{(n-1)!}\) and by computing, from (18),
\[|\Delta k^{n}(x)| = \left|\int_{0}^{x}\beta(x-y)\Delta k^{n-1}(y)dy\right| \tag{21}\] \[\leq \bar{\beta}\int_{0}^{x}\frac{\bar{\beta}^{n}y^{n-1}}{(n-1)!}\,dy \leq\frac{\bar{\beta}^{n+1}x^{n}}{n!}.\]
And then, (20) and (19) yield
\[|k(x)|\leq\bar{\beta}\mathrm{e}^{\bar{\beta}x}. \tag{22}\]
Next, for \(k_{1}=\mathcal{K}(\beta_{1})\) and \(k_{2}=\mathcal{K}(\beta_{2})\) it is easily verified that
\[k_{1}-k_{2}=\beta_{1}*(k_{1}-k_{2})-\delta\beta+\delta\beta*k_{2} \tag{23}\]
where \(\delta\beta=\beta_{1}-\beta_{2}\). Define the iteration
\[\delta k^{n+1} = \beta_{1}*\delta k^{n} \tag{24}\] \[\delta k^{0} = -\delta\beta+\delta\beta*k_{2} \tag{25}\]
which verifies \(k_{1}-k_{2}=\sum_{n=1}^{\infty}\delta k^{n}\). Noting that (22) ensures that \(k_{2}=\mathcal{K}(\beta_{2})\) verifies \(|k_{2}(x)|\leq\bar{\beta}_{2}\mathrm{e}^{\bar{\beta}_{2}x}\), from (25),
\[|\delta k^{0}(x)|\leq\left(1+\bar{\beta}_{2}\mathrm{e}^{\bar{\beta}_{2}x} \right)\overline{\delta\beta}\leq\mu_{2}\overline{\delta\beta} \tag{26}\]
where \(\mu_{2}:=1+\bar{\beta}_{2}\mathrm{e}^{\bar{\beta}_{2}}\) and \(\overline{\delta\beta}=\|\beta_{1}-\beta_{2}\|_{\infty}\), it can be shown by induction, by mimicking the chain of inequalities (21), that, for all \(x\in[0,1]\),
\[|\delta k^{n}(x)|\leq\mu_{2}\overline{\delta\beta}\frac{\bar{\beta}_{1}^{n}x^ {n}}{n!} \tag{27}\]
and therefore it follows that, for all \(x\in[0,1]\),
\[|k_{1}(x)-k_{2}(x)| \leq \left(1+\bar{\beta}_{2}\mathrm{e}^{\bar{\beta}_{2}}\right)\mathrm{ e}^{\bar{\beta}_{1}x}\|\beta_{1}-\beta_{2}\|_{\infty} \tag{28}\] \[\leq \mathrm{e}^{3B}\|\beta_{1}-\beta_{2}\|_{\infty}.\]
Hence, local Lipschitzness is proven with (17).
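The bound (16)-(17) can be spot-checked numerically (the bound is conservative, so the inequality holds with a large margin); a sketch with an inlined version of the successive-approximation solver from Section II:

```python
import numpy as np

def backstepping_kernel(beta, h, iters=60):
    k = -beta.copy()
    for _ in range(iters):
        k = -beta + h*np.convolve(beta, k)[:len(beta)]
    return k

x = np.linspace(0, 1, 501)
h = x[1] - x[0]
b1, b2 = 6*np.cos(3.0*np.arccos(x)), 6*np.cos(3.2*np.arccos(x))
k1, k2 = backstepping_kernel(b1, h), backstepping_kernel(b2, h)
B = max(np.abs(b1).max(), np.abs(b2).max())        # here B = 6
lhs = np.abs(k1 - k2).max()                        # ||K(b1) - K(b2)||_inf
rhs = np.exp(3*B)*np.abs(b1 - b2).max()            # e^{3B} ||b1 - b2||_inf
print(f"{lhs:.3f} <= {rhs:.3e} :", lhs <= rhs)     # eqs. (16)-(17)
```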
**Corollary 1**: _(to Theorem 1)._ _Consider the backstepping kernel operator \(\mathcal{K}\) in Definition 1. For all \(B>0\) and \(\epsilon>0\), there exist \(p^{*}(B,\epsilon),m^{*}(B,\epsilon)\in\mathbb{N}\), with an increasing dependence on \(B\) and \(1/\epsilon\), such that for each \(p\geq p^{*}\) and \(m\geq m^{*}\) there exist \(\theta^{(k)},\vartheta^{(k)}\), neural networks \(f^{\mathcal{N}}(\cdot;\theta^{(k)}),g^{\mathcal{N}}(\cdot;\vartheta^{(k)}),k= 1,\ldots,p\), and \(x_{j}\in[0,1],j=1,\ldots,m\), with corresponding \(\beta_{m}=(\beta(x_{1}),\beta(x_{2}),\cdots,\beta(x_{m}))^{\mathrm{T}}\), such that
\[|\mathcal{K}(\beta)(x)-\mathcal{K}_{\mathbb{N}}(\beta_{m})(x)|<\epsilon \tag{29}\]
holds for all Lipschitz \(\beta\) with the property that \(\|\beta\|_{\infty}\leq B\).
So the backstepping kernel is approximable, qualitatively, but how many neurons and how much data are needed for a given \(\epsilon\)? We recall a result on the minimum-sized DeepONet.
**Proposition 1**: _(DeepONet size for kernel operator approximation [24, Theorem 3.3 and Remark 3.4]). If the kernel operator defined in (14) is Lipschitz (or at least Holder) continuous, a DeepONet that approximates it to a required error tolerance \(\epsilon>0\) indicated by (29) employs the number of data point evaluations for \(\beta\) on the order of_
\[m\sim\epsilon^{-1}, \tag{30}\]
_the number of basis components in the interpolation when reconstructing into \(C^{0}[0,1]\) on the order of_
\[p\sim\epsilon^{-\frac{1}{2}}, \tag{31}\]
_the numbers of layers \(L_{g^{\mathcal{N}}}\) in the branch network and of neurons \(N_{g^{\mathcal{N}}}\) in each layer of the branch network on the order given, respectively, by_
\[N_{g^{\mathcal{N}}}\cdot L_{g^{\mathcal{N}}}\sim\left(\frac{1}{\epsilon}\right) ^{\frac{1}{\epsilon}}, \tag{32}\]
_and the total size of the trunk network on the order of_
\[|\theta^{(k)}|\sim\left(\frac{3}{2}\log\frac{1}{\epsilon}\right)^{2}. \tag{33}\]
**Example 1**: _In Figure 2 we present two examples of approximation of \(k\) using a DeepONet approximation of \(\mathcal{K}(\beta)\) for given \(\beta_{1}\) and \(\beta_{2}\), which are taken as Chebyshev polynomials \(\beta(x)=6\cos(\gamma\cos^{-1}(x))\). They are trained on approximating kernels from 900 samples with \(\gamma\in\text{uniform}[2,8]\)._
## IV Stability under Kernel Approximation with DeepONet
For our stability study under an approximate (imperfect) kernel, we begin with a derivation of the target PDE system under a backstepping transformation employing a DeepONet approximation of the backstepping kernel.
For a given \(\beta\), let \(\hat{k}=\hat{\mathcal{K}}(\beta)\), where \(\hat{\mathcal{K}}=\mathcal{K}_{\mathbb{N}}\), denote an NO approximation of the exact backstepping kernel \(k\) whose existence is established in Corollary 1 for DeepONet. Let
\[\tilde{k}=k-\hat{k} \tag{34}\]
denote the approximation error. Finally, let the backstepping transformation with the approximate kernel \(\hat{k}\) be
\[\hat{w}=u-\hat{k}*u. \tag{35}\]
With routine calculations, employing the approximate backstepping transformation and the feedback
\[U=(\hat{k}*u)(1) \tag{36}\]
we arrive at the target system
\[\hat{w}_{t} = \hat{w}_{x}+\delta\hat{w}(0) \tag{37}\] \[\hat{w}(1) = 0, \tag{38}\]
where the function \(\delta(x)\) is defined as
\[\delta=-\tilde{k}+\beta*\tilde{k}. \tag{39}\]
Next, we proceed with a Lyapunov analysis.
**Lemma 2**: _(a Lyapunov estimate)._ _Given arbitrarily large \(B>0\), for all Lipschitz \(\beta\) with \(\|\beta\|_{\infty}\leq B\), and for all neural operators \(\hat{\mathcal{K}}\) with \(\epsilon\in(0,\epsilon^{*})\), where_
\[\epsilon^{*}(B)=\frac{c\mathrm{e}^{-c/2}}{1+B} \tag{40}\]
_the Lyapunov functional_
\[V(t)=\int_{0}^{1}\mathrm{e}^{cx}\hat{w}^{2}(x,t)dx,\qquad c>0. \tag{41}\]
_satisfies the following estimate along the solutions of the target system (37), (38),_
\[V(t)\leq V(0)\mathrm{e}^{-c^{*}t}, \tag{42}\]
_for_
\[c^{*}=c-\frac{e^{c}}{c}\epsilon^{2}\left(1+B\right)^{2}>0. \tag{43}\]
_The accuracy required of the NO \(\hat{\mathcal{K}}\), and given by (40), is maximized with \(c=2\) and has the value \(\epsilon^{*}(B)=\frac{2}{\mathrm{e}(1+B)}\)._
Proof.: Several steps of calculation (chain rule, substitution, integration by parts) result in
\[\dot{V} = -\hat{w}^{2}(0)-c\int_{0}^{1}\mathrm{e}^{cx}\hat{w}^{2}(x,t)dx \tag{44}\] \[+\hat{w}(0)\int_{0}^{1}\delta(x)\mathrm{e}^{cx}\hat{w}(x)dx\] \[\leq -\frac{1}{2}\hat{w}^{2}(0)-c\int_{0}^{1}\mathrm{e}^{cx}\hat{w}^{2}(x,t)dx\] \[+\left(\int_{0}^{1}\delta(x)\mathrm{e}^{cx}\hat{w}(x)dx\right)^{2}\]
With the Cauchy-Schwarz inequality
\[\left(\int_{0}^{1}\delta(x)\mathrm{e}^{cx}\hat{w}(x)dx\right)^{2} \tag{45}\] \[\leq\int_{0}^{1}\delta^{2}(x)\mathrm{e}^{cx}dx\int_{0}^{1}\mathrm{ e}^{cx}\hat{w}(x)^{2}dx\]
we get
\[\dot{V} \leq -\frac{1}{2}\hat{w}^{2}(0)-\left(c-\int_{0}^{1}\delta^{2}(x)\mathrm{e}^{cx}dx\right)V \tag{46}\]
The function \(\delta\) in (39) is bounded by \(|\delta(x)|\leq(1+||\beta||_{\infty})\,||\tilde{k}||_{\infty}\) which, in turn, using (29), yields
\[|\delta(x)|\leq(1+\bar{\beta})\epsilon=:\bar{\delta}. \tag{47}\]
Then, substituting this into (46), we obtain:
\[\dot{V} \leq -\frac{1}{2}\hat{w}^{2}(0)-\left(c-\epsilon^{2}\left(1+\bar{\beta} \right)^{2}\int_{0}^{1}\mathrm{e}^{cx}dx\right)V \tag{48}\] \[\leq -\frac{1}{2}\hat{w}^{2}(0)-\left(c-\frac{\mathrm{e}^{c}}{c}\epsilon^{2}\left(1+ \bar{\beta}\right)^{2}\right)V\] \[\leq -\frac{1}{2}\hat{w}^{2}(0)-\left(c-\frac{\mathrm{e}^{c}}{c}\epsilon^{2}\left(1+ B\right)^{2}\right)V\]
For \(0\leq\epsilon\leq\epsilon^{*}\), where \(\epsilon^{*}\) is defined in (40), we have
\[\dot{V}\leq-\frac{1}{2}\hat{w}^{2}(0)-c^{*}V \tag{49}\]
for some \(c^{*}>0\) in (43).
The size of the NO and of the dataset needs to increase with \(\bar{\beta}\), i.e., with the potential instability in the open-loop system.
**Lemma 3**: _(bound on inverse approximate kernel). The kernel \(\hat{l}\) of the inverse to the backstepping transformation (35),_
\[u=\hat{w}+\hat{l}\ast\hat{w}, \tag{50}\]
_satisfies, for all \(x\in[0,1]\), the estimate_
\[|\hat{l}(x)|\leq\left(\bar{\beta}+(1+\bar{\beta})\epsilon\right)\mathrm{e}^{(1+\bar{\beta})\epsilon x}. \tag{51}\]
Proof.: It is easily shown that \(\hat{l}\) obeys the integral equation
\[\hat{l}=-\beta+\delta+\delta\ast\hat{l}. \tag{52}\]
Figure 2: Examples of \(\beta\), \(\hat{k}\) for Chebyshev polynomials defined as \(\beta=6\cos(\gamma\cos^{-1}(x))\) with \(\gamma=3\), \(7.35\) on the left and right respectively. The \(\gamma\) parameter controls the wave frequency of \(\beta\) and therefore affects the resulting kernel. Additionally, the DeepONet absolute approximation error between \(\hat{k}\) and \(k\) is shown. The DeepONet approximates the “smoother” function on the left with better precision than the large, oscillating function on the right.
Using the successive approximation approach, we get that the following bound holds for all \(x\in[0,1]\):
\[|\hat{l}(x)|\leq\left(\bar{\beta}+\bar{\delta}\right)\mathrm{e}^{\bar{\delta}x}. \tag{53}\]
With (47), we get (51).
**Theorem 2**: _(Closed-loop stability robust to DeepONet approximation of backstepping kernel). Let \(B>0\) be arbitrarily large and consider the closed-loop system consisting of (1), (2) with any Lipschitz \(\beta\) such that \(\|\beta\|_{\infty}\leq B\), and the feedback (36) with the NO gain kernel \(\hat{k}=\hat{\mathcal{K}}(\beta)\) of arbitrary desired accuracy of approximation \(\epsilon\in(0,\epsilon^{*})\) in relation to the exact backstepping kernel \(k\), where \(\epsilon^{*}(B)\) is defined in (40). This closed-loop system obeys the exponential stability estimate_
\[\|u(t)\|\leq M\mathrm{e}^{-c^{*}t/2}\|u(0)\|,\qquad\forall t\geq 0 \tag{54}\]
_with the overshoot coefficient_
\[M=\left(1+\left(\bar{\beta}+(1+\bar{\beta})\epsilon\right)\mathrm{e}^{(1+\bar {\beta})\epsilon}\right)\left(1+\bar{\beta}\mathrm{e}^{\bar{\beta}}\right) \mathrm{e}^{c/2}. \tag{55}\]
Proof.: First, we note that \(V\) from Lemma 2 satisfies
\[\frac{1}{\left(1+\|\hat{l}\|_{\infty}\right)^{2}}\|u\|^{2}\leq V\leq\mathrm{e }^{c}\left(1+\|\hat{k}\|_{\infty}\right)^{2}\|u\|^{2}. \tag{56}\]
Since, by Lemma 2, \(V(t)\leq V(0)\mathrm{e}^{-c^{*}t}\), we get, for all \(t\geq 0\),
\[\|u(t)\| \leq \left(1+\|\hat{l}\|_{\infty}\right)\left(1+\|\hat{k}\|_{\infty} \right)\mathrm{e}^{c/2} \tag{57}\] \[\times\mathrm{e}^{-c^{*}t/2}\|u(0)\|.\]
Then, noting, with Theorem 1, (22), and Lemma 3 that
\[\|\hat{k}\|_{\infty} \leq \|k\|_{\infty}+\epsilon\leq\bar{\beta}\mathrm{e}^{\bar{\beta}}+\epsilon \tag{58}\] \[\|\hat{l}\|_{\infty} \leq \left(\bar{\beta}+(1+\bar{\beta})\epsilon\right)\mathrm{e}^{(1+ \bar{\beta})\epsilon} \tag{59}\]
we finally arrive at the exponential stability estimate (54).
**Remark 1**: _Full-state measurement \(u(x,t)\) is employed in the feedback law (36) but can be avoided by employing only the measurement of the outlet signal, \(u(0,t)\), from which the full state \(u(x,t)\) is observable, using the observer_
\[\hat{u}_{t} = \hat{u}_{x}+\beta u(0) \tag{60}\] \[\hat{u}(1) = U \tag{61}\]
_and the observer-based controller_
\[U=(\hat{k}*\hat{u})(1), \tag{62}\]
_which can avoid solving the PDE (60), (61) online by employing its explicit solution as an arbitrary function \(\hat{u}(x,t)=\hat{u}_{0}(x)\) for \(t+x\in[0,1)\) and_
\[\hat{u}(x,t)=U(t+x-1)+\int_{t+x-1}^{t}\beta(t+x-\tau)u(0,\tau)d\tau \tag{63}\]
_for \(t+x\geq 1\). A closed-loop stability result as in Theorem 2 can be established for this observer-based controller._
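A sketch of the explicit observer evaluation (63) follows; the history accessors U_hist and y_hist are hypothetical interpolants of the stored control input and outlet measurement, and the quadrature step is our choice:

```python
import numpy as np

def observer_state(x, t, U_hist, y_hist, beta, dt=1e-3):
    """Evaluate the explicit observer solution (63) for t + x >= 1.
    U_hist(s) and y_hist(s) are hypothetical accessors returning the
    stored control input U(s) and outlet measurement u(0, s)."""
    assert t + x >= 1.0
    taus = np.arange(t + x - 1.0, t, dt)
    integral = np.sum(beta(t + x - taus)*y_hist(taus))*dt
    return U_hist(t + x - 1.0) + integral

# usage with stand-in stored signals:
beta = lambda s: 6*np.cos(3*np.arccos(np.clip(s, 0.0, 1.0)))
print(observer_state(0.5, 1.2, lambda s: 0.0, lambda s: np.ones_like(s), beta))
```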
## V Simulations: Stabilization with NO-Approximated Gain Kernel \(\beta\mapsto\mathcal{K}(\beta)\)
Continuing with Example 1, in Figure 3 we show that the system is open-loop unstable for both \(\beta\)s and we present tests with the learned kernels in closed-loop simulations up to \(t=2\). In both cases, the PDE settles (nearly perfectly) by \(t=1\), as expected from the target system with the perfect kernel \(k\). The small ripple in the right simulation is due to the use of the approximated kernel \(\hat{k}\). The simulations confirm the theoretical guarantee that an NO-approximated kernel can successfully emulate a backstepping kernel while maintaining stability.
The NO architecture in \(\hat{\mathcal{K}}\) consists of about 680 thousand parameters, with a training time of 1 minute (using an Nvidia RTX 3090Ti GPU) on a dataset of 900 different \(\beta\) defined as the Chebyshev polynomials \(\beta=6\cos(\gamma\cos^{-1}(x))\) where \(\gamma\sim\) uniform(2, 10). We choose \(\beta\) of this form due to the rich set of PDEs and kernel functions constructed by varying only a single parameter. The resulting training relative \(L_{2}\) error was \(4\times 10^{-3}\) and the testing relative \(L_{2}\) error on 100 instances sampled from the same distribution was \(5\times 10^{-3}\). If a wider distribution of \(\gamma\) is chosen, the mapping can still be learned, but both a larger network and more data are required for the same accuracy.
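For reference, a minimal closed-loop simulation sketch of (1)-(2) under the feedback (36), using a CFL = 1 upwind step (exact transport) and, here, the successively-approximated kernel standing in for the DeepONet output; the initial condition and grid are our choices:

```python
import numpy as np

def backstepping_kernel(beta, h, iters=60):
    k = -beta.copy()
    for _ in range(iters):
        k = -beta + h*np.convolve(beta, k)[:len(beta)]
    return k

N = 500
x = np.linspace(0, 1, N + 1)
dx = x[1] - x[0]
beta = 6*np.cos(3*np.arccos(x))
k = backstepping_kernel(beta, dx)      # stand-in for the gain K-hat(beta)

u = np.ones(N + 1)                     # assumed initial condition u0 = 1
dt = dx                                # CFL = 1: the transport step is exact
for _ in range(int(round(2.0/dt))):    # simulate up to t = 2
    u[:-1] = u[1:] + dt*beta[:-1]*u[0]        # u_t = u_x + beta(x) u(0,t)
    u[-1] = dx*np.sum(k[::-1]*u)              # U(t) = int k(1-y) u(y,t) dy, (36)
print("L2 norm at t = 2:", np.sqrt(dx*np.sum(u**2)))  # small: settles by t ~ 1
```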
## VI Approximating the Full Feedback Law Map \((\beta,u)\mapsto U\)
We have so far pursued only the approximation of the operator \(\mathcal{K}(\beta)\), while treating the feedback operator (8), given by \(U=(k*u)(1)=(\mathcal{K}(\beta)*u)(1)\), as straightforward to compute--merely an integral in \(x\), i.e., a simple inner product between the function \(\mathcal{K}(\beta)(1-x)\) and the state measurement \(u(x,t)\).
It is of theoretical (if not practical) interest to explore the neural approximation of the mapping from \((\beta,u)\) into the scalar control input \(U\). Such a mapping is clearly from a much larger space of functions \((\beta,u)\) into scalars (i.e., the mapping is _functional_) and is, therefore, considerably more training-intensive and learning-intensive. Nevertheless, since it is legitimate to ask how one would approximate not just the feedback gain kernel but the entire feedback law map, we examine this option in this section.
We emphasize that we are approximating just the feedback operator \((\mathcal{K}(\beta)*u)(1)\), whose second argument is the current state \(u\) as a function of \(x\), not the entire trajectory \(u(x,t)\). We do not train the NO using a trajectory-dependent cost \(\int_{0}^{t_{\mathrm{f}}}\left(\int_{0}^{1}u^{2}(x,t)dx+U^{2}(t)\right)dt\) for different initial conditions \(u_{0}\), as, e.g., in the application of RL to the hyperbolic PDEs of traffic flow in [86]. Instead, we perform the training simply on the kernel integral equation (14) and the convolution operation (8) for sample functions \(\beta\) and \(u\) of \(x\).
The form of stability we achieve in this section is less strong than in Theorem 2. While Theorem 2 guarantees global exponential stability, here we achieve only _semiglobal practical_ exponential stability. Because in this section we do not just train a multiplicative gain \(\mathcal{K}(\beta)\) but a feedback of \(u\) as well, the approximation error is not just multiplicative but additive, which is the cause of the exponential stability being _practical_. Because the data set involves samples \(u\) of bounded magnitude, stability is _semiglobal_ only.
Nevertheless, in comparison to the training on closed-loop solutions over a finite time horizon for the traffic flow in [86], where the finite horizon precludes the possibility of stability guarantees, the semiglobal practical exponential stability achieved here is a rather strong result.
We start by establishing the Lipschitzness of the backstepping feedback map.
**Lemma 4**: _Consider the feedback (8), namely,_
\[U=(\mathcal{K}(\beta)*u)(1), \tag{64}\]
_and the associated map \(\mathcal{U}:(\beta,u)\mapsto U\) from \(C^{0}([0,1])^{2}\) into \(\mathbb{R}\). For arbitrary \(B_{\beta},B_{u}>0\), the mapping \(\mathcal{U}\) is Lipschitz on any set of \(x\)-dependent Lipschitz functions \((\beta,u)\) such that \(\|\beta\|_{\infty}\leq B_{\beta},\|u\|_{\infty}\leq B_{u}\), with a Lipschitz constant_

\[C_{\mathcal{U}}=B_{\beta}\mathrm{e}^{B_{\beta}}+B_{u}\mathrm{e}^{3B_{\beta}}. \tag{65}\]

Figure 3: The top row showcases open-loop instability for the recirculation functions \(\beta\) that are the same as in Figure 2, with \(\gamma=3\) and \(\gamma=7.35\) on the left and right, respectively. The bottom two rows highlight examples of the PDE closed-loop state response and the errors between the response with the “perfect gain” \(k\) and the “approximate gain” \(\hat{k}\); \(\beta\) corresponds to the same values as in Figure 2. For the more “fluctuating” plant parameter \(\beta\), on the right of Figure 2, the control task is more challenging and, consequently, the state approximation error is also higher (bottom right).
Proof: Let \(U_{1}=\mathcal{U}(\beta_{1},u_{1})=(\mathcal{K}(\beta_{1})*u_{1})(1)\) and \(U_{2}=\mathcal{U}(\beta_{2},u_{2})=(\mathcal{K}(\beta_{2})*u_{2})(1)\). A calculation gives
\[|U_{1}-U_{2}|=|(\mathcal{K}(\beta_{1})*u_{1})(1)-(\mathcal{K}( \beta_{2})*u_{2})(1)|\] \[\leq\|\mathcal{K}(\beta_{1})\|_{\infty}\|u_{1}-u_{2}\|_{\infty}+ \|u_{2}\|_{\infty}\|\mathcal{K}(\beta_{1})-\mathcal{K}(\beta_{2})\|_{\infty}. \tag{66}\]
Let \(\|\beta_{1}\|_{\infty},\|\beta_{2}\|_{\infty}\leq B_{\beta}\) and \(\|u_{1}\|_{\infty},\|u_{2}\|_{\infty}\leq B_{u}\). Recall that \(\|\mathcal{K}(\beta)\|_{\infty}\leq B_{\beta}\mathrm{e}^{B_{\beta}}\) and \(\|\mathcal{K}(\beta_{1})-\mathcal{K}(\beta_{2})\|_{\infty}\leq\mathrm{e}^{3B_ {\beta}}\|\beta_{1}-\beta_{2}\|_{\infty}\). Then we get
\[|\mathcal{U}(\beta_{1},u_{1})-\mathcal{U}(\beta_{2},u_{2})|\] \[\leq\left(B_{\beta}\mathrm{e}^{B_{\beta}}+B_{u}\mathrm{e}^{3B_{ \beta}}\right)\|(\beta_{1}-\beta_{2},u_{1}-u_{2})\|_{\infty}. \tag{67}\]
Taking the backstepping transformation \(w=u-k*u\), where \(k=\mathcal{K}(\beta)\) is the exact backstepping kernel for \(\beta\), we get
\[w_{t} = w_{x} \tag{68}\] \[w(1) = U-(\mathcal{K}(\beta)*u)(1) \tag{69}\]
Let now \(\hat{\mathcal{U}}\) be the NO version of the mapping \(\mathcal{U}(\beta,u)=(\mathcal{K}(\beta)*u)(1)\). Taking the NO control \(U=\hat{\mathcal{U}}(\beta,u)\), we obtain the boundary condition \(w(1)=\hat{\mathcal{U}}(\beta,u)-(\mathcal{K}(\beta)*u)(1)\), namely, the target system
\[w_{t} = w_{x} \tag{70}\] \[w(1) = \hat{\mathcal{U}}(\beta,u)-\mathcal{U}(\beta,u) \tag{71}\]
Due to the Lipschitzness of \(\mathcal{U}\), based on the DeepONet approximation accuracy theorem, we get the following.
**Lemma 5**: _For all \(B_{\beta},B_{u}>0\) and \(\epsilon\), there exists an NO \(\hat{\mathcal{U}}\) such that_
\[|\mathcal{U}(\beta,u)-\hat{\mathcal{U}}(\beta,u)|<\epsilon \tag{72}\]
_for all \(\beta,u\in C^{0}[0,1]\) that are Lipschitz in \(x\) and such that \(\|\beta\|_{\infty}\leq B_{\beta},\|u\|_{\infty}\leq B_{u}\)._
Next, we state and then prove the main result.
**Theorem 3**: _(Semiglobal practical stability under DeepONet approximation of backstepping feedback law). If \(\epsilon<\epsilon^{*}\), where_
\[\epsilon^{*}(B_{\beta},B_{u},c):=\frac{\sqrt{c}B_{u}}{\mathrm{e}^{c/2}\left(1+ B_{\beta}\right)}>0, \tag{73}\]
_and \(\|u(0)\|\leq B_{u}^{0}\), where_
\[B_{u}^{0}(\epsilon,B_{\beta},B_{u},c):=\frac{1}{1+B_{\beta}\mathrm{e}^{B_{ \beta}}}\left(\frac{B_{u}}{\mathrm{e}^{c/2}\left(1+B_{\beta}\right)}-\frac{ \epsilon}{\sqrt{c}}\right)>0, \tag{74}\]
_the closed-loop solutions under the NO approximation of the PDE backstepping feedback law, i.e.,_
\[u_{t}(x,t) = u_{x}(x,t)+\beta(x)u(0,t) \tag{75}\] \[u(1,t) = \hat{\mathcal{U}}(\beta,u)(t) \tag{76}\]
_satisfy the semiglobal practical exponential stability estimate_
\[\|u(t)\| \leq \left(1+B_{\beta}\right)\left(1+B_{\beta}\mathrm{e}^{B_{\beta}} \right)\mathrm{e}^{c/2}\mathrm{e}^{-ct/2}\|u(0)\| \tag{77}\] \[+\left(1+B_{\beta}\right)\frac{\mathrm{e}^{c/2}}{\sqrt{c}} \epsilon,\qquad\forall t\geq 0.\]
The estimate (77) is semiglobal because the radius \(B_{u}^{0}\) of the ball of initial conditions in \(L^{2}[0,1]\) is made arbitrarily large by increasing \(B_{u}\), and by increasing, in accordance with the increase of \(B_{u}\), the training set size and the number of NN nodes. Nevertheless, though semiglobal, the attraction radius \(B_{u}^{0}\) in (74) is much smaller than the magnitude \(B_{u}\) of the samples of \(u\) in the training set.
The residual value,
\[\limsup_{t\to\infty}\|u(t)\|\leq\left(1+B_{\beta}\right)\frac{\mathrm{e}^{c/2}}{\sqrt{c}}\epsilon \tag{78}\]
is made arbitrarily small by decreasing \(\epsilon\), and by increasing, in accordance with the decrease of \(\epsilon\), the training set size and the number of NN nodes. As the magnitude \(B_{\beta}\) of the (potentially destabilizing) gain samples \(\beta\) used for training grows, the residual error grows.
Proof (of Theorem 3): To make the notation concise, denote \(\tilde{\mathcal{U}}=\mathcal{U}-\hat{\mathcal{U}}\) and note that this mapping satisfies \(|\tilde{\mathcal{U}}(\beta,u)|=|w(1)|\leq\epsilon\) for all \(\|\beta\|_{\infty}\leq B_{\beta},\|u\|_{\infty}\leq B_{u}\). Note also that \(\hat{\mathcal{U}}\) depends on \(\epsilon,B_{\beta},B_{u}\) through the number of training data and NO size. Consider now the Lyapunov functional \(V(t)=\int_{0}^{1}\mathrm{e}^{cx}w^{2}(x,t)dx\). Its derivative is
\[\dot{V} = \mathrm{e}^{c}w^{2}(1)-w^{2}(0)-c\int_{0}^{1}\mathrm{e}^{cx}w^{2}(x,t)dx \tag{79}\] \[\leq -cV+\mathrm{e}^{c}w^{2}(1)\]
which yields
\[V(t) \leq V(0)\mathrm{e}^{-ct}+\frac{\mathrm{e}^{c}}{c}\sup_{0\leq\tau\leq t }w^{2}(1,\tau) \tag{80}\] \[\leq V(0)\mathrm{e}^{-ct}+\frac{\mathrm{e}^{c}}{c}\sup_{0\leq\tau\leq t }\left(\tilde{\mathcal{U}}(\beta,u)(\tau)\right)^{2}.\]
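For completeness, the step from (79) to (80) is the standard comparison-principle estimate, spelled out here in our own notation:

```latex
% From \dot{V} \le -cV + e^{c} w^{2}(1,t), variation of constants gives
\begin{aligned}
V(t) &\leq \mathrm{e}^{-ct}V(0)
      + \mathrm{e}^{c}\int_{0}^{t}\mathrm{e}^{-c(t-\tau)}\,w^{2}(1,\tau)\,d\tau \\
     &\leq \mathrm{e}^{-ct}V(0)
      + \mathrm{e}^{c}\Big(\sup_{0\leq\tau\leq t} w^{2}(1,\tau)\Big)\frac{1-\mathrm{e}^{-ct}}{c}
      \;\leq\; \mathrm{e}^{-ct}V(0) + \frac{\mathrm{e}^{c}}{c}\sup_{0\leq\tau\leq t} w^{2}(1,\tau).
\end{aligned}
```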
Using the facts that

\[\frac{1}{\left(1+\|l\|_{\infty}\right)^{2}}\|u\|^{2}\leq V\leq\mathrm{e}^{c}\left(1+\|k\|_{\infty}\right)^{2}\|u\|^{2} \tag{81}\]

and \(\|k\|_{\infty}\leq B_{\beta}\mathrm{e}^{B_{\beta}}\), \(\|l\|_{\infty}\leq B_{\beta}\), we get
\[\|u(t)\| \leq \left(1+B_{\beta}\right)\left(1+B_{\beta}\mathrm{e}^{B_{\beta}} \right)\mathrm{e}^{c/2}\mathrm{e}^{-ct/2}\|u(0)\| \tag{82}\] \[+\left(1+B_{\beta}\right)\frac{\mathrm{e}^{c/2}}{\sqrt{c}}\sup_{0 \leq\tau\leq t}\left|\tilde{\mathcal{U}}(\beta,u)(\tau)\right|.\]
The conclusions of the theorem are directly deduced from this estimate and the bound \(|\tilde{\mathcal{U}}|<\epsilon\) in Lemma 5.
The NO \(\hat{\mathcal{U}}:(\beta,u)\mapsto U\) is complex, and therefore computationally burdensome in real time. Why not instead precompute the neural operator \(\hat{\mathcal{K}}:\beta\mapsto\hat{k}\) and also find a DeepONet \(\hat{\Omega}\) approximation of the _bilinear_ map \(\Omega:(k,u)\mapsto U\), which is simply the convolution \(\Omega(k,u)(t)=\int_{0}^{1}k(1-x)u(x,t)dy\), and then compute just \(\hat{\Omega}(\hat{k},u)(t)\) in real time, after computing \(\hat{k}=\hat{\mathcal{K}}(\beta)\) offline? This is certainly possible. Why haven't we developed the theory for this approach? Simply because the theory for such a "composition-of-operators" approach, for \(\hat{\Omega}(\hat{\mathcal{K}}(\beta),u)\), would be hardly any different, but just notationally more involved, than the theory that we provide here for the one-shot neural operator \(\hat{\mathcal{U}}(\beta,u)\).
## VII Simulations: Practical Stabilization with NO-Approximated Feedback Law \((\beta,u)\mapsto U\)
Learning the map \((\beta,u)\mapsto U\) is harder than learning \(\beta\mapsto k\) due to the combination of two input functions, \(\beta\) and \(u\). We can learn the mapping using a training set defined by \(\beta\) as in Figure 2, with \(\gamma\sim\text{uniform}(2,6)\), and random values of \(u\). We present results with the learned mapping in Figure 4, where the learned control contains significant error. Due to this, the PDE on the right of Figure 4 contains a significant ripple past the time \(T=1\), whereas the analytically controlled PDE is stabilized, as stipulated by the target system, by \(T=1\). When compared to the operator approximation for the gain kernel in Figure 3 (left), the PDE error is at least twice as large, confirming the theoretical results in Theorem 3 and Theorem 2.
Furthermore, the network architecture, as presented in Figure 5, requires significant enhancement over a traditional DeepONet. To learn this mapping, we emulate the operator structure: the map \((\beta,u)\mapsto U\) requires two DeepONet layers for the integral operators, adjoined with linear layers for the multiplicative operation. Additionally, to make the network feasible, we use a smaller spatial resolution than in Section V and a larger dataset. The dataset requires a combination of both \(\beta\) and \(u\) and thus consists of 50000 instances. A network of approximately 415 thousand parameters therefore takes approximately 20 minutes to train. We achieved a training relative \(L_{2}\) error of \(7.2e-3\) and a testing relative \(L_{2}\) error of \(3.3e-2\). This demonstrates, to the practical user, that learning the map \((\beta,u)\mapsto U\) requires more training data and significant architectural enhancements that increase training time, and yet the error in Figure 4 is larger compared to employing the learned map \(\beta\mapsto k\).
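To fix ideas, here is a hedged PyTorch-style sketch of such a two-stage structure. The layer sizes, the plain-MLP branch/trunk networks, and all names are our illustrative choices; this is a sketch of the idea, not the exact architecture of Figure 5.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Plain fully connected network with ReLU activations."""
    def __init__(self, sizes):
        super().__init__()
        layers = []
        for a, b in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(a, b), nn.ReLU()]
        self.net = nn.Sequential(*layers[:-1])   # drop the final ReLU

    def forward(self, z):
        return self.net(z)

class FeedbackLawNO(nn.Module):
    """Sketch of (beta, u) -> U: a DeepONet-style stage producing kernel
    samples from beta, followed by a learned stage combining those samples
    with the state u to emulate the multiplicative/integral operation."""
    def __init__(self, n_grid=51, p=32):
        super().__init__()
        self.branch_beta = MLP([n_grid, 128, 128, p])   # encodes beta samples
        self.trunk = MLP([1, 64, p])                    # encodes query points x
        self.combine = MLP([2 * n_grid, 128, 64, 1])    # (k, u) -> scalar U
        self.register_buffer("x", torch.linspace(0, 1, n_grid).unsqueeze(-1))

    def forward(self, beta, u):
        # DeepONet stage: kernel samples k(x_i) = <branch(beta), trunk(x_i)>
        b = self.branch_beta(beta)                      # (batch, p)
        t = self.trunk(self.x)                          # (n_grid, p)
        k = b @ t.T                                     # (batch, n_grid)
        # Learned multiplicative + integral stage yielding U
        return self.combine(torch.cat([k, u], dim=-1)).squeeze(-1)

model = FeedbackLawNO()
beta = torch.randn(4, 51)    # illustrative batch of beta samples
u = torch.randn(4, 51)       # illustrative batch of state snapshots
print(model(beta, u).shape)  # torch.Size([4])
```

The first stage mirrors the learned \(\beta\mapsto k\) map, while the second stage plays the role of the multiplicative and nonlinear integral operations that produce the scalar control \(U\).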
## VIII Extension to Hyperbolic PIDEs
We present the "general case" for a class of hyperbolic partial integro-differential equations (PIDE) of the form
\[u_{t}(x,t) = u_{x}(x,t)+g(x)u(0,t) \tag{83}\] \[+\int_{0}^{x}f(x,y)u(y,t)dy,\qquad x\in[0,1)\] \[u(1,t) = U(t). \tag{84}\]
We have left this generalization for the end of the paper for pedagogical reasons--in order not to overwhelm and daze the reader--since the case where the Volterra operator kernel \(f(x,y)\) is a function of two variables complicates the treatment considerably. The backstepping transformation is no longer a convolution with a function of a single variable, the gain mapping is no longer of \(C^{0}\) functions on \([0,1]\) but of \(C^{1}\) functions on the triangle \(\{0\leq y\leq x\leq 1\}\), and the DeepONet theorem requires estimates of the derivatives of the backstepping kernel.
While [6, (11)-(17)] shows that (1), (2) can be transformed into (83), (84), this transformation involves a nonlinear mapping \(f\mapsto\beta\), which itself would have to be learned to produce an approximation of the complete kernel mapping \((g,f)\mapsto k\) as a composition of two mappings. This is why the results of the previous sections do not provide a solution to the general case (83), (84), but only a pedagogical introduction, and this is why a generalization in this section is necessary.
To find the mapping from the PIDE coefficients \((g,f)\) to the kernel \(k\) of the backstepping controller
\[U(t)=\int_{0}^{1}k(1,y)u(y,t)dy, \tag{85}\]
we take the backstepping transform
\[w(x,t)=u(x,t)-\int_{0}^{x}k(x,y)u(y,t)dy, \tag{86}\]
which, unlike (5), is not a simple convolution with a kernel depending on a single argument. Using the same target system as in (6), (7), namely \(w_{t}=w_{x},\ w(1)=0\), this transformation gives the kernel integral equation derived in [40] as
\[k(x,y) = F_{0}(x,y)+F(g,f,k)(x,y), \tag{87}\]
where
\[F_{0}(x,y):=-g(x-y)-\int_{0}^{y}f(x-y+\xi,\xi)d\xi \tag{88}\] \[F(g,f,\kappa)(x,y):=\int_{0}^{x-y}g(\xi)\kappa(x-y,\xi)d\xi\] \[+\int_{0}^{y}\int_{0}^{x-y}f(\xi+\eta,\eta)\kappa(x-y+\eta,\xi+ \eta)d\xi d\eta. \tag{89}\]
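The existence and bound results quoted next are obtained by successive approximations; one standard form of the scheme (our phrasing of the fixed-point iteration, equivalent to the infinite-series construction mentioned below) is:

```latex
% Successive approximation scheme for the kernel equation (87):
k^{0} = F_{0}, \qquad
k^{n+1} = F_{0} + F(g,f,k^{n}) \quad (n \geq 0), \qquad
k = \lim_{n\to\infty} k^{n} \ \text{uniformly on } \mathcal{T},
% with the increments k^{n+1} - k^{n} decaying factorially, which yields (92).
```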
Denote \(\mathcal{T}=\{0\leq y\leq x\leq 1\}\) as the domain of the functions \(f\) and \(k\). Further, denote
\[\bar{g}=\sup_{[0,1]}|g|,\ \ \ \overline{g^{\prime}}=\sup_{[0,1]}|g^{ \prime}| \tag{90}\] \[\bar{f}=\sup_{\mathcal{T}}|f|,\ \ \ \overline{f_{x}}=\sup_{\mathcal{T}}|f_{x}|. \tag{91}\]
It was proven in [40] that
\[|k(x,y)|\leq\left(\bar{g}+\bar{f}\right)\mathrm{e}^{\bar{g}+\bar{f}}=:\bar{k} \left(\bar{g},\bar{f}\right). \tag{92}\]
The partial derivatives satisfy
\[k_{x} = F_{0}^{x}+F(g,f,k_{x}) \tag{93}\] \[k_{y} = F_{0}^{y}-F(g,f,k_{x}) \tag{94}\]
where
\[F_{0}^{x}(x,y)=-\int_{0}^{y}f_{x}(x-y+\xi,\xi)d\xi+\phi_{0}(x,y) \tag{95}\] \[F_{0}^{y}(x,y)=f_{x}(x,y)+\int_{y}^{x}f(\sigma,y)k(x,\sigma)d \sigma-\phi_{0}(x,y)\] (96) \[\phi_{0}(x,y)=-g^{\prime}(x-y)+g(x-y)k(x-y,x-y)\] \[+\int_{0}^{y}f(x-y+\eta,\eta)k(x-y+\eta,x-y+\eta)d\eta \tag{97}\]
It is proven using the same approach (successive approximation, infinite series, induction) that, on the triangle \(\mathcal{T}\),
\[|k_{x}(x,y)| \leq \left(\overline{f_{x}}+\overline{\phi_{0}}\right)\mathrm{e}^{\bar {g}+\bar{f}}=:\overline{k_{x}}\left(\bar{g},\overline{g^{\prime}},\bar{f}, \overline{f_{x}}\right) \tag{98}\] \[|k_{y}(x,y)| \leq \overline{f_{x}}+\bar{f}\bar{k}+\overline{\phi_{0}}+\left(\bar{g} +\bar{f}\right)\overline{k_{x}} \tag{99}\]
where
\[\overline{\phi_{0}}(\bar{g},\overline{g^{\prime}},\bar{f}):=\overline{g^{\prime} }+\left(\bar{g}+\bar{f}\right)\bar{k}. \tag{100}\]
Hence, along with the existence, uniqueness, and continuous differentiability of \(k\) [40], we have proven the following.
**Lemma 6**: _The map \(\mathcal{Q}:C^{1}([0,1])\times C^{1}(\mathcal{T})\to C^{1}(\mathcal{T})\) defined by \(k=\mathcal{Q}(g,f)\), and representing the solution of (87), is continuous. In addition, \(|k|,|k_{x}|,|k_{y}|\) are bounded, respectively, as in (92), (98), (99), in terms of the bounds on \(|g|,|g^{\prime}|,|f|,|f_{x}|\)._
From the continuity of the map \(\mathcal{Q}\) on the Banach space \(C^{1}([0,1])\times C^{1}(\mathcal{T})\), the following result is inferred from the DeepONet theorem.
**Lemma 7**: _For all \(\epsilon>0\) and \(B_{g},B_{g^{\prime}},B_{f},B_{f_{x}}>0\) there exists an NO \(\hat{\mathcal{Q}}\) such that, for all \((x,y)\in\mathcal{T}\),_
\[\left|\hat{\mathcal{Q}}(g,f)(x,y)-\mathcal{Q}(g,f)(x,y)\right|\] \[+\left|\frac{\partial}{\partial x}\left(\hat{\mathcal{Q}}(g,f)(x,y)-\mathcal{Q}(g,f)(x,y)\right)\right|\] \[+\left|\frac{\partial}{\partial y}\left(\hat{\mathcal{Q}}(g,f)(x,y)-\mathcal{Q}(g,f)(x,y)\right)\right|<\epsilon \tag{101}\]
_for all functions \(g\in C^{1}([0,1])\) and \(f\in C^{1}(\mathcal{T})\) whose derivatives are Lipschitz and which satisfy \(\|g\|_{\infty}\leq B_{g}\), \(\|g^{\prime}\|_{\infty}\leq B_{g^{\prime}}\), \(\|f\|_{\infty}\leq B_{f}\), \(\|f_{x}\|_{\infty}\leq B_{f_{x}}\)._
Denoting \(\tilde{k}=k-\hat{k}=\mathcal{Q}(g,f)-\hat{\mathcal{Q}}(g,f)\), (101) can be written as \(\left|\tilde{k}(x,y)\right|+\left|\tilde{k}_{x}(x,y)\right|+\left|\tilde{k}_{y}(x,y)\right|<\epsilon\).
Now take the backstepping transformation
\[\hat{w}(x,t)=u(x,t)-\int_{0}^{x}\hat{k}(x,y)u(y,t)dy. \tag{102}\]
With the control law
\[U(t)=\int_{0}^{1}\hat{k}(1,y)u(y,t)dy, \tag{103}\]
the target system becomes
\[\hat{w}_{t}(x,t) = \hat{w}_{x}(x,t)+\delta_{0}(x)\hat{w}(0,t) \tag{104}\] \[+\int_{0}^{x}\delta_{1}(x,y)u(y,t)dy\] \[\hat{w}(1,t) = 0, \tag{105}\]
where
\[\delta_{0}(x) = -\tilde{k}(x,0)+\int_{0}^{x}g(y)\tilde{k}(x,y)dy \tag{106}\] \[\delta_{1}(x,y) = -\tilde{k}_{x}(x,y)-\tilde{k}_{y}(x,y)\] (107) \[+\int_{y}^{x}f(\xi,y)\tilde{k}(x,\xi)d\xi\]
satisfy

\[\|\delta_{0}\|_{\infty} \leq (1+\bar{g})\epsilon \tag{108}\] \[\|\delta_{1}\|_{\infty} \leq (2+\bar{f})\epsilon. \tag{109}\]

Figure 4: Examples of PDE closed-loop state response and errors between the response with “perfect control” \(U\) and “approximate control” \(\hat{U}\). \(\beta\) is the same as in Figure 2, with \(\gamma=3\).
Since the state \(u\) appears under the integral in the \(\hat{w}\)-system (104), in the Lyapunov analysis we need the inverse backstepping transformation
\[u(x,t)=\hat{w}(x,t)+\int_{0}^{x}\hat{l}(x,y)\hat{w}(y,t)dy. \tag{110}\]
It is shown in [41] that the direct and inverse backstepping kernels satisfy in general the relationship
\[\hat{l}(x,y)=\hat{k}(x,y)+\int_{y}^{x}\hat{k}(x,\xi)\hat{l}(\xi,y)d\xi. \tag{111}\]
The inverse kernel satisfies the following conservative bound
\[\|\hat{l}\|_{\infty}\leq\|\hat{k}\|_{\infty}\mathrm{e}^{\|\hat{k}\|_{\infty}}. \tag{112}\]
Since \(\|k-\hat{k}\|_{\infty}<\epsilon\), we have that \(\|\hat{k}\|_{\infty}\leq\|k\|_{\infty}+\epsilon\). With (92) we get \(\|\hat{k}\|_{\infty}\leq\bar{k}(\bar{g},\bar{f})+\epsilon\) and hence
\[\|\hat{l}\|_{\infty}\leq\left(\bar{k}+\epsilon\right)\mathrm{e}^{\bar{k}+ \epsilon}. \tag{113}\]
Going back to (110), we get
\[\|u\|\leq\left(1+\left(\bar{k}+\epsilon\right)\mathrm{e}^{\bar{k}+\epsilon} \right)\|\hat{w}\|. \tag{114}\]
Mimicking and generalizing the steps of the proofs of Lemma 2 and Theorem 2, we get the following exponential stability result. (We omit the explicit but conservative and exceedingly complicated and uninformative estimates of the overshoot coefficient, the decay rate, and the upper bound \(\epsilon^{*}\) on the approximation accuracy needed to guarantee stability under the gain approximation.)
**Theorem 4**: _Let \(B_{g},B_{g^{\prime}},B_{f},B_{f_{x}}>0\) be arbitrarily large and consider the system (83), (84) with any \(g\in C^{1}([0,1])\) and \(f\in C^{1}(\mathcal{T})\) whose derivatives are Lipschitz and which satisfy \(\|g\|_{\infty}\leq B_{g}\), \(\|g^{\prime}\|_{\infty}\leq B_{g^{\prime}}\), \(\|f\|_{\infty}\leq B_{f}\), \(\|f_{x}\|_{\infty}\leq B_{f_{x}}\). There exists a sufficiently small \(\epsilon^{*}(B_{g},B_{g^{\prime}},B_{f},B_{f_{x}})>0\) such that the feedback law (103) with the NO gain kernel \(\hat{k}=\hat{\mathcal{Q}}(g,f)\) of arbitrary desired accuracy of approximation \(\epsilon\in(0,\epsilon^{*})\) in relation to the exact backstepping kernel \(k\) ensures that there exist \(M,c^{*}>0\) such that the closed-loop system satisfies the exponential stability bound_
\[\|u(t)\|\leq M\mathrm{e}^{-c^{*}t/2}\|u(0)\|,\qquad\forall t\geq 0. \tag{115}\]
## IX Simulations: Stabilization of PIDE with NO-Approximated Gain Kernel \(f\mapsto\mathcal{Q}(f)\) Dependent on \((x,y)\)
For clarity, we consider the systems of the form (83) with \(g=0\), so that the focus is solely on the mapping of two-dimensional plant kernels \(f(x,y)\) into two-dimensional backstepping kernels \(k(x,y)\), which are governed by the (double) integral equation
\[k(x,y)=-\int_{0}^{y}f(x-y+\xi,\xi)d\xi\] \[+\int_{0}^{y}\int_{0}^{x-y}f(\xi+\eta,\eta)k(x-y+\eta,\xi+\eta)d \xi d\eta. \tag{116}\]
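Because the shifted arguments \(x-y+\eta\) and \(\xi+\eta\) stay on a uniform grid, (116) can be solved by successive approximation with plain Riemann sums; a minimal numpy sketch, where the grid size, the illustrative \(f\), the left-endpoint rule, and the tolerance are our choices:

```python
import numpy as np

N = 30                                   # grid x_i = i*h, y_j = j*h on [0, 1]
h = 1.0 / N
idx = np.arange(N + 1)
X, Y = np.meshgrid(idx * h, idx * h, indexing="ij")

def f_fn(x, y):
    """Illustrative C^1 kernel f(x, y) on the triangle (not from the paper)."""
    return np.cos(2.0 * np.arccos(np.clip(x, -1.0, 1.0))) * np.exp(y)

f = f_fn(X, Y)                           # values off the triangle are unused

def F0():
    """First term of (116): -int_0^y f(x-y+xi, xi) dxi, by a left Riemann rule."""
    out = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(i + 1):
            m = np.arange(j)
            out[i, j] = -h * np.sum(f[i - j + m, m])
    return out

def apply_F(k):
    """Second (double-integral) term of (116) applied to a kernel iterate k."""
    out = np.zeros_like(k)
    for i in range(N + 1):
        for j in range(i + 1):
            s = 0.0
            for m in range(j):           # eta = m*h
                l = np.arange(i - j)     # xi = l*h; all indices stay on-grid
                s += np.sum(f[l + m, m] * k[i - j + m, l + m])
            out[i, j] = h * h * s
    return out

F0_tab = F0()
k = F0_tab.copy()
for _ in range(60):                      # successive approximation to (116)
    k_new = F0_tab + apply_F(k)
    if np.max(np.abs(k_new - k)) < 1e-10:
        k = k_new
        break
    k = k_new
print("max |k| on the triangle:", np.abs(k).max())
```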
We illustrate in this section the NO approximation \(\hat{\mathcal{Q}}\) of the nonlinear operator \(\mathcal{Q}:f\mapsto k\) mapping \(C^{1}(\mathcal{T})\) into itself. First, in Figure 6 we present the construction of the two-dimensional function \(f\) via a product of Chebyshev polynomials and highlight the PDE's open-loop instability. Then, we showcase the corresponding learned kernel and the error in Figure 7. The pointwise error for the learned kernel peaks at around \(10\%\) of \(k\), as it “ripples” in the right of Figure 7. The learned kernel \(\hat{k}\) achieves stabilization in Figure 8 (right), but not by \(t=1\), as it would with the perfect \(k\) in (6), (7), but only exponentially, as guaranteed for the learned \(\hat{k}\) in Theorem 4.

Figure 5: Network architecture for the map \((\beta,u)\mapsto U\) presented in Section VII. The network first solves the kernel function using a DeepONet layer, then utilizes linear layers to multiply \(k\) with the PDE state \(u\), and concludes by learning a second neural operator layer for the nonlinear integral operation yielding the final control output \(U\).
For this 2D problem (\(f\) and \(k\) are functions of \(x\) and \(y\)), we design the branch network of the NO with convolutional neural networks (CNNs) as they have had large success in handling 2D inputs [39, 45]. The network consists of 70 million parameters (due to the CNNs), yet only takes around 5 minutes to train. On 900 instances, the network achieves a relative \(L_{2}\) training error of \(1.3e-3\) and a relative \(L_{2}\) testing error of \(1.8e-3\) on 100 instances.
## X Conclusions
What is achieved: PINN, DeepONet, FNO, LOCA, NOMAD--they have all been used with success to approximate solution maps of PDEs. What we introduce is a novel framework: approximating the solution maps of _integral equations_ (87), (88), (89), or simply (9), for the feedback gain functions \(k\) in _control of PDEs_.
We provide the guarantees that (i) any desired level of accuracy of NO approximation of the backstepping gain kernel is achieved for any \(\beta\) that satisfies \(\|\beta\|_{\infty}\leq B\) for arbitrarily large given \(B>0\), and (ii) the PDE is stabilized with an NO-approximated gain kernel for any \(\|\beta\|_{\infty}\leq B\).
These results generalize to a class of PDEs with functional coefficients \((g,f)\), of which \(f\) depends on two variables, \((x,y)\), and which result in kernels \(k\) that are also functions of \((x,y)\).
For a given \(B>0\) and any chosen positive \(\epsilon<\epsilon^{*}(B)\), the determination of the NO approximate operator \(\hat{\mathcal{K}}(\cdot)\) is done offline, once only, and such a \(\hat{\mathcal{K}}(\cdot)\), which depends on \(B\) and \(\epsilon\), is usable "forever," so to speak, for any recirculation kernel that does not violate \(\|\beta\|_{\infty}\leq B\).
When the entire PDE backstepping feedback law--rather than just its gain kernel--is being approximated, globality and perfect convergence are lost, but only slightly. Decay remains exponential, over infinite time, and stability is semiglobal.
What is gained by making a particular controller class with theoretical guarantees the object of learning: By now it is probably clear to the reader that what we present here is a method for learning an entire _class_ of model-based controllers, by learning the gains \(\hat{k}=\hat{\mathcal{K}}(\beta)\), or \(\hat{k}=\hat{\mathcal{Q}}(g,f)\), for any plant parameters \(\beta\) or \((g,f)\). What does one profit from learning a particular class of controllers backed up by theory? Suppose that, instead of learning the _PDE backstepping_ gain mapping \(\mathcal{K}(\cdot)\), we were trying to find _any_ gain function \(k(x)\) that meets some performance objective. This goal could be formulated as a finite-time minimization of \(\int_{0}^{t_{\mathrm{f}}}\left(\int_{0}^{1}u^{2}(x,t)dx+U^{2}(t)\right)dt\), for a given \(\beta\), over a set of gain functions \(k\) for a ball of initial conditions \(u_{0}(x)=u(x,0)\) around the origin. Not only would this be a much larger search, over \((k,u_{0})\), but such a finite-time minimization could ensure only finite-time performance, not exponential stability.
Our achievement of global exponential stability (not "practical"/approximate, but with an actual convergence of the state to zero) relies crucially--in each of the lemmas and theorems that we state--on the theoretical steps from the PDE backstepping toolkit (backstepping transform, target system, integral equation for kernel, successive infinite-series approximation, Lyapunov analysis). It is only by assigning the NO a service role in an otherwise model-based design that stability is assured. Stability assurance is absent from learning approaches in which the feedback law design is left to ML and a finite-time cost, as in RL for the traffic flow PDEs [86].
Future research: Of immediate interest are the extensions of the results of this paper to parabolic PDEs in [72], as well as extensions from the approximations of controller kernels to the NO approximations of PDE backstepping observer kernels [73], with guarantees of observer convergence, and with observer-based stabilization (separation principle).
|
2309.14103 | The Upper Clique Transversal Problem | A clique transversal in a graph is a set of vertices intersecting all maximal cliques. The problem of determining the minimum size of a clique transversal has received considerable attention in the literature. In this paper, we initiate the study of the ''upper'' variant of this parameter, the upper clique transversal number, defined as the maximum size of a minimal clique transversal. We investigate this parameter from the algorithmic and complexity points of view, with a focus on various graph classes. We show that the corresponding decision problem is NP-complete in the classes of chordal graphs, chordal bipartite graphs, cubic planar bipartite graphs, and line graphs of bipartite graphs, but solvable in linear time in the classes of split graphs, proper interval graphs, and cographs, and in polynomial time for graphs of bounded cliquewidth. We conclude the paper with a number of open questions. | Martin Milanič, Yushi Uno | 2023-09-25T12:52:38Z | http://arxiv.org/abs/2309.14103v3 |

# Upper Clique Transversals in Graphs+
###### Abstract
A _clique transversal_ in a graph is a set of vertices intersecting all maximal cliques. The problem of determining the minimum size of a clique transversal has received considerable attention in the literature. In this paper, we initiate the study of the "upper" variant of this parameter, the _upper clique transversal number_, defined as the maximum size of a minimal clique transversal. We investigate this parameter from the algorithmic and complexity points of view, with a focus on various graph classes. We show that the corresponding decision problem is \(\mathsf{NP}\)-complete in the classes of chordal graphs, chordal bipartite graphs, and line graphs of bipartite graphs, but solvable in linear time in the classes of split graphs and proper interval graphs.
**Keywords:** clique transversal, upper clique transversal number, vertex cover
**MSC (2020):** 05C69, 05C85, 05C75, 05C76, 68R10
## 1 Introduction
A set of vertices of a graph \(G\) that meets all maximal cliques of \(G\) is called a _clique transversal_ in \(G\). Clique transversals in graphs have been studied by Payan in 1979 [41], by Andreae, Schughart, and Tuza in 1991 [4], by Erdős, Gallai, and Tuza in 1992 [21], and also extensively researched in the more recent literature (see, e.g., [3, 5, 10, 14, 16, 20, 24, 30, 31, 32, 33, 42]). What most of these papers have in common is that they are interested in questions regarding the _clique transversal number_ of a graph, that is, the minimum size of a clique transversal of the graph. For example, Chang, Farber, and Tuza showed in [14] that computing the clique transversal number for split graphs is \(\mathsf{NP}\)-hard, and Guruswami and Pandu Rangan showed in [24] that the problem is \(\mathsf{NP}\)-hard for cocomparability, planar, line, and total graphs, and solvable in polynomial time for Helly circular-arc graphs, strongly chordal graphs, chordal graphs of bounded clique size, and cographs.
In this paper, we initiate the study of the "upper" version of this graph invariant, the _upper clique transversal number_, denoted by \(\tau_{c}^{+}(G)\) and defined as the maximum size of a minimal clique transversal, where a clique transversal in a graph \(G\) is said to be _minimal_ if it does not contain any other clique transversal. The corresponding decision problem is defined as follows.
Upper Clique Transversal (UCT)
_Input:_ A graph \(G\) and an integer \(k\).
_Question:_ Does \(G\) contain a minimal clique transversal \(S\) such that \(|S|\geq k\)?
Our study contributes to the literature on upper variants of graph minimization problems, which already includes the upper vertex cover (also known as maximum minimal vertex cover; see [11, 17, 48]), upper feedback vertex set (also known as maximum minimal feedback vertex set; see [19, 29]), upper edge cover (see [28]), upper domination (see [2, 6, 27]), and upper edge domination (see [38]).
**Our results.** We provide a first set of results on the algorithmic complexity of Upper Clique Transversal. Since clique transversals have been mostly studied in the class of chordal graphs and related classes, we also find it natural to first focus on this interesting graph class and its subclasses. In this respect, we provide an \(\mathsf{NP}\)-completeness result as well as two very different linear-time algorithms. We show that UCT is \(\mathsf{NP}\)-complete in the class of chordal graphs, but solvable in linear time in the classes of split graphs and proper interval graphs. Note that the result for split graphs is in contrast with the aforementioned \(\mathsf{NP}\)-hardness result for computing the clique transversal number in the same class of graphs [14]. In addition, we provide \(\mathsf{NP}\)-completeness proofs for two more subclasses of the class of perfect graphs, namely for chordal bipartite graphs, and for line graphs of bipartite graphs.
The diagram in Figure 1 summarizes the relationships between various graph classes studied in this paper and indicates some boundaries of tractability of the UCT problem. We define those graph classes in the corresponding later sections in the paper. For further background and references on graph classes, we refer to [13].
Figure 1: The complexity of UCT in various graph classes studied in this paper.

**Our approach.** Our approach is based on connections with a number of graph parameters. For example, the \(\mathsf{NP}\)-completeness proofs for the classes of chordal bipartite graphs and of line graphs of bipartite graphs are based on the fact that for triangle-free graphs without isolated vertices, minimal clique transversals are exactly the minimal vertex covers, and they are closely related with minimal edge covers via the line graph operator. In particular, if \(G\) is a triangle-free graph without isolated vertices, then the upper clique transversal number of \(G\) equals the upper vertex cover number of \(G\), that is, the maximum size of a minimal vertex cover. Since the upper vertex cover number of a graph \(G\) plus the independent domination number of \(G\) equals the order of \(G\), there is also a connection with the independent dominating set problem. Let us note that, along with a linear-time algorithm for computing a minimum independent dominating set in a tree [9], the above observations suffice to justify the polynomial-time solvability of the upper clique transversal problem on trees, as indicated in Figure 1. The \(\mathsf{NP}\)-completeness proof for the class of chordal graphs is based on a reduction from Spanning Star Forest, the problem of computing a spanning subgraph with as many edges as possible that consists of disjoint stars; this problem, in turn, is known to be closely related to the dominating set problem.
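To make the remark about trees concrete, the sketch below is a standard three-state tree dynamic program for the independent domination number (our own formulation, not the specific linear-time algorithm of [9]); \(\tau_c^+\) of the tree then follows by complementation. The example tree is illustrative.

```python
import math
from collections import defaultdict

def independent_domination_number(n, edges):
    """Tree DP for the minimum size of an independent dominating set of a
    tree on vertices 0..n-1. Per vertex v: f0 = v in S; f1 = v not in S,
    dominated by a child; f2 = v not in S and not yet dominated (its
    parent must cover it)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {0: None}
    order, stack = [], [0]
    while stack:                                  # iterative DFS
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    f0, f1, f2 = [0] * n, [0] * n, [0] * n
    for v in reversed(order):                     # process leaves first
        kids = [w for w in adj[v] if w != parent[v]]
        if not kids:
            f0[v], f1[v], f2[v] = 1, math.inf, 0
            continue
        f0[v] = 1 + sum(min(f1[w], f2[w]) for w in kids)
        base = sum(min(f0[w], f1[w]) for w in kids)
        if any(f0[w] <= f1[w] for w in kids):     # some child already in S
            f1[v] = base
        else:                                     # force cheapest child into S
            f1[v] = base + min(f0[w] - f1[w] for w in kids)
        f2[v] = sum(f1[w] for w in kids)
    return min(f0[0], f1[0])

# tau_c^+ of a tree with at least one edge equals n - i(T), because minimal
# clique transversals of a triangle-free graph are its minimal vertex covers.
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]          # illustrative tree
print(5 - independent_domination_number(5, edges))  # tau_c^+ = 3
```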
The linear-time algorithm for computing the upper clique transversal number of proper interval graphs relies on a linear-time algorithm for the maximum induced matching problem in bipartite permutation graphs due to Chang [15]. More precisely, we prove that the upper clique transversal number of a given graph cannot exceed the maximum size of an induced matching of a derived bipartite graph, the _vertex-clique incidence graph_, and show, using new insights on the properties of the matching computed by Chang's algorithm, that for proper interval graphs, the two quantities are the same.
The linear-time algorithm for computing the upper clique transversal number of a split graph is based on a characterization of minimal clique transversals of split graphs. A clique transversal that is an independent set is also called a _strong independent set_ (or _strong stable set_; see [37] for a survey). It is not difficult to see that every strong independent set is a minimal clique transversal. We show that every split graph has a maximum minimal clique transversal that is independent (and hence, a strong independent set).
**Structure of the paper.** In Section 2 we introduce the relevant graph theoretic background. Hardness results are presented in Section 3. Linear-time algorithms for UCT in the classes of split graphs and proper interval graphs are developed in Sections 4 and 5, respectively. We conclude the paper in Section 6.
## 2 Preliminaries
Throughout the paper, graphs are assumed to be finite, simple, and undirected. We use standard graph theory terminology, following West [45]. A graph \(G\) with vertex set \(V\) and edge set \(E\) is often denoted by \(G=(V,E)\); we write \(V(G)\) and \(E(G)\) for \(V\) and \(E\), respectively. The set of vertices adjacent to a vertex \(v\in V\) is the _neighborhood_ of \(v\), denoted \(N(v)\); its cardinality is the _degree_ of \(v\), denoted \(\deg(v)\). The _closed neighborhood_ is the set \(N[v]\), defined as \(N(v)\cup\{v\}\). An _independent set_ in a graph is a set of pairwise non-adjacent vertices; a _clique_ is a set of pairwise adjacent vertices. An independent set (resp., clique) in a graph \(G\) is _maximal_ if it is not contained in any other independent set (resp., clique). A _clique transversal_ in a graph is a subset of vertices that intersects all the maximal cliques of the graph. A _dominating set_ in a graph \(G=(V,E)\) is a set \(S\) of vertices such that every vertex not in \(S\) has a neighbor in \(S\). An _independent dominating set_ is a dominating set that is also an independent set. The _(independent) domination number_ of a graph \(G\) is the minimum size of an (independent) dominating set in \(G\). Note that a set \(S\) of vertices in a graph \(G\) is an independent dominating set if and only if \(S\) is a maximal independent set. In particular, the independent domination number of a graph is a well-defined invariant leading to a decision problem called Independent Dominating Set.
The _clique number_ of \(G\) is denoted by \(\omega(G)\) and defined as the maximum size of a clique in \(G\). An _upper clique transversal_ of a graph \(G\) is a minimal clique transversal of maximum
size. The _upper clique transversal number_ of a graph \(G\) is denoted by \(\tau_{c}^{+}(G)\) and defined as the maximum size of a minimal clique transversal in \(G\). A _vertex cover_ in \(G\) is a set \(S\subseteq V(G)\) such that every edge \(e\in E(G)\) has at least one endpoint in \(S\). A vertex cover in \(G\) is _minimal_ if it does not contain any other vertex cover. These notions are illustrated in Figure 2. Note that if \(G\) is a triangle-free graph without isolated vertices, then the maximal cliques of \(G\) are exactly its edges, and hence the clique transversals of \(G\) coincide with its vertex covers.
## 3 Intractability of UCT for some graph classes
In this section we prove that Upper Clique Transversal is \(\mathsf{NP}\)-complete in the classes of chordal graphs, chordal bipartite graphs, and line graphs of bipartite graphs. First, let us note that for the class of all graphs, we do not know whether the problem is in \(\mathsf{NP}\). If \(S\) is a minimal clique transversal in \(G\) such that \(|S|\geq k\), then a natural way to verify this fact would be to certify separately that \(S\) is a clique transversal and that it is a minimal one. Assuming that \(S\) is a clique transversal, one can certify minimality simply by exhibiting for each vertex \(u\in S\) a maximal clique \(C\) in \(G\) such that \(C\cap S=\{u\}\). However, unless \(\mathsf{P}=\mathsf{NP}\), we cannot verify the fact that \(S\) is a clique transversal in polynomial time. This follows from a result of Zang [47], showing that it is \(\mathsf{NP}\)-complete to check, given a weakly chordal graph \(G\) and an independent set \(S\), whether \(S\) is a clique transversal in \(G\). A graph \(G\) is _weakly chordal_ if neither \(G\) nor its complement contain an induced cycle of length at least five.
We do not know whether Upper Clique Transversal is in \(\mathsf{NP}\) when restricted to the class of weakly chordal graphs. However, for the subclasses of chordal graphs and chordal bipartite graphs, membership of UCT in \(\mathsf{NP}\) is a consequence of the following proposition.
**Proposition 3.1**.: _Let \(\mathcal{G}\) be a graph class such that every graph \(G\in\mathcal{G}\) has at most polynomially many maximal cliques. Then, Upper Clique Transversal is in \(\mathsf{NP}\) for graphs in \(\mathcal{G}\)._
Proof.: Given a graph \(G\in\mathcal{G}\) and an integer \(k\), a polynomially verifiable certificate of the existence of a minimal clique transversal \(S\) in \(G\) such that \(|S|\geq k\) is any such set \(S\). Indeed, in this case we can enumerate all maximal cliques of \(G\) in polynomial time by using any of the
Figure 2: Upper clique transversal and related notions.
output-polynomial algorithms for this task (e.g., [34]). In particular, we can verify that \(S\) is a clique transversal of \(G\) in polynomial time. We can also verify minimality in polynomial time, by determining whether for each vertex \(u\in S\) there exists a maximal clique \(C\) in \(G\) such that \(C\cap S=\{u\}\).
A _star_ is a graph that has a vertex that is adjacent to all other vertices, and there are no other edges. A _spanning star forest_ in a graph \(G=(V,E)\) is a spanning subgraph \((V,F)\) consisting of vertex-disjoint stars. Some of our hardness results will make use of a reduction from Spanning Star Forest, the problem that takes as input a graph \(G\) and an integer \(\ell\), and the task is to determine whether \(G\) contains a spanning star forest \((V,F)\) such that \(|F|\geq\ell\).
**Theorem 3.2**.: Spanning Star Forest _is \(\mathsf{NP}\)-complete in the class of bipartite graphs with minimum degree at least \(2\)._
Proof.: Membership in \(\mathsf{NP}\) is clear. Spanning Star Forest is \(\mathsf{NP}\)-complete due to its close relationship with Dominating Set, the problem that takes as input a graph \(G\) and an integer \(k\), and the task is to determine whether \(G\) contains a dominating set \(S\) such that \(|S|\leq k\). The connection between the spanning star forests and dominating sets is as follows: a graph \(G\) has a spanning star forest with at least \(\ell\) edges if and only if \(G\) has a dominating set with at most \(|V|-\ell\) vertices (see [22, 40]). Dominating Set is known to be \(\mathsf{NP}\)-complete in the class of bipartite graphs (see, e.g., [8]) and even in the class of chordal bipartite graphs, as shown by Müller and Brandstädt [39]. The graphs constructed in the \(\mathsf{NP}\)-hardness reduction from [39] do not contain any vertices of degree zero or one, hence the claimed result follows.
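The direction from dominating sets to star forests used above is constructive; a small networkx sketch (the graph and the dominating set below are illustrative):

```python
import networkx as nx

def dominating_set_to_star_forest(G, S):
    """Given a dominating set S of G, return star-forest edges F with
    |F| = |V(G)| - |S|: attach every vertex outside S to one dominator."""
    S = set(S)
    F = []
    for v in G.nodes:
        if v not in S:
            dominator = next(u for u in G.neighbors(v) if u in S)
            F.append((dominator, v))
    return F

G = nx.cycle_graph(6)                  # illustrative graph
S = {0, 3}                             # a dominating set of C_6
F = dominating_set_to_star_forest(G, S)
print(F, len(F) == G.number_of_nodes() - len(S))
```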
We present the hardness results in increasing order of difficulty of the proofs, starting with the class of chordal bipartite graphs. A _chordal bipartite_ graph is a bipartite graph in which all induced cycles are of length four.
**Theorem 3.3**.: Upper Clique Transversal _is \(\mathsf{NP}\)-complete in the class of chordal bipartite graphs._
Proof.: Proposition 3.1 implies that UCT is in \(\mathsf{NP}\) when restricted to any class of bipartite graphs. To prove \(\mathsf{NP}\)-hardness, we make a reduction from Independent Dominating Set in chordal bipartite graphs, the problem that takes as input a chordal bipartite graph \(G\) and an integer \(\ell\), and the task is to determine whether \(G\) contains a maximal independent set \(I\) such that \(|I|\leq\ell\). As proved in [18], this problem is \(\mathsf{NP}\)-complete. We may assume without loss of generality that the input graph does not have any isolated vertices. Then, given a set \(I\subseteq V(G)\), the following statements are equivalent:
1. \(I\) is a (maximal) independent set in \(G\),
2. \(V(G)\setminus I\) is a (minimal) vertex cover in \(G\), and
3. \(V(G)\setminus I\) is a (minimal) clique transversal in \(G\).
Statements (i) and (ii) are equivalent for any graph, while the equivalence between statements (ii) and (iii) follows from the fact that the maximal cliques in \(G\) are precisely its edges. It follows that \(G\) has a maximal independent set \(I\) such that \(|I|\leq\ell\) if and only if \(G\) has a minimal clique transversal \(S\) such that \(|S|\geq k\) where \(k=|V(G)|-\ell\). This completes the proof.
We next consider the class of line graphs of bipartite graphs. The _line graph_ of a graph \(G\) is the graph \(H\) with \(V(H)=E(G)\) in which two distinct vertices are adjacent if and only if they share an endpoint as edges in \(G\).
**Lemma 3.4**.: _Let \(G\) be a triangle-free graph with minimum degree at least \(2\) and let \(H\) be the line graph of \(G\). Then, the maximal cliques in \(H\) are exactly the sets \(E_{v}\) for \(v\in V(G)\), where \(E_{v}\) is the set of edges in \(G\) that are incident with \(v\)._
Proof.: Since \(G\) is triangle-free, any clique in \(H\) corresponds to a set of edges in \(G\) having a common endpoint. Furthermore, since \(G\) is of minimum degree at least \(2\), any two sets \(E_{u}\) and \(E_{v}\) for \(u\neq v\) are incomparable with respect to inclusion.
An _edge cover_ of a graph \(G\) is a set \(F\) of edges such that every vertex of \(G\) is incident with some edge of \(F\).
**Lemma 3.5**.: _Let \(G\) be a triangle-free graph with minimum degree at least \(2\) and let \(H\) be the line graph of \(G\). Then, a set \(F\subseteq E(G)\) is a clique transversal in \(H\) if and only if \(F\) is an edge cover in \(G\). Consequently, a set \(F\subseteq E(G)\) is a minimal clique transversal in \(H\) if and only if \(F\) is a minimal edge cover in \(G\)._
Proof.: Immediate from the definitions and Lemma 3.4.
Using Theorem 3.2 and Lemma 3.5, we can now prove the following.
**Theorem 3.6**.: Upper Clique Transversal _is \(\mathsf{NP}\)-complete in the class of line graphs of bipartite graphs._
Proof.: To argue that the problem is in \(\mathsf{NP}\), we show that every line graph of a bipartite graph has at most polynomially many maximal cliques. Let \(G\) be a line graph of a bipartite graph. Fix a bipartite graph \(H\) such that \(G=L(H)\). Clearly, we may assume that \(H\) has no isolated vertices. Since \(H\) is triangle-free, any clique in \(G\) corresponds to a set of edges in \(H\) having a common endpoint, and consequently any maximal clique in \(G\) corresponds to an inclusion-maximal set of edges in \(H\) having a common endpoint. The number of such sets is bounded by the number of vertices in \(H\). Since
\[|V(H)|=\sum_{v\in V(H)}1\leq\sum_{v\in V(H)}\deg(v)=2|E(H)|=2|V(G)|\,,\]
it follows that the number of maximal cliques in \(G\) is at most \(2|V(G)|\). By Proposition 3.1, the problem is in \(\mathsf{NP}\).
To prove \(\mathsf{NP}\)-hardness, we make a reduction from Spanning Star Forest in the class of bipartite graphs with minimum degree at least \(2\). By Theorem 3.2, this problem is \(\mathsf{NP}\)-complete. Let \(G\), together with an integer \(\ell\), be an instance of this problem, and let \(H\) be the line graph of \(G\). By Lemma 3.5, a set \(F\subseteq E(G)\) is a minimal clique transversal in \(H\) if and only if \(F\) is a minimal edge cover in \(G\). Therefore, the graph \(G\) contains a minimal edge cover with at least \(\ell\) edges if and only if its line graph, \(H\), contains a minimal clique transversal with at least \(\ell\) vertices. As observed by Hedetniemi [26], the maximum size of a minimal edge cover equals the maximum number of edges in a spanning star forest (in fact, a set of edges in a graph without isolated vertices is a minimal edge cover if and only if it is a spanning star forest, see Manlove [35]). Therefore, the graph \(G\) contains a minimal edge cover with at least \(\ell\) edges if and only if \(G\) contains a spanning star forest with at least \(\ell\) edges. The claimed \(\mathsf{NP}\)-hardness result follows from Theorem 3.2.
We now prove intractability of UCT in the class of chordal graphs. A graph is _chordal_ if it does not contain any induced cycles on at least four vertices.
We first recall a known result on maximal cliques in chordal graphs.
**Theorem 3.7** (Berry and Pogorelcnik [7]).: _A chordal graph \(G=(V,E)\) has at most \(|V|\) maximal cliques, which can be computed in time \(\mathcal{O}(|V|+|E|)\)._
**Theorem 3.8**.: Upper Clique Transversal _is_ NP_-complete in the class of chordal graphs._
Proof.: Membership in NP follows from Theorem 3.7 and Proposition 3.1.
To prove NP-hardness, we reduce from Spanning Star Forest. Let \(G=(V,E)\) and \(\ell\) be an input instance of Spanning Star Forest. We may assume without loss of generality that \(G\) has an edge and that \(\ell\geq 2\), since if any of these assumptions is violated, then it is trivial to verify if \(G\) has a spanning star forest with at least \(\ell\) edges.
We construct a chordal graph \(G^{\prime}\) as follows. We start with a complete graph with vertex set \(V\). For each edge \(e=\{u,v\}\in E\), we introduce two new vertices \(x^{e}\) and \(y^{e}\), and make \(x^{e}\) adjacent to \(u\), to \(v\), and to \(y^{e}\). The obtained graph is \(G^{\prime}\). We thus have \(V(G^{\prime})=V\cup X\cup Y\), where \(X=\{x^{e}:e\in E\}\) and \(Y=\{y^{e}:e\in E\}\). See Fig. 3 for an example. Clearly, \(G^{\prime}\) is chordal. Furthermore, let \(k=\ell+|E|\).
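For concreteness, a short networkx sketch of this construction (the input graph is illustrative; the tuple-based names for \(x^{e}\), \(y^{e}\) are our choice):

```python
import networkx as nx

def build_G_prime(G):
    """Chordal graph G' from the proof of Theorem 3.8: a clique on V(G),
    plus vertices x_e, y_e with x_e adjacent to both endpoints of e and
    to y_e, for every edge e of G."""
    Gp = nx.complete_graph(G.nodes)
    for u, v in G.edges:
        xe, ye = ("x", u, v), ("y", u, v)
        Gp.add_edges_from([(xe, u), (xe, v), (xe, ye)])
    return Gp

G = nx.path_graph(4)                             # illustrative input
Gp = build_G_prime(G)
print(Gp.number_of_nodes(), nx.is_chordal(Gp))   # 4 + 2*3 nodes, True
```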
To complete the proof, we show that \(G\) has a spanning star forest of size at least \(\ell\) if and only if \(G^{\prime}\) has a minimal clique transversal of size at least \(k\).
First, assume that \(G\) has a spanning star forest \((V,F)\) such that \(|F|\geq\ell\). Since \((V,F)\) is a spanning forest in which each component is a star, each edge of \(F\) is incident with a vertex of degree one in \((V,F)\). Let \(S\) be a set obtained by selecting from each edge in \(F\) one vertex of degree one in \((V,F)\). Then every edge of \(F\) has one endpoint in \(S\) and the other one in \(V\setminus S\). In particular, \(|S|=|F|\geq\ell\). Let \(S^{\prime}=S\cup\{x^{e}:e\in E\setminus F\}\cup\{y^{f}:f\in F\}\). See Fig. 4 for an example.
Clearly, the size of \(S^{\prime}\) is at least \(\ell+|E|=k\). We claim that \(S^{\prime}\) is a minimal clique transversal of \(G^{\prime}\). There are three kinds of maximal cliques in \(G^{\prime}\): the set \(V\), sets of the form \(\{u,v,x^{e}\}\) for all \(e=\{u,v\}\in E\), and sets of the form \(\{x^{e},y^{e}\}\) for all \(e\in E\). Since \(|S|=|F|\geq\ell\geq 2\), the set
Figure 4: Transforming a spanning star forest \((V,F)\) in \(G\) into a minimal clique transversal \(S^{\prime}\) in \(G^{\prime}\).
Figure 3: Transforming \(G\) to \(G^{\prime}\).
\(S\) is non-empty, and thus the set \(S^{\prime}\) intersects \(V\). Furthermore, since \(S\) contains one endpoint of each edge in \(F\), set \(S^{\prime}\) intersects all cliques of the form \(\{u,v,x^{f}\}\) for all \(f=\{u,v\}\in F\). For all \(e=\{u,v\}\in E\setminus F\), set \(S^{\prime}\) contains vertex \(x^{e}\) and thus also intersects the clique \(\{u,v,x^{e}\}\), as well as the clique \(\{x^{e},y^{e}\}\). Finally, for each \(f\in F\), we have that \(y^{f}\in S^{\prime}\) and hence \(S^{\prime}\) intersects \(\{x^{f},y^{f}\}\). Thus, \(S^{\prime}\) is a clique transversal of \(G^{\prime}\).
To argue minimality, we need to show that for every \(u\in S^{\prime}\) there exists a maximal clique in \(G^{\prime}\) missed by \(S^{\prime}\setminus\{u\}\). Suppose first that \(u\in V\). Then \(u\in S\) and there is an edge \(f\in F\) such that \(u\) is an endpoint of \(f\). Let \(v\) be the other endpoint of \(f\). Then \(v\not\in S\) and thus also \(v\not\in S^{\prime}\). Note also that \(x^{f}\not\in S^{\prime}\). In particular, this implies that the set \(\{u,v,x^{f}\}\) is a maximal clique of \(G^{\prime}\) missed by \(S^{\prime}\setminus\{u\}\). Next, suppose that \(u\in X\). Then \(u=x^{e}\) for some edge \(e\in E\setminus F\) and \(y^{e}\not\in S^{\prime}\), hence the set \(\{x^{e},y^{e}\}\) is a maximal clique of \(G^{\prime}\) missed by \(S^{\prime}\setminus\{u\}\). Finally, suppose that \(u\in Y\). Then \(u=y^{f}\) for some edge \(f\in F\). Then \(x^{f}\not\in S^{\prime}\), therefore the set \(\{x^{f},y^{f}\}\) is a maximal clique of \(G^{\prime}\) missed by \(S^{\prime}\setminus\{u\}\). This shows that \(S^{\prime}\) is a minimal clique transversal of \(G^{\prime}\), as claimed.
For the converse direction, let \(S^{\prime}\) be a minimal clique transversal of \(G^{\prime}\) such that \(|S^{\prime}|\geq k\). First we show that \(S^{\prime}\cap Y\neq\emptyset\). Suppose for a contradiction that \(S^{\prime}\cap Y=\emptyset\). Then \(X\subseteq S^{\prime}\), since otherwise the maximal clique \(\{x^{e},y^{e}\}\) of \(G^{\prime}\) would be missed by \(S^{\prime}\) for every \(x^{e}\in X\setminus S^{\prime}\). Furthermore, since \(V\) is a maximal clique in \(G^{\prime}\), there is a vertex \(u\in S^{\prime}\) such that \(u\in V\). Since the set \(X\cup\{u\}\) is a clique transversal in \(G^{\prime}\), the minimality of \(S^{\prime}\) implies that \(S^{\prime}=X\cup\{u\}\). Using the fact that \(k=\ell+|E|\) and \(k\leq|S^{\prime}|=|X|+1=|E|+1\), we then obtain that \(\ell\leq 1\). This contradicts our assumption that \(\ell\geq 2\) and shows that \(S^{\prime}\cap Y\neq\emptyset\).
Let \(S=S^{\prime}\cap V\). Recall that for every edge \(e\in E\) we denote by \(x^{e}\) the unique vertex in \(X\) that is adjacent in \(G^{\prime}\) to both endpoints of \(e\). We claim that for each vertex \(u\in S\) there exists a vertex \(v\in V\) such that \(e=\{u,v\}\in E\) and \(S^{\prime}\cap\{u,v,x^{e}\}=\{u\}\). Let \(u\in S\) and suppose for a contradiction that for all vertices \(v\) such that \(e=\{u,v\}\in E\) we have \(S^{\prime}\cap\{u,v,x^{e}\}\neq\{u\}\). This implies that the set \(S^{\prime}\setminus\{u\}\) intersects all maximal cliques in \(G^{\prime}\) of the form \(\{u,v,x^{e}\}\) for some \(e=\{u,v\}\in E\). Since \(S^{\prime}\) is a minimal clique transversal of \(G^{\prime}\), we infer that the maximal clique of \(G^{\prime}\) missed by \(S^{\prime}\setminus\{u\}\) is \(V\). In particular, we have \(S=S^{\prime}\cap V=\{u\}\), which in turn implies that for all vertices \(v\in V\) such that \(e=\{u,v\}\in E\) we have \(S^{\prime}\cap\{u,v,x^{e}\}=\{u,x^{e}\}\). Since \(S^{\prime}\cap Y\neq\emptyset\), there exists an edge \(e=\{w,z\}\) of \(G\) such that \(y^{e}\in S^{\prime}\). Then \(x^{e}\not\in S^{\prime}\) (indeed, \(\{x^{e},y^{e}\}\) is the only maximal clique containing \(y^{e}\), so the minimality of \(S^{\prime}\) implies that \(x^{e}\) and \(y^{e}\) cannot both belong to \(S^{\prime}\)), and therefore \(u\) is not an endpoint of \(e\). However, since \(x^{e}\not\in S^{\prime}\) but \(S^{\prime}\) intersects the maximal clique \(\{w,z,x^{e}\}\), it follows that an endpoint of \(e\) belongs to \(S\). This contradicts the fact that \(S=\{u\}\) and \(u\) is not an endpoint of \(e\).
By the above claim, we can associate to each vertex \(u\in S\) a vertex \(v(u)\in V\) such that \(e=\{u,v(u)\}\in E\) and \(S^{\prime}\cap\{u,v(u),x^{e}\}=\{u\}\). For each \(u\in S\), let us denote by \(e(u)\) the corresponding edge \(\{u,v(u)\}\), and let \(F=\{e(u):u\in S\}\) (see Fig. 5). We next claim that the mapping \(u\mapsto e(u)\) is one-to-one, that is, for all \(u_{1},u_{2}\in S\), if \(e(u_{1})=e(u_{2})\) then \(u_{1}=u_{2}\). Suppose that \(e(u_{1})=e(u_{2})\) for some \(u_{1}\neq u_{2}\). Then \(e(u_{1})=e(u_{2})=\{u_{1},u_{2}\}\), \(v(u_{1})=u_{2}\), and \(v(u_{2})=u_{1}\). Furthermore, \(\{u_{1}\}=S^{\prime}\cap\{u_{1},v(u_{1}),x^{e(u_{1})}\}=S^{\prime}\cap\{u_{2},v (u_{2}),x^{e(u_{2})}\}=\{u_{2}\}\), which is in contradiction with \(u_{1}\neq u_{2}\). Since the mapping \(u\mapsto e(u)\) is one-to-one, we have \(|F|=|S|\). Furthermore, every vertex in \(S\) has degree one in \((V,F)\). Therefore, the graph \((V,F)\) is a spanning star forest of \(G\).
Since \(S^{\prime}\) is a minimal clique transversal of \(G^{\prime}\), for each edge \(e\in E\) exactly one of \(x^{e}\) and \(y^{e}\) belongs to \(S^{\prime}\). Therefore, \(|F|=|S|=|S^{\prime}|-|E|\geq k-|E|=\ell\). Thus, \(G\) has a spanning star forest of size at least \(\ell\).
## 4 A linear-time algorithm for UCT in split graphs
A _split graph_ is a graph that has a _split partition_, that is, a partition of its vertex set into a clique and an independent set. We denote a split partition of a split graph \(G\) as \((K,I)\) where \(K\) is a clique, \(I\) is an independent set, \(K\cap I=\emptyset\), and \(K\cup I=V(G)\). We may assume without loss of generality that \(I\) is a maximal independent set. Indeed, if this is not the case, then \(K\) contains a vertex \(v\) that has no neighbors in \(I\), and \((K\setminus\{v\},I\cup\{v\})\) is a split partition of \(G\) such that \(I\cup\{v\}\) is a maximal independent set. In what follows, we repeatedly use the structure of maximal cliques of split graphs. If \(G\) is a split graph with a split partition \((K,I)\), then the maximal cliques of \(G\) are as follows: the closed neighborhoods \(N[v]\), for all \(v\in I\), and the clique \(K\), provided that it is a maximal clique, that is, every vertex in \(I\) has a non-neighbor in \(K\).
Given a graph \(G\) and a set of vertices \(S\subseteq V(G)\), we denote by \(N(S)\) the set of all vertices in \(V(G)\setminus S\) that have a neighbor in \(S\). Moreover, given a vertex \(v\in S\), an _\(S\)-private neighbor_ of \(v\) is any vertex \(w\in N(S)\) such that \(N(w)\cap S=\{v\}\). The following proposition characterizes minimal clique transversals of split graphs.
**Proposition 4.1**.: _Let \(G\) be a split graph with a split partition \((K,I)\) such that \(I\) is a maximal independent set and let \(S\subseteq V(G)\). Let \(K^{\prime}=K\cap S\) and \(I^{\prime}=I\cap S\). Then, \(S\) is a minimal clique transversal of \(G\) if and only if the following conditions hold:_
1. \(K^{\prime}\neq\emptyset\) _if_ \(K\) _is a maximal clique._
2. \(I^{\prime}=I\setminus N(K^{\prime})\)_._
3. _Every vertex in_ \(K^{\prime}\) _has a_ \(K^{\prime}\)_-private neighbor in_ \(I\)_._
Proof.: Assume first that \(S\) is a minimal clique transversal of \(G\). We prove that \(S\) satisfies each of the three conditions. Condition (i) follows from the fact that \(S\) is a clique transversal.
To show condition (ii), we first show the inclusion \(I\setminus S\subseteq N(K^{\prime})\), which is equivalent to \(I\setminus N(K^{\prime})\subseteq I\setminus(I\setminus S)=I\cap S=I^{\prime}\). Consider an arbitrary vertex \(v\in I\setminus S\). Since \(N[v]\) is a maximal clique in \(G\) and \(S\) is a clique transversal not containing \(v\), set \(S\) must contain a neighbor \(w\) of \(v\). As \(N(v)\subseteq K\), we conclude that \(w\) belongs to \(K^{\prime}\). The converse inclusion, \(I^{\prime}\subseteq I\setminus N(K^{\prime})\), is equivalent to the condition that there are no edges between \(I^{\prime}\) and \(K^{\prime}\). Suppose for a contradiction that \(G\) contains an edge \(uv\) with \(u\in I^{\prime}\) and \(v\in K^{\prime}\). Since \(N[u]\) is
Figure 5: Transforming a minimal clique transversal \(S^{\prime}\) in \(G^{\prime}\) into a spanning star forest \((V,F)\) in \(G\).
the only maximal clique of \(G\) containing \(u\) and \(\{u,v\}\subseteq S\cap N[u]\), it follows that \(S\setminus\{u\}\) is a clique transversal of \(G\), contradicting the minimality of \(S\). This establishes (ii).
To show condition (iii), consider an arbitrary vertex \(v\in K^{\prime}\). If \(K^{\prime}=\{v\}\), then any neighbor of \(v\) in \(I\) is a \(K^{\prime}\)-private neighbor of \(v\), and \(v\) has a neighbor in \(I\) since \(I\) is a maximal independent set. Thus we may assume that \(|K^{\prime}|\geq 2\). Suppose for a contradiction that \(v\) has no \(K^{\prime}\)-private neighbor in \(I\). The maximal cliques of \(G\) containing \(v\) are \(N[w]\) for \(w\in N(v)\cap I\) and possibly \(K\) (if \(K\) is a maximal clique). For every \(w\in N(v)\cap I\), the assumption on \(v\) implies that there exists a vertex \(v^{\prime}\in K^{\prime}\setminus\{v\}\) adjacent to \(w\); hence \(\{v,v^{\prime}\}\subseteq S\cap N[w]\). Moreover, since \(|K^{\prime}|\geq 2\), the set \(S\setminus\{v\}\) still contains a vertex of \(K\). It follows that the set \(S\setminus\{v\}\) intersects all maximal cliques in \(G\); this contradicts the minimality of \(S\) and shows condition (iii).
Assume now that \(S\) is a set of vertices satisfying conditions (i)-(iii). We prove that \(S\) is a minimal clique transversal by verifying both conditions in the definition. Consider an arbitrary maximal clique \(C\) of \(G\). If \(C=N[v]\) for some \(v\in I\), then either \(v\in S\), in which case \(v\in S\cap C\), or \(v\in I\setminus S\), in which case condition (ii) guarantees that \(v\) has a neighbor \(w\in K^{\prime}\); hence \(w\in S\cap C\) and \(S\) intersects \(C\). If \(C=K\), then \(S\cap C\neq\emptyset\) by condition (i). Hence \(S\) is a clique transversal. To show minimality, suppose for a contradiction that \(S\) contains a vertex \(v\) such that \(S\setminus\{v\}\) is also a clique transversal of \(G\). Suppose that \(v\in I\). Since the set \(S\setminus\{v\}\) intersects the maximal clique \(N[v]\), there is a vertex \(w\in(S\setminus\{v\})\cap N[v]\). Since \(w\neq v\), we have \(w\in N(v)\) and hence \(w\in K\). In particular, \(w\in K^{\prime}\) and thus \(v\in N(K^{\prime})\cap I^{\prime}\); this contradicts condition (ii). It follows that \(v\not\in I\) and hence \(v\in K^{\prime}\). Condition (iii) implies that \(v\) has a \(K^{\prime}\)-private neighbor \(w\in I\). Since \(N(w)\subseteq K\) and \(w\) is a \(K^{\prime}\)-private neighbor of \(v\), we have \(S\cap N(w)=N(w)\cap K^{\prime}=\{v\}\), which implies \((S\setminus\{v\})\cap N(w)=\emptyset\). Moreover, condition (ii) implies that \(w\not\in S\); hence \((S\setminus\{v\})\cap\{w\}=\emptyset\). It follows that \((S\setminus\{v\})\cap N[w]=((S\setminus\{v\})\cap N(w))\cup((S\setminus\{v\})\cap\{w\})=\emptyset\). Since the set \(S\setminus\{v\}\) misses the maximal clique \(N[w]\), it is not a clique transversal, a contradiction.
Proposition 4.1 leads to the following result about maximum minimal clique transversals in split graphs. We denote by \(\alpha(G)\) the _independence number_ of a graph \(G\), that is, the maximum size of an independent set in \(G\).
**Theorem 4.2**.: _Let \(G\) be a split graph with a split partition \((K,I)\) such that \(I\) is a maximal independent set. Then:_
1. _If_ \(K\) _is not a maximal clique in_ \(G\)_, then_ \(I\) _is a maximum minimal clique transversal in_ \(G\)_; in particular, we have_ \(\tau_{c}^{+}(G)=\alpha(G)\) _in this case._
2. _If_ \(K\) _is a maximal clique in_ \(G\)_, then for every vertex_ \(v\in K\) _with the smallest number of neighbors in_ \(I\)_, the set_ \(\{v\}\cup(I\setminus N(v))\) _is a maximum minimal clique transversal in_ \(G\)_; in particular, we have_ \(\tau_{c}^{+}(G)=\alpha(G)-\delta_{G}(I,K)+1\) _in this case, where_ \(\delta_{G}(I,K)=\min\{|N(v)\cap I|:v\in K\}\)_._
_Consequently, every split graph \(G\) satisfies \(\tau_{c}^{+}(G)\leq\alpha(G)\)._
Proof.: Let \(S\) be a minimal clique transversal of \(G\) that is of maximum possible size and, subject to this condition, contains as few vertices from \(K\) as possible. Let \(K^{\prime}=K\cap S\) and \(I^{\prime}=I\cap S\). If \(K^{\prime}=\emptyset\), then \(K\) is not a maximal clique in \(G\), and we have \(S=I\), implying \(\tau_{c}^{+}(G)=|S|=\alpha(G)\). Suppose now that \(K^{\prime}\neq\emptyset\). We first show that \(|K^{\prime}|=1\). Suppose for a contradiction that \(|K^{\prime}|\geq 2\) and let \(v\in K^{\prime}\). Let \(I_{v}\) denote the set of \(K^{\prime}\)-private neighbors of \(v\) in \(I\) and let \(S^{\prime}=(S\setminus\{v\})\cup I_{v}\). By Proposition 4.1, conditions (i)-(iii) hold for \(S\). We claim that set \(S^{\prime}\) also satisfies conditions (i)-(iii) from Proposition 4.1. Since \(S^{\prime}\cap K=K^{\prime}\setminus\{v\}\), the assumption
\(|K^{\prime}|\geq 2\) implies that \(S^{\prime}\cap K\neq\emptyset\), thus condition (i) holds for \(S^{\prime}\). Since condition (ii) holds for \(S\), we have
\[I\cap S^{\prime}=I^{\prime}\cup I_{v}=(I\setminus N(K^{\prime}))\cup I_{v}=I \setminus N(K^{\prime}\setminus\{v\})=I\setminus N(S^{\prime}\cap K)\,,\]
that is, condition (ii) holds for \(S^{\prime}\). Finally, since \(S^{\prime}\cap K\subseteq K^{\prime}\), condition (iii) for \(S\) immediately implies condition (iii) for \(S^{\prime}\). It follows that \(S^{\prime}\) is a minimal clique transversal in \(G\). Furthermore, since \(v\in K^{\prime}\), vertex \(v\) has a \(K^{\prime}\)-private neighbor in \(I\), that is, the set \(I_{v}\) is nonempty. This implies that \(|S^{\prime}|\geq|S|\); in particular, \(S^{\prime}\) is a maximum minimal clique transversal in \(G\). However, \(S^{\prime}\) contains strictly fewer vertices from \(K\) than \(S\), contradicting the choice of \(S\). This shows that \(|K^{\prime}|=1\), as claimed.
Let \(w\) be the unique vertex in \(K^{\prime}\). Since condition (ii) from Proposition 4.1 holds for \(S\), we have \(I^{\prime}=I\setminus N(w)\). Hence \(S=\{w\}\cup(I\setminus N(w))\) and \(|S|=1+|I|-|N(w)\cap I|\). Since \(w\in K\), we have \(|N(w)\cap I|\geq\delta_{G}(I,K)\) and hence \(\tau_{c}^{+}(G)=|S|\leq\alpha(G)-\delta_{G}(I,K)+1\). On the other hand, for every vertex \(z\in K\) the set \(X_{z}:=\{z\}\cup(I\setminus N(z))\) satisfies conditions (i)-(iii) from Proposition 4.1. Conditions (i) and (ii) hold by the definition of \(X_{z}\). Since \(I\) is a maximal independent set in \(G\), vertex \(z\not\in I\) has a neighbor in \(I\), and any neighbor of \(z\) in \(I\) is trivially an \((X_{z}\cap K)\)-private neighbor of \(z\). Thus condition (iii) holds, too. It follows that \(X_{z}\) is a minimal clique transversal in \(G\). Choosing \(z\) to be a vertex in \(K\) with the smallest number of neighbors in \(I\), we obtain a set \(X_{z}\) of size \(\alpha(G)-\delta_{G}(I,K)+1\). Thus \(\tau_{c}^{+}(G)\geq|X_{z}|=\alpha(G)-\delta_{G}(I,K)+1\) and since we already proved that \(\tau_{c}^{+}(G)\leq\alpha(G)-\delta_{G}(I,K)+1\), any such \(X_{z}\) is optimal.
Since \(I\) is a maximal independent set and \(K\) is nonempty, we have \(\delta_{G}(I,K)\geq 1\). Thus, \(\tau_{c}^{+}(G)\leq\alpha(G)\). Suppose that \(K\) is not a maximal clique in \(G\). Then \(I\) is a minimal clique transversal in \(G\) and therefore \(\tau_{c}^{+}(G)\geq|I|=\alpha(G)\geq\tau_{c}^{+}(G)\). Hence equalities must hold throughout and \(I\) is a maximum minimal clique transversal. Finally, suppose that \(K\) is a maximal clique in \(G\). Then every minimal clique transversal \(S\) in \(G\) satisfies \(S\cap K\neq\emptyset\). In this case, the above analysis shows that for every vertex \(v\in K\) with the smallest number of neighbors in \(I\), the set \(\{v\}\cup(I\setminus N(v))\) is a maximum minimal clique transversal in \(G\).
**Corollary 4.3**.: Upper Clique Transversal _can be solved in linear time in the class of split graphs._
Proof.: Let \(G=(V,E)\) be a given split graph. Hammer and Simeone showed that split graphs can be characterized by their degree sequences; furthermore, that characterization yields a linear-time algorithm to compute a split partition \((K,I)\) of \(G\) (see [25]). If there exists a vertex in \(K\) that has no neighbor in \(I\), then we move it to \(I\). Thus, in linear time we can compute a split partition \((K,I)\) of \(G\) such that \(I\) is a maximal independent set. Clearly, \(K\) is a maximal clique if and only if no vertex in \(I\) is adjacent to all vertices of \(K\). If \(K\) is not a maximal clique, then the algorithm simply returns \(I\). If \(K\) is a maximal clique, then the algorithm first computes, for each vertex \(v\in K\), the number of neighbors of \(v\) in \(I\). For a vertex \(v\in K\) with the smallest number of neighbors in \(I\), the set \(\{v\}\cup(I\setminus N(v))\) is returned.
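The proof translates directly into a procedure. The sketch below (ours) assumes the split partition \((K,I)\) is already given; computing it from the degree sequence, as in [25], is omitted:

```
# Sketch (ours) of the algorithm from the proof of Corollary 4.3.

def uct_split(adj, K, I):
    K, I = set(K), set(I)
    # Make I a maximal independent set: a vertex of K with no neighbor in I
    # is moved to I (after one move, every remaining K-vertex has one).
    for v in list(K):
        if not (adj[v] & I):
            K.remove(v)
            I.add(v)
            break
    # K is a maximal clique iff no vertex of I is adjacent to all of K.
    if any(K <= adj[w] for w in I):
        return I                                   # K is not a maximal clique
    v = min(K, key=lambda u: len(adj[u] & I))      # fewest neighbors in I
    return {v} | (I - adj[v])

adj = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
print(uct_split(adj, {0, 1}, {2, 3}))   # e.g. {0, 3}: size alpha - delta + 1 = 2
```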
**Remark 4.4**.: Recall that a _strong independent set_ in a graph \(G\) is an independent clique transversal. If \(I\) is a strong independent set in \(G\), then for every vertex \(v\in I\), every maximal clique \(C\) containing \(v\) satisfies \(C\cap I=\{v\}\); it follows that every strong independent set is a minimal clique transversal. Theorem 4.2 implies that every split graph has a maximum minimal clique transversal that is a strong independent set. Consequently, the problem of computing a maximum minimal clique transversal of a split graph \(G\) reduces to the problem of computing a maximum strong independent set in \(G\). A linear-time algorithm for a more general problem, that of computing a maximum-weight strong independent set in a vertex-weighted chordal graph, was developed by Wu [46]. This gives an alternative proof of Corollary 4.3.
## 5 A linear-time algorithm for UCT in proper interval graphs
A graph \(G=(V,E)\) is an _interval graph_ if it has an _interval representation_, that is, if its vertices can be put in a one-to-one correspondence with a family \((I_{v}:v\in V)\) of closed intervals on the real line such that two distinct vertices \(u\) and \(v\) are adjacent if and only if the corresponding intervals \(I_{u}\) and \(I_{v}\) intersect. If \(G\) has a _proper interval representation_, that is, an interval representation in which no interval contains another, then \(G\) is said to be a _proper interval graph_.
Our approach towards a linear-time algorithm for Upper Clique Transversal in the class of proper interval graphs is based on a relation between clique transversals in \(G\) and induced matchings in the so-called vertex-clique incidence graph of \(G\). This relation is valid for arbitrary graphs.
### UCT via induced matchings in the vertex-clique incidence graph
Given a graph \(G=(V,E)\), we denote by \(B_{G}\) the _vertex-clique incidence graph_ of \(G\), a bipartite graph defined as follows. The vertex set of \(B_{G}\) consists of two disjoint sets \(X\) and \(Y\) such that \(X=V\) and \(Y=\mathcal{C}_{G}\), where \(\mathcal{C}_{G}\) is the set of maximal cliques in \(G\). The edge set of \(B_{G}\) consists of all pairs \(x\in X\) and \(C\in\mathcal{C}_{G}\) that satisfy \(x\in C\). An _induced matching_ in a graph \(G\) is a set \(M\) of pairwise disjoint edges such that the set of endpoints of edges in \(M\) induces no edges other than those in \(M\). Given two disjoint sets of vertices \(A\) and \(B\) in a graph \(G\), we say that _A dominates \(B\) in \(G\)_ if every vertex in \(B\) has a neighbor in \(A\). Given a matching \(M\) in a graph \(G\) and a vertex \(v\in V(G)\), we say that \(v\) is _M-saturated_ if it is an endpoint of an edge in \(M\).
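To make the definitions concrete, the following Python sketch (ours, not from the paper) builds \(B_{G}\) from adjacency sets and tests whether a set of edges is an induced matching; maximal cliques are enumerated by a plain Bron-Kerbosch recursion, which is exponential in general, although linear-time constructions exist in the cases needed later (cf. Corollary 5.11).

```
def maximal_cliques(adj):
    # Plain Bron-Kerbosch recursion (no pivoting); exponential in general.
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(frozenset(R))
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(adj), set())
    return out

def vertex_clique_incidence(adj):
    # B_G: part X = vertices of G, part Y = maximal cliques (as frozensets).
    Y = maximal_cliques(adj)
    B = {x: {C for C in Y if x in C} for x in adj}
    for C in Y:
        B[C] = set(C)
    return B

def is_induced_matching(B, M):
    # M: iterable of 2-element sets of vertices of B.
    ends = set()
    for e in M:
        if ends & set(e):
            return False              # edges must be pairwise disjoint
        ends |= set(e)
    spanned = {frozenset((u, v)) for u in ends for v in B[u] & ends}
    return spanned == {frozenset(e) for e in M}

adj = {0: {1}, 1: {0, 2}, 2: {1}}     # the path 0-1-2; B_G is its subdivision
B = vertex_clique_incidence(adj)
M = [{0, frozenset({0, 1})}, {2, frozenset({1, 2})}]
print(is_induced_matching(B, M))      # True
```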
Clique transversals and minimal clique transversals of a graph \(G\) can be expressed in terms of the vertex-clique incidence graph as follows.
**Lemma 5.1**.: _Let \(G\) be a graph, let \(B_{G}=(X,Y;E)\) be its vertex-clique incidence graph, and let \(S\subseteq V(G)\). Then:_
1. \(S\) _is a clique transversal in_ \(G\) _if and only if_ \(S\) _dominates_ \(Y\) _in_ \(B_{G}\)_._
2. \(S\) _is a minimal clique transversal in_ \(G\) _if and only if_ \(S\) _dominates_ \(Y\) _in_ \(B_{G}\) _and there exists an induced matching_ \(M\) _in_ \(B_{G}\) _such that_ \(S\) _is exactly the set of M-saturated vertices in_ \(X\)_._
Proof.: The first statement follows immediately from the definitions.
For the second statement, we prove each of the two implications separately. Assume first that \(S\) is a minimal clique transversal in \(G\). Since \(S\) is a clique transversal in \(G\), it dominates \(Y\) in \(B_{G}\). Furthermore, the minimality of \(S\) implies that for every vertex \(s\in S\) there exists a maximal clique \(y_{s}\in Y(=\mathcal{C}_{G})\) such that \(y_{s}\cap S=\{s\}\). Let \(M=\{\{s,y_{s}\}\mid s\in S\}\). We claim that \(M\) is an induced matching in \(B_{G}\) such that \(S\) is exactly the set of _M_-saturated vertices in \(X\). First, note that each \(s\in S\) is adjacent in \(B_{G}\) to \(y_{s}\), since \(s\) belongs to the maximal clique \(y_{s}\). Second, \(M\) is a matching in \(B_{G}\) since every \(s\in S\) is by construction incident with only one edge in \(M\), and if \(y_{s_{1}}=y_{s_{2}}\) for two vertices \(s_{1},s_{2}\in S\), then \(\{s_{1}\}=y_{s_{1}}\cap S=y_{s_{2}}\cap S=\{s_{2}\}\) and thus \(s_{1}=s_{2}\). Third, \(M\) is an induced matching in \(B_{G}\), since otherwise \(B_{G}\) would contain an edge of the form \(\{s_{1},y_{s_{2}}\}\) for two distinct vertices \(s_{1},s_{2}\in S\), which would imply that \(s_{1}\) belongs to the maximal clique \(y_{s_{2}}\), contradicting the fact that \(y_{s_{2}}\cap S=\{s_{2}\}\). Finally, the fact that \(S\) is exactly the set of _M_-saturated vertices in \(X\) follows directly from the definition of \(M\).
For the converse direction, assume that \(S\) dominates \(Y\) in \(B_{G}\) and there exists an induced matching \(M\) in \(B_{G}\) such that \(S\) is exactly the set of _M_-saturated vertices in \(X\). The fact that \(S\) dominates \(Y\) in \(B_{G}\) implies that \(S\) is a clique transversal in \(G\). To see that \(S\) is a minimal clique transversal, we will show that for every \(s\in S\), the set \(S\setminus\{s\}\) misses a maximal clique in \(G\). Let \(s\in S\). By the assumptions on \(M\), vertex \(s\) has a unique neighbor \(y_{s}\) in \(B_{G}\) such that
\(\{s,y_{s}\}\) is an edge of \(M\). Furthermore, since \(M\) is an induced matching in \(B_{G}\), vertex \(y_{s}\) is not adjacent in \(B_{G}\) to any vertex in \(S\setminus\{s\}\). Thus, the set \(S\setminus\{s\}\) misses the maximal clique \(y_{s}\). We conclude that \(S\) is a minimal clique transversal.
The _induced matching number_ of a graph \(G\) is the maximum size of an induced matching in \(G\). Lemma 5.1 immediately implies the following.
**Corollary 5.2**.: _For every graph \(G\), the upper clique transversal number of \(G\) is at most the induced matching number of \(B_{G}\)._
As another consequence of Lemma 5.1, we obtain a sufficient condition for a set of vertices in a graph to be a minimal clique transversal of maximum size.
**Corollary 5.3**.: _Let \(G\) be a graph, let \(B_{G}=(X,Y;E)\) be its vertex-clique incidence graph, and let \(S\subseteq V(G)\). Suppose that \(S\) dominates \(Y\) in \(B_{G}\) and there exists a maximum induced matching \(M\) in \(B_{G}\) such that \(S\) is exactly the set of M-saturated vertices in \(X\). Then, \(S\) is a minimal clique transversal in \(G\) of maximum size._
To apply Corollary 5.3 to proper interval graphs, we first state several characterizations of proper interval graphs in terms of their vertex-clique incidence graphs, establishing in particular a connection with bipartite permutation graphs.
### Characterizing proper interval graphs via their vertex-clique incidence graphs
We first recall some concepts and results from the literature. A bipartite graph \(G=(X,Y;E)\) is said to be _biconvex_ if there exists a _biconvex ordering_ of (the vertex set of) \(G\), that is, a pair \((<_{X},<_{Y})\) where \(<_{X}\) is a linear ordering of \(X\) and \(<_{Y}\) is a linear ordering of \(Y\) such that for every \(x\in X\), the vertices in \(Y\) adjacent to \(x\) appear consecutively with respect to the ordering \(<_{Y}\), and, similarly, for every \(y\in Y\), the vertices in \(X\) adjacent to \(y\) appear consecutively with respect to the ordering \(<_{X}\).
We will need the following property of biconvex graphs. Let \((<_{X},<_{Y})\) be a biconvex ordering of a biconvex graph \(G=(X,Y;E)\). Two edges \(e\) and \(f\) of \(G\) are said to _cross_ (each other) if there exist vertices \(x_{1},x_{2}\in X\) and \(y_{1},y_{2}\in Y\) such that \(\{e,f\}=\{\{x_{1},y_{2}\},\{x_{2},y_{1}\}\}\), \(x_{1}<_{X}x_{2}\), and \(y_{1}<_{Y}y_{2}\). A biconvex ordering \((<_{X},<_{Y})\) of a biconvex graph \(G=(X,Y;E)\) is said to be _induced-crossing-free_ if for any two crossing edges \(e=\{x_{1},y_{2}\}\) and \(f=\{x_{2},y_{1}\}\), either \(x_{1}\) is adjacent to \(y_{1}\) or \(x_{2}\) is adjacent to \(y_{2}\).
**Theorem 5.4** (Abbas and Stewart [1]).: _Every biconvex graph has an induced-crossing-free biconvex ordering._
Given a bipartite graph \(G=(X,Y;E)\), a _strongly induced-crossing-free ordering_ (or simply a _strong ordering_) of \(G\) is a pair \((<_{X},<_{Y})\) of linear orderings of \(X\) and \(Y\) such that for any two crossing edges \(e=\{x_{1},y_{2}\}\) and \(f=\{x_{2},y_{1}\}\), vertex \(x_{1}\) is adjacent to \(y_{1}\) and vertex \(x_{2}\) is adjacent to \(y_{2}\).
A _permutation graph_ is a graph \(G=(V,E)\) that admits a permutation model, that is, vertices of \(G\) can be ordered \(v_{1},\ldots,v_{n}\) such that there exists a permutation \((a_{1},\ldots,a_{n})\) of the set \(\{1,\ldots,n\}\) such that for all \(1\leq i<j\leq n\), vertices \(v_{i}\) and \(v_{j}\) are adjacent in \(G\) if and only if \(a_{i}>a_{j}\). A _bipartite permutation graph_ is a graph that is both a bipartite graph and a permutation graph.
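As a small illustration (ours), the permutation model can be turned into an adjacency test directly:

```
# Illustration (ours) of the permutation model: vertices v_1, ..., v_n and a
# permutation (a_1, ..., a_n); v_i and v_j with i < j are adjacent iff a_i > a_j.

def permutation_graph(a):
    n = len(a)
    return {i: {j for j in range(n) if (i - j) * (a[i] - a[j]) < 0}
            for i in range(n)}

# a = (2, 4, 1, 3) yields the path 0-2-1-3, a bipartite permutation graph.
print(permutation_graph((2, 4, 1, 3)))
```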
The following characterization of bipartite permutation graphs follows from Theorem 1 in [43] and its proof.
**Theorem 5.5** (Spinrad, Brandstädt, and Stewart [43]).: _The following statements are equivalent for a bipartite graph \(G=(X,Y;E)\):_
1. \(G\) _is a bipartite permutation graph._
2. \(G\) _has a strong ordering._
3. \(G\) _has a strong biconvex ordering._
Theorem 5.5 implies the following property of bipartite permutation graphs equipped with a strong ordering.
**Corollary 5.6**.: _Let \(G=(X,Y;E)\) be a bipartite permutation graph, let \((<_{X},<_{Y})\) be a strong ordering of \(G\), and let \(M\) be an induced matching in \(G\). Then, no two edges in \(M\) cross._
We will also use the following well-known characterization of proper interval graphs (see, e.g., Gardi [23]).
**Theorem 5.7**.: _A graph \(G\) is a proper interval graph if and only if there exists an ordering \(\sigma=(v_{1},\ldots,v_{n})\) of the vertices of \(G\) and an ordering \(\tau=(C_{1},\ldots,C_{k})\) of the maximal cliques of \(G\) such that for each \(i\in\{1,\ldots,n\}\) the maximal cliques containing vertex \(v_{i}\) appear consecutively in the ordering \(\tau\), and for each \(j\in\{1,\ldots,k\}\) clique \(C_{j}\) consists of consecutive vertices with respect to ordering \(\sigma\)._
The following theorem gives several characterizations of proper interval graphs in terms of their vertex-clique incidence graphs.
**Theorem 5.8**.: _Let \(G\) be a graph. Then, the following statements are equivalent:_
1. \(G\) _is a proper interval graph._
2. \(B_{G}\) _is a biconvex graph._
3. \(B_{G}\) _is a bipartite permutation graph._
4. \(B_{G}\) _has a strong ordering._
5. \(B_{G}\) _has a strong biconvex ordering._
6. \(B_{G}\) _has an induced-crossing-free biconvex ordering._
Proof.: Theorem 5.7 implies the equivalence between statements 1 and 2. Equivalence between statements 2 and 6 follows from Theorem 5.4. Equivalence among statements 3, 4, and 5 follows from Theorem 5.5. Clearly, statement 5 implies statement 2.
Finally, we show that statement 1 implies statement 4. Fix a proper interval representation \((I_{v}:v\in V(G))\) of \(G\). Let \(B_{G}=(X,Y;E)\) where \(X=V(G)\) and \(Y=\mathcal{C}_{G}\). Let \(<_{X}\) be the ordering of \(X\) corresponding to the left-endpoint order of the intervals. (Note that since no interval properly contains another, the left-endpoint order and the right-endpoint order are the same.) As shown in [23], every maximal clique \(C\in\mathcal{C}_{G}(=Y)\) consists of consecutive vertices with respect to \(<_{X}\). Since the cliques are maximal, no two cliques in \(Y\) have the same first vertex with respect to \(<_{X}\), hence there is a unique and well defined ordering \(<_{Y}\) of \(Y\) that orders the cliques in increasing order of their first vertices in the vertex order. We claim that the pair \((<_{X},<_{Y})\) is a strong ordering of \(B_{G}\). Consider any two crossing edges \(e=\{x_{1},y_{2}\}\) and \(f=\{x_{2},y_{1}\}\). We may assume that \(x_{1}<_{X}x_{2}\) and \(y_{1}<_{Y}y_{2}\). Since \(y_{1}<_{Y}y_{2}\), we have \(s_{1}<_{X}s_{2}\), where \(s_{i}\) is the first vertex of \(y_{i}\) for \(i\in\{1,2\}\). Furthermore, since \(x_{1}\) and \(y_{2}\) are adjacent in
\(B_{G}\), vertex \(x_{1}\) belongs to \(y_{2}\), and thus \(s_{2}\leq_{X}x_{1}\). Consequently, \(s_{1}<_{X}s_{2}\leq_{X}x_{1}<_{X}x_{2}\). Thus, since \(x_{2}\) belongs to \(y_{1}\), also \(x_{1}\) belongs to \(y_{1}\). This implies that \(x_{1}\) and \(y_{1}\) are adjacent in \(B_{G}\). Finally, since \(y_{1}<_{Y}y_{2}\), clique \(y_{2}\) ends strictly after clique \(y_{1}\), and since \(x_{2}\) belongs to \(y_{1}\), we conclude that \(x_{2}\) also belongs to \(y_{2}\). Thus, \(x_{2}\) and \(y_{2}\) are adjacent in \(B_{G}\). It follows that the pair \((<_{X},<_{Y})\) is a strong ordering of \(B_{G}\), as claimed.
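The last paragraph is constructive. As an illustration (ours), the two orderings can be computed from a proper interval representation, given as a dictionary vertex \(\mapsto(l,r)\) with pairwise distinct endpoints, using the standard fact that every maximal clique of an interval graph is the set of intervals containing some left endpoint:

```
# Sketch (ours): the orderings <_X and <_Y of the strong ordering of B_G,
# computed from a proper interval representation with distinct endpoints.

def strong_ordering(intervals):
    X = sorted(intervals, key=lambda v: intervals[v][0])   # left-endpoint order
    # Candidate cliques: all intervals containing a given left endpoint;
    # the inclusion-maximal candidates are exactly the maximal cliques.
    candidates = [frozenset(u for u, (l, r) in intervals.items()
                            if l <= intervals[v][0] <= r) for v in X]
    cliques = {C for C in candidates if not any(C < D for D in candidates)}
    pos = {v: i for i, v in enumerate(X)}
    Y = sorted(cliques, key=lambda C: min(pos[v] for v in C))  # by first vertex
    return X, Y

# Example: three pairwise intersecting intervals form a single maximal clique.
print(strong_ordering({'a': (0, 2), 'b': (1, 3), 'c': (2, 4)}))
```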
### Maximum induced matchings in bipartite permutation graphs, revisited
Our goal is to show that if \(G\) is a proper interval graph, then the sufficient condition given by Corollary 5.3 is satisfied, namely, there exists a maximum induced matching \(M\) in \(B_{G}\) such that the set \(S\) of \(M\)-saturated vertices in \(X\) dominates \(Y\) in \(B_{G}\). By Corollary 5.3, this will imply \(\tau_{c}^{+}(G)=|S|=|M|\). We show the claimed property of \(B_{G}\) as follows. First, by applying Theorem 5.8, we infer that the graph \(B_{G}\) is a bipartite permutation graph. Second, by construction, \(B_{G}\) does not have any isolated vertices and no two distinct vertices in \(Y\) have comparable neighborhoods in \(X\). It turns out that these properties are already enough to guarantee the desired conclusion. We show this by a careful analysis of the linear-time algorithm due to Chang from [15] for computing a maximum induced matching in bipartite permutation graphs. The linear time complexity also relies on the following result.
**Theorem 5.9** (Sprague [44] and Spinrad, Brandstädt, and Stewart [43]).: _A strong biconvex ordering of a given bipartite permutation graph can be computed in linear time._
**Theorem 5.10**.: _Given a bipartite permutation graph \(G=(X,Y;E)\), there is a linear-time algorithm that computes a maximum induced matching \(M\) in \(G\) such that, if \(G\) has no isolated vertices and no two vertices in \(Y\) have comparable neighborhoods in \(G\), then the set of M-saturated vertices in \(X\) dominates \(Y\)._
Proof.: Let \(G=(X,Y;E)\) be a bipartite permutation graph. We consider two cases. First, assume that \(G\) either contains an isolated vertex or two vertices in \(Y\) with comparable neighborhoods in \(G\). In this case, it suffices to show that there is a linear-time algorithm that computes a maximum induced matching in \(G\). We may assume without loss of generality that \(G\) is connected; otherwise, we compute in linear time the connected components of \(G\) using breadth-first search, solve the problem on each component, and combine the solutions. Assuming \(G\) is connected, we compute a maximum induced matching \(M\) in \(G\) in linear time using Chang's algorithm [15].
Assume now that \(G\) has no isolated vertices and no two vertices \(y,y^{\prime}\in Y\) have comparable neighborhoods in \(G\), that is, \(N(y)\subseteq N(y^{\prime})\) or \(N(y^{\prime})\subseteq N(y)\) holds if and only if \(y=y^{\prime}\). Again, we first argue that it suffices to consider the case of connected graphs. In the general case, we proceed as follows. First, the connected components of \(G\) can be computed in linear time using breadth-first search. Second, since no two vertices in \(Y\) have comparable neighborhoods in \(G\), the same is also true for each connected component. Third, once the connected case is settled, each connected component \(C=(X_{C},Y_{C};E_{C})\) has a maximum induced matching \(M_{C}\) such that the set of \(M_{C}\)-saturated vertices in \(X_{C}\) dominates \(Y_{C}\). Thus, the union of all such maximum induced matchings \(M_{C}\) yields a maximum induced matching \(M\) in \(G\) such that the set of \(M\)-saturated vertices in \(X\) dominates \(Y\).
Assume now that \(G\) is connected. As shown by Chang [15], a maximum induced matching \(M\) of \(G\) can be computed in linear time. We show that the set of \(M\)-saturated vertices in \(X\) dominates \(Y\). To do that, we first explain Chang's algorithm. The algorithm is based on a strong biconvex ordering \((<_{X},<_{Y})\) of \(G\), which can be computed in linear time (see Theorem 5.9). Let \(x_{1},\ldots,x_{s}\) be the ordering of \(X\) such that for all \(i,j\in\{1,\ldots,s\}\), we have \(i<j\) if and only
if \(x_{i}<_{X}x_{j}\). Similarly, let \(y_{1},\ldots,y_{t}\) be the ordering of \(Y\) such that for all \(i,j\in\{1,\ldots,t\}\), we have \(i<j\) if and only if \(y_{i}<_{Y}y_{j}\). For each vertex \(v\in X\), let \(\min(v)\) and \(\max(v)\) denote the smallest and the largest \(i\) such that \(y_{i}\) is adjacent to \(v\), respectively; for vertices in \(Y\), \(\min(v)\) and \(\max(v)\) are defined similarly. The pseudocode is given as Algorithm 1.
```
Input: A connected bipartite permutation graph \(G=(X,Y;E)\) with \(E\neq\emptyset\).
Output: A maximum induced matching \(M\) of \(G\).
compute a strong biconvex ordering \((<_{X},<_{Y})\) of \(G\);
compute the values \(\min(v)\) and \(\max(v)\) for all \(v\in V(G)\);
\(M\leftarrow\{\{x_{s},y_{t}\}\}\);  // the vertices \(x_{s}\) and \(y_{t}\) are adjacent in \(G\)
let \(i=s\) and \(j=t\);
while \(\min(x_{i})\neq 1\) and \(\min(y_{j})\neq 1\) do
    let \(p=\min(y_{j})\) and \(q=\min(x_{i})\);  // note that \(p\geq 2\) and \(q\geq 2\)
    if \(\min(x_{p})<q\) and \(\min(y_{q})<p\) then
        \(M\gets M\cup\{\{x_{p-1},y_{q-1}\}\}\);
        \(i\gets p-1\);
        \(j\gets q-1\);
    if \(\min(x_{p})=q\) and \(\min(y_{q})<p\) then
        \(M\gets M\cup\{\{x_{\max(y_{q-1})},y_{q-1}\}\}\);
        \(i\leftarrow\max(y_{q-1})\);
        \(j\gets q-1\);
    if \(\min(x_{p})<q\) and \(\min(y_{q})=p\) then
        \(M\gets M\cup\{\{x_{p-1},y_{\max(x_{p-1})}\}\}\);
        \(i\gets p-1\);
        \(j\leftarrow\max(x_{p-1})\);
    // exactly one of the above three if statements is true
return \(M\);
```
**Algorithm 1** Computing a maximum induced matching of a connected bipartite permutation graph
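For concreteness, here is a direct Python transcription of the main loop (ours, not from [15]); the strong biconvex ordering and the values \(\min(\cdot)\) and \(\max(\cdot)\) are assumed to be precomputed and passed as 1-indexed arrays:

```
def chang_maximum_induced_matching(mnx, mxx, mny, mxy, s, t):
    # mnx[i]/mxx[i]: smallest/largest index j with y_j adjacent to x_i;
    # mny[j]/mxy[j]: smallest/largest index i with x_i adjacent to y_j.
    # All arrays are 1-indexed (index 0 is a dummy entry).
    M = [(s, t)]                      # x_s and y_t are adjacent
    i, j = s, t
    while mnx[i] != 1 and mny[j] != 1:
        p, q = mny[j], mnx[i]         # note that p >= 2 and q >= 2
        if mnx[p] < q and mny[q] < p:
            i, j = p - 1, q - 1
        elif mnx[p] == q and mny[q] < p:
            i, j = mxy[q - 1], q - 1
        else:                         # the remaining case: mnx[p] < q, mny[q] == p
            i, j = p - 1, mxx[p - 1]
        M.append((i, j))              # the new matching edge {x_i, y_j}
    return M

# Demo: the path y1-x1-y2-x2-y3-x3 (a connected bipartite permutation graph).
mnx = [0, 1, 2, 3]; mxx = [0, 2, 3, 3]     # over x_1, x_2, x_3
mny = [0, 1, 1, 2]; mxy = [0, 1, 2, 3]     # over y_1, y_2, y_3
print(chang_maximum_induced_matching(mnx, mxx, mny, mxy, 3, 3))
# [(3, 3), (1, 2)], i.e. the induced matching {x_3 y_3, x_1 y_2}
```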
Let \(M\) be the matching computed by the above algorithm and suppose for a contradiction that there exists a vertex \(y\in Y\) that is not adjacent to any \(M\)-saturated vertex in \(X\). Clearly, \(y\) is not an endpoint of a matching edge. By construction, no two edges of \(M\) cross. Thus, we may order the edges of \(M\) linearly as \(M=\{\{x_{i_{1}},y_{j_{1}}\},\ldots,\{x_{i_{r}},y_{j_{r}}\}\}\) so that \(i_{1}<\cdots<i_{r}=s\) and \(j_{1}<\cdots<j_{r}=t\). Note that the algorithm added the edges to \(M\) in the order \(\{x_{i_{r}},y_{j_{r}}\},\{x_{i_{r-1}},y_{j_{r-1}}\},\ldots,\{x_{i_{1}},y_{j_{1}}\}\). Since \(i_{r}=s\) and \(j_{r}=t\), there exists a smallest integer \(k\in\{1,\ldots,r\}\) such that \(y<_{Y}y_{j_{k}}\). Furthermore, since no two vertices in \(Y\) have comparable neighborhoods, there exists a vertex \(x\in X\) adjacent to \(y\) but not to \(y_{j_{k}}\). The edge \(\{x_{i_{k}},y_{j_{k}}\}\) belongs to the matching \(M\), and hence the vertex \(x_{i_{k}}\) is adjacent to \(y_{j_{k}}\) but not to \(y\), since no neighbor of \(y\) is \(M\)-saturated. Next, observe that \(x<_{X}x_{i_{k}}\), since otherwise the presence of the edges \(\{x_{i_{k}},y_{j_{k}}\}\) and \(\{x,y\}\) would imply, using the fact that \((<_{X},<_{Y})\) is a strong ordering of \(G\), that \(x_{i_{k}}\) is adjacent to \(y\).
Consider the iteration of the **while** loop of the algorithm right after the edge \(\{x_{i_{k}},y_{j_{k}}\}\) was added to \(M\). Then \(i=i_{k}\) and \(j=j_{k}\) at the beginning of that loop. Since \(x<_{X}x_{i}\) and \(y<_{Y}y_{j}\), the facts that \(x_{i}\) and \(y_{j}\) are non-adjacent to \(y\) and \(x\), respectively, and that \((<_{X},<_{Y})\) is a strong
ordering of \(G\), imply that the condition \(\min(x_{i})\neq 1\) and \(\min(y_{j})\neq 1\) of the **while** loop is satisfied. Hence, the algorithm enters the **while** loop. Let \(p=\min(y_{j})\) and \(q=\min(x_{i})\). Using the fact that \((<_{X},<_{Y})\) is a strong ordering of \(G\), we infer that \(x<_{X}x_{p}\) and \(y<_{Y}y_{q}\). Since the graph \(G\) is connected, exactly one of the conditions of the three **if** statements within the **while** loop will be satisfied and the algorithm adds at least one more edge \(e=\{x_{i_{k-1}},y_{j_{k-1}}\}\) to \(M\). In particular, \((i_{k-1},j_{k-1})\in\{(p-1,q-1),(\max(y_{q-1}),q-1),(p-1,\max(x_{p-1}))\}\). By the definition of \(k\), we have \(y_{j_{k-1}}<_{Y}y\). Since we also have \(y<_{Y}y_{q}\), we infer that \(j_{k-1}<q-1\) and therefore \(j_{k-1}=\max(x_{p-1})\) and consequently \(i_{k-1}=p-1\). The vertex \(x_{p-1}=x_{i_{k-1}}\) is an endpoint of an edge in \(M\) and therefore not adjacent to \(y\), since no neighbor of \(y\) is \(M\)-saturated. In particular, \(x_{p-1}\neq x\) and thus \(x<_{X}x_{p}\) implies that \(x<_{X}x_{p-1}\). But now, the presence of the edges \(\{x,y\}\) and \(\{x_{p-1},y_{j_{k-1}}\}\) together with \(x<_{X}x_{p-1}\), \(y_{j_{k-1}}<_{Y}y\), and the fact that \((<_{X},<_{Y})\) is a strong ordering of \(G\), implies that \(x_{p-1}\) is adjacent to \(y\), a contradiction.
### Solving UCT in proper interval graphs in linear time
The following result is a consequence of Theorem 3.7 and the fact that every proper interval graph is a chordal graph.
**Corollary 5.11**.: _The vertex-clique incidence graph of a proper interval graph \(G\) can be computed in linear time._
We now have everything ready to prove the announced result.
**Theorem 5.12**.: Upper Clique Transversal _can be solved in linear time in the class of proper interval graphs._
Proof.: The algorithm proceeds in three steps. In the first step, we compute from the input graph \(G=(V,E)\) its vertex-clique incidence graph \(B_{G}\), with parts \(X=V\) and \(Y=\mathcal{C}_{G}\). By Theorem 5.8, the graph \(B_{G}\) is a bipartite permutation graph. In the second step of the algorithm, we compute a maximum induced matching \(M\) of \(B_{G}\), using Theorem 5.10. Finally, the algorithm returns the set of \(M\)-saturated vertices in \(X\). The pseudocode is given as Algorithm 2.
```
Input: A proper interval graph \(G=(V,E)\).
Output: A maximum minimal clique transversal of \(G\).
compute the vertex-clique incidence graph \(B_{G}\), with parts \(X=V\) and \(Y=\mathcal{C}_{G}\);
compute a maximum induced matching \(M\) of \(B_{G}\);
compute the set \(M_{X}\) of \(M\)-saturated vertices in \(X\);
return \(M_{X}\);
```
**Algorithm 2** Computing a maximum minimal clique transversal of a proper interval graph
_Correctness._ By construction, the set \(M_{X}\) returned by the algorithm is a subset of \(X\), and thus a set of vertices of \(G\). Since every vertex of \(G\) belongs to a maximal clique, and every maximal clique contains a vertex, \(B_{G}\) does not have any isolated vertices. Furthermore, since the vertices of \(Y\) are precisely the maximal cliques of \(G\), no two vertices in \(Y\) have comparable neighborhoods in \(B_{G}\). Therefore, by Theorem 5.10, the set \(M_{X}\) dominates \(Y\). By Corollary 5.3, \(M_{X}\) is a maximum minimal clique transversal in \(G\).
_Time complexity._ Computing the vertex-clique incidence graph \(B_{G}\) can be done in linear time by Corollary 5.11. Since \(B_{G}\) is a bipartite permutation graph, a maximum induced
matching of \(B_{G}\) can be computed in linear time, see Theorem 5.10. The set of \(M\)-saturated vertices in \(X\) can also be computed in linear time. Thus, the overall time complexity of the algorithm is \(\mathcal{O}(|V|+|E|)\).
The above proof also shows the following.
**Theorem 5.13**.: _For every proper interval graph \(G\), the upper clique transversal number of \(G\) is equal to the induced matching number of \(B_{G}\)._
We conclude the section by showing that the result of Theorem 5.13 does not generalize to the class of interval graphs.
**Observation 5.14**.: _There exist interval graphs such that the difference between the induced matching number of their vertex-clique incidence graph and the upper clique transversal number of the graph is arbitrarily large._
Proof.: Let \(q\geq 2\) and let \(G\) be the graph obtained from two disjoint copies of the star graph \(K_{1,q}\) by adding an edge between the two vertices of degree \(q\). It is easy to see that \(G\) is an interval graph.
We claim that the upper clique transversal number of \(G\) is at most \(q+1\), while the induced matching number of \(B_{G}\) is at least \(2q\). To see that the upper clique transversal number of \(G\) is at most \(q+1\), consider an arbitrary minimal clique transversal \(S\) of \(G\). Then \(S\) must contain at least one of the vertices of degree \(q+1\); let \(u\) be such a vertex. Then, since \(S\) is minimal, it cannot contain any of the \(q\) neighbors of \(u\) that are of degree \(1\) in \(G\). Thus, \(S\) either consists of the two vertices of degree \(q+1\) in \(G\), or contains \(u\) and all its non-neighbors in \(G\). In either case, \(S\) is of size at most \(q+1\).
It remains to show that the induced matching number of the vertex-clique incidence graph of \(G\) is at least \(2q\). As usual, let \(B_{G}=(X,Y;E)\), with \(X=V(G)\) and \(Y=\mathcal{C}_{G}\). Since \(G\) is triangle-free and has no isolated vertices, the maximal cliques of \(G\) are exactly the edges of \(G\), and the edges of \(B_{G}\) are the pairs \(\{x,e\}\) where \(x\in V(G)\), \(e\in E(G)\), and \(x\) is an endpoint of \(e\). Thus, \(B_{G}\) is isomorphic to the graph obtained from \(G\) by subdividing each edge. Let \(M\) be the set of edges of \(B_{G}\) of the form \(\{x,e\}\) where \(x\) is a vertex in \(B_{G}\) of degree \(1\) and \(e\) is the unique edge incident with it. Then \(M\) is an induced matching in \(B_{G}\) of size \(2q\) and hence the induced matching number of \(B_{G}\) is at least \(2q\).
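The two bounds in this proof can be confirmed by brute force for small \(q\); the following sketch (ours, exponential and for illustration only) does so for \(q=2\):

```
from itertools import combinations

def double_star(q):
    # Two stars K_{1,q} whose centers are joined by an edge (the graph G above).
    adj = {('c', 0): {('c', 1)}, ('c', 1): {('c', 0)}}
    for s in range(2):
        for k in range(q):
            leaf = ('l', s, k)
            adj[leaf] = {('c', s)}
            adj[('c', s)].add(leaf)
    return adj

def upper_ct_number(adj):
    # Valid here since G is triangle-free: the maximal cliques are the edges.
    V = list(adj)
    cliques = {frozenset((u, v)) for u in adj for v in adj[u]}
    def is_ct(S):
        return all(C & S for C in cliques)
    return max(len(S) for r in range(len(V) + 1)
               for S in map(set, combinations(V, r))
               if is_ct(S) and all(not is_ct(S - {v}) for v in S))

def subdivision(adj):
    # B_G of a triangle-free graph G: subdivide every edge of G once.
    E = {frozenset((u, v)) for u in adj for v in adj[u]}
    B = {v: set() for v in adj}
    for e in E:
        B[e] = set(e)
        for v in e:
            B[v].add(e)
    return B

def induced_matching_number(B):
    E = list({frozenset((u, v)) for u in B for v in B[u]})
    def induced(M):
        ends = set()
        for e in M:
            if ends & e:
                return False
            ends |= e
        return {frozenset((u, v)) for u in ends for v in B[u] & ends} == set(M)
    return max(len(M) for r in range(1, len(E) + 1)
               for M in combinations(E, r) if induced(M))

G = double_star(2)                               # q = 2
print(upper_ct_number(G))                        # 3  (= q + 1)
print(induced_matching_number(subdivision(G)))   # 4  (= 2q)
```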
## 6 Conclusion
We performed a systematic study of the complexity of Upper Clique Transversal in various graph classes, showing, on the one hand, NP-completeness of the problem in the classes of chordal graphs, chordal bipartite graphs, and line graphs of bipartite graphs, and, on the other hand, linear-time solvability in the classes of split graphs and proper interval graphs.
Our work leaves open several questions.
**Question 1**.: _What is the complexity of computing a minimal clique transversal in a given graph?_
**Question 2**.: _What is the complexity of Upper Clique Transversal in the class of interval graphs?_
**Question 3**.: _For what graphs \(G\) does the upper clique transversal number equal the induced matching number of the vertex-clique incidence graph?_
While not all interval graphs have the stated property, Theorem 5.13 shows that the property is satisfied by every proper interval graph. The class of graphs with this property is larger, though; for example, all cycles have the property.
The upper clique transversal number is a trivial upper bound for the clique transversal number; however, the ratio between these two parameters can be arbitrarily large in general. For instance, in the complete bipartite graph \(K_{1,q}\) the former one has value \(q\) while the latter one has value \(1\). This motivates the following.
**Question 4**.: _For which graph classes is the ratio (or even the difference) between the clique transversal number and the upper clique transversal number bounded?_
**Question 5**.: _What is the parameterized complexity of Upper Clique Transversal (with respect to its natural parameterization)?_
Using hypergraph techniques, it can be shown that the problem is in XP, see [12].
### Acknowledgements
We are grateful to Nikolaos Melissinos and Haiko Müller for their helpful comments. The work of the first named author is supported in part by the Slovenian Research Agency (I0-0035, research program P1-0285 and research projects N1-0102, N1-0160, J1-3001, J1-3002, J1-3003, J1-4008, and J1-4084). Part of the work was done while the author was visiting Osaka Prefecture University in Japan, under the operation Mobility of Slovene higher education teachers 2018-2021, co-financed by the Republic of Slovenia and the European Union under the European Social Fund. The second named author is partially supported by JSPS KAKENHI Grant Numbers JP17K00017, 20H05964, and 21K11757, Japan.
|
2309.03319 | An observation about conformal points on surfaces | We study the existence of points on a compact oriented surface at which a
symmetric bilinear two-tensor field is conformal to a Riemannian metric. We
give applications to the existence of conformal points of surface
diffeomorphisms and vector fields. | Peter Albers, Gabriele Benedetti | 2023-09-06T19:00:09Z | http://arxiv.org/abs/2309.03319v1 | # An observation about conformal points on surfaces
###### Abstract.
We study the existence of points on a compact oriented surface at which a symmetric bilinear two-tensor field is conformal to a Riemannian metric. We give applications to the existence of conformal points of surface diffeomorphisms and vector fields.
Key words and phrases: Conformal points, Poincaré-Hopf, line fields. 2020 Mathematics Subject Classification: 53C18 (Primary), 57R22 (Secondary)
## 1. Statement of results
### Conformal points
Let \(\Sigma\) be a compact, oriented surface, possibly with non-empty boundary \(\partial\Sigma\). Denote by \(C_{1},\dots,C_{n}\) the boundary components of \(\Sigma\) with the induced orientation. Let \(\operatorname{Sym}((T^{*}\Sigma)^{\otimes 2})\to\Sigma\) be the bundle of symmetric bilinear tensors on \(\Sigma\). Fix a Riemannian metric \(g\) on \(\Sigma\), that is, a positive-definite section of \(\operatorname{Sym}((T^{*}\Sigma)^{\otimes 2})\to\Sigma\).
**Definition 1.1**.: We say that a section \(h\) of \(\operatorname{Sym}((T^{*}\Sigma)^{\otimes 2})\to\Sigma\) is _conformal to \(g\) at the point \(z\in\Sigma\)_ if there exists \(c\in\mathbb{R}\) such that \(h_{z}=cg_{z}\).
Motivated by [1], the goal of this note is to study the set of points
\[\mathcal{C}(g,h)\subset\Sigma\]
at which \(h\) is conformal to \(g\) (see Theorem 1.2) in order to investigate conformal points of diffeomorphisms \(F\colon\Sigma\to\Sigma\), in which case \(h=F^{*}g\) (see Theorem 1.4 and Corollary 1.5), and of vector fields (see Corollary 1.7).
Our main observation is that \(\mathcal{C}(g,h)\) is the zero-set of a section \(H^{a}\) in a distinguished vector bundle \(E^{a}\to\Sigma\) over the surface, which we describe now. Let \(\operatorname{End}(T\Sigma)\to\Sigma\) be the bundle of endomorphisms of \(T\Sigma\) and let
\[E^{a}\subset\operatorname{End}(T\Sigma) \tag{1.1}\]
be the subbundle of those endomorphisms which are symmetric with respect to \(g\) and have zero trace. For all \(z\in\Sigma\), an element of \(R\in E^{a}_{z}\) has the matrix expression
\[\begin{pmatrix}a&b\\ b&-a\end{pmatrix}\qquad a,b\in\mathbb{R},\]
with respect to a positive, orthonormal basis of \(T_{z}\Sigma\). Thus, any non-zero element \(R\in E^{a}_{z}\) is, up to a positive scalar multiple, a reflection \(R\colon T_{z}\Sigma\to T_{z}\Sigma\) along a line in \(T_{z}\Sigma\). In particular, the \(S^{1}\)-bundle associated with \(E^{a}\) is the bundle of unoriented lines in \(T\Sigma\). This \(S^{1}\)-bundle is doubly covered by the bundle of oriented lines in \(T\Sigma\) which, in turn, is the unit-tangent bundle of \(\Sigma\), that is, the \(S^{1}\)-bundle associated with \(T\Sigma\to\Sigma\). The above discussion shows that \(E^{a}\) is an oriented plane bundle over \(\Sigma\) with Euler number
\[e(E^{a})=2e(T\Sigma)=2\chi(\Sigma). \tag{1.2}\]
Given a symmetric bilinear two-tensor field \(h\) over \(\Sigma\), let \(H\) be the section of \(\operatorname{End}(T\Sigma)\) representing \(h\) with respect to \(g\), namely
\[g_{z}(u,H_{z}v)=h_{z}(u,v),\qquad\forall z\in\Sigma,\ \forall\,u,v\in T_{z}\Sigma. \tag{1.3}\]
We denote by
\[H^{a}:=H-\frac{\operatorname{tr}H}{2}I \tag{1.4}\]
the section of \(E^{a}\) corresponding to the trace-free part of \(H\). Here \(I\) is the section of \(\operatorname{End}(T\Sigma)\) such that \(I_{z}\) is the identity of \(T_{z}\Sigma\) for all \(z\in\Sigma\).
Thus, we conclude that
\[z\in\mathcal{C}(g,h)\quad\iff\quad H_{z}^{a}=0.\]
From this relationship we see that, generically, \(h\) has only finitely many conformal points and all of them lie in the interior of \(\Sigma\). In this case, we can use the Poincaré-Hopf Theorem for unoriented line fields on oriented surfaces with boundary to algebraically count conformal points. To give the precise statement, let us introduce some notation under the assumption that \(\mathcal{C}(g,h)\) is finite and \(\mathcal{C}(g,h)\subset\Sigma\setminus\partial\Sigma\). For each \(z\in\mathcal{C}(g,h)\), we define
\[\operatorname{ind}_{(g,h)}(z)\in\mathbb{Z}\]
as the index of \(z\) seen as a zero of the section \(H^{a}\) of \(E^{a}\to\Sigma\). We count the elements in \(\mathcal{C}(g,h)\) algebraically via the integer
\[[\mathcal{C}(g,h)]:=\sum_{z\in\mathcal{C}(g,h)}\operatorname{ind}_{(g,h)}(z) \in\mathbb{Z}. \tag{1.5}\]
Moreover, for every boundary component \(C_{i}\) of \(\Sigma\), with \(i=1,\dots,n\), we define
\[w_{i}(g,h)\in\mathbb{Z} \tag{1.6}\]
as the winding number of the section \(H^{a}|_{C_{i}}\) with respect to \(R^{i}\in E^{a}\), where \(R^{i}(z)\) is the reflection along the line \(T_{z}\partial\Sigma\subset T_{z}\Sigma\) for \(z\in C_{i}\).
**Theorem 1.2**.: _Let \(g\) be a Riemannian metric on a compact, oriented surface \(\Sigma\). Then the following two statements hold._
1. _For any symmetric bilinear two-tensor field_ \(h\) _over_ \(\Sigma\) _such that_ \(\mathcal{C}(g,h)\) _is finite and_ \(\mathcal{C}(g,h)\subset\Sigma\setminus\partial\Sigma\)_, the equality_ \[[\mathcal{C}(g,h)]=2\chi(\Sigma)+\sum_{i=1}^{n}w_{i}(g,h)\] (1.7) _holds, where_ \(\chi(\Sigma)\) _denotes the Euler characteristic of_ \(\Sigma\)_._
2. _Let_ \(\mathcal{C}\subset\Sigma\setminus\partial\Sigma\) _be a finite set of points,_ \(\iota\colon\mathcal{C}\to\mathbb{Z}\) _an arbitrary function, and_ \(w_{1},\dots,w_{n}\in\mathbb{Z}\) _arbitrary integers satisfying_ \[\sum_{z\in\mathcal{C}}\iota(z)=2\chi(\Sigma)+\sum_{i=1}^{n}w_{i}.\] (1.8) _Then there exists a symmetric bilinear two-tensor field_ \(h\) _over_ \(\Sigma\) _such that_ \(\mathcal{C}=\mathcal{C}(g,h)\)_,_ \(\iota(z)=\operatorname{ind}_{(g,h)}(z)\) _for all_ \(z\in\mathcal{C}\) _and_ \(w_{i}=w_{i}(g,h)\) _for all_ \(i=1,\dots,n\)_._
**Remark 1.3**.: For the convenience of the reader, we give the short proof of Theorem 1.2 in Section 2 although this can be deduced from the literature. For statement (1), we refer to [10], [12] and [5] which deal with the Poincaré-Hopf Theorem for oriented line fields on surfaces with boundary and to [7, III.2.2], [9], [8] and [3] which deal with the Poincaré-Hopf Theorem for unoriented line fields on surfaces without boundary. For statement (2), we refer to the Extension Theorem in [6, p. 145]. Finally, we notice that, passing to the orientation double cover, Theorem 1.2 also holds for non-orientable surfaces.
We discuss now two situations where the set \(\mathcal{C}(g,h)\) naturally appears.
### Carathéodory's conjecture
First, let us consider a smooth embedding \(\rho\colon S^{2}\to\mathbb{R}^{3}\). Here \(\Sigma=S^{2}\) and we take \(g^{\rho}\) and \(h^{\rho}\) to be the first and the second fundamental form of the embedding \(\rho\), respectively, with respect to the ambient Euclidean metric. The elements of \(\mathcal{C}(g^{\rho},h^{\rho})\) are the so-called umbilical points, namely points at which the two principal curvatures of the embedding coincide. In this case, (1.7) yields the well-known result that \([\mathcal{C}(g^{\rho},h^{\rho})]=4\), namely that the algebraic count of umbilical points is equal to four. For example, when \(\rho\) is an ellipsoid of revolution, \(\mathcal{C}(g^{\rho},h^{\rho})\) consists exactly of the two poles, both having index two. In general, it is natural to ask which further conditions the points \(z\in\mathcal{C}(g^{\rho},h^{\rho})\) and their indices must satisfy besides \([\mathcal{C}(g^{\rho},h^{\rho})]=4\). For instance, Carathéodory's conjecture [4, 13] asserts that convexity of the embedding \(\rho\) implies \(\operatorname{ind}(z)\leq 2\) for all \(z\in\mathcal{C}(g^{\rho},h^{\rho})\), and, in particular, entails that \(\mathcal{C}(g^{\rho},h^{\rho})\) always contains at least two points.
### Conformal points of a diffeomorphism
The second situation in which \(\mathcal{C}(g,h)\) naturally appears is when \(h=F^{*}g\), where \(F\colon\Sigma\to\Sigma\) is any orientation-preserving diffeomorphism of \(\Sigma\). In this case, \(\mathcal{C}(g,F^{*}g)\) is the set of so-called conformal points of \(F\) (with respect to \(g\)). Assuming that \(\mathcal{C}(g,F^{*}g)\) is finite and \(\mathcal{C}(g,F^{*}g)\subset\Sigma\setminus\partial\Sigma\), we are going to give a formula for \(w_{i}(g,F^{*}g)\) in terms of the behavior of \(F\) at the boundary. In order to state the result, for \(i=1,\dots,n\) let \(\nu_{i}\colon C_{i}\to T\Sigma\) be the outward normal at the boundary component \(C_{i}\) and \(\tau_{i}\colon C_{i}\to T\Sigma\) be the unit vector tangent to \(C_{i}\) in the positive direction. The pair \((\nu_{i},\tau_{i})\) then forms a positive orthonormal frame for \(g\) along \(C_{i}\). We trivialize \(T\Sigma|_{\partial\Sigma}=\sqcup_{i}C_{i}\times\mathbb{R}^{2}\) using \((\nu_{i},\tau_{i})\) at \(C_{i}\), \(i=1,\dots,n\). Since \(F\) maps boundary components to boundary components (not necessarily the same) we can express \(\mathrm{d}F\) in this trivialization as
\[\mathrm{d}F\big{|}_{C_{i}}=:N_{i}=c_{i}\begin{pmatrix}a_{i}&0\\ b_{i}&1\end{pmatrix}. \tag{1.9}\]
Here \(a_{i},c_{i}\colon C_{i}\to(0,\infty)\), \(b_{i}\colon C_{i}\to\mathbb{R}\) and \((a_{i},b_{i})\) is never equal to \((1,0)\) since \(F\) has no conformal point on \(C_{i}\) by assumption.
**Theorem 1.4**.: _For all \(i=1,\dots,n\) we have the equality_
\[w_{i}(g,F^{*}g)=w(a_{i}-1,b_{i}), \tag{1.10}\]
_where \(w(a_{i}-1,b_{i})\) is the winding number of the curve \((a_{i}-1,b_{i})\colon C_{i}\cong S^{1}\to\mathbb{R}^{2}\setminus\{0\}\) around the origin._
This formula, which will be proved in Section 3, allows us to compute \(w_{i}(g,F^{*}g)\) if we understand the behavior of \(F\) at points on the boundary sufficiently well. A remarkable example of this phenomenon is illustrated by the next corollary.
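Theorem 1.4 reduces the boundary terms to planar winding numbers, which are easy to evaluate numerically. The following sketch (ours) accumulates signed angles between consecutive samples of a closed curve avoiding the origin; the sampling is assumed fine enough that consecutive samples subtend an angle less than \(\pi\):

```
import math

def winding_number(points):
    # Winding number around the origin of a sampled closed curve, obtained
    # by summing the signed angles between consecutive position vectors.
    total = 0.0
    for k in range(len(points)):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % len(points)]
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

# As in Corollary 1.5 below: a_i = 1 pins the curve (a_i - 1, b_i) to the
# b-axis, so the winding number vanishes.
pts = [(0.0, 2.0 + math.sin(2 * math.pi * k / 100)) for k in range(100)]
print(winding_number(pts))   # 0
```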
**Corollary 1.5**.: _If \(F\colon\Sigma\to\Sigma\) is the identity on the boundary and preserves an area form on \(\Sigma\), then_
\[w_{i}(g,F^{*}g)=0,\qquad\forall\,i=1,\dots,n. \tag{1.11}\]
_It follows that for this type of diffeomorphisms_
\[[\mathcal{C}(F)]=2\chi(\Sigma), \tag{1.12}\]
_that is, the algebraic count of conformal points of such an \(F\) is twice the Euler characteristic._
Proof.: By (1.10) the assertion is equivalent to showing \(w(a_{i}-1,b_{i})=0\). Since \(F\) is the identity at the boundary we conclude that \(\mathrm{d}F\cdot\tau_{i}=\tau_{i}\) and thus \(c_{i}=1\) in (1.9). Since \(F\) preserves an area form, it follows that \(\det N_{i}=1\), which implies that \(a_{i}=1\) in (1.9). Therefore, the curve \((a_{i}-1,b_{i})=(0,b_{i})\) is contained in the \(y\)-axis and does not pass through \(0\). We conclude that its winding number around the origin \(w(a_{i}-1,b_{i})\) vanishes.
**Remark 1.6**.: Equation (1.12) was proved in [1] when \(\Sigma=D^{2}\) and \(F\) satisfies some additional conditions, which hold, for instance, when \(F\) is \(C^{1}\)-close to the identity.
If we linearize the property of being a conformal point for a diffeomorphism at the identity of \(\Sigma\), we get a corresponding condition for conformal points of vector fields on \(\Sigma\). This condition is easier phrased after reinterpreting conformality in terms of complex geometry, as we explain next.
### Conformal points and complex structures
Let \(\jmath\) be the complex structure associated with the Riemannian metric \(g\) and the orientation of \(\Sigma\). In other words, \(\jmath\) yields a section of \(\operatorname{End}(T\Sigma)\) such that \(v\) and \(\jmath_{z}v\) form a positive, orthogonal basis of \(T_{z}\Sigma\) for all \(z\in\Sigma\) and all \(v\in T_{z}\Sigma\setminus\{0\}\). Thus, \(\jmath_{z}\) has the matrix expression
\[\begin{pmatrix}0&-1\\ 1&0\end{pmatrix} \tag{1.13}\]
with respect to a positive, orthonormal basis of \(T_{z}\Sigma\). An endomorphism \(H\colon T_{z}\Sigma\to T_{z}\Sigma\) commutes with \(j_{z}\) if and only if \(H\) has the matrix expression
\[\begin{pmatrix}a&-b\\ b&a\end{pmatrix}\qquad a,b\in\mathbb{R},\]
in such a basis. In particular, we deduce that \(H\) is, up to a scalar multiple, a rotation matrix. Analogously, \(H\) anticommutes with \(j_{z}\) if and only if \(H\) has the matrix expression
\[\begin{pmatrix}a&b\\ b&-a\end{pmatrix}\qquad a,b\in\mathbb{R},\]
in such a basis. In particular, we deduce that \(E^{a}\), see (1.1), is exactly the bundle of endomorphisms anticommuting with \(j\). Therefore, if we denote by \(E^{c}\to\Sigma\) the bundle of endomorphisms commuting with \(j\), we get the splitting
\[\operatorname{End}(T\Sigma)=E^{c}\oplus E^{a}. \tag{1.14}\]
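The splitting (1.14) can be checked pointwise by elementary matrix algebra; here is a small sanity check (ours) on sample values:

```
# Quick check (ours): [[a, b], [b, -a]] anticommutes with j = [[0, -1], [1, 0]],
# while [[a, -b], [b, a]] commutes with it.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(A):
    return [[-x for x in row] for row in A]

J = [[0, -1], [1, 0]]
for a, b in [(1, 0), (0, 1), (2, -3)]:
    H_anti = [[a, b], [b, -a]]
    H_comm = [[a, -b], [b, a]]
    assert matmul(H_anti, J) == neg(matmul(J, H_anti))   # H j = -j H
    assert matmul(H_comm, J) == matmul(J, H_comm)        # H j = j H
print("pointwise splitting verified on samples")
```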
Furthermore, as \(j\)-complex line bundle we can write
\[E^{a}\cong T\otimes\overline{T^{*}},\qquad T:=T^{(1,0)}\Sigma,\]
where \(T^{(1,0)}\Sigma\) is the holomorphic tangent bundle of \(\Sigma\) and \(\overline{T^{*}}\) denotes the conjugate of the dual bundle of \(T\). With this identification, a local section of \(E^{a}\) is given by \(\frac{\partial}{\partial x}\otimes\mathrm{d}\bar{z}\) where \(z\) is a local holomorphic coordinate compatible with \(j\). Thus, the Euler number of \(E^{a}\) as real oriented plane bundle coincides with its Chern number as complex line bundle. Using that \(c_{1}(T)=\chi(\Sigma)\), this gives another derivation of (1.2) by computing
\[c_{1}(E^{a})=c_{1}(T\otimes\overline{T^{*}})=c_{1}(T)-c_{1}(T^{*})=c_{1}(T)+c _{1}(T)=2c_{1}(T)=2\chi(\Sigma).\]
Finally, let us assume that \(z\) is a conformal point of an orientation-preserving diffeomorphism \(F\colon\Sigma\to\Sigma\). Then,
\[(F^{*}g)_{z}=cg_{z}\text{ for some }c>0. \tag{1.15}\]
If we denote by \(M\) the matrix representation of \(\mathrm{d}_{z}F\) with respect to positive, orthonormal bases of \(T_{z}\Sigma\) and \(T_{F(z)}\Sigma\), then (1.15) can be rewritten as
\[M^{T}M=cI\,.\]
This condition is equivalent to saying that \(M\) is, up to a scalar multiple, a rotation matrix. Since \(j_{z}\) and \(j_{F(z)}\) are represented by the matrix (1.13), we see that \(\mathrm{d}_{z}Fj_{z}=j_{F(z)}\mathrm{d}_{z}F\). We conclude that \(z\) is a conformal point of \(F\) if and only if \(F\) is \(j\)-holomorphic at \(z\) with respect to \(j\)-holomorphic coordinates around \(z\) and \(F(z)\).
### Conformal points of vector fields
Let \(f\) be a vector field on \(\Sigma\). Let \(F_{t}:\Sigma\to\Sigma\) be the time-\(t\) map of the flow of \(f\). Suppose that \(z\in\Sigma\) is a point such that \(F_{t}(z)\in\mathcal{C}(g,(F_{t})^{*}g)\) for all \(t\) close to zero. In particular, \(F_{t}\) is \(j\)-holomorphic at \(z\) in a local \(j\)-holomorphic chart for all small \(t\). Taking the derivative in \(t\) at \(t=0\), we conclude that the vector field \(f\) is \(j\)-holomorphic at \(z\). In other words \(\bar{\partial}_{j}f\) is a section of \(E^{a}\) which vanishes at \(z\). Here, \(\bar{\partial}_{j}\) denotes the Cauchy-Riemann operator sending sections of the holomorphic tangent bundle \(T=T^{(1,0)}\Sigma\) to sections of \(T\otimes\overline{T^{*}}\cong E^{a}\), and \(f\) is identified with its image under the isomorphism
\[T\Sigma\to T^{\mathbb{C}}\Sigma\cong T^{(1,0)}\Sigma\oplus T^{(0,1)}\Sigma\to T ^{(1,0)}\Sigma,\]
where \(T^{\mathbb{C}}\Sigma\) is the complexification of \(T\Sigma\).
Let \(\mathcal{C}(j,f)\) be the set of zeros of \(\bar{\partial}_{j}f\). If \(\mathcal{C}(j,f)\) is finite and \(\mathcal{C}(j,f)\subset\Sigma\setminus\partial\Sigma\), then we can associate an index \(\operatorname{ind}_{(j,f)}(z)\) to each \(z\in\mathcal{C}(j,f)\) and a winding number \(w_{i}(j,f)\) representing the relative winding number of \(\bar{\partial}_{j}f\) with respect to the canonical section \(R_{i}\) along \(C_{i}\) for every \(i=1,\dots,n\). Defining the algebraic count
\[[\mathcal{C}(j,f)]:=\sum_{z\in\mathcal{C}(j,f)}\operatorname{ind}_{(j,f)}(z),\]
we get the following consequence of Theorem 1.2.(1).
**Corollary 1.7**.: _Let \(\jmath\) be a complex structure on a compact surface \(\Sigma\) and \(f\) a vector field on \(\Sigma\) such that \(\mathcal{C}(\jmath,f)\) is finite and \(\mathcal{C}(\jmath,f)\subset\Sigma\setminus\partial\Sigma\). Then the equation_
\[[\mathcal{C}(\jmath,f)]=2\chi(\Sigma)+\sum_{i=1}^{n}w_{i}(\jmath,f)\]
_holds._
### An open question
Given any Riemannian metric \(g\) on \(\Sigma\) and diffeomorphism \(F\colon\Sigma\to\Sigma\), it is interesting to ask which further restrictions the points \(z\) of \(\mathcal{C}(g,F^{*}g)\), their indices \(\operatorname{ind}_{(g,F^{*}g)}(z)\) and the numbers \(w_{i}(g,F^{*}g)\) must satisfy besides equation (1.7). This question is related to the uniformization theorem for compact surfaces with boundary via Theorem 1.2.(2). For instance, given any two metrics \(g\) and \(h\) on \(\Sigma=S^{2}\) or \(\Sigma=D^{2}\), we can find a diffeomorphism \(F\colon\Sigma\to\Sigma\) such that \(F^{*}g\) and \(h\) are conformal at every point [11, Theorem 1]. Thus, \(\mathcal{C}(g,F^{*}g)=\mathcal{C}(g,h)\), \(\operatorname{ind}_{(g,F^{*}g)}(z)=\operatorname{ind}_{(g,h)}(z)\) for every \(z\) in this set, and \(w_{i}(g,F^{*}g)=w_{i}(g,h)\) for all \(i=1,\dots,n\). As a consequence of Theorem 1.2.(2), there are no further restrictions in this case.
On the other hand, on a general surface \(\Sigma\) there are metrics \(g\) and \(h\) such that \(h\) and \(F^{*}g\) are not conformal at all points, no matter how we choose the diffeomorphism \(F\). The easiest examples where this happens are when \(\Sigma=\mathbb{T}^{2}\), or when \(\Sigma=D^{2}\) and we require in addition the diffeomorphism \(F\) to be the identity at the boundary. For instance, on \(\mathbb{T}^{2}\) conformal classes of metrics \(g\) are classified by lattices \(\Gamma\) in \(\mathbb{C}\), up to Euclidean isometries and homotheties, where \(g\) is the Riemannian metric on \(\mathbb{T}^{2}=\mathbb{C}/\Gamma\) induced by the Euclidean metric on \(\mathbb{C}\). To get an example on the disc, let us identify \(D^{2}\) with the unit Euclidean disc in \(\mathbb{C}\). Let \(g\) be the Euclidean metric on \(D^{2}\). Recall that the group of diffeomorphisms \(\varphi\colon D^{2}\to D^{2}\) such that \(g\) and \(\varphi^{*}g\) are conformal at all points consists of the Möbius transformations preserving \(D^{2}\). Let \(G\colon D^{2}\to D^{2}\) be any diffeomorphism such that \(G|_{\partial D^{2}}\neq\varphi|_{\partial D^{2}}\) for all \(\varphi\). Such a \(G\) surely exists since if \(\varphi\) is not the identity, then \(\varphi\) can have at most two fixed points on the boundary. If we define \(h:=G^{*}g\), then there is no diffeomorphism \(F\colon D^{2}\to D^{2}\) which is the identity at the boundary and such that \(F^{*}h\) and \(g\) are conformal at every point. Indeed, if such an \(F\) exists, then \((G\circ F)^{*}g=F^{*}G^{*}g=F^{*}h\) is conformal to \(g\) at all points, which means that \(G\circ F=\varphi\) for some Möbius transformation \(\varphi\) preserving the disc. Since \(F\) is the identity at the boundary, this would imply that \(G=\varphi\) on the boundary. A contradiction.
Thus, in the case of \(\mathbb{T}^{2}\) and of \(D^{2}\), it is meaningful to ask if there is a metric \(g\) and a diffeomorphism \(F\) (being the identity on the boundary in the case of \(D^{2}\)) such that \(\mathcal{C}(g,F^{*}g)\) is empty. If one can find a vector field \(f\) (vanishing on the boundary in the case of \(D^{2}\)) such that \(\mathcal{C}(\jmath,f)=\varnothing\), then \(\mathcal{C}(g,F^{*}_{t}g)=\varnothing\) for small \(t\neq 0\), as well, where \(F_{t}\) is the time-\(t\) map of the flow of \(f\).
In the case of \(\Sigma=\mathbb{T}^{2}\), we can readily find such a vector field for all conformal classes of complex structures. Indeed, let \(\mathbb{T}^{2}=\mathbb{C}/\Gamma\) where \(\Gamma\) is a lattice in \(\mathbb{C}\) and let \(\jmath\) be the complex structure on \(\Sigma\) induced by that on \(\mathbb{C}\). Up to Euclidean isometries and homotheties, we can assume that \(\Gamma\) is generated by \(1,\tau\in\mathbb{C}\), where \(\tau=a+ib\) with \(b>0\). Consider the vector field which in a global holomorphic trivialization of \(T^{(1,0)}\Sigma\) is written as \(f(z)=e^{\frac{2\pi i}{b}\operatorname{Im}z}\). Notice that \(f\) is well-defined since it is invariant under translations by \(1\) and \(\tau\). Moreover,
\[\bar{\partial}_{\jmath}f(z)=\frac{\partial}{\partial\bar{z}}e^{\frac{\pi}{b}(z -\bar{z})}=-\frac{\pi}{b}f(z),\]
which is nowhere vanishing.
However, we do not know if such a vector field \(f\) exists on \(D^{2}\). Since vector fields on \(D^{2}\) correspond to functions in a global trivialization of \(T^{(1,0)}D^{2}\), we have the following open question.
**Question 1.8**.: Does there exist a smooth function \(f\colon D^{2}\to\mathbb{C}\) satisfying the following two conditions?
1. \(\forall z\in D^{2},\quad\frac{\partial f}{\partial\bar{z}}(z)\neq 0\).
2. \(\forall z\in\partial D^{2},\quad f(z)=0\).
### Plan of the paper
Theorem 1.2 is proven in Section 2. Theorem 1.4 is proven in Section 3.
### Acknowledgments
P.A. and G.B. are partially supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy EXC2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster), the Collaborative Research Center SFB/TRR 191 - 281071066 (Symplectic Structures in Geometry, Algebra and Dynamics), and the Research Training Group RTG 2229 - 281869850 (Asymptotic Invariants and Limits of Groups and Spaces). G.B. warmly thanks Thomas Rot for stimulating discussions around the topics of this paper.
## 2. Proof of Theorem 1.2
We prove Theorem 1.2.(1). Let \(h\) be a symmetric bilinear two-tensor field over \(\Sigma\) such that \(\mathcal{C}(g,h)\) is finite and \(\mathcal{C}(g,h)\subset\Sigma\setminus\partial\Sigma\). Recall the definition of \(H\) and \(H^{a}\) from (1.3) and (1.4).
If \(\Sigma\) has no boundary, then \([\mathcal{C}(g,h)]=e(E^{a})=2\chi(\Sigma)\) by the Poincaré-Hopf Theorem for oriented plane bundles [2, Theorem 11.17]. If \(\Sigma\) has boundary, let \(\hat{\Sigma}\) be the closed, oriented surface that we obtain from \(\Sigma\) by gluing discs \(D_{1},\dots,D_{n}\) along the boundary components \(C_{1},\dots,C_{n}\). The gluing maps \(D^{2}\to D_{i}\) have the Euclidean disc
\[D^{2}=\{(x,y)\in\mathbb{R}^{2}\ |\ x^{2}+y^{2}\leq 1\}\]
as domain and send the boundary \(\partial D^{2}\) traversed in the positive sense to \(\bar{C}_{i}\), that is, to \(C_{i}\) traversed in the negative sense. In this way, the gluing maps are positively oriented with respect to the orientation on \(\hat{\Sigma}\).
We let \(\hat{g}\) be any extension of \(g\) to \(\hat{\Sigma}\) as a Riemannian metric. On the bundle \(E^{a}|_{D_{i}}\) we choose a nowhere vanishing section \(M^{i}\) defined as the reflection along the direction of \(\partial_{x}\in TD^{2}\). Let \(w_{\bar{C}_{i}}(H^{a},M^{i})\) be the winding number of \(H^{a}\) with respect to \(M^{i}\) along \(C_{i}\) traversed in the negative direction. Then
\[w_{\bar{C}_{i}}(H^{a},M^{i})=w_{\bar{C}_{i}}(H^{a},R^{i})+w_{\bar{C}_{i}}(R^{ i},M^{i})=-w_{C_{i}}(H^{a},R^{i})+w_{\partial D^{2}}(R^{i},M^{i})=-w_{i}(g,h)+2,\]
where we have used that \(\bar{C}_{i}\) is identified with \(\partial D^{2}\) and that the unoriented line tangent to \(\partial D^{2}\) rotates twice with respect to the horizontal unoriented line. By the Extension Theorem in [6, p. 145], it is possible to construct an extension \(\hat{h}\) of \(h\) to \(\hat{\Sigma}\) such that \(\mathcal{C}(\hat{g},\hat{h})=\mathcal{C}(g,h)\cup\{z_{1},\dots,z_{n}\}\), where \(z_{1},\dots,z_{n}\) are the centers of the discs \(D_{1},\dots,D_{n}\) and
\[\operatorname{ind}_{(\hat{g},\hat{h})}(z_{i})=w_{\bar{C}_{i}}(H^{a},M^{i})=2- w_{i}(g,h). \tag{2.1}\]
Therefore,
\[[\mathcal{C}(g,h)]=[\mathcal{C}(\hat{g},\hat{h})]-\sum_{i=1}^{n}\operatorname {ind}_{(\hat{g},\hat{h})}(z_{i})=2\chi(\hat{\Sigma})-2n+\sum_{i=1}^{n}w_{i}(g, h)=2\chi(\Sigma)+\sum_{i=1}^{n}w_{i}(g,h),\]
where we used that \(\chi(\Sigma)+n=\chi(\hat{\Sigma})\) as follows from the formula \(\chi(A\cup B)=\chi(A)+\chi(B)-\chi(A\cap B)\). We have thus completed the proof of Theorem 1.2.(1).
Let us first prove Theorem 1.2.(2) when \(\Sigma\) has no boundary. Let us consider an embedded closed disc \(D\) containing \(\mathcal{C}\) in its interior. There is a section \(H^{\rm out}\) of \(E^{a}\) which is nowhere vanishing on \(\Sigma\setminus\hat{D}\) and there is a section \(H^{\rm in}\) which is nowhere vanishing over \(D\). The winding number of \(H^{\rm out}\) with respect to \(H^{\rm in}\) along \(\partial D\) is \(w(H^{\rm out},H^{\rm in})=2\chi(\Sigma)\). For each \(z\in\mathcal{C}\) consider an embedded closed disc \(D^{z}\) centered at \(z\) and contained in \(\hat{D}\). After shrinking the discs \(D^{z}\) we may assume that they are pairwise disjoint. Let \(H^{z}\) be a section of \(E^{a}|_{D^{z}}\) which has just one zero at \(z\) with index \(\operatorname{ind}(z)=\imath(z)\). Thus the winding number of \(H^{z}\) with respect to \(H^{\rm in}\) along \(\partial D^{z}\) is \(w(H^{z},H^{\rm in})=\imath(z)\). Since \(2\chi(\Sigma)=\sum_{z\in\mathcal{C}}\imath(z)\) by assumption, we get
\[w(H^{\rm out},H^{\rm in})=\sum_{z\in\mathcal{C}}w(H^{z},H^{\rm in}).\]
Consider the surface
\[\tilde{\Sigma}:=D\setminus\bigcup_{z\in\mathcal{C}}\hat{D}^{z}.\]
It satisfies \(\partial\tilde{\Sigma}=\partial D\sqcup(\sqcup_{z\in\mathcal{C}}\overline{ \partial D^{z}})\). Since \(w(H^{\rm out},H^{\rm in})-\sum_{z\in\mathcal{C}}w(H^{z},H^{\rm in})=0\), the Extension Theorem in [6, p. 145] implies that there is a nowhere vanishing section \(\tilde{H}\) of \(E^{a}|_{\tilde{\Sigma}}\) coinciding
with \(H^{\rm out}\) on \(\partial D\) and with \(H^{z}\) on every \(\partial D^{z}\). Thus, \(H^{\rm out}\), \(\tilde{H}\), and all \(H^{z}\) glue together to yield a section \(H\) of \(E^{a}\to\Sigma\) having the desired properties.
When \(\Sigma\) has boundary, we construct the closed surface \(\hat{\Sigma}\) as in the proof of Theorem 1.2.(1). We define \(\hat{\mathcal{C}}:=\mathcal{C}\cup\{z_{1},\dots,z_{n}\}\) and \(\hat{\imath}\colon\hat{\mathcal{C}}\to\mathbb{Z}\) as the extension of \(\imath\) such that \(\hat{\imath}(z_{i})=2-w_{i}\) for all \(i=1,\dots,n\). Applying Theorem 1.2.(2) for closed surfaces to \(\hat{\Sigma}\) and \(\hat{\imath}\) and using (2.1) yields Theorem 1.2.(2) for the case of surfaces with boundary, as well.
## 3. Proof of Theorem 1.4
Let \(C_{i}\) be a component of \(\partial\Sigma\) for some \(i\in\{1,\dots,n\}\). There is \(j\in\{1,\dots,n\}\) such that \(F(C_{i})=C_{j}\). Recall that \({\rm d}F|_{C_{i}}\) is expressed by the matrix
\[N_{i}=c_{i}\begin{pmatrix}a_{i}&0\\ b_{i}&1\end{pmatrix}\]
with respect to the positive orthonormal bases \(\nu_{i},\tau_{i}\) and \(\nu_{j},\tau_{j}\).
The metric \(F^{*}g|_{C_{i}}\) is represented by the endomorphism \({\rm d}F^{T}\cdot{\rm d}F\) via (1.3). A computation shows that the matrix representing \({\rm d}F^{T}\cdot{\rm d}F\) with respect to the basis \(\nu_{i},\tau_{i}\) is
\[N_{i}^{T}N_{i}=c_{i}^{2}Q_{i},\quad\text{with}\quad Q_{i}=\begin{pmatrix}a_{i}^ {2}+b_{i}^{2}&b_{i}\\ b_{i}&1\end{pmatrix}.\]
We point out that the condition that \((a_{i},b_{i})\) is never equal to \((1,0)\) is equivalent to \(Q_{i}\) having distinct eigenvalues, since \(Q_{i}\) is symmetric. Let \(q_{i}\colon C_{i}\to\mathbb{R}P^{1}\cong\mathbb{R}/\pi\mathbb{Z}\) be the eigendirection of \(Q_{i}\) with larger eigenvalue. By (1.6), \(w_{i}(g,F^{*}g)\) is the degree of the map \(q_{i}\colon C_{i}\to\mathbb{R}/\pi\mathbb{Z}\). Therefore, our goal is to show that the degree of \(q_{i}\) is equal to the winding number of \((a_{i}-1,b_{i})\colon C_{i}\to\mathbb{R}^{2}\) around the origin. To this purpose, let us parametrize \(C_{i}\) in the positive direction by \(\theta_{i}\in\mathbb{R}/2\pi\mathbb{Z}\) and, to ease notation, let us drop all the subscripts \(i\) in what follows.
We may assume without loss of generality that the curve \((a-1,b)\colon\mathbb{R}/2\pi\mathbb{Z}\to\mathbb{R}^{2}\) intersects the positive real axis transversely. In this case \(w(a-1,b)\) counts the number of points \(\theta_{0}\in\mathbb{R}/2\pi\mathbb{Z}\) such that \((a(\theta_{0})-1,b(\theta_{0}))\) lies on the positive real axis, namely \(a(\theta_{0})>1\) and \(b(\theta_{0})=0\), with sign: the intersection is counted positively if \(b^{\prime}(\theta_{0})>0\) and negatively if \(b^{\prime}(\theta_{0})<0\).
On the other hand, the degree of \(q\) is computed using a regular value \(\xi\in\mathbb{R}/\pi\mathbb{Z}\) of \(q\). Being regular means that \(q^{\prime}(\theta_{0})\neq 0\) for all \(\theta_{0}\in q^{-1}(\xi)\). In this case, the degree of \(q\) counts the number of points \(\theta_{0}\in q^{-1}(\xi)\) with sign: the point \(\theta_{0}\) is counted positively if \(q^{\prime}(\theta_{0})>0\) and negatively if \(q^{\prime}(\theta_{0})<0\).
Choosing \(\xi=0\), we see that \(\theta_{0}\in q^{-1}(0)\) if and only if \((1,0)\in\mathbb{R}^{2}\) is an eigenvector of \(Q\) with eigenvalue larger than \(1\). This happens exactly when \(b(\theta_{0})=0\) and \(a(\theta_{0})>1\), that is, when \((a-1,b)\) intersects the positive real axis. Therefore, we will have shown that \(0\) is a regular value of \(q\) and that \(w(a-1,b)\) equals the degree of \(q\) once we check that, for every such \(\theta_{0}\), the numbers \(b^{\prime}(\theta_{0})\) and \(q^{\prime}(\theta_{0})\) have the same sign.
For this purpose, let \(v(\theta)=(x(\theta),y(\theta))\in\mathbb{R}^{2}\) be a generator of the line \(q(\theta)\) such that \(v(\theta_{0})=(1,0)\) and write \(\lambda(\theta)\) for the corresponding eigenvalue of \(Q(\theta)\), so that \(\lambda(\theta_{0})=a(\theta_{0})\). Then \(q^{\prime}(\theta_{0})=y^{\prime}(\theta_{0})\). To compute \(y^{\prime}(\theta_{0})\) we differentiate the vector equation \(\big{(}Q(\theta)-\lambda(\theta)I\big{)}v(\theta)=0\) at \(\theta_{0}\):
\[\big{(}Q(\theta_{0})-\lambda(\theta_{0})I\big{)}v^{\prime}(\theta_{0})+\big{(} Q^{\prime}(\theta_{0})-\lambda^{\prime}(\theta_{0})I\big{)}v(\theta_{0})=0.\]
Therefore, substituting the values for \(Q(\theta_{0})\), \(\lambda(\theta_{0})\) and \(Q^{\prime}(\theta_{0})\) and taking the \(y\)-component of the vector equation, we get
\[\big{(}1-a(\theta_{0})\big{)}y^{\prime}(\theta_{0})+b^{\prime}(\theta_{0})=0.\]
Thus,
\[q^{\prime}(\theta_{0})=y^{\prime}(\theta_{0})=\frac{b^{\prime}(\theta_{0})}{a( \theta_{0})-1}\]
from which we see that \(q^{\prime}(\theta_{0})\) and \(b^{\prime}(\theta_{0})\) have the same sign since \(a(\theta_{0})>1\). This completes the proof.
2301.00032 | Bayesian Learning for Dynamic Inference | Aolin Xu, Peng Guan | 2022-12-30T19:16:23Z | http://arxiv.org/abs/2301.00032v1

# Bayesian Learning for Dynamic Inference
###### Abstract
Traditional statistical inference is static, in the sense that the estimate of the quantity of interest does not affect the future evolution of the quantity. In some sequential estimation problems, however, the future values of the quantity to be estimated depend on the estimate of its current value. This type of estimation problem has been formulated as the dynamic inference problem. In this work, we formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn according to a random model parameter. We derive the optimal Bayesian learning rules, both offline and online, to minimize the inference loss. Moreover, learning for dynamic inference can serve as a meta problem, such that all familiar machine learning problems, including supervised learning, imitation learning and reinforcement learning, can be cast as its special cases or variants. Gaining a good understanding of this unifying meta problem thus sheds light on a broad spectrum of machine learning problems as well.
## 1 Introduction
### Dynamic inference
Traditional statistical estimation, or statistical inference in general, is static, in the sense that the estimate of the quantity of interest does not affect the future evolution of the quantity. In some sequential estimation problems, however, we do encounter the situation where the future value of the quantity to be estimated depends on the estimate of its current value. Examples include 1) stock price prediction by big investors, where the prediction of tomorrow's price of a stock affects tomorrow's investment decision, which further changes the stock's supply-demand status and hence its price the day after tomorrow; 2) interactive product recommendation, where the estimate of a user's preference based on the user's activity leads to certain product recommendations to the user, which would in turn shape the user's future activity and preference; 3) behavior prediction in multi-agent systems, e.g. vehicles on the road, where the estimate of an adjacent vehicle's intention based on its current driving situation leads to a certain action of the ego vehicle, which can change the future driving situation and intention of the adjacent vehicle. We may call such problems _dynamic inference_, which is formulated and studied in depth in [1]. It is shown that the problem of dynamic inference can be converted to a Markov decision process (MDP), and the optimal estimation strategy can be derived through dynamic programming. We give a brief overview of the problem of dynamic inference in Section 2.
### Learning for dynamic inference
There are two major ingredients in dynamic inference: the probability transition kernels of the quantity of interest given each observation, and the probability transition kernels of the next
observation given the current observation and the estimate of the current quantity of interest. We may call them the _quantity-generation model_ and the _observation-transition model_, respectively. Solving the dynamic inference problem requires the knowledge of the two models. However, in most of the practically interesting situations, we do not have such knowledge. Instead, we either have a training dataset from which we can learn these models or we can learn them on-the-fly during the inference.
In this work, we set up the learning problem in a _Bayesian framework_, and derive the optimal learning rules, both offline (Section 3) and online (Section 4), for dynamic inference under this framework. Specifically, we assume the unknown models are elements in some parametric families of probability transition kernels, and the unknown model parameters are randomly drawn according to some prior distributions. The goal is then to find an optimal Bayesian learning rule, which can return an estimation strategy that minimizes the inference loss. The approach we take toward this goal is converting the learning problem to an MDP with an augmented state, which consists of the current observation and a belief vector of the unknown parameters, and solving the MDP by dynamic programming over the augmented state space. The solution, though optimal, may still be computationally challenging unless the belief vector can be compactly represented. Nevertheless, it already has a greatly reduced search space compared to the original learning problem, and provides a theoretical basis for the design of more computationally efficient approximate solutions.
Perhaps equally importantly, the problem of learning for dynamic inference can serve as a meta problem, such that almost all familiar learning problems can be cast as its special cases or variants. Examples include supervised learning, imitation learning, and reinforcement learning, including bandit and contextual bandit problems. For instance, the Bayesian _offline_ learning for dynamic inference can be viewed as an extension of the _behavior cloning_ method in imitation learning [2, 3, 4], in that it not only learns the demonstrator's action-generation model, but simultaneously learns a policy based on the learned model to minimize the overall imitation error. As another instance, the quantity to be estimated in dynamic inference may be viewed as a latent variable of the loss function, so that the Bayesian _online_ learning for dynamic inference can be viewed as _Bayesian reinforcement learning_[5, 6, 7, 8], where an optimal policy is learned by estimating the unknown loss function. Learning for dynamic inference thus provides us with a unifying formulation of different learning problems. Having a good understanding of this problem is helpful for gaining better understandings of the other learning problems as well.
### Relation to existing works
The problems of dynamic inference and of learning for dynamic inference appear to be new, but they can be viewed from different angles and are related to a variety of existing problems. The most intimately related works are the original formulations of imitation learning [9]. The online learning for dynamic inference is closely related to and subsumes Bayesian reinforcement learning. Some recent studies on Bayesian reinforcement learning and interactive decision making include [10, 11].
A problem formulation with a similar spirit, in a minimax framework, appeared recently in [12]. That work sets up an adversarial online learning problem where the action in each round affects the future observed data. It may be viewed as _adversarial online_ learning for dynamic _minimax_ inference, from our standpoint. The advantage of the Bayesian formulation is that all the variables under consideration, including the unknown model parameters, are generated from some fixed joint distribution, thus the optimality of learning can be defined and the optimal learning rule can be derived. On the contrary, with the adversarial formulation, only certain definitions of regret can be
studied.
The overall optimality proof technique we adopt is similar to those used in solving partially observed MDPs (POMDPs) and Bayesian reinforcement learning over the augmented belief space [13, 14]. Several proofs are adapted from the rigorous exposition of the optimality of the belief-state MDP reformulation of the POMDP [15].
As mentioned in the previous subsection, Bayesian learning for dynamic inference can be viewed as a unifying formulation for Bayesian imitation learning and Bayesian reinforcement learning. Relevant imitation learning is surveyed in [16, 17, 18], and relevant reinforcement learning in [19, 20, 21, 8, 22].
## 2 Overview of dynamic inference
### Problem formulation
The problem of an \(n\)-round dynamic inference is to estimate \(n\) unknown quantities of interest \(Y^{n}\)_sequentially_ based on observations \(X^{n}\), where in the \(i\)th round of estimation, \(X_{i}\) depends on the observation \(X_{i-1}\) and the estimate \(\widehat{Y}_{i-1}\) of \(Y_{i-1}\) in the previous round, while the quantity of interest \(Y_{i}\) only depends on \(X_{i}\), and the estimate \(\widehat{Y}_{i}\) of \(Y_{i}\) can depend on everything available so far, namely \((X^{i},\widehat{Y}^{i-1})\), through an estimator \(\psi_{i}\) as \(\widehat{Y}_{i}=\psi_{i}(X^{i},\widehat{Y}^{i-1})\). The sequence of estimators \(\psi^{n}=(\psi_{1},\ldots,\psi_{n})\) constitute an _estimation strategy_. We assume to know the distribution \(P_{X_{1}}\) of the initial observation, and the probability transition kernels \((K_{X_{i}|X_{i-1},\widehat{Y}_{i-1}})_{i=2}^{n}\) and \((K_{Y_{i}|X_{i}})_{i=1}^{n}\). These distributions and \(\psi^{n}\) define a joint distribution of \((X^{n},Y^{n},\widehat{Y}^{n})\), all the variables under consideration. The Bayesian network of the random variables in dynamic inference with a Markov estimation strategy, meaning that each estimator has the form \(\psi_{i}:\mathsf{X}\to\widehat{\mathsf{Y}}\), is illustrated in Fig. 1.
The goal of dynamic inference can then be formally stated as finding an estimation strategy to minimize the accumulated expected loss over the \(n\) rounds:
\[\operatorname*{arg\,min}_{\psi^{n}}\ \mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i})\Big{]},\quad\widehat{Y}_{i}=\psi_{i}(X^{i},\widehat{ Y}^{i-1}) \tag{1}\]
where \(\ell:\mathsf{X}\times\mathsf{Y}\times\widehat{\mathsf{Y}}\to\mathbb{R}\) is a loss function that evaluates the estimate made in each round. Compared with the traditional statistical inference under the Bayesian formulation, where the goal is to find an estimator \(\psi\) of a random quantity \(Y\) based on a jointly distributed observation \(X\) to minimize \(\mathbb{E}[\ell(Y,\psi(X))]\), we summarize the two distinctive features of dynamic inference in (1):
Figure 1: Bayesian network of the random variables under consideration with \(n=4\). Here we assume the estimates are made with Markov estimators, such that \(\widehat{Y}_{i}=\psi_{i}(X_{i})\).
* The joint distribution of the pair \((X_{i},Y_{i})\) changes in each round in a controlled manner, as it depends on \((X_{i-1},\widehat{Y}_{i-1})\);
* The loss in each round is contextual, as it depends on \(X_{i}\).
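To make the data flow in (1) concrete, here is a toy simulation of one episode (our own sketch; the randomly generated kernels, the 0-1 loss, and the placeholder Markov estimator are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, nX, nY, nYhat = 4, 3, 2, 2              # toy sizes, our own choice

P_X1 = np.full(nX, 1.0 / nX)               # initial observation distribution
K_Y = rng.dirichlet(np.ones(nY), size=nX)  # K_{Y_i|X_i}[x, y]
K_X = rng.dirichlet(np.ones(nX), size=(nX, nYhat))  # K_{X_{i+1}|X_i, Yhat_i}

def psi(i, x):                             # placeholder Markov estimator
    return x % nYhat

x = rng.choice(nX, p=P_X1)
total = 0.0
for i in range(n):
    y = rng.choice(nY, p=K_Y[x])           # quantity of interest Y_i
    yhat = psi(i, x)                       # estimate based on X_i
    total += float(y != yhat)              # e.g. 0-1 loss ell(x, y, yhat)
    x = rng.choice(nX, p=K_X[x, yhat])     # the estimate steers the next X
print(total)
```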
### Optimal estimation strategy for dynamic inference
It is shown in [1] that the optimization problem in (1) is equivalent to
\[\operatorname*{arg\,min}_{\psi^{n}}\ \mathbb{E}\Big{[}\sum_{i=1}^{n}\bar{\ell}(X_ {i},\widehat{Y}_{i})\Big{]}, \tag{2}\]
where \(\bar{\ell}(x,\hat{y})\triangleq\mathbb{E}[\ell(x,Y,\hat{y})|X=x,\widehat{Y}= \hat{y}]\), and for any realization \((x_{i},\hat{y}_{i})\) of \((X_{i},\widehat{Y}_{i})\), it can be computed as \(\bar{\ell}(x_{i},\hat{y}_{i})=\mathbb{E}[\ell(x_{i},Y_{i},\hat{y}_{i})|X_{i}= x_{i}]\). With this reformulation, the unknown quantities \(Y_{i}\) do not appear in the loss function any more, and the optimization problem becomes a standard MDP. The observations \(X^{n}\) become the states in this MDP, the estimates \(\widehat{Y}^{n}\) become the actions, the probability transition kernel \(K_{X_{i}|X_{i-1},\widehat{Y}_{i-1}}\) now defines the controlled state transition, and any estimation strategy \(\psi^{n}\) becomes a policy of this MDP. The goal becomes finding an optimal policy for this MDP to minimize the accumulated expected loss defined w.r.t. \(\bar{\ell}\). The solution to the MDP will be an optimal estimation strategy for dynamic inference.
From the theory of MDP it is known that the optimal estimators \((\psi^{*}_{1},\ldots,\psi^{*}_{n})\) for the optimization problem in (2) can be Markov, meaning that \(\psi^{*}_{i}\) can take only \(X_{i}\) as input, and the values of the optimal estimates \(\psi^{*}_{i}(x)\) for \(i=1,\ldots,n\) and \(x\in\mathsf{X}\) can be found via dynamic programming. Define the functions \(Q^{*}_{i}:\mathsf{X}\times\widehat{\mathsf{Y}}\to\mathbb{R}\) and \(V^{*}_{i}:\mathsf{X}\to\mathbb{R}\) recursively as \(Q^{*}_{n}(x,\hat{y})\triangleq\bar{\ell}(x,\hat{y})\), \(V^{*}_{i}(x)\triangleq\min_{\hat{y}\in\widehat{\mathsf{Y}}}Q^{*}_{i}(x,\hat{y})\) for \(i=n,\ldots,1\), and \(Q^{*}_{i}(x,\hat{y})\triangleq\bar{\ell}(x,\hat{y})+\mathbb{E}[V^{*}_{i+1}(X_ {i+1})|X_{i}=x,\widehat{Y}_{i}=\hat{y}]\) for \(i=n-1,\ldots,1\). The optimal estimate to make in the \(i\)th round when \(X_{i}=x\) is then
\[\psi^{*}_{i}(x)\triangleq\operatorname*{arg\,min}_{\hat{y}\in \widehat{\mathsf{Y}}}Q^{*}_{i}(x,\hat{y}). \tag{3}\]
It is shown that the estimators \((\psi^{*}_{1},\ldots,\psi^{*}_{n})\) defined in (3) achieve the minimum in (1). Moreover, for any \(i=1,\ldots,n\) and any initial distribution \(P_{X_{i}}\),
\[\min_{\psi_{i},\ldots,\psi_{n}}\ \mathbb{E}\Big{[}\sum_{j=i}^{n}\ell(X_{j},Y_{j}, \widehat{Y}_{j})\Big{]}=\mathbb{E}[V^{*}_{i}(X_{i})], \tag{4}\]
with the minimum achieved by \((\psi^{*}_{i},\ldots,\psi^{*}_{n})\). As shown by the examples in [1], the implication of the optimal estimation strategy is that, in each round of estimation, the estimate to make is not necessarily the optimal single-round estimate in that round, but one which takes into account the accuracy in that round, and tries to steer the future observations toward those with which the quantities of interest tend to be easy to estimate.
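For finite \(\mathsf{X}\) and \(\widehat{\mathsf{Y}}\), the backward recursion above is a few lines of code. The following sketch is our own toy illustration (a random \(\bar\ell\) and a time-homogeneous transition kernel are placeholder assumptions, not the paper's setup; rounds are 0-indexed in the code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, nX, nYhat = 5, 4, 3                     # rounds, |X|, |Yhat| (toy sizes)

# lbar[x, yhat] plays the role of the averaged loss lbar(x, yhat);
# K[x, yhat, x'] is the observation-transition kernel K_{X_{i+1}|X_i, Yhat_i}.
lbar = rng.random((nX, nYhat))
K = rng.random((nX, nYhat, nX))
K /= K.sum(axis=2, keepdims=True)

V = np.zeros((n + 1, nX))                  # V[n] = 0 terminates the recursion
policy = np.zeros((n, nX), dtype=int)      # optimal Markov estimators psi*_i
for i in range(n - 1, -1, -1):
    Q = lbar + K @ V[i + 1]                # Q[x, yhat] = lbar + E[V_{i+1}]
    policy[i] = Q.argmin(axis=1)
    V[i] = Q.min(axis=1)

# Minimum accumulated expected loss for a uniform initial distribution P_{X_1}:
P_X1 = np.full(nX, 1.0 / nX)
print(P_X1 @ V[0])
```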
## 3 Bayesian offline learning for dynamic inference
Solving dynamic inference requires the knowledge of the quantity-generation models \((K_{Y_{i}|X_{i}})_{i=1}^{n}\) and the observation-transition models \((K_{X_{i}|X_{i-1},\widehat{Y}_{i-1}})_{i=2}^{n}\). In most of the practically interesting situations, however, we may not have such knowledge. Instead, we may have a training dataset from
which we can learn these models, or may learn them on-the-fly during inference. In this section and the next one, we study the offline learning and the online learning problems for dynamic inference respectively, with _unknown_ quantity-generation models but _known_ observation-transition models. This is already a case of sufficient interest, as the observation-transition model in many problems, e.g. imitation learning, is available. The proof techniques we develop carry over to the case where the observation-transition models are also unknown. In that case, the solution will have the same form, but a further-augmented state with a belief vector of the observation-transition model parameter; and the belief update has two parts, separately for the parameters of the quantity-generation model and the observation-transition model.
Formally, in this section we assume that the initial distribution \(P_{X_{1}}\) and the probability transition kernels \((K_{X_{i}|X_{i-1},\widehat{Y}_{i-1}})_{i=2}^{n}\) are still known, while the unknown \(K_{Y_{i}|X_{i}}\)'s are the same element \(P_{Y|X,W}\) of a parametrized family of kernels \(\{P_{Y|X,w},w\in\mathsf{W}\}\) and the unknown parameter \(W\) is a random element of \(\mathsf{W}\) with prior distribution \(P_{W}\). The training data \(Z^{m}\) consists of \(m\) samples, and is drawn from some distribution \(P_{Z^{m}|W}\) with \(W\) as a parameter. This setup is quite flexible, in that the \(Z^{m}\) need not be generated in the same way as the data generated during inference. One example is a setup similar to _imitation learning_, where \(Z^{m}=((X_{1}^{\prime},Y_{1}^{\prime}),\ldots,(X_{m}^{\prime},Y_{m}^{\prime}))\) and
\[P_{Z^{m}|W}=P_{X_{1}^{\prime}}K_{Y_{1}^{\prime}|X_{1}^{\prime}}\prod_{i=2}^{m} K_{X_{i}^{\prime}|X_{i-1}^{\prime},Y_{i-1}^{\prime}}K_{Y_{i}^{\prime}|X_{i}^{ \prime}} \tag{5}\]

with \(P_{X_{1}^{\prime}}=P_{X_{1}}\), \(K_{X_{i}^{\prime}|X_{i-1}^{\prime},Y_{i-1}^{\prime}}=K_{X_{i}|X_{i-1},\widehat{Y}_{i-1}}\) for \(i=2,\ldots,m\), and \(K_{Y_{i}^{\prime}|X_{i}^{\prime}}=K_{Y_{i}|X_{i}}=P_{Y|X,W}\) for \(i=1,\ldots,m\). With a training dataset, we can define the _offline-learned estimation strategy_ for dynamic inference as follows.
**Definition 1**.: _An offline-learned estimation strategy with an \(m\)-sample training dataset for an \(n\)-round dynamic inference is a sequence of estimators \(\psi_{m}^{n}=(\psi_{m,1},\ldots,\psi_{m,n})\), where \(\psi_{m,i}:(\mathsf{X}\times\widehat{\mathsf{Y}})^{m}\times\mathsf{X}^{i} \times\widehat{\mathsf{Y}}^{i-1}\rightarrow\widehat{\mathsf{Y}}\) is the estimator for the \(i\)th round of estimation, which maps the dataset \(Z^{m}\) as well as the past observations and estimates \((X^{i},\widehat{Y}^{i-1})\) up to the \(i\)th round to an estimate \(\widehat{Y}_{i}\) of \(Y_{i}\), such that \(\widehat{Y}_{i}=\psi_{m,i}(Z^{m},X^{i},\widehat{Y}^{i-1})\), \(i=1,\ldots,n\)._
Any specification of the above probabilistic models and an offline-learned estimation strategy determines a joint distribution of the random variables \((W,Z^{m},X^{n},Y^{n},\widehat{Y}^{n})\) under consideration. The Bayesian network of the variables is shown in Fig. 2, where the training data is assumed to be generated in the imitation learning setup. A crucial observation from the Bayesian network is that \(W\) is conditionally independent of \((X^{n},\widehat{Y}^{n})\) given \(Z^{m}\), as the quantities of interest \(Y^{n}\) are not observed. In other words, given the training data, no more information about \(W\) can be gained during inference. We formally state this observation as the following lemma.
**Lemma 1**.: _In offline learning for dynamic inference, the parameter \(W\) is conditionally independent of the observations and the estimates \((X^{n},\widehat{Y}^{n})\) during inference given the training data \(Z^{m}\)._
Given an offline-learned estimation strategy \(\psi_{m}^{n}\) for an \(n\)-round dynamic inference with an \(m\)-sample training dataset, we can define its _inference loss_ as \(\mathbb{E}\big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i})\big{]}\). The goal of offline learning is to find an offline-learned estimation strategy to minimize the inference loss:
\[\operatorname*{arg\,min}_{\psi_{m}^{n}}\mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_ {i},Y_{i},\widehat{Y}_{i})\Big{]},\quad\text{with }\widehat{Y}_{i}=\psi_{m,i}(Z^{m},X^{i},\widehat{Y}^{i-1}). \tag{6}\]
### MDP reformulation
#### 3.1.1 Equivalent expression of inference loss
We first show that the inference loss in (6) can be expressed in terms of a loss function that does not take the unknown \(Y_{i}\) as input.
**Theorem 1**.: _For any offline-learned estimation strategy \(\psi_{m}^{n}\), its inference loss can be written as_
\[\mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i}) \Big{]}=\mathbb{E}\Big{[}\sum_{i=1}^{n}\tilde{\ell}(\pi_{m},X_{i},\widehat{Y}_ {i})\Big{]}, \tag{7}\]
_where \(\pi_{m}(\cdot)\triangleq\mathbb{P}[W\in\cdot|Z^{m}]\) is the posterior distribution of the kernel parameter \(W\) given the training dataset \(Z^{m}\), and \(\tilde{\ell}:\Delta\times\mathsf{X}\times\widehat{\mathsf{Y}}\to\mathbb{R}\), with \(\Delta\) being the space of probability distributions on \(\mathsf{W}\), is defined as_
\[\tilde{\ell}(\pi,x,\hat{y})\triangleq\int_{\mathsf{W}}\int_{ \mathsf{Y}}\pi(\mathrm{d}w)P_{Y|X,W}(\mathrm{d}y|x,w)\ell(x,y,\hat{y}). \tag{8}\]
The proof is given in Appendix A. Theorem 1 states that the inference loss of an offline-learned estimation strategy \(\psi_{m}^{n}\) is equal to
\[J(\psi_{m}^{n})\triangleq\mathbb{E}\Big{[}\sum_{i=1}^{n}\tilde{ \ell}(\pi_{m},X_{i},\widehat{Y}_{i})\Big{]}, \tag{9}\]
with \(\widehat{Y}_{i}=\psi_{m,i}(Z^{m},X^{i},\widehat{Y}^{i-1})\). It follows that the offline learning problem in (6) can be equivalently written as
\[\operatorname*{arg\,min}_{\psi_{m}^{n}}J(\psi_{m}^{n}). \tag{10}\]
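To make (8) concrete: for finite \(\mathsf{W}\) and \(\mathsf{Y}\), the averaged loss \(\tilde\ell\) is a single tensor contraction. A minimal sketch (ours, with randomly generated placeholder models):

```python
import numpy as np

rng = np.random.default_rng(1)
nW, nX, nY, nYhat = 3, 4, 2, 2             # toy sizes, our own choice

pi_m = rng.dirichlet(np.ones(nW))          # posterior over W given Z^m
P_Y = rng.random((nW, nX, nY))             # P_{Y|X,W}[w, x, y]
P_Y /= P_Y.sum(axis=2, keepdims=True)
loss = rng.random((nX, nY, nYhat))         # ell(x, y, yhat)

# ltilde(pi, x, yhat) = sum_w sum_y pi(w) P(y|x,w) ell(x,y,yhat), cf. (8)
ltilde = np.einsum('w,wxy,xyh->xh', pi_m, P_Y, loss)
print(ltilde.shape)                        # (nX, nYhat)
```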
#### 3.1.2 \((\pi_{m},X_{i})_{i=1}^{n}\) as a controlled Markov chain
Next, we show that the sequence \((\pi_{m},X_{i})_{i=1}^{n}\) appearing in (9) form a controlled Markov chain with \(\widehat{Y}^{n}\) as the control sequence. In other words, the tuple \((\pi_{m},X_{i+1})\) depends on the history \((\pi_{m},X^{i},\widehat{Y}^{i})\) only through \((\pi_{m},X_{i},\widehat{Y}_{i})\), as formally stated in the following lemma.
Figure 2: Bayesian network of the random variables in offline learning for dynamic inference with the imitation learning setup, with \(m=n=4\). Here we assume the estimates are made with Markov estimators, such that \(\widehat{Y}_{i}=\psi_{m,i}(Z^{m},X_{i})\).
**Lemma 2**.: _Given any offline-learned estimation strategy \(\psi_{m}^{n}\), we have_
\[\mathbb{P}\big{[}(\pi_{m},X_{i+1})\in A\times B|\pi_{m},X^{i},\widehat{Y}^{i} \big{]}=\mathbf{1}\{\pi_{m}\in A\}\mathbb{P}\big{[}X_{i+1}\in B|X_{i},\widehat{ Y}_{i}\big{]} \tag{11}\]
_for any Borel sets \(A\subset\Delta\) and \(B\subset\mathsf{X}\), any realization of \((\pi_{m},X^{i},\widehat{Y}^{i})\), and any \(i=1,\ldots,n-1\)._
The proof is given in Appendix B.
#### 3.1.3 Optimality of Markov offline-learned estimators
Furthermore, the next three lemmas will show that the search space of the minimization problem in (10) can be restricted to Markov offline-learned estimators \(\bar{\psi}_{m,i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that \(\widehat{Y}_{i}=\bar{\psi}_{m,i}(\pi_{m},X_{i})\). We start with a generalization of Blackwell's principle of irrelevant information.
**Lemma 3** (Generalized Blackwell's principle of irrelevant information).: _For any fixed functions \(\ell:\mathsf{Y}\times\widehat{\mathsf{Y}}\to\mathbb{R}\) and \(f:\mathsf{X}\to\mathsf{Y}\), the following equality holds:_
\[\min_{g:\mathsf{X}\to\widehat{\mathsf{Y}}}\mathbb{E}\big{[}\ell(f(X),g(X)) \big{]}=\min_{g:\mathsf{Y}\to\widehat{\mathsf{Y}}}\mathbb{E}\big{[}\ell(f(X), g(f(X)))\big{]}. \tag{12}\]
**Remark**.: The original Blackwell's principle of irrelevant information, stating that for any fixed function \(\ell:\mathsf{Y}\times\widehat{\mathsf{Y}}\to\mathbb{R}\),
\[\min_{g:\mathsf{X}\times\mathsf{Y}\to\widehat{\mathsf{Y}}}\mathbb{E}\big{[} \ell(Y,g(X,Y))\big{]}=\min_{g:\mathsf{Y}\to\widehat{\mathsf{Y}}}\mathbb{E} \big{[}\ell(Y,g(Y))\big{]}, \tag{13}\]
can be seen as a special case of the above lemma.
The proof of Lemma 3 is given in Appendix C. The first application of Lemma 3 is to prove that the last estimator of an optimal offline-learned estimation strategy can be replaced by a Markov one, which preserves the optimality.
**Lemma 4** (Last-round lemma for offline learning).: _Given any offline-learned estimation strategy \(\psi_{m}^{n}\), there exists a Markov offline-learned estimator \(\bar{\psi}_{m,n}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that_
\[J(\psi_{m,1},\ldots,\psi_{m,n-1},\bar{\psi}_{m,n})\leq J(\psi_{m}^{n}). \tag{14}\]
The proof is given in Appendix D. Lemma 3 can be further used to prove that whenever the last offline-learned estimator is Markov, the preceding estimator can also be replaced by a Markov one which preserves the optimality.
**Lemma 5** (\((i-1)\)th-round lemma for offline learning).: _For any \(i\geq 2\), given any offline-learned estimation strategy \((\psi_{m,1},\ldots,\psi_{m,i-1},\bar{\psi}_{m,i})\) for an \(i\)-round dynamic inference with an \(m\)-sample training dataset, if the offline-learned estimator for the \(i\)th round of estimation is a Markov one \(\bar{\psi}_{m,i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), then there exists a Markov offline-learned estimator \(\bar{\psi}_{m,i-1}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\) for the \((i-1)\)th round, such that_
\[J(\psi_{m,1},\ldots,\psi_{m,i-2},\bar{\psi}_{m,i-1},\bar{\psi}_{m,i})\leq J( \psi_{m,1},\ldots,\psi_{m,i-1},\bar{\psi}_{m,i}). \tag{15}\]
The proof is given in Appendix E. With Lemma 4 and Lemma 5, we can prove the optimality of Markov offline-learned estimators, as given in Appendix F.
**Theorem 2**.: _The minimum of \(J(\psi_{m}^{n})\) in (10) can be achieved by an offline-learned estimation strategy \(\bar{\psi}_{m}^{n}\) with Markov estimators \(\bar{\psi}_{m,i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), \(i=1,\ldots,n\), such that \(\widehat{Y}_{i}=\bar{\psi}_{m,i}(\pi_{m},X_{i})\)._
#### 3.1.4 Conversion to MDP
Theorem 1 and Theorem 2 with Lemma 2 imply that the original offline learning problem in (6) is equivalent to
\[\operatorname*{arg\,min}_{\psi_{m}^{n}}\mathbb{E}\Big{[}\sum_{i=1}^ {n}\tilde{\ell}(\pi_{m},X_{i},\widehat{Y}_{i})\Big{]},\quad\widehat{Y}_{i}=\psi _{m,i}(\pi_{m},X_{i}), \tag{16}\]
and the sequence \((\pi_{m},X_{i})_{i=1}^{n}\) is a controlled Markov chain driven by \(\widehat{Y}^{n}\). With this reformulation, we see that the offline learning problem becomes a standard MDP. The tuples \((\pi_{m},X_{i})_{i=1}^{n}\) become the states in this MDP, the estimates \(\widehat{Y}^{n}\) become the actions, the probability transition kernel \(P_{(\pi_{m},X_{i})|(\pi_{m},X_{i-1}),\widehat{Y}_{i-1}}\) now defines the controlled state transition, and any Markov offline-learned estimation strategy \(\psi_{m}^{n}\) becomes a policy of this MDP. The goal of learning becomes finding the optimal policy of the MDP to minimize the accumulated expected loss defined w.r.t. \(\tilde{\ell}\). The solution to this MDP will be an optimal offline-learned estimation strategy for dynamic inference.
### Solution via dynamic programming
#### 3.2.1 Optimal offline-learned estimation strategy
From the theory of MDP it is known that the optimal policy for the MDP in (16), namely the optimal offline-learned estimation strategy, can be found via dynamic programming. To derive the optimal estimators, define the functions \(Q_{m,i}^{*}:\Delta\times\mathsf{X}\times\widehat{\mathsf{Y}}\to\mathbb{R}\) and \(V_{m,i}^{*}:\Delta\times\mathsf{X}\to\mathbb{R}\) for offline learning recursively for \(i=n,\ldots,1\) as \(Q_{m,n}^{*}(\pi,x,\hat{y})\triangleq\tilde{\ell}(\pi,x,\hat{y})\), and
\[V_{m,i}^{*}(\pi,x) \triangleq\min_{\hat{y}\in\widehat{\mathsf{Y}}}Q_{m,i}^{*}(\pi,x,\hat{y}),\quad i=n,\ldots,1 \tag{17}\] \[Q_{m,i}^{*}(\pi,x,\hat{y}) \triangleq\tilde{\ell}(\pi,x,\hat{y})+\mathbb{E}[V_{m,i+1}^{*}( \pi,X_{i+1})|X_{i}=x,\widehat{Y}_{i}=\hat{y}],\quad i=n-1,\ldots,1 \tag{18}\]
with \(\tilde{\ell}\) as defined in (8), and the conditional expectation in (18) is taken w.r.t. \(X_{i+1}\). The optimal offline-learned estimate to make in the \(i\)th round when \(\pi_{m}=\pi\) and \(X_{i}=x\) is then
\[\psi_{m,i}^{*}(\pi,x)\triangleq\operatorname*{arg\,min}_{\hat{y }\in\widehat{\mathsf{Y}}}Q_{m,i}^{*}(\pi,x,\hat{y}). \tag{19}\]
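Since \(\pi_{m}\) is fixed once \(Z^{m}\) is observed, for each realized posterior the recursion (17)-(19) is an ordinary backward induction with the loss \(\tilde\ell(\pi_{m},\cdot,\cdot)\). A sketch for finite spaces (ours; the array conventions are the same as in the earlier snippets):

```python
import numpy as np

def offline_strategy(pi_m, P_Y, loss, K, n):
    """Backward induction (17)-(19) for a fixed discrete posterior pi_m.

    pi_m: (nW,) posterior over W; P_Y: (nW, nX, nY) kernels P_{Y|X,W};
    loss: (nX, nY, nYhat); K: (nX, nYhat, nX) observation transitions.
    Returns policy[i, x] = psi*_{m,i}(pi_m, x) and the value tables V.
    """
    ltilde = np.einsum('w,wxy,xyh->xh', pi_m, P_Y, loss)   # eq. (8)
    nX, nYhat = ltilde.shape
    V = np.zeros((n + 1, nX))
    policy = np.zeros((n, nX), dtype=int)
    for i in range(n - 1, -1, -1):
        Q = ltilde + K @ V[i + 1]                          # eqs. (17)-(18)
        policy[i] = Q.argmin(axis=1)                       # eq. (19)
        V[i] = Q.min(axis=1)
    return policy, V
```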
#### 3.2.2 Minimum inference loss and loss-to-go
For any offline-learned estimation strategy \(\psi_{m}^{n}\), we can define its loss-to-go in the \(i\)th round of estimation when \(\pi_{m}=\pi\) and \(X_{i}=x\) as
\[V_{m,i}(\pi,x;\psi_{m}^{n})\triangleq\mathbb{E}\Big{[}\sum_{j=i }^{n}\ell(X_{j},Y_{j},\widehat{Y}_{j})\Big{|}\pi_{m}=\pi,X_{i}=x\Big{]}, \tag{20}\]
which is the conditional expected loss accumulated from the \(i\)th round to the final round when \((\psi_{m,i},\ldots,\psi_{m,n})\) are used as the offline-learned estimators, given that the posterior distribution of the kernel parameter \(W\) given the training dataset \(Z^{m}\) is \(\pi\) and the observation in the \(i\)th round is \(x\). The following theorem states that the offline-learned estimation strategy \((\psi_{m,1}^{*},\ldots,\psi_{m,n}^{*})\) derived from dynamic programming not only achieves the minimum inference loss over the \(n\) rounds, but also achieves the minimum loss-to-go in each round with any training dataset and any observation in that round.
**Theorem 3**.: _The offline-learned estimators \((\psi_{m,1}^{*},\ldots,\psi_{m,n}^{*})\) defined in (19) according to the recursion in (17) and (18) constitute an optimal offline-learned estimation strategy for dynamic inference, which achieves the minimum in (6). Moreover, for any Markov offline-learned estimation strategy \(\psi_{m}^{n}\), with \(\psi_{m,i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), its loss-to-go satisfies_
\[V_{m,i}(\pi,x;\psi_{m}^{n})\geq V_{m,i}^{*}(\pi,x) \tag{21}\]
_for all \(\pi\in\Delta\), \(x\in\mathsf{X}\) and \(i=1,\ldots,n\), where the equality holds if \(\psi_{m,j}(\pi,x)=\psi_{m,j}^{*}(\pi,x)\) for all \(\pi\in\Delta\), \(x\in\mathsf{X}\) and \(j\geq i\)._
The proof is given in Appendix G. A consequence of Theorem 3 is that in offline learning for dynamic inference, the minimum expected loss accumulated from the \(i\)th round to the final round can be expressed in terms of \(V_{m,i}^{*}\), as stated in the following corollary.
**Corollary 1**.: _In offline learning for dynamic inference, for any \(i\) and any initial distribution \(P_{X_{i}}\),_
\[\min_{\psi_{m,i},\ldots,\psi_{m,n}}\ \mathbb{E}\Big{[}\sum_{j=i}^{n}\ell(X_{j},Y_{j},\widehat{Y}_{j})\Big{]}=\mathbb{E}[V_{m,i}^{*}(\pi_{m},X_{i})], \tag{22}\]
_and the minimum is achieved by the estimators \((\psi_{m,i}^{*},\ldots,\psi_{m,n}^{*})\) defined in (19)._
## 4 Bayesian online learning for dynamic inference
In the setup of offline learning for dynamic inference, we assume that before the inference takes place, a training dataset \(Z^{m}\) drawn from some distribution \(P_{Z^{m}|W}\) is observed, and \(W\) can be estimated from \(Z^{m}\). In the online learning setup, we assume that there is no training dataset available before the inference; instead, during the inference, after an estimate \(\widehat{Y}_{i}\) is made in each round, the true value \(Y_{i}\) is revealed, and \(W\) can be estimated on-the-fly in each round from all the observations available so far.
As in the offline learning setup, we assume that during inference the initial distribution \(P_{X_{1}}\) and the probability transition kernels \(K_{X_{i}|X_{i-1},\widehat{Y}_{i-1}}\), \(i=2,\ldots,n\), are still known, while the unknown \(K_{Y_{i}|X_{i}}\)'s are the same element \(P_{Y|X,W}\) of a parametrized family of kernels \(\{P_{Y|X,w},w\in\mathsf{W}\}\) and the unknown kernel parameter \(W\) is a random element of \(\mathsf{W}\) with prior distribution \(P_{W}\). We can define the _online-learned estimation strategy_ for dynamic inference as follows. Note that we overload the notations \(\psi_{i}\) as an online-learned estimator and \(Z_{i}\) as \((X_{i},Y_{i})\) throughout this section.
**Definition 2**.: _An online-learned estimation strategy for an \(n\)-round dynamic inference is a sequence of estimators \(\psi^{n}=(\psi_{1},\ldots,\psi_{n})\), where \(\psi_{i}:(\mathsf{X}\times\mathsf{Y})^{i-1}\times\widehat{\mathsf{Y}}^{i-1} \times\mathsf{X}\to\widehat{\mathsf{Y}}\) is the estimator in the \(i\)th round of estimation, which maps the past observations \(Z^{i-1}=(X_{j},Y_{j})_{j=1}^{i-1}\) and estimates \(\widehat{Y}^{i-1}\) in addition to a new observation \(X_{i}\) to an estimate \(\widehat{Y}_{i}\) of \(Y_{i}\), such that \(\widehat{Y}_{i}=\psi_{i}(Z^{i-1},\widehat{Y}^{i-1},X_{i})\)._
The Bayesian network of all the random variables \((W,X^{n},Y^{n},\widehat{Y}^{n})\) in online learning for dynamic inference is shown in Fig. 3. A crucial observation from the Bayesian network is that \(W\) is conditionally independent of \((X_{i},\widehat{Y}^{i})\) given \(Z^{i-1}\), as stated in the following lemma.
**Lemma 6**.: _In online learning for dynamic inference, in the \(i\)th round of estimation, the kernel parameter \(W\) is conditionally independent of the current observation \(X_{i}\) and the estimates \(\widehat{Y}^{i}\) up to the \(i\)th round given the past observations \(Z^{i-1}\)._
As in the offline learning setup, given an online-learned estimation strategy \(\psi^{n}\), we can define its inference loss as \(\mathbb{E}\big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i})\big{]}\). The goal of online learning for an \(n\)-round dynamic inference is to find an online-learned estimation strategy to minimize the inference loss:
\[\operatorname*{arg\,min}_{\psi^{n}}\mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_{i},Y _{i},\widehat{Y}_{i})\Big{]},\quad\text{with }\widehat{Y}_{i}=\psi_{i}(Z^{i-1},\widehat{Y}^{i-1},X_{i}). \tag{23}\]
### MDP reformulation
#### 4.1.1 Equivalent expression of inference loss
We first show that the inference loss in (23) can be expressed in terms of a loss function that does not take the unknown \(Y_{i}\) as input.
**Theorem 4**.: _For any online-learned estimation strategy \(\psi^{n}\), its inference loss can be written as_
\[\mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i})\Big{]}= \mathbb{E}\Big{[}\sum_{i=1}^{n}\tilde{\ell}(\pi_{i},X_{i},\widehat{Y}_{i}) \Big{]}, \tag{24}\]
_where \(\pi_{i}(\cdot)\triangleq\mathbb{P}[W\in\cdot|Z^{i-1}]\) is the posterior distribution of the kernel parameter \(W\) given the past observations \(Z^{i-1}\) up to the \(i\)th round, and \(\tilde{\ell}:\Delta\times\mathsf{X}\times\widehat{\mathsf{Y}}\to\mathbb{R}\), with \(\Delta\) being the space of probability distributions on \(\mathsf{W}\), is defined in the same way as in (8),_
\[\tilde{\ell}(\pi,x,\hat{y})=\int_{\mathsf{W}}\int_{\mathsf{Y}}\pi(\mathrm{d}w )P_{Y|X,W}(\mathrm{d}y|x,w)\ell(x,y,\hat{y}). \tag{25}\]
The proof is given in Appendix H. Theorem 4 states that the inference loss of an online-learned estimation strategy \(\psi^{n}\) is equal to
\[J(\psi^{n})=\mathbb{E}\Big{[}\sum_{i=1}^{n}\tilde{\ell}(\pi_{i},X_{i},\widehat {Y}_{i})\Big{]},\quad\text{with }\widehat{Y}_{i}=\psi_{i}(Z^{i-1},\widehat{Y}^{i-1},X_{i}). \tag{26}\]
It follows that the learning problem in (23) can be equivalently written as
\[\operatorname*{arg\,min}_{\psi^{n}}J(\psi^{n}). \tag{27}\]
Figure 3: Bayesian network of variables in online learning for dynamic inference, with \(n=3\).
#### 4.1.2 \((\pi_{i},X_{i})_{i=1}^{n}\) as a controlled Markov chain
Next, we show that the sequence \((\pi_{i},X_{i})_{i=1}^{n}\) appearing in (26) form a controlled Markov chain with \(\widehat{Y}^{n}\) as the control sequence. In other words, the tuple \((\pi_{i+1},X_{i+1})\) depends on the history \((\pi^{i},X^{i},\widehat{Y}^{i})\) only through \((\pi_{i},X_{i},\widehat{Y}_{i})\), as formally stated in the following lemma.
**Lemma 7**.: _There exists a function \(f:\Delta\times\mathsf{X}\times\mathsf{Y}\to\Delta\), such that given any learned estimation strategy \(\psi^{n}\), we have_
\[\mathbb{P}[(\pi_{i+1},X_{i+1})\in A\times B|\pi^{i},X^{i},\widehat {Y}^{i}]=\] \[\quad\int_{\mathsf{W}}\int_{\mathsf{Y}}\pi_{i}(\mathrm{d}w)P_{Y|X,W}(\mathrm{d}y_{i}|X_{i},w)\mathbb{P}[f(\pi_{i},X_{i},y_{i})\in A]\mathbb{P} \big{[}X_{i+1}\in B|X_{i},\widehat{Y}_{i}\big{]} \tag{28}\]
_for any Borel sets \(A\subset\Delta\) and \(B\subset\mathsf{X}\), any realization of \((\pi_{i},X^{i},\widehat{Y}^{i})\), and any \(i=1,\ldots,n-1\)._
Lemma 7 is proved in Appendix I, based on the auxiliary lemma below proved in Appendix J.
**Lemma 8**.: _For a generic random tuple \((T,U,V)\in\mathsf{T}\times\mathsf{U}\times\mathsf{V}\) that forms a Markov chain \(T-U-V\), we have_
\[\mathbb{P}\big{[}V\in A\big{|}P_{V|U}(\cdot|U)=p,T\in B\big{]}=p(A) \tag{29}\]
_for any Borel sets \(A\subset\mathsf{V}\) and \(B\subset\mathsf{T}\), and any probability distribution \(p\) on \(\mathsf{V}\)._
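In the finite-parameter case, the map \(f\) of Lemma 7 is the usual Bayes update of the belief after the pair \((X_{i},Y_{i})\) is revealed; a minimal sketch (ours):

```python
import numpy as np

def belief_update(pi, x, y, P_Y):
    """One Bayes step: pi_{i+1}(w) is proportional to pi_i(w) * P(y|x,w).

    This is the map f(pi_i, X_i, Y_i) of Lemma 7 for a finite parameter
    space. pi: (nW,) current belief; P_Y: (nW, nX, nY) discrete kernels
    P_{Y|X,W}; x, y: indices of the observed pair (X_i, Y_i).
    """
    post = pi * P_Y[:, x, y]
    return post / post.sum()
```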
#### 4.1.3 Optimality of Markov online-learned estimators
The next two lemmas will show that the search space of the minimization problem in (27) can be restricted to Markov online-learned estimators \(\bar{\psi}_{i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that \(\widehat{Y}_{i}=\bar{\psi}_{i}(\pi_{i},X_{i})\). In parallel to the discussion of offline learning, we first prove that the last estimator of an optimal online-learned estimation strategy can be replaced by a Markov one, which preserves the optimality.
**Lemma 9** (Last-round lemma for online learning).: _Given any online-learned estimation strategy \(\psi^{n}\), there exists a Markov online-learned estimator \(\bar{\psi}_{n}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that_
\[J(\psi_{1},\ldots,\psi_{n-1},\bar{\psi}_{n})\leq J(\psi^{n}). \tag{30}\]
The proof is given in Appendix K. We further prove that whenever the last online-learned estimator is Markov, the preceding estimator can be replaced by a Markov one which preserves the optimality.
**Lemma 10** (\((i-1)\)th-round lemma for online learning).: _For any \(i\geq 2\), given any online-learned estimation strategy \((\psi_{1},\ldots,\psi_{i-1},\bar{\psi}_{i})\) for an \(i\)-round dynamic inference, if the last estimator is a Markov one \(\bar{\psi}_{i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), then there exists a Markov online-learned estimator \(\bar{\psi}_{i-1}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\) for the \((i-1)\)th round, such that_
\[J(\psi_{1},\ldots,\psi_{i-2},\bar{\psi}_{i-1},\bar{\psi}_{i})\leq J(\psi_{1}, \ldots,\psi_{i-1},\bar{\psi}_{i}). \tag{31}\]
The proof is given in Appendix L. With Lemma 9 and Lemma 10, we can prove the optimality of Markov online-learned estimators, as given in Appendix M.
**Theorem 5**.: _The minimum of \(J(\psi^{n})\) in (27) can be achieved by an online-learned estimation strategy \(\bar{\psi}^{n}\) with Markov online-learned estimators \(\bar{\psi}_{i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that \(\widehat{Y}_{i}=\bar{\psi}_{i}(\pi_{i},X_{i})\)._
#### 4.1.4 Conversion to MDP
Theorem 4 and Theorem 5 with Lemma 7 imply that the original online learning problem in (23) is equivalent to
\[\operatorname*{arg\,min}_{\psi^{n}}\mathbb{E}\Big{[}\sum_{i=1}^{n} \tilde{\ell}(\pi_{i},X_{i},\widehat{Y}_{i})\Big{]},\quad\widehat{Y}_{i}=\psi_{ i}(\pi_{i},X_{i}) \tag{32}\]
and the sequence \((\pi_{i},X_{i})_{i=1}^{n}\) is a controlled Markov chain driven by \(\widehat{Y}^{n}\). With this reformulation, we see that the online learning problem becomes a standard MDP. The tuples \((\pi_{i},X_{i})_{i=1}^{n}\) become the states in this MDP, the estimates \(\widehat{Y}^{n}\) become the actions, the probability transition kernel \(P_{(\pi_{i},X_{i})|(\pi_{i-1},X_{i-1}),\widehat{Y}_{i-1}}\) now defines the controlled state transition, and any Markov online-learned estimation strategy \(\psi^{n}\) becomes a policy of this MDP. The goal of online learning becomes finding the optimal policy of the MDP to minimize the accumulated expected loss defined w.r.t. \(\tilde{\ell}\). The solution to this MDP will be an optimal online-learned estimation strategy for dynamic inference.
### Solution via dynamic programming
#### 4.2.1 Optimal online-learned estimation strategy
From the theory of MDP it is known that the optimal policy for the MDP in (32), namely the optimal online-learned estimation strategy, can be found via dynamic programming. To derive the optimal estimators, define the functions \(Q_{i}^{*}:\Delta\times\mathsf{X}\times\widehat{\mathsf{Y}}\to\mathbb{R}\) and \(V_{i}^{*}:\Delta\times\mathsf{X}\to\mathbb{R}\) for online learning recursively for \(i=n,\ldots,1\) as \(Q_{n}^{*}(\pi,x,\hat{y})\triangleq\tilde{\ell}(\pi,x,\hat{y})\), and
\[V_{i}^{*}(\pi,x) \triangleq\min_{\hat{y}\in\widehat{\mathsf{Y}}}Q_{i}^{*}(\pi,x, \hat{y}),\quad i=n,\ldots,1 \tag{33}\] \[Q_{i}^{*}(\pi,x,\hat{y}) \triangleq\tilde{\ell}(\pi,x,\hat{y})+\mathbb{E}[V_{i+1}^{*}(\pi _{i+1},X_{i+1})|\pi_{i}=\pi,X_{i}=x,\widehat{Y}_{i}=\hat{y}],\,i=n-1,\ldots,1 \tag{34}\]
with \(\tilde{\ell}\) as defined in (8), and the conditional expectation in (34) is taken w.r.t. \((\pi_{i+1},X_{i+1})\). The optimal online-learned estimate to make in the \(i\)th round when \(\pi_{i}=\pi\) and \(X_{i}=x\) is then
\[\psi_{i}^{*}(\pi,x)\triangleq\operatorname*{arg\,min}_{\hat{y}\in\widehat{ \mathsf{Y}}}Q_{i}^{*}(\pi,x,\hat{y}). \tag{35}\]
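Unlike the offline case, the belief now changes along the trajectory, so (33)-(35) run over the augmented state \((\pi,x)\). For tiny finite models one can evaluate \(V_{i}^{*}\) by direct recursive lookahead over the reachable beliefs; the following sketch (ours, exponential in the horizon) is meant only to make the recursion concrete:

```python
import numpy as np

def V_star(i, pi, x, n, P_Y, loss, K):
    """Recursions (33)-(35) by brute-force lookahead over reachable beliefs.

    Exponential in the horizon, so only for tiny toy problems. pi is a
    discrete belief over W; P_Y[w, x, y] = P_{Y|X,W}, loss[x, y, yhat] = ell,
    K[x, yhat, x'] = K_{X_{i+1}|X_i, Yhat_i}; rounds are 0-indexed.
    """
    nY, nYhat = P_Y.shape[2], loss.shape[2]
    ltilde = np.einsum('w,wy,yh->h', pi, P_Y[:, x, :], loss[x])  # eq. (25)
    if i == n - 1:
        return ltilde.min()                       # V_n* = min_yhat Q_n*
    best = np.inf
    for yhat in range(nYhat):
        cont = 0.0
        for y in range(nY):                       # Y_i is revealed afterwards
            p_y = pi @ P_Y[:, x, y]               # predictive probability of y
            post = pi * P_Y[:, x, y]              # Bayes update f(pi, x, y)
            pi_next = post / post.sum()
            for x_next in range(K.shape[2]):
                cont += p_y * K[x, yhat, x_next] * \
                        V_star(i + 1, pi_next, x_next, n, P_Y, loss, K)
        best = min(best, ltilde[yhat] + cont)     # eqs. (33)-(34)
    return best
```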
#### 4.2.2 Minimum inference loss and loss-to-go
For any online-learned estimation strategy \(\psi^{n}\), we can define its loss-to-go in the \(i\)th round of estimation when \(\pi_{i}=\pi\) and \(X_{i}=x\) as
\[V_{i}(\pi,x;\psi^{n})\triangleq\mathbb{E}\Big{[}\sum_{j=i}^{n} \ell(X_{j},Y_{j},\widehat{Y}_{j})\Big{|}\pi_{i}=\pi,X_{i}=x\Big{]}, \tag{36}\]
which is the conditional expected loss accumulated from the \(i\)th round to the final round when \((\psi_{i},\ldots,\psi_{n})\) are used as the learned estimators, given that in the \(i\)th round the posterior distribution of the kernel parameter \(W\) given the past observations \(Z^{i-1}\) is \(\pi\) and the observation \(X_{i}\) is \(x\). The following theorem states that the online-learned estimation strategy \((\psi_{1}^{*},\ldots,\psi_{n}^{*})\) derived from dynamic programming not only achieves the minimum inference loss over the \(n\) rounds, but also achieves the minimum loss-to-go in each round with any past and current observations in that round.
**Theorem 6**.: _The online-learned estimators \((\psi_{1}^{*},\ldots,\psi_{n}^{*})\) defined in (35) according to the recursion in (33) and (34) constitute an optimal online-learned estimation strategy for dynamic inference, which achieves the minimum in (23). Moreover, for any Markov online-learned estimation strategy \(\psi^{n}\), with \(\psi_{i}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), its loss-to-go satisfies_
\[V_{i}(\pi,x;\psi^{n})\geq V_{i}^{*}(\pi,x) \tag{37}\]
_for all \(\pi\in\Delta\), \(x\in\mathsf{X}\) and \(i=1,\ldots,n\), where the equality holds if \(\psi_{j}(\pi,x)=\psi_{j}^{*}(\pi,x)\) for all \(\pi\in\Delta\), \(x\in\mathsf{X}\) and \(j\geq i\)._
The proof is given in Appendix N. A consequence of Theorem 6 is that in online learning for dynamic inference, the minimum expected loss accumulated from the \(i\)th round to the final round can be expressed in terms of \(V_{i}^{*}\), as stated in the following corollary.
**Corollary 2**.: _In online learning for dynamic inference, for any \(i\) and any initial distribution \(P_{X_{i}}\),_
\[\min_{\psi_{i},\ldots,\psi_{n}}\ \mathbb{E}\Big{[}\sum_{j=i}^{n}\ell(X_{j},Y_{j}, \widehat{Y}_{j})\Big{]}=\mathbb{E}[V_{i}^{*}(\pi_{i},X_{i})], \tag{38}\]
_and the minimum is achieved by the estimators \((\psi_{i}^{*},\ldots,\psi_{n}^{*})\) defined in (35)._
## Appendix A Proof of Theorem 1
For each \(i=1,\ldots,n\), we have
\[\mathbb{E}\big{[}\ell(X_{i},Y_{i},\widehat{Y}_{i})|Z^{m},X^{i}, \widehat{Y}^{i-1}\big{]}\] \[= \int_{\mathsf{Y}}P_{Y_{i}|Z^{m},X^{i},\widehat{Y}^{i-1}}(\mathrm{ d}y)\ell(X_{i},y,\widehat{Y}_{i}) \tag{39}\] \[= \int_{\mathsf{W}}\int_{\mathsf{Y}}P_{W|Z^{m},X^{i},\widehat{Y}^{ i-1}}(\mathrm{d}w)P_{Y_{i}|Z^{m},X^{i},\widehat{Y}^{i-1},W=w}(\mathrm{d}y)\ell(X_{i},y, \widehat{Y}_{i})\] (40) \[= \int_{\mathsf{W}}\int_{\mathsf{Y}}\pi_{m}(\mathrm{d}w)P_{Y|X,W}( \mathrm{d}y|X_{i},w)\ell(X_{i},y,\widehat{Y}_{i})\] (41) \[= \tilde{\ell}(\pi_{m},X_{i},\widehat{Y}_{i}), \tag{42}\]
where (39) is due to the fact that \(X_{i}\) and \(\widehat{Y}_{i}\) are determined by \((Z^{m},X^{i},\widehat{Y}^{i-1})\); and (41) follows from the fact that \(W\) is conditionally independent of \((X^{i},\widehat{Y}^{i-1})\) given \(Z^{m}\) as stated in Lemma 1, and the fact that \(Y_{i}\) is conditionally independent of \((Z^{m},X^{i-1},\widehat{Y}^{i-1})\) given \((X_{i},W)\). With the above equality and the fact that
\[\mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i})\Big{]}=\sum_{ i=1}^{n}\mathbb{E}\big{[}\mathbb{E}[\ell(X_{i},Y_{i},\widehat{Y}_{i})|Z^{m},X^{i}, \widehat{Y}^{i-1}]\big{]}, \tag{43}\]
we obtain (7).
## Appendix B Proof of Lemma 2
For any offline-learned estimation strategy \(\psi_{m}^{n}\), any Borel sets \(A\subset\Delta\) and \(B\subset\mathsf{X}\), and any realization of \((\pi_{m},X^{i},\widehat{Y}^{i})\),
\[\mathbb{P}\big{[}(\pi_{m},X_{i+1})\in A\times B\big{|}\pi_{m},X^{i},\widehat{Y}^{i}\big{]} =\mathbb{P}\big{[}\pi_{m}\in A\big{|}\pi_{m}\big{]}\mathbb{P}\big{[} X_{i+1}\in B|\pi_{m},X^{i},\widehat{Y}^{i}\big{]} \tag{44}\] \[=\mathbf{1}\{\pi_{m}\in A\}\mathbb{P}\big{[}X_{i+1}\in B|X_{i}, \widehat{Y}_{i}\big{]} \tag{45}\]
where the second equality is due to the fact that \(X_{i+1}\) is conditionally independent of \((\pi_{m},X^{i-1},\widehat{Y}^{i-1})\) given \((X_{i},\widehat{Y}_{i})\). This proves the claim, and we can see that the right side of (11) only depends on \((\pi_{m},X_{i},\widehat{Y}_{i})\).
## Appendix C Proof of Lemma 3
The left side of (12) is the Bayes risk of estimating \(f(X)\) based on \(X\), defined w.r.t. the loss function \(\ell\), which can be written as \(R_{\ell}(f(X)|X)\); while the right side of (12) is the Bayes risk of estimating \(f(X)\) based on \(f(X)\) itself, also defined w.r.t. the loss function \(\ell\), which can be written as \(R_{\ell}(f(X)|f(X))\). It follows from a data processing inequality of the generalized conditional entropy that
\[R_{\ell}(f(X)|X)\leq R_{\ell}(f(X)|f(X)), \tag{46}\]
as \(f(X)-X-f(X)\) form a Markov chain. It follows from the same data processing inequality that
\[R_{\ell}(f(X)|X)\geq R_{\ell}(f(X)|f(X)), \tag{47}\]
as \(X-f(X)-f(X)\) also form a Markov chain. Hence \(R_{\ell}(f(X)|X)=R_{\ell}(f(X)|f(X))\), which proves the claim.
## Appendix D Proof of Lemma 4
The inference loss of \(\psi_{m}^{n}\) can be written as
\[J(\psi_{m}^{n})=\mathbb{E}\Big{[}\sum_{i=1}^{n-1}\tilde{\ell} \big{(}(\pi_{m},X_{i}),\widehat{Y}_{i}\big{)}\Big{]}+\mathbb{E}\big{[}\tilde{ \ell}\big{(}(\pi_{m},X_{n}),\psi_{m,n}(Z^{m},X^{n},\widehat{Y}^{n-1})\big{)} \big{]}. \tag{48}\]
Since the first expectation in (48) does not depend on \(\psi_{m,n}\), it suffices to show that there exists a learned estimator \(\bar{\psi}_{m,n}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that
\[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{n}),\bar{\psi}_{ m,n}(\pi_{m},X_{n})\big{)}\big{]}\leq\mathbb{E}\big{[}\tilde{\ell}\big{(}( \pi_{m},X_{n}),\psi_{m,n}(Z^{m},X^{n},\widehat{Y}^{n-1})\big{)}\big{]}. \tag{49}\]
The existence of such an estimator is guaranteed by Lemma 3, as \((\pi_{m},X_{n})\) is a function of \((Z^{m},X^{n},\widehat{Y}^{n-1})\).
## Appendix E Proof of Lemma 5
The inference loss of the given \((\psi_{m,1},\ldots,\psi_{m,i-1},\bar{\psi}_{m,i})\) is
\[J(\psi_{m,1},\ldots,\psi_{m,i-1},\bar{\psi}_{m,i})= \mathbb{E}\Big{[}\sum_{j=1}^{i-2}\tilde{\ell}((\pi_{m},X_{j}), \widehat{Y}_{j})\Big{]}+\] \[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i-1}),\widehat{Y }_{i-1})\big{]}+\] \[\mathbb{E}\big{[}\tilde{\ell}((\pi_{m},X_{i}),\bar{\psi}_{m,i}( \pi_{m},X_{i}))\big{]}. \tag{50}\]
Since the first expectation in (50) does not depend on \(\psi_{m,i-1}\), it suffices to show that there exists a learned estimator \(\bar{\psi}_{m,i-1}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that
\[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i-1}),\bar{\psi}_ {m,i-1}(\pi_{m},X_{i-1})\big{)}\big{]}+\mathbb{E}\big{[}\tilde{\ell}\big{(}( \pi_{m},\bar{X}_{i}),\bar{\psi}_{m,i}(\pi_{m},\bar{X}_{i})\big{)}\big{]}\] \[\leq \mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i-1}),\widehat{Y }_{i-1}\big{)}\big{]}+\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i}), \bar{\psi}_{m,i}(\pi_{m},X_{i})\big{)}\big{]}, \tag{51}\]
where \(\bar{X}_{i}\) on the left side is the observation in the \(i\)th round when the Markov offline-learned estimator \(\bar{\psi}_{m,i-1}\) is used in the \((i-1)\)th round. To get around with the dependence of \(X_{i}\) on \(\psi_{m,i-1}\), we write the second expectation on the right side of (51) as
\[\mathbb{E}\big{[}\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i}),\bar{\psi }_{m,i}(\pi_{m},X_{i})\big{)}\big{|}\pi_{m},X_{i-1},\widehat{Y}_{i-1}\big{]} \big{]} \tag{52}\]
and notice that the conditional expectation \(\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i}),\bar{\psi}_{m,i}(\pi_{m},X_{i})\big{)}\big{|}\pi_{m},X_{i-1},\widehat{Y}_{i-1}\big{]}\) does not depend on \(\psi_{m,i-1}\). This is because the conditional distribution of \((\pi_{m},X_{i})\) given \((\pi_{m},X_{i-1},\widehat{Y}_{i-1})\) is solely determined by the probability transition kernel \(P_{X_{i}|X_{i-1},\widehat{Y}_{i-1}}\), as shown in the proof of Lemma 2 stating that \((\pi_{m},X_{i})_{i=1}^{n}\) is a controlled Markov chain with \(\widehat{Y}^{n}\) as the control sequence. It follows that the right side of (51) can be written as
\[\mathbb{E}\Big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i-1}),\widehat{Y }_{i-1}\big{)}+\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i}),\bar{\psi} _{m,i}(\pi_{m},X_{i})\big{)}\big{|}\pi_{m},X_{i-1},\widehat{Y}_{i-1}\big{]} \Big{]}\] \[= \mathbb{E}\big{[}g(\pi_{m},X_{i-1},\widehat{Y}_{i-1})\big{]} \tag{53}\] \[= \mathbb{E}\big{[}g(\pi_{m},X_{i-1},\psi_{m,i-1}(Z^{m},X^{i-1}, \widehat{Y}^{i-2}))\big{]} \tag{54}\]
for a function \(g\) that does not depend on \(\psi_{m,i-1}\). Since \((\pi_{m},X_{i-1})\) is a function of \((Z^{m},X^{i-1},\widehat{Y}^{i-2})\), it follows from Lemma 3 that there exists a learned estimator \(\bar{\psi}_{m,i-1}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that
\[\mathbb{E}\big{[}g\big{(}\pi_{m},X_{i-1},\psi_{m,i-1}(Z^{m},X^{i-1},\widehat{Y}^{i-2})\big{)}\big{]} \tag{55}\] \[\geq \mathbb{E}\big{[}g\big{(}\pi_{m},X_{i-1},\bar{\psi}_{m,i-1}(\pi_ {m},X_{i-1})\big{)}\big{]}\] (56) \[= \mathbb{E}\Big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i-1}),\bar{\psi} _{m,i-1}(\pi_{m},X_{i-1})\big{)}+\] \[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},\bar{X}_{i}),\bar{ \psi}_{m,i}(\pi_{m},\bar{X}_{i})\big{)}\big{|}\pi_{m},X_{i-1},\bar{\psi}_{m,i-1} (\pi_{m},X_{i-1})\big{]}\Big{]}\] (57) \[= \mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{m},X_{i-1}),\bar{\psi} _{m,i-1}(\pi_{m},X_{i-1})\big{)}\big{]}+\mathbb{E}\big{[}\tilde{\ell}\big{(}( \pi_{m},\bar{X}_{i}),\bar{\psi}_{m,i}(\pi_{m},\bar{X}_{i})\big{)}\big{]}, \tag{58}\]
which proves (51) and the claim.
## Appendix F Proof of Theorem 2
Picking an optimal offline-learned estimation strategy \(\psi_{m}^{n}\), we can first replace its last estimator by a Markov one that preserves the optimality of the strategy, which is guaranteed by Lemma 4. Then, for \(i=n,\ldots,2\), we can repeatedly replace the \((i-1)\)th estimator by a Markov one that preserves the optimality of the previous strategy, which is guaranteed by Lemma 5 and the additive structure of the inference loss as in (9). Finally we obtain an offline-learned estimation strategy consisting of Markov estimators that achieves the same inference loss as the originally picked offline-learned estimation strategy.
## Appendix G Proof of Theorem 3
The first claim stating that the offline-learned estimation strategy \((\psi_{m,1}^{*},\ldots,\psi_{m,n}^{*})\) achieves the minimum in (6) follows from the equivalence between (6) and the MDP in (16), and from the well-known optimality of the solution derived from dynamic programming for the MDP.
The second claim can be proved via backward induction. Consider an arbitrary Markov offline-learned estimation strategy \(\psi_{m}^{n}\) with \(\psi_{m,i}:\Delta\times\mathsf{X}\rightarrow\mathsf{Y}\), based on which the learned estimates during inference are made.
* In the final round, for all \(\pi\in\Delta\) and \(x\in\mathsf{X}\), \[V_{m,n}(\pi,x;\psi_{m}^{n}) =\tilde{\ell}(\pi,x,\psi_{m,n}(\pi,x))\] (59) \[\geq V_{m,n}^{*}(\pi,x),\] (60) where (59) is due to the definitions of \(V_{m,n}\) in (20) and \(\tilde{\ell}\) in (8); and (60) is due to the definition of \(V_{m,n}^{*}\) in (17), while the equality holds if \(\psi_{m,n}(\pi,x)=\psi_{m,n}^{*}(\pi,x)\).
* For \(i=n-1,\ldots,1\), suppose (21) holds in the \((i+1)\)th round. We first show a self-recursive expression of \(V_{m,i}(\pi,x;\psi_{m}^{n})\): \[V_{m,i}(\pi,x;\psi_{m}^{n}) =\mathbb{E}\Big{[}\sum_{j=i}^{n}\ell(X_{j},Y_{j},\widehat{Y}_{j} )\Big{|}\pi_{m}=\pi,X_{i}=x\Big{]}\] (61) \[=\mathbb{E}[\ell(X_{i},Y_{i},\widehat{Y}_{i})|\pi_{m}=\pi,X_{i}=x ]+\mathbb{E}\Big{[}\sum_{j=i+1}^{n}\ell(X_{j},Y_{j},\widehat{Y}_{j})\Big{|} \pi_{m}=\pi,X_{i}=x\Big{]}\] (62) \[=\mathbb{E}\big{[}\mathbb{E}[\ell(X_{i},Y_{i},\widehat{Y}_{i})| \widehat{Y}_{i},\pi_{m}=\pi,X_{i}=x]\big{|}\pi_{m}=\pi,X_{i}=x\big{]}+\] \[\quad\mathbb{E}\bigg{[}\sum_{j=i+1}^{n}\ell(X_{j},Y_{j},\widehat {Y}_{j})\Big{|}X_{i+1},\pi_{m}=\pi,X_{i}=x\Big{]}\bigg{|}\pi_{m}=\pi,X_{i}=x \bigg{]}\] (63) \[=\mathbb{E}[\tilde{\ell}(\pi,x,\widehat{Y}_{i})\big{|}\pi_{m}= \pi,X_{i}=x\big{]}+\] \[\quad\mathbb{E}\bigg{[}\mathbb{E}\Big{[}\sum_{j=i+1}^{n}\ell(X_{ j},Y_{j},\widehat{Y}_{j})\Big{|}\pi_{m}=\pi,X_{i+1}\Big{]}\bigg{|}\pi_{m}=\pi,X_{i}=x \bigg{]}\] (64) \[=\tilde{\ell}(\pi,x,\psi_{m,i}(\pi,x))+\mathbb{E}\big{[}V_{m,i+1 }(\pi,X_{i+1};\psi_{m}^{n})|\pi_{m}=\pi,X_{i}=x\big{]}\] (65)
where the second term of (64) follows from the fact that \(X_{i}\) is conditionally independent of \((X_{i+1}^{n},Y_{i+1}^{n},\widehat{Y}_{i+1}^{n})\) given \((\pi_{m},X_{i+1})\), which is a consequence of the assumption that the offline-learned estimators are Markov and the specification of the joint distribution of \((Z^{m},X^{n},Y^{n},\widehat{Y}^{n})\) in the setup of the offline learning problem, and can be seen from Fig. 2. Then,
\[V_{m,i}(\pi,x;\psi_{m}^{n}) \geq\tilde{\ell}(\pi,x,\psi_{m,i}(\pi,x))+\mathbb{E}\big{[}V_{m,i +1}^{*}(\pi,X_{i+1})|\pi_{m}=\pi,X_{i}=x\big{]} \tag{66}\] \[=\tilde{\ell}(\pi,x,\psi_{m,i}(\pi,x))+\mathbb{E}\big{[}V_{m,i+1} ^{*}(\pi,X_{i+1})|\pi_{m}=\pi,X_{i}=x,\widehat{Y}_{i}=\psi_{m,i}(\pi,x)\big{]}\] (67) \[=\tilde{\ell}(\pi,x,\psi_{m,i}(\pi,x))+\mathbb{E}\big{[}V_{m,i+1} ^{*}(\pi,X_{i+1})|X_{i}=x,\widehat{Y}_{i}=\psi_{m,i}(\pi,x)\big{]}\] (68) \[=Q_{m,i}^{*}(\pi,x,\psi_{m,i}(\pi,x))\] (69) \[\geq V_{m,i}^{*}(\pi,x) \tag{70}\]
where (66) follows from the inductive assumption; (67) follows from the fact that \(\widehat{Y}_{i}\) is determined given \(\pi_{m}=\pi\) and \(X_{i}=x\); (68) follows from the fact that \(X_{i+1}\) is independent of \(\pi_{m}\) given \((X_{i},\widehat{Y}_{i})\); and the final inequality with the equality condition follow from the definitions of \(V_{m,i}^{*}\) and \(\psi_{m,i}^{*}\) in (17) and (19).
This proves the second claim.
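To make the dynamic-programming recursion behind this proof concrete, here is a minimal numerical sketch (in Python/NumPy, our own illustration): the belief-state pairs \((\pi,x)\) are abstracted into a finite set of discretized states, and the `loss` and `kernel` arrays are illustrative stand-ins for \(\tilde{\ell}\) and the controlled transition kernel, not quantities from the paper.

```python
import numpy as np

def backward_induction(loss, kernel, n):
    """Finite-horizon dynamic programming in the spirit of (17)-(19).

    loss   : (S, A) array, per-round loss of choosing estimate a in state s
    kernel : (S, A, S) array, kernel[s, a, s'] = P[next state s' | s, a]
    n      : horizon (number of rounds)
    Returns the loss-to-go functions V[i] and the greedy Markov policy psi[i].
    """
    S, A = loss.shape
    V = np.zeros((n + 1, S))            # V[n] = 0: no loss after the last round
    psi = np.zeros((n, S), dtype=int)
    for i in range(n - 1, -1, -1):      # backward in time
        Q = loss + kernel @ V[i + 1]    # (S, A): per-round loss + E[V_{i+1}]
        psi[i] = np.argmin(Q, axis=1)   # greedy Markov estimator at round i
        V[i] = Q[np.arange(S), psi[i]]  # V[i, s] = min_a Q[s, a]
    return V, psi

# Toy instance: 4 discretized states, 3 candidate estimates, horizon 5.
rng = np.random.default_rng(0)
S, A, n = 4, 3, 5
loss = rng.random((S, A))
kernel = rng.dirichlet(np.ones(S), size=(S, A))   # each (s, a) row sums to 1
V, psi = backward_induction(loss, kernel, n)
print(V[0])   # optimal loss-to-go from the first round, for each state
```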
## Appendix H Proof of Theorem 4
For each \(i=1,\ldots,n\), we have
\[\mathbb{E}\big{[}\ell(X_{i},Y_{i},\widehat{Y}_{i})\big{|}Z^{i-1},\widehat{Y}^{i-1},X_{i}\big{]}\] \[= \int_{\mathsf{Y}}P_{Y_{i}|Z^{i-1},\widehat{Y}^{i-1},X_{i}}(\mathrm{d}y)\ell(X_{i},y,\widehat{Y}_{i}) \tag{71}\] \[= \int_{\mathsf{W}}\int_{\mathsf{Y}}P_{W|Z^{i-1},\widehat{Y}^{i-1},X_{i}}(\mathrm{d}w)P_{Y_{i}|Z^{i-1},\widehat{Y}^{i-1},X_{i},W=w}(\mathrm{d}y)\ell(X_{i},y,\widehat{Y}_{i})\] (72) \[= \int_{\mathsf{W}}\int_{\mathsf{Y}}\pi_{i}(\mathrm{d}w)P_{Y|X,W}(\mathrm{d}y|X_{i},w)\ell(X_{i},y,\widehat{Y}_{i})\] (73) \[= \tilde{\ell}(\pi_{i},X_{i},\widehat{Y}_{i}), \tag{74}\]
where (71) is due to the fact that \(X_{i}\) and \(\widehat{Y}_{i}\) are determined by \((Z^{i-1},\widehat{Y}^{i-1},X_{i})\); and (73) follows from the fact that \(W\) is conditionally independent of \((\widehat{Y}^{i-1},X_{i})\) given \(Z^{i-1}\) as a consequence of Lemma 6, and the fact that \(Y_{i}\) is conditionally independent of \((Z^{i-1},\widehat{Y}^{i-1})\) given \((X_{i},W)\). With the above equality and the fact that
\[\mathbb{E}\Big{[}\sum_{i=1}^{n}\ell(X_{i},Y_{i},\widehat{Y}_{i})\Big{]}=\sum_ {i=1}^{n}\mathbb{E}\big{[}\mathbb{E}[\ell(X_{i},Y_{i},\widehat{Y}_{i})|Z^{i-1},\widehat{Y}^{i-1},X_{i}]\big{]}, \tag{75}\]
we obtain (24).
## Appendix I Proof of Lemma 7
We first show that \(\pi_{i+1}\) can be determined by \((\pi_{i},X_{i},Y_{i})\). To see it, we express \(\pi_{i+1}\) as
\[P_{W|Z^{i}} =P_{W,Z_{i}|Z^{i-1}}/P_{Z_{i}|Z^{i-1}} \tag{76}\] \[=P_{W|Z^{i-1}}P_{X_{i}|W,Z^{i-1}}P_{Y_{i}|X_{i},W,Z^{i-1}}/P_{Z_{i} |Z^{i-1}}\] (77) \[=\pi_{i}P_{X_{i}|X_{i-1},\widehat{Y}_{i-1}}P_{Y_{i}|X_{i},W}/P_{Z_ {i}|Z^{i-1}}\] (78) \[=\frac{\pi_{i}P_{Y_{i}|X_{i},W}}{\int_{\mathsf{W}}\pi_{i}(\mathrm{ d}w^{\prime})P_{Y_{i}|X_{i},W=w^{\prime}}} \tag{79}\]
where (78) follows from the facts that 1) \(\widehat{Y}_{i-1}\) is determined by \(Z^{i-1}\), and \(X_{i}\) is conditionally independent of \((W,Z^{i-2},Y_{i-1})\) given \((X_{i-1},\widehat{Y}_{i-1})\); and 2) \(Y_{i}\) is conditionally independent of \(Z^{i-1}\) given \((X_{i},W)\). It follows that \(\pi_{i+1}\) can be written as
\[\pi_{i+1}=f(\pi_{i},X_{i},Y_{i}) \tag{80}\]
for a function \(f\) that maps \(\big{(}\pi_{i}(\cdot),X_{i},Y_{i}\big{)}\) to \(\pi_{i+1}(\cdot)\propto\pi_{i}(\cdot)P_{Y|X,W}(Y_{i}|X_{i},\cdot)\).
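As a concrete illustration of the update map \(f\) in (79)-(80), the following minimal sketch (our own, assuming finite alphabets for \(W\), \(X\), and \(Y\); the array layout is an assumption) multiplies the current belief by the likelihood and renormalizes:

```python
import numpy as np

def belief_update(pi, likelihood, x_i, y_i):
    """pi_{i+1} = f(pi_i, X_i, Y_i): multiply the current belief pointwise by
    the likelihood P_{Y|X,W}(y_i | x_i, w) and renormalize, as in (79).

    pi         : (|W|,) current belief pi_i over the unknown parameter W
    likelihood : (|X|, |Y|, |W|) array, likelihood[x, y, w] = P_{Y|X,W}(y | x, w)
    """
    unnormalized = pi * likelihood[x_i, y_i, :]
    return unnormalized / unnormalized.sum()

# Toy run: two candidate values of W, binary X and Y.
lik = np.empty((2, 2, 2))
lik[:, 1, 0] = [0.2, 0.7]         # P(Y=1 | X=x, W=0) for x = 0, 1
lik[:, 1, 1] = [0.8, 0.4]         # P(Y=1 | X=x, W=1)
lik[:, 0, :] = 1.0 - lik[:, 1, :]
pi = np.array([0.5, 0.5])         # pi_1: uniform prior over W
for x, y in [(0, 1), (1, 1), (0, 0)]:
    pi = belief_update(pi, lik, x, y)
print(pi, pi.sum())               # posterior after three rounds; sums to 1
```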
With (80), for any online-learned estimation strategy \(\psi^{n}\), any Borel sets \(A\subset\Delta\) and \(B\subset\mathsf{X}\), and any realization of \((\pi^{i},X^{i},\widehat{Y}^{i})\), we have
\[\mathbb{P}\big{[}(\pi_{i+1},X_{i+1})\in A\times B|\pi^{i},X^{i}, \widehat{Y}^{i}\big{]}\] \[= \int_{\mathsf{Y}}\mathbb{P}\big{[}\mathrm{d}y_{i}|\pi^{i},X^{i}, \widehat{Y}^{i}\big{]}\mathbb{P}\big{[}(\pi_{i+1},X_{i+1})\in A\times B|\pi^{ i},X^{i},\widehat{Y}^{i},Y_{i}=y_{i}\big{]} \tag{81}\] \[= \int_{\mathsf{Y}}\mathbb{P}\big{[}\mathrm{d}y_{i}|\pi^{i},X^{i}, \widehat{Y}^{i}\big{]}\mathbb{P}\big{[}f(\pi_{i},X_{i},y_{i})\in A]\mathbb{P} \big{[}X_{i+1}\in B|X_{i},\widehat{Y}_{i}\big{]}\] (82) \[= \int_{\mathsf{Y}}\int_{\mathsf{W}}\mathbb{P}\big{[}\mathrm{d}w| \pi^{i},X^{i},\widehat{Y}^{i}\big{]}\mathbb{P}\big{[}\mathrm{d}y_{i}|\pi^{i},X ^{i},\widehat{Y}^{i},W=w\big{]}\mathbb{P}\big{[}f(\pi_{i},X_{i},y_{i})\in A] \mathbb{P}\big{[}X_{i+1}\in B|X_{i},\widehat{Y}_{i}\big{]}\] (83) \[= \int_{\mathsf{Y}}\int_{\mathsf{W}}\pi_{i}(\mathrm{d}w)P_{Y|X,W}( \mathrm{d}y_{i}|X_{i},w)\mathbb{P}[f(\pi_{i},X_{i},y_{i})\in A]\mathbb{P} \big{[}X_{i+1}\in B|X_{i},\widehat{Y}_{i}\big{]}, \tag{84}\]
where (82) follows from (80) and the fact that \(X_{i+1}\) is conditionally independent of \((Z^{i-1},Y_{i},\widehat{Y}^{i-1})\) given \((X_{i},\widehat{Y}_{i})\); and (84) follows from 1) Lemma 8 and the fact that \(W\) is conditionally independent of \((Z^{i-1},X_{i},\widehat{Y}^{i})\) given \(Z^{i-1}\), as a consequence of Lemma 6, and 2) the fact that \(Y_{i}\) is conditionally independent of \((Z^{i-1},\widehat{Y}^{i})\) given \((X_{i},W)\).
This proves Lemma 7, and we see that the right side of (28) depends only on \((\pi_{i},X_{i},\widehat{Y}_{i})\).
## Appendix J Proof of Lemma 8
Given a probability distribution \(p\) on \(\mathsf{V}\), let \(\mathsf{U}_{p}\triangleq\{u\in\mathsf{U}:P_{V|U}(\cdot|u)=p\}\). Then, for any Borel sets \(A\subset\mathsf{V}\) and \(B\subset\mathsf{T}\),
\[\mathbb{P}\big{[}V\in A|P_{V|U}(\cdot|U)=p,T\in B\big{]} =\frac{\mathbb{P}\big{[}V\in A,P_{V|U}(\cdot|U)=p,T\in B\big{]}}{ \mathbb{P}\big{[}P_{V|U}(\cdot|U)=p,T\in B\big{]}} \tag{85}\] \[=\frac{\int_{\mathsf{U}_{p}}P_{U}(\mathrm{d}u)P_{V|U}(A|u)P_{T|U}( B|u)}{\int_{\mathsf{U}_{p}}P_{U}(\mathrm{d}u)P_{T|U}(B|u)}\] (86) \[=p(A), \tag{87}\]
where (86) follows from the definition of \(\mathsf{U}_{p}\) and the assumption that \(T\) and \(V\) are conditionally independent given \(U\); and (87) follows from the fact that \(P_{V|U}(A|u)=p(A)\) for all \(u\in\mathsf{U}_{p}\).
## Appendix K Proof of Lemma 9
The inference loss of \(\psi^{n}\) can be written as
\[J(\psi^{n})=\mathbb{E}\Big{[}\sum_{i=1}^{n-1}\tilde{\ell}((\pi_{i},X_{i}), \widehat{Y}_{i})\Big{]}+\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{n},X_{n}), \psi_{n}(Z^{n-1},\widehat{Y}^{n-1},X_{n})\big{)}\big{]}. \tag{88}\]
Since the first expectation in (88) does not depend on \(\psi_{n}\), it suffices to show that there exists a Markov online-learned estimator \(\bar{\psi}_{n}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that
\[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{n},X_{n}),\bar{\psi}_{n}(\pi_{n},X_{ n})\big{)}\big{]}\leq\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{n},X_{n}),\psi_{n} (Z^{n-1},\widehat{Y}^{n-1},X_{n})\big{)}\big{]}. \tag{89}\]
The existence of such an estimator is guaranteed by Lemma 3, as \((\pi_{n},X_{n})\) is a function of \((Z^{n-1},\widehat{Y}^{n-1},X_{n})\).
## Appendix L Proof of Lemma 10
The inference loss of the given \((\psi_{1},\ldots,\psi_{i-1},\bar{\psi}_{i})\) is
\[J(\psi_{1},\ldots,\psi_{i-1},\bar{\psi}_{i})=\mathbb{E}\Big{[} \sum_{j=1}^{i-2}\tilde{\ell}((\pi_{j},X_{j}),\widehat{Y}_{j})\Big{]}+\] \[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i-1},X_{i-1}),\widehat {Y}_{i-1}\big{)}\big{]}+\] \[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i},X_{i}),\bar{\psi}_{ i}(\pi_{i},X_{i})\big{)}\big{]}. \tag{90}\]
Since the first expectation in (90) does not depend on \(\psi_{i-1}\), it suffices to show that there exists a Markov online-learned estimator \(\bar{\psi}_{i-1}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that
\[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i-1},X_{i-1}),\bar{ \psi}_{i-1}(\pi_{i-1},X_{i-1})\big{)}\big{]}+\mathbb{E}\big{[}\tilde{\ell} \big{(}(\pi_{i},\bar{X}_{i}),\bar{\psi}_{i}(\pi_{i},\bar{X}_{i})\big{)}\big{]}\] \[\leq \mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i-1},X_{i-1}),\widehat {Y}_{i-1}\big{)}\big{]}+\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i},X_{i}), \bar{\psi}_{i}(\pi_{i},X_{i})\big{)}\big{]}, \tag{91}\]
where \(\bar{X}_{i}\) on the left side is the observation in the \(i\)th round when the Markov estimator \(\bar{\psi}_{i-1}\) is used in the \((i-1)\)th round. To get around the dependence of \(X_{i}\) on \(\psi_{i-1}\), we write the second expectation on the right side of (91) as
\[\mathbb{E}\big{[}\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i},X_{i}),\bar{\psi }_{i}(\pi_{i},X_{i})\big{)}\big{|}\pi_{i-1},X_{i-1},\widehat{Y}_{i-1}\big{]} \big{]} \tag{92}\]
and notice that the conditional expectation \(\mathbb{E}[\tilde{\ell}\big{(}(\pi_{i},X_{i}),\bar{\psi}_{i}(\pi_{i},X_{i})\big{)}\big{|}\pi_{i-1},X_{i-1},\widehat{Y}_{i-1}\big{]}\) does not depend on \(\psi_{i-1}\). This is because the conditional distribution of \((\pi_{i},X_{i})\) given \((\pi_{i-1},X_{i-1},\widehat{Y}_{i-1})\) is solely determined by the probability transition kernels \(P_{Y_{i-1}|X_{i-1},W}\) and \(P_{X_{i}|X_{i-1},\widehat{Y}_{i-1}}\), as shown in the proof of Lemma 7 stating that \((\pi_{i},X_{i})_{i=1}^{n}\) is a controlled Markov chain driven by \(\widehat{Y}^{n}\). It follows
that the right side of (91) can be written as
\[\mathbb{E}\Big{[}\tilde{\ell}\big{(}(\pi_{i-1},X_{i-1}),\widehat{Y}_ {i-1}\big{)}+\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i},X_{i}),\bar{\psi}_{i}( \pi_{i},X_{i})\big{)}|\pi_{i-1},X_{i-1},\widehat{Y}_{i-1}\big{]}\Big{]}\] \[= \mathbb{E}\big{[}g\big{(}\pi_{i-1},X_{i-1},\widehat{Y}_{i-1}\big{)} \big{]} \tag{93}\] \[= \mathbb{E}\big{[}g\big{(}\pi_{i-1},X_{i-1},\psi_{i-1}(Z^{i-2}, \widehat{Y}^{i-2},X_{i-1})\big{)}\big{]} \tag{94}\]
for a function \(g\) that does not depend on \(\psi_{i-1}\). Since \((\pi_{i-1},X_{i-1})\) is a function of \((Z^{i-2},\widehat{Y}^{i-2},X_{i-1})\), it follows from Lemma 3 that there exists a learned estimator \(\bar{\psi}_{i-1}:\Delta\times\mathsf{X}\to\widehat{\mathsf{Y}}\), such that
\[\mathbb{E}\big{[}g(\pi_{i-1},X_{i-1},\psi_{i-1}(Z^{i-2},\widehat{ Y}^{i-2},X_{i-1}))\big{]} \tag{95}\] \[\geq \mathbb{E}\big{[}g(\pi_{i-1},X_{i-1},\bar{\psi}_{i-1}(\pi_{i-1}, X_{i-1}))\big{]}\] (96) \[= \mathbb{E}\Big{[}\tilde{\ell}((\pi_{i-1},X_{i-1}),\bar{\psi}_{i- 1}(\pi_{i-1},X_{i-1}))+\] \[\mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i},\bar{X}_{i}),\bar{ \psi}_{i}(\pi_{i},\bar{X}_{i})\big{)}|\pi_{i-1},X_{i-1},\bar{\psi}_{i-1}(\pi_{ i-1},X_{i-1})\big{]}\Big{]}\] (97) \[= \mathbb{E}\big{[}\tilde{\ell}\big{(}(\pi_{i-1},X_{i-1}),\bar{ \psi}_{i-1}(\pi_{i-1},X_{i-1})\big{)}\big{]}+\mathbb{E}\big{[}\tilde{\ell} \big{(}(\pi_{i},\bar{X}_{i}),\bar{\psi}_{i}(\pi_{i},\bar{X}_{i})\big{)}\big{]}, \tag{98}\]
which proves (91) and the claim.
## Appendix M Proof of Theorem 5
Picking an optimal online-learned estimation strategy \(\psi^{n}\), we can first replace its last estimator by a Markov one that preserves the optimality of the strategy, which is guaranteed by Lemma 9. Then, for \(i=n,\dots,2\), we can repeatedly replace the \((i-1)\)th estimator by a Markov one that preserves the optimality of the previous strategy, which is guaranteed by Lemma 10 and the additive structure of the inference loss as in (26). Finally we obtain an online-learned estimation strategy consisting of Markov online-learned estimators that achieves the same inference loss as the originally picked online-learned estimation strategy.
## Appendix N Proof of Theorem 6
The first claim stating that the online-learned estimation strategy \((\psi_{1}^{*},\dots,\psi_{n}^{*})\) achieves the minimum in (23) follows from the equivalence between (23) and the MDP in (32), and from the well-known optimality of the solution derived from dynamic programming for the MDP.
The second claim can be proved via backward induction. Consider an arbitrary Markov online-learned estimation strategy \(\psi^{n}\) with \(\psi_{i}:\Delta\times\mathsf{X}\to\mathsf{Y}\), based on which the learned estimates are made. For any pair \((i,j)\) such that \(1\leq i\leq j\leq n\),
\[\mathbb{E}\big{[}\ell(X_{j},Y_{j},\widehat{Y}_{j})|\pi_{i},X_{i} \big{]}\] \[= \mathbb{E}\big{[}\mathbb{E}[\ell(X_{j},Y_{j},\widehat{Y}_{j})| \pi_{j},X_{j},\pi_{i},X_{i}]|\pi_{i},X_{i}\big{]} \tag{99}\] \[= \mathbb{E}\Big{[}\int_{\mathsf{W}}P(\mathrm{d}w|\pi_{j},X_{j}, \pi_{i},X_{i})\int_{\mathsf{Y}}P(\mathrm{d}y_{j}|\pi_{j},X_{j},\pi_{i},X_{i},W =w)\ell(X_{j},y_{j},\widehat{Y}_{j})\Big{|}\pi_{i},X_{i}\Big{]}\] (100) \[= \mathbb{E}\Big{[}\int_{\mathsf{W}}\int_{\mathsf{Y}}\pi_{j}( \mathrm{d}w)P_{Y|X,W}(\mathrm{d}y_{j}|X_{j},w)\ell(X_{j},y_{j},\widehat{Y}_{j} )\Big{|}\pi_{i},X_{i}\Big{]}\] (101) \[= \mathbb{E}\big{[}\tilde{\ell}(\pi_{j},X_{j},\widehat{Y}_{j})|\pi_ {i},X_{i}\big{]} \tag{102}\]
where (100) follows from the fact that \(\widehat{Y}_{j}\) is determined by \((\pi_{j},X_{j})\); (101) follows from 1) Lemma 8 and the fact that \(W\) is conditionally independent of \((Z^{i-1},X_{i},X_{j})\) given \(Z^{j-1}\), and 2) \(Y_{j}\) is conditionally independent of \(Z^{j-1}\) given \((X_{j},W)\); and (102) follows from the definition of \(\tilde{\ell}\) in (8). With the above identity, the loss-to-go defined in (36) can be rewritten as
\[V_{i}(\pi,x;\psi^{n})=\mathbb{E}\Big{[}\sum_{j=i}^{n}\tilde{\ell}(\pi_{j},X_{j},\widehat{Y}_{j})\Big{|}\pi_{i}=\pi,X_{i}=x\Big{]},\quad i=1,\ldots,n. \tag{103}\]
Now we can proceed with proving the second claim via backward induction.
* In the final round, for all \(\pi\in\Delta\) and \(x\in\mathsf{X}\), \[V_{n}(\pi,x;\psi^{n}) =\tilde{\ell}(\pi,x,\psi_{n}(\pi,x))\] (104) \[\geq V_{n}^{*}(\pi,x),\] (105) where (104) is due to (102) with \(i=j=n\); and (105) is due to the definition of \(V_{n}^{*}\) in (33), while the equality holds if \(\psi_{n}(\pi,x)=\psi_{n}^{*}(\pi,x)\).
* For \(i=n-1,\ldots,1\), suppose (37) holds in the \((i+1)\)th round. We first show a self-recursive expression of \(V_{i}(\pi,x;\psi^{n})\): \[V_{i}(\pi,x;\psi^{n})\] \[=\mathbb{E}\Big{[}\sum_{j=i}^{n}\tilde{\ell}(\pi_{j},X_{j}, \widehat{Y}_{j})\Big{|}\pi_{i}=\pi,X_{i}=x\Big{]}\] (106) \[=\mathbb{E}[\tilde{\ell}(\pi_{i},X_{i},\widehat{Y}_{i})|\pi_{i}= \pi,X_{i}=x]+\mathbb{E}\Big{[}\sum_{j=i+1}^{n}\tilde{\ell}(\pi_{j},X_{j}, \widehat{Y}_{j})\Big{|}\pi_{i}=\pi,X_{i}=x\Big{]}\] (107) \[=\tilde{\ell}(\pi,x,\psi_{i}(\pi,x))+\mathbb{E}\Bigg{[}\mathbb{E }\Big{[}\sum_{j=i+1}^{n}\tilde{\ell}(\pi_{j},X_{j},\widehat{Y}_{j})\Big{|}\pi _{i+1},X_{i+1},\pi_{i}=\pi,X_{i}=x\Big{]}\Bigg{|}\pi_{i}=\pi,X_{i}=x\Bigg{]}\] (108) \[=\tilde{\ell}(\pi,x,\psi_{i}(\pi,x))+\mathbb{E}\Bigg{[}\mathbb{E }\Big{[}\sum_{j=i+1}^{n}\tilde{\ell}(\pi_{j},X_{j},\widehat{Y}_{j})\Big{|}\pi _{i+1},X_{i+1}\Big{]}\Bigg{|}\pi_{i}=\pi,X_{i}=x\Bigg{]}\] (109) \[=\tilde{\ell}(\pi,x,\psi_{i}(\pi,x))+\mathbb{E}[V_{i+1}(\pi_{i+1},X_{i+1};\psi^{n})|\pi_{i}=\pi,X_{i}=x]\] (110) where the second term of (109) follows from the fact that \(\widehat{Y}_{i+1}\) is determined by \((\pi_{i+1},X_{i+1})\), and the fact that \((\pi_{j},X_{j})_{j=i+1}^{n}\) is conditionally independent of \((\pi_{i},X_{i})\) given \((\pi_{i+1},X_{i+1},\widehat{Y}_{i+1})\) as guaranteed by Lemma 7. Then, \[V_{i}(\pi,x;\psi^{n}) \geq\tilde{\ell}(\pi,x,\psi_{i}(\pi,x))+\mathbb{E}\big{[}V_{i+1}^{ *}(\pi_{i+1},X_{i+1})|\pi_{i}=\pi,X_{i}=x\big{]}\] (111) \[=\tilde{\ell}(\pi,x,\psi_{i}(\pi,x))+\mathbb{E}\big{[}V_{i+1}^{*}( \pi_{i+1},X_{i+1})|\pi_{i}=\pi,X_{i}=x,\widehat{Y}_{i}=\psi_{i}(\pi,x)\big{]}\] (112) \[=Q_{i}^{*}(\pi,x,\psi_{i}(\pi,x))\] (113) \[\geq V_{i}^{*}(\pi,x)\] (114) where (111) follows from the inductive assumption; (112) follows from the fact that \(\widehat{Y}_{i}\) is determined given \((\pi_{i},X_{i})\); (113) follows from the definition of \(Q_{i}^{*}\) in (34); and the final inequality with the equality condition follow from the definitions of \(V_{i}^{*}\) and \(\psi_{i}^{*}\) in (33) and (35). This proves the second claim.
## Acknowledgement
The authors would like to thank Prof. Maxim Raginsky for the encouragement of looking into dynamic aspects of statistical problems, and Prof. Lav Varshney for helpful discussions on this work.
|
2309.12214 | Can We Reliably Improve the Robustness to Image Acquisition of Remote
Sensing of PV Systems? | Photovoltaic (PV) energy is crucial for the decarbonization of energy
systems. Due to the lack of centralized data, remote sensing of rooftop PV
installations is the best option to monitor the evolution of the rooftop PV
installed fleet at a regional scale. However, current techniques lack
reliability and are notably sensitive to shifts in the acquisition conditions.
To overcome this, we leverage the wavelet scale attribution method (WCAM),
which decomposes a model's prediction in the space-scale domain. The WCAM
enables us to assess on which scales the representation of a PV model rests and
provides insights to derive methods that improve the robustness to acquisition
conditions, thus increasing trust in deep learning systems to encourage their
use for the safe integration of clean energy in electric systems. | Gabriel Kasmi, Laurent Dubus, Yves-Marie Saint-Drenan, Philippe Blanc | 2023-09-21T16:15:56Z | http://arxiv.org/abs/2309.12214v3 | # Can We Reliably Improve the Robustness to Image Acquisition of Remote Sensing of PV Systems?
###### Abstract
Photovoltaic (PV) energy is crucial for the decarbonization of energy systems. Due to the lack of centralized data, remote sensing of rooftop PV installations is the best option to monitor the evolution of the rooftop PV installed fleet at a regional scale. However, current techniques lack reliability and are notably sensitive to shifts in the acquisition conditions. To overcome this, we leverage the wavelet scale attribution method (WCAM) [21], which decomposes a model's prediction in the space-scale domain. The WCAM enables us to assess on which scales the representation of a PV model rests and provides insights to derive methods that improve the robustness to acquisition conditions, thus increasing trust in deep learning systems to encourage their use for the safe integration of clean energy in electric systems.
## 1 Introduction
Photovoltaic (PV) energy is growing rapidly and is crucial for the decarbonization of electric systems [11]. The rapid growth of rooftop PV makes the estimation of the global PV installed capacity challenging, as centralized data is often lacking [22]. Remote sensing of rooftop PV on orthoimagery with computer vision models is a promising solution for mapping rooftop PV installations. However, current approaches lack reliability and generalize poorly from one image provider to the other [45; 18]. Improving generalizability and reliability requires improving the robustness to acquisition conditions and proposing methods to assess the reliability of the decision process of the PV classifiers [5].
Deep learning-based pipelines have become the standard method for remote sensing of PV systems. DeepSolar [48] paved the way for country-wide mapping of PV systems using deep learning and overhead imagery. While several works discuss the poor generalizability of current methods [45; 7; 18], these works do not tackle the robustness to varying acquisition conditions, which can be viewed as image corruptions [13].
In this work, we analyze the robustness to heterogeneous acquisition conditions of models for remote sensing of PV installations using the wavelet scale attribution method (WCAM, [21]). The WCAM assesses the reliability of a model's decision by decomposing it into the scale-space domain. We analyze the model's sensitivity to acquisition conditions using the WCAM and derive a principled method for improving the robustness to these acquisition conditions. Our work shows that the WCAM provides a finer understanding of what the model sees as a PV panel and guides us to improve the robustness to acquisition conditions. By improving the reliability and robustness of deep learning models for rooftop PV mapping, we aim to facilitate the mapping and, thus, the integration into the electric grid of rooftop PV.
## 2 Related works
**Remote sensing of PV installations.** Many works leveraged overhead imagery and deep learning methods to map PV installations [31; 29; 9; 49]. The DeepSolar [48] method marked a significant milestone by mapping distributed and utility-scale installations over the continental United States using state-of-the-art deep learning models. Many works built on DeepSolar to map regions or countries, especially in Europe [24; 1; 22; 7; 32; 33]. However, current methods cannot be transposed from one region to another without incurring accuracy drops, thus limiting their practical usability [18] due to a lack of reliability of the generated data [5]. To address this gap, we propose to study and mitigate the impact of acquisition conditions, ubiquitous with overhead imagery, which prevents reusing trained models for registry updates.
**Sensitivity to distribution shifts.** The sensitivity to distribution shifts [25] prevents using pre-trained models without further training, whether temporally or spatially, i.e., it limits their ability to generalize [5; 30]. Some works empirically discussed this issue [45] and argued that the generalization ability depends on how hard the PV panels are to recognize. However, no work properly disentangled the effect of each source of variability identified by [42]: geographical conditions, varying acquisition conditions, and the ground sampling distance (GSD).
**Frequency-centric explanations.** A line of work aims at explaining the behavior of neural networks through the lens of frequency analysis. Several works showed that convolutional neural networks (CNNs) are biased towards high frequencies [44; 47] and that robust methods tend to limit this bias [51; 2]. Other works highlighted a so-called spectral bias [37; 46; 20], showing that CNNs learn the input frequencies from the lowest to the highest. More recently, using wavelet transforms, [21] expanded attribution methods from the pixel to the space-scale (wavelet) domain. This work connects the fields of interpretability and robustness and enables understanding _what_ models see on images. It has not yet been applied to orthoimagery, where scales are explicitly indexed.
## 3 Data and methods
### 3.1 Data
We consider the crowdsourced training dataset _Base de données d'apprentissage profond photovoltaïque_ (BDAPPV, [23]). This dataset contains annotated images of 28,000 PV panels in France and neighboring countries. It also provides annotations of images that depict the same PV panel but come from two different image providers: images from the Google Earth Engine (hereafter referred to as "Google") [10] and from the IGN, the French public operator for geographic information. We have double annotations for more than 8,000 PV systems. This allows us to isolate the impact of the acquisition conditions, as they are the only factor that changes between the two images: the semantic content (the PV panel and its surroundings) remains unchanged. The native ground sampling distance (GSD) of Google images is 10 cm/pixel, and that of IGN images is 20 cm/pixel. We define the acquisition conditions as the properties of the technical infrastructure (airborne or spaceborne, camera type, image quantization, and postprocessing) and the atmospheric and meteorological conditions on the day the image was taken. Figure 3 in appendix A presents image samples of the BDAPPV dataset.
### 3.2 Methods
#### 3.2.1 Identifying where the sensitivity to distribution shifts comes from
**Empirical framework.** BDAPPV features images of the same installations from two providers and records the approximate location of the PV installations. Using this information, we can define three test cases to disentangle the distribution shifts that occur with remote sensing data: the resolution, the acquisition conditions, and the geographical variability. We train a ResNet-50 model [12] on Google images downsampled to a 20 cm/pixel resolution and evaluate it on three datasets: a dataset with Google images at their native 10 cm/pixel resolution ("Google 10 cm/pixel"), the IGN images with a native 20 cm/pixel resolution ("IGN"), and Google images downsampled at 20 cm/pixel located outside
of France ("Google OOD"). We add the test set to record the test accuracy without distribution shift ("Google baseline"). We only do random crops, rotations, and ImageNet normalizations during training. Figure 4 in appendix A presents examples of images seen during training and test.
#### 3.2.2 Data augmentations for improving the robustness to acquisition conditions
**Benchmarking current approaches.** The literature on robustness to image corruptions [13] has proposed numerous data augmentation methods to improve the robustness of classification models to image corruptions [14; 15; 4; 3; 8]. We consider the well-established AugMix method [14] and the recently-proposed RandAugment [4] and AutoAugment [3] methods. These methods apply a random composition of perturbations to images during training to learn an invariance against these perturbations. We do not consider the case of training from multiple sources, as our setting is that we wish to generalize to unseen images (either temporally or spatially), so we cannot incorporate knowledge about these images.
**Lowering the reliance on high-frequency components.** Since we know that varying acquisition conditions mainly alter high-frequency components, we introduce two data augmentation techniques that aim at reducing the reliance on high-frequency components: Gaussian blurring ("Blurring") and Blurring + wavelet perturbation (WP). Blurring consists of a fixed image blur, while Blurring + WP also perturbs the wavelet coefficients of the image to force the model to rely on several scales rather than one for prediction. We refer the reader to appendix C.1 for more details on the data augmentation strategies and a review of the hyperparameters. Figure 9 in appendix C.2 illustrates the effect of the different data augmentation techniques. We compare this approach with a baseline without augmentations ("ERM") and existing data augmentation techniques.
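To illustrate the idea behind Blurring + WP, here is a minimal sketch for a single-channel image; the hyperparameters (blur width, coefficient drop probability, wavelet family, decomposition level) are illustrative assumptions, not the values we use (see appendix C.1):

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def blur_wp(image, sigma=2.0, drop_p=0.3, wavelet="haar", level=3, rng=None):
    """Gaussian blur followed by a random wavelet perturbation: detail
    coefficients are randomly zeroed so that the model cannot rely on a
    single scale. `image` is a single-channel 2D float array; all
    hyperparameter values here are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    out = gaussian_filter(image, sigma=sigma)            # fixed image blur
    coeffs = pywt.wavedec2(out, wavelet, level=level)    # [cA, (cH, cV, cD), ...]

    def keep(c):                                         # randomly drop coefficients
        return c * (rng.random(c.shape) > drop_p)

    perturbed = [coeffs[0]] + [tuple(keep(c) for c in band) for band in coeffs[1:]]
    return pywt.waverec2(perturbed, wavelet)
```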
#### 3.2.3 Understanding the sensitivity to acquisition conditions with the Wavelet sCale Attribution Method (WCAM)
Attribution methods [40; 39; 35] indicate the important regions for prediction, i.e., they decompose the prediction in the pixel (spatial) domain. The WCAM [21] generalizes attribution to the wavelet (space-scale) domain. The WCAM provides us with two pieces of information: where the model sees, and at what scale (i.e., frequency) it sees at this location. Therefore, we can see whether a prediction relies on robust or fragile frequencies. Additionally, the decomposition of the prediction in terms of scales is interpretable, particularly in the case of orthoimagery. For example, on Google images, details at the 1-2 pixel scale correspond to physical objects with a size between 0.1 and 0.2 m on the ground. Thus, we know what the model sees as a panel; we can interpret it and assess whether it is sensitive to varying acquisition conditions. The decomposition brought by the WCAM enables the interpretation of the model's decision process. Appendix B provides additional background for reading WCAMs.
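The WCAM itself relies on a sparse-mask optimization over wavelet coefficients [21]; as a rough intuition for how a prediction can be decomposed across scales, the following sketch (ours) ablates one detail level at a time and records the resulting drop in the classifier's score. The `model_score` interface, mapping a 2D image to a scalar PV probability, is an assumption for illustration.

```python
import numpy as np
import pywt

def scale_sensitivity(model_score, image, wavelet="haar", level=3):
    """Zero out the detail coefficients of one wavelet level at a time and
    record how much the classifier's score drops; levels with large drops
    are the scales the prediction rests on. (The WCAM instead optimizes
    sparse masks over the coefficients.)"""
    base = model_score(image)
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    drops = {}
    for j in range(1, len(coeffs)):          # j = 1 is the coarsest detail level
        ablated = [coeffs[0]] + [
            tuple(np.zeros_like(c) for c in band) if i == j else band
            for i, band in enumerate(coeffs[1:], start=1)
        ]
        drops[f"level_{j}"] = base - model_score(pywt.waverec2(ablated, wavelet))
    return drops
```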
## 4 Results
### 4.1 Acquisition conditions mainly explain the poor generalization of PV mapping algorithms
**Results.** Table 1 shows the results of the decomposition of the effect of distribution shifts into three components: resolution, acquisition conditions, and geographical shift. We can see that the F1 score drops the most when the model faces new acquisition conditions.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & F1 Score (\(\uparrow\)) & True positives rate & True negatives rate & False positives rate & False negatives rate \\ \hline Google baseline & 0.98 & 0.99 & 0.98 & 0.02 & 0.01 \\ Google 10cm/px & 0.89 & 0.81 & 1.00 & 0.00 & 0.19 \\ Google OOD & 0.98 & 0.99 & 0.98 & 0.02 & 0.01 \\ IGN & 0.46 & 0.32 & 0.95 & 0.03 & 0.68 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **F1 Score** and decomposition in true positives, true negatives, false positives, and false negatives rates of the disentanglement of the distribution shift between the GSD (Google 10 cm/px), the geographical variability (Google OOD) and the acquisition conditions (IGN).
The second most significant impact comes from the change in the ground sampling distance, but the performance drop remains relatively small compared to the effect of the acquisition conditions. In our framework, there is no evidence of an effect of the geographical variability once we isolate the effects of the acquisition conditions and ground sampling distance. This effect is probably underestimated, as the images in our dataset that are not in France are located near France. However, the effect of the acquisition conditions is sizeable enough to seek methods for addressing it.
**Mechanisms: when important factors disappear.**
Changing the provider (i.e., altering the acquisition conditions) alters the scales describing the image of PV panels. If the model relied on a scale that is no longer present in the image, it can no longer recognize the PV panel. In Figure 1, we can see that on Google, the important factor was factor **(b)** (on the leftmost image), which is no longer important on the IGN image (on the right). On the IGN image, the model instead relied on factor **(a)**, as factor **(b)** is no longer visible. This change in the important factor (at the same scale in this example) seems to have driven the shift from predicting to not predicting the PV panel. Interestingly, we can see that the scales highlighted in **(c)** are visible on both images but not important for the prediction in the IGN image: the model no longer "sees" these details. We refer the reader to appendix B.2 for further guidance on interpreting a WCAM.
### 4.2 Lowering the reliance on high frequencies improves generalization
**Blurring and wavelet perturbation improve accuracy.** Table 2 reports the results of the evaluated data augmentation techniques to mitigate the effect of acquisition conditions. Augmentations that explicitly discard small-scale (high-frequency) information perform the best. However, the blurring method sacrifices the precision (which drops to 0.6) to improve the F1 score. In Table 2, this can be seen by the increase in false positives. Therefore, this method is unreliable for improving the robustness to acquisition conditions. On the other hand, adding wavelet perturbation yields improvements and outperforms existing approaches without sacrificing precision or recall.
**Relying on consistent scales.** Figure 2 compares the scales on which the best-performing methods rely. In our case, we want our models to rely on the largest scales (i.e., the lowest frequencies) to entail robustness [50] against varying acquisition conditions. We can see that the blurring and wavelet perturbation enforce this property better than other data augmentation techniques.
\begin{table}
\begin{tabular}{r c c c c c} \hline \hline & F1 Score (\(\uparrow\)) & True positives & True negatives & False positives & False negatives \\ \hline Oracle & 0.88 & 1818 & 1992 & 428 & 83 \\ \hline ERM [43] & 0.44 & 566 & 2321 & 99 & 1335 \\ AutoAugment [3] & 0.46 & 598 & 2318 & 102 & 1303 \\ AugMix [14] & 0.48 & 624 & 2318 & 102 & 1277 \\ RandAugment [4] & 0.51 & 707 & 2280 & 140 & 1194 \\ Blurring & **0.74** & 1855 & 1196 & 1224 & 46 \\ Blurring + WP & 0.58 & 896 & 2114 & 306 & 1005 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **F1 Score** and decomposition in true positives, true negatives, false positives, and false negatives for models trained on Google with different mitigation strategies. All models are evaluated on the same test set, so we report the raw values rather than the rates. Evaluation on IGN images. The oracle corresponds to a model trained on IGN images with standard augmentations. Best results are **bolded**.
Figure 1: Predictions on Google image (left, upper row) and IGN image (right, upper row) and associated WCAMs (bottom row, displayed with the same color scale). The brighter, the more important the highlighted region is for the prediction.
Indeed, the model relies on coarser scales (which are more robust) and on scales on which the ERM also relies. More generally, the WCAM lets us compare methods that perform quantitatively similarly.
**On the choice of the training images.** Our results show that lowering the reliance on high-frequency content in the image improves generalization. This content is located at the 10-20 cm scale and only appears on Google images. In Table 3, we flip our experiment to study how a model trained on IGN images generalizes to Google images. Results show that the model trained on IGN generalizes better to the downscaled Google images than the opposite. This result further supports the idea that a higher GSD is not necessarily better for good robustness to acquisition conditions.
## 5 Conclusions and future work
We set up an experiment to disentangle the effects of heterogeneous acquisition conditions, geographical variability, and ground sampling distance on the generalization of deep neural networks to unseen data. Our results show that the sensitivity to acquisition conditions is the leading cause of poor generalization. To explain why models are sensitive to acquisition conditions, we leverage the wavelet scale attribution method (WCAM, [21]). Acquisition conditions perturb the scales the model relied on to make a prediction. If these scales correspond to high frequencies, they are likely to be disrupted by the acquisition conditions. We show that models biased towards low frequencies are more robust to acquisition conditions. We design a data augmentation method that outperforms other methods to improve the robustness to varying acquisition conditions. More generally, models trained on images with a lower GSD generalize better.
**Broader impact.** Currently, transmission system operators (TSOs) lack quality data regarding rooftop PV installations [22]. The lack of information leads to imprecise estimations and forecasts of the overall PV power generation, which, in a context of sustained growth of the PV installed capacity, could increase the uncertainty and threaten the grid's stability [36]. On the other hand, current methods for mapping rooftop PV installations lack reliability, owing to their poor generalization abilities beyond their training dataset [5]. This work addresses this gap and thus demonstrates that remote sensing of PV installations is a reliable way for TSOs to improve their knowledge regarding small-scale PV installations.
**Future works.** We wish to discuss further the conditions on the training images for good robustness to acquisition conditions. In particular, we plan to discuss the trade-off between the minimal GSD needed to _reliably_ see PV panels [26] and a notion of image quality for the training data.
Figure 2: WCAMs on IGN of models trained on Google with different augmentation techniques.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & F1 Score (\(\uparrow\)) & True positives & True negatives & False positives & False negatives \\ \hline ERM [43] & 0.98 & 1891 & 2355 & 36 & 39 \\ Oracle (ERM trained on IGN) & 0.91 & 1815 & 2127 & 264 & 115 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **F1 Score** and true positives, true negatives, false positives, and false negatives. Evaluation computed on the Google dataset. ERM was trained on Google and Oracle on IGN images.
## Acknowledgements
This work is funded by RTE France, the French transmission system operator, and benefited from CIFRE funding from the ANRT. The authors gratefully acknowledge the support of this project.
|
2309.09152 | Quantifying quantum coherence via nonreal Kirkwood-Dirac
quasiprobability | Kirkwood-Dirac (KD) quasiprobability is a quantum analog of phase space
probability of classical statistical mechanics, allowing negative or/and
nonreal values. It gives an informationally complete representation of a
quantum state. Recent works have revealed the important roles played by the KD
quasiprobability in the broad fields of quantum science and quantum technology.
In the present work, we use the KD quasiprobability to access the quantum
coherence in a quantum state. We show that the $l_1$-norm of the imaginary part
of the KD quasiprobability over an incoherent reference basis and a second
basis, maximized over all possible choices of the latter, can be used to
quantify quantum coherence, satisfying certain desirable properties. It is
upper bounded by the quantum uncertainty, i.e., the quantum standard deviation,
of the incoherent basis in the state. It gives a lower bound to the $l_1$-norm
quantum coherence, and for a single qubit, they are identical. We discuss the
measurement of the KD coherence based on the measurement of the KD
quasiprobability and an optimization procedure in hybrid quantum-classical
schemes, and suggest statistical interpretations. We also discuss its relevance
in the physics of linear response regime. | Agung Budiyono, Hermawan K. Dipojono | 2023-09-17T04:34:57Z | http://arxiv.org/abs/2309.09152v1 | # Quantifying quantum coherence via Kirkwood-Dirac quasiprobability
###### Abstract
Kirkwood-Dirac (KD) quasiprobability is a quantum analog of phase space probability of classical statistical mechanics, allowing negative or/and nonreal values. It gives an informationally complete representation of a quantum state. Recent works have revealed the important roles played by the KD quasiprobability in the broad fields of quantum science and quantum technology. In the present work, we use the KD quasiprobability to access the quantum coherence in a quantum state. We show that the \(l_{1}\)-norm of the imaginary part of the KD quasiprobability over an incoherent reference basis and a second basis, maximized over all possible choices of the latter, can be used to quantify quantum coherence, satisfying certain desirable properties. It is upper bounded by the quantum uncertainty, i.e., the quantum standard deviation, of the incoherent basis in the state. It gives a lower bound to the \(l_{1}\)-norm quantum coherence, and for a single qubit, they are identical. We discuss the measurement of the KD coherence based on the measurement of the KD quasiprobability and an optimization procedure in hybrid quantum-classical schemes, and suggest statistical interpretations. We also discuss its relevance in the physics of linear response regime.
Keywords: quantum coherence, Kirkwood-Dirac quasiprobability, nonclassicality

PACS: 03.65.Ta, 03.65.Ca
## I Introduction
Quantum coherence is one of the defining features of quantum mechanics, manifesting the superposition principle. It underlies the nonclassical features of quantum phenomena. Recently, quantum coherence has also been recognized as one of the key ingredients for various schemes of quantum technologies [1; 2]. In the last decade, the success of the resource-theoretical framework [3] in studying diverse nonclassical features of quantum systems, by regarding them as resources for some operational tasks, has led many researchers to apply the framework to rigorously characterize quantum coherence [1; 2; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. In this approach, one defines coherence as an aspect which cannot be created by different classes of incoherence-preserving quantum operations. However, while mathematically well-defined, the physical interpretation of these formal operations is not entirely clear [1; 11; 12]. Moreover, the resulting coherence quantifiers do not have a transparent interpretation in terms of direct laboratory operations.
On the other hand, recently there has been a revival of interest in the Kirkwood-Dirac (KD) quasiprobability, an informationally complete representation of a quantum state [15; 16; 17; 18]. KD quasiprobability returns correct marginal probabilities, but it may take negative or/and nonreal values. Such negativity or nonreality, a.k.a. KD nonclassicality, indicates nonclassicality stronger than noncommutativity [19; 20], and is suggested as the origin of quantum advantage in certain quantum metrology [21] and quantum heat engine [22] schemes. KD quasiprobability appears naturally in different forms of quantum fluctuations, and KD nonclassicality has been argued to signify genuine quantum behaviour of the underlying physical processes [23]. It has been used to characterize work distributions to extend the thermodynamic fluctuation theorem to the quantum regime [24; 25], as a witness of information scrambling in many-body systems [26; 27], and in proofs of contextuality [28; 29]. It is therefore instructive to ask: how is the coherence in a quantum state encoded in the associated KD quasiprobability representation? The answer to this question might also offer useful insight into the roles of quantum coherence in the physical situations listed above where KD nonclassicality is crucial.
In the present work, we propose a characterization and quantification of quantum coherence based on KD quasiprobability. First, given a quantum state and an incoherent reference basis, we identify a quantity, referred to as KD coherence, that is given by the \(l_{1}\)-norm of
the imaginary part of the KD quasiprobability defined over a reference basis and a second basis, maximized over all possible choices of the latter. It formalizes the intuition that coherence should reflect the noncommutativity between the state and the incoherent basis, and we show that it satisfies certain desirable properties for a quantifier of quantum coherence. It is upper bounded by the total sum of the quantum standard deviation, thus the quantum uncertainty, of the incoherent basis in the state. KD coherence gives a lower bound to the \(l_{1}\)-norm coherence, and for an arbitrary state of a single qubit, they give the same value. We discuss the observation of the KD coherence via a couple of methods for the reconstruction of the KD quasiprobability, combined with an optimization procedure in hybrid quantum-classical schemes. These suggest a statistical interpretation of the KD coherence as the maximal disturbance induced by the measurement of, or the maximal mean absolute error in the estimation of, the incoherent basis. We also give a short discussion on the relevance of the KD coherence to characterizing the linear response function.
## II Quantum coherence and Kirkwood-Dirac quasiprobability
### Quantum coherence
Consider a quantum system with a Hilbert space of finite dimension \(d\), and choose an orthonormal basis \(\{\left|a\right\rangle\}\), \(\sum_{a}\Pi_{a}=\mathbb{I}\), where \(\Pi_{a}:=\left|a\right\rangle\left\langle a\right|\) is a projector, assumed, for simplicity, to be one-dimensional (a rank-one projector). Such a basis decomposes the \(d\)-dimensional Hilbert space into a direct sum of \(d\) one-dimensional subspaces, each spanned by \(\left|a\right\rangle\). A quantum state represented by the density operator \(\varrho\) on the Hilbert space is said to be incoherent with respect to the reference basis \(\{\left|a\right\rangle\}\) (or, relative to the Hilbert space decomposition into the associated subspaces) if it can be expressed as
\[\varrho=\sum_{a}p_{a}\left|a\right\rangle\left\langle a\right|, \tag{1}\]
\(p_{a}=\left\langle a|\varrho|a\right\rangle\), \(\sum_{a}p_{a}=1\). Namely, it is a classical statistical mixture of the elements of the reference basis. Hence, the density operator is diagonal with respect to the reference basis, so that it commutes with each projector, i.e., \(\left[\Pi_{a},\varrho\right]=0\) for all \(a\). Any state that cannot be so expressed is coherent with respect to the basis \(\{\left|a\right\rangle\}\). In this sense, \(\{\left|a\right\rangle\}\) is referred to as
the incoherent reference basis. The choice of the incoherent reference basis depends on the physical problem and/or the physical system under investigation.
A mathematically rigorous information-theoretical framework to characterize coherence by regarding it as a resource has been attracting a lot of attention recently [1; 2; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. In this resource-theoretical framework [3], quantum states and operations are divided into those that are free and those whose preparation and implementation bear some cost. For example, in the resource theory of entanglement, the free operations are identified with local operations and classical communication (LOCC), so that the free states are given by unentangled (separable) states [30; 31]. Such a division intuitively reflects the operational restriction in experimental scenarios involving distant parties. In this framework, entangled states are thus seen as states with a resource whose provision may be used to overcome the restriction. Analogously, in the resource theory of coherence, the incoherent quantum states of Eq. (1) are assumed to be free, and the free operations are given by several different classes of incoherence-preserving quantum operations [1; 2]. Quantum coherence is therefore naturally defined as the resource that cannot be created by these operations. This approach has led to the construction of various important coherence quantifiers. However, unlike for LOCC, it is difficult to give a clear interpretation to the incoherence-preserving operations alluded to above in terms of operational restrictions in the laboratory [1; 11; 12]. Moreover, most of the resulting coherence quantifiers cannot be interpreted in terms of direct laboratory operations [13].
For later reference, let us summarize the \(l_{1}\)-norm coherence arising in the above resource-theoretic approach [7]. Consider an arbitrary quantum state \(\varrho=\sum_{a,a^{\prime}}\varrho_{aa^{\prime}}\left|a\right\rangle\left\langle a ^{\prime}\right|\), \(\varrho_{aa^{\prime}}=\left\langle a|\varrho|a^{\prime}\right\rangle\), where \(\{\left|a\right\rangle\}\) is the incoherent basis. The \(l_{1}\)-norm quantum coherence in \(\varrho\) relative to the incoherent basis \(\{\left|a\right\rangle\}\) is then defined as: \(C_{l_{1}}[\varrho;\{\Pi_{a}\}]:=\min_{\tau\in\mathcal{I}\{\left|a\right\rangle \}}\|\varrho-\tau\|_{l_{1}}=\sum_{a\neq a^{\prime}}\left|\varrho_{aa^{\prime}}\right|\), where \(\mathcal{I}\{\left|a\right\rangle\}\) is the set of all incoherent states relative to the reference basis \(\{\left|a\right\rangle\}\), and \(\|\cdot\|_{l_{1}}\) is the \(l_{1}\) matrix norm. Hence, it is given by the sum of the absolute value of the off-diagonal terms of the density matrix, directly capturing the intuition that coherence must quantify the interference between the elements of the reference basis. Remarkably, for a single qubit, various different coherence quantifiers are equal to, or can be written as a simple function of, the \(l_{1}\)-norm coherence [2]. The \(l_{1}\)-norm coherence can be used to quantify the wave aspect in the wave-particle complementarity relations [32; 33; 34; 35; 36; 37]. It also has proven to be useful in studying speedup in quantum computation [38; 39; 40; 41; 42; 43].
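As a quick numerical illustration of the \(l_{1}\)-norm coherence (a sketch of ours, writing \(\varrho\) as a matrix in the incoherent basis):

```python
import numpy as np

def l1_coherence(rho):
    """C_l1: sum of the absolute values of the off-diagonal entries of rho,
    written as a matrix in the incoherent reference basis {|a>}."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

plus = np.full((2, 2), 0.5)       # |+><+| in the computational basis
print(l1_coherence(plus))         # 1.0, the maximal value for a single qubit
```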
### Kirkwood-Dirac quasiprobability
There is an informationally equivalent representation of the quantum state based on quasiprobability. Quasiprobability is the quantum analog of the phase space probability distribution of classical statistical mechanics [44]. Due to quantum noncommutativity (incompatibility), a quasiprobability necessarily does not satisfy all of Kolmogorov's axioms for conventional probability [23]. For example, the Wigner function, the most well-known quasiprobability, may take negative values. There are infinitely many quasiprobability representations, arising from the ambiguity in the ordering of operators. Here, for a system with a finite-dimensional Hilbert space, and for reasons that will be clarified later, we shall use the representation of the quantum state in terms of a specific quasiprobability called the Kirkwood-Dirac (KD) quasiprobability [15; 16; 17; 18] to access the coherence in the quantum state.
Given a quantum state \(\varrho\) acting on a Hilbert space with dimension \(d\), and two bases \(\{|a\rangle\}\) and \(\{|b\rangle\}\) of the Hilbert space, the KD quasiprobability is defined as
\[\Pr_{\rm KD}(a,b|\varrho):=\Tr\{\Pi_{b}\Pi_{a}\varrho\}=\langle b|\Pi_{a} \varrho|b\rangle\,. \tag{2}\]
The KD quasiprobability gives correct marginal probabilities, i.e., \(\sum_{a}\Pr_{\rm KD}(a,b|\varrho)=\Tr\{\Pi_{b}\varrho\}\), \(\sum_{b}\Pr_{\rm KD}(a,b|\varrho)=\Tr\{\Pi_{a}\varrho\}\), and is thus normalized, \(\sum_{a,b}\Pr_{\rm KD}(a,b|\varrho)=1\), but it may assume negative or/and non-real values, capturing nonclassicality tighter than noncommutativity [19; 20]. The real part is known as the Terletsky-Margenau-Hill quasiprobability [45; 46]. Given the KD quasiprobability \(\Pr_{\rm KD}(a,b|\varrho)\), the density matrix \(\varrho\) can be recovered as, assuming \(\langle a|b\rangle\neq 0\) for all \((a,b)\), \(\sum_{a,b}\Pr_{\rm KD}(a,b|\varrho)\frac{|a\rangle\langle b|}{\langle b|a\rangle}=\sum_{a,b}\left\langle a|\varrho|b\right\rangle|a\rangle\left\langle b\right|=\varrho\); hence, they are informationally equivalent. Choosing a pair of bases so that \(\langle a|b\rangle=\frac{1}{\sqrt{d}}e^{i2\pi ab/d}\), the density matrix in the basis \(\{|a\rangle\}\) is thus obtained by Fourier transforming the KD quasiprobability as \(\langle a|\varrho|a^{\prime}\rangle=\sum_{b=0}^{d-1}\Pr_{\rm KD}(a,b|\varrho)e^{i\frac{2\pi}{d}(a-a^{\prime})b}\)[18]. One of the advantages of using the KD quasiprobability representation is that one may use the negativity or/and the nonreality of the KD quasiprobability, i.e., the KD nonclassicality, to access genuine nonclassical behaviour of a quantum system, by showing that it violates some classical bound derived based on conventional real and non-negative probability. Indeed, as listed in the Introduction, the KD nonclassicality plays significant roles in the study of quantum information [21; 22], quantum fluctuations [23; 26; 27], quantum thermodynamics [24; 25], and quantum foundations [28; 29].
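The definition (2) and its marginal properties are straightforward to check numerically. The following sketch (our own illustration) computes \(\Pr_{\rm KD}(a,b|\varrho)\) for a qubit, taking the computational basis for \(\{|a\rangle\}\) and the Hadamard basis for \(\{|b\rangle\}\):

```python
import numpy as np

def kd_quasiprob(rho, A, B):
    """Pr_KD(a, b | rho) = <b| Pi_a rho |b> of Eq. (2), where the columns of
    the unitary matrices A and B are the bases {|a>} and {|b>}."""
    d = rho.shape[0]
    Q = np.empty((d, d), dtype=complex)
    for a in range(d):
        Pi_a = np.outer(A[:, a], A[:, a].conj())          # |a><a|
        for b in range(d):
            Q[a, b] = B[:, b].conj() @ Pi_a @ rho @ B[:, b]
    return Q

rho = np.array([[0.6, 0.3 - 0.1j], [0.3 + 0.1j, 0.4]])   # a valid qubit state
A = np.eye(2)                                 # incoherent (computational) basis
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard basis
Q = kd_quasiprob(rho, A, B)
print(np.isclose(Q.sum(), 1.0))                   # normalized, ...
print(np.allclose(Q.sum(axis=1), np.diag(rho)))   # correct marginals Tr{Pi_a rho}
print(Q)                                          # ... yet the entries are nonreal
```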
## III Quantum coherence from the imaginary part of the KD quasiprobability
Since the KD quasiprobability is an informationally complete representation of the quantum state, it is natural to ask how the KD quasiprobability representation encodes the quantum coherence in the quantum state relative to a given incoherent basis. Note that the KD quasiprobability is defined in terms of two bases, while quantum coherence is defined relative to a single incoherent basis. To pursue this question, we first observe the simple fact that, for an arbitrary quantum state \(\varrho\) and a basis \(\{|a\rangle\}\), the imaginary part of the corresponding KD quasiprobability captures the commutation relation between the state and the basis, i.e.,
\[\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\} = \mathrm{Im}\{\langle b|\Pi_{a}\varrho|b\rangle\}=\frac{1}{2i} \,\langle b|[\Pi_{a},\varrho]|b\rangle \tag{3}\] \[= \sum_{a^{\prime}\neq a}\mathrm{Im}\{\varrho_{aa^{\prime}}\, \langle b|a\rangle\,\langle a^{\prime}|b\rangle\}.\]
It is also clear from the second line that, choosing a second basis \(\{|b\rangle\}\) such that \(\langle b|a\rangle\,\langle a^{\prime}|b\rangle\neq 0\) for some pairs of \((a,a^{\prime})\), \(a\neq a^{\prime}\), \(\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\}\neq 0\) implies that not all of the off-diagonal terms of the density matrix are vanishing, indicating the presence of coherence in \(\varrho\) with respect to the incoherent reference basis \(\{|a\rangle\}\).
We wish to devise a simple quantity from the imaginary part of the KD quasiprobability which can faithfully detect the quantum coherence and possesses certain properties expected of a coherence quantifier. To this end, given a general quantum state \(\varrho\) and an incoherent reference basis \(\{|a\rangle\}\), let us define the following quantity, which maps the quantum state to a nonnegative number:
\[C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}] := \max_{\{|b\rangle\}}\sum_{a}\sum_{b}\big{|}\mathrm{Im}\{\mathrm{ Pr}_{\mathrm{KD}}(a,b|\varrho)\}\big{|} \tag{4}\] \[= \max_{\{|b\rangle\}}\sum_{a}\sum_{b}\big{|}\mathrm{Im}\{\langle b |\Pi_{a}\varrho|b\rangle\}\big{|}\] \[= \max_{\{|b\rangle\}}\sum_{a}\sum_{b}\frac{1}{2}\big{|}\,\langle b |[\Pi_{a},\varrho]|b\rangle\,\big{|},\]
where \(\{|b\rangle\}\) is another basis of the Hilbert space. We thus take the \(l_{1}\)-norm of the imaginary part of \(\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\) and maximize over all possible choices of the second basis \(\{|b\rangle\}\). The maximization seeks the largest incompatibility between the quantum state \(\varrho\) and the incoherent basis \(\{|a\rangle\}\), with respect to the second basis \(\{|b\rangle\}\), under the \(l_{1}\)-norm. Next, suppose we wish to quantify the coherence of a composite of \(N\) subsystems with respect to an incoherent product basis,
i.e., \(\{|a\rangle\}=\{|a_{1}\rangle\otimes\cdots\otimes|a_{N}\rangle\}:=\{|a_{1},\ldots, a_{N}\rangle\}\), where \(|a_{i}\rangle\) is the first basis for subsystem \(i\). Then, we assume that the second basis is also a product, i.e., \(\{|b\rangle\}=\{|b_{1},\ldots,b_{N}\rangle\}\) where \(\{|b_{i}\rangle\}\) is the second basis for subsystem \(i\).
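As an illustration of the definition (ours, not part of the original text), the maximization in Eq. (4) can be approximated by sampling Haar-random second bases; every sampled basis gives a lower bound on \(C_{\mathrm{KD}}\), and the commutator identity of Eq. (3) can be checked along the way:

```python
import numpy as np

d = 3
rng = np.random.default_rng(1)

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

def haar_unitary(d):
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R))).conj()

def im_l1(rho, B):
    """sum_{a,b} |Im <b|Pi_a rho|b>| for the second basis in B's columns."""
    total = 0.0
    for a in range(d):
        Pi_a = np.zeros((d, d)); Pi_a[a, a] = 1.0
        for b in range(d):
            ket_b = B[:, b]
            val = ket_b.conj() @ Pi_a @ rho @ ket_b               # Pr_KD(a,b)
            comm = ket_b.conj() @ (Pi_a @ rho - rho @ Pi_a) @ ket_b
            assert np.isclose(val.imag, (comm / 2j).real)         # Eq. (3)
            total += abs(val.imag)
    return total

# Any fixed second basis lower-bounds the maximum defining C_KD in Eq. (4)
samples = [im_l1(rho, haar_unitary(d)) for _ in range(2000)]
print("sampled lower bound on C_KD:", max(samples))
```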
We show that \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]\), hereafter referred to as KD coherence, satisfies the following desirable properties for a quantifier of quantum coherence:
1. Faithful, i.e., \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]=0\) if and only if the quantum state \(\varrho\) is incoherent with respect to the basis \(\{|a\rangle\}\).
2. Convex, i.e., \(C_{\mathrm{KD}}[\sum_{k}p_{k}\varrho_{k};\{\Pi_{a}\}]\leq\sum_{k}p_{k}C_{ \mathrm{KD}}[\varrho_{k};\{\Pi_{a}\}]\), where \(\{p_{k}\}\) are probabilities: \(0\leq p_{k}\leq 1\), \(\sum_{k}p_{k}=1\).
3. Unitarily covariant: \(C_{\mathrm{KD}}[U\varrho U^{\dagger};\{U\Pi_{a}U^{\dagger}\}]=C_{\mathrm{KD}} [\varrho;\{\Pi_{a}\}]\).
4. Invariant under unitary transformations which commute with a Hermitian observable whose eigenvectors are given by the incoherent basis: \(C_{\mathrm{KD}}[U_{A}\varrho U_{A}^{\dagger};\{\Pi_{a}\}]=C_{\mathrm{KD}}[ \varrho;\{\Pi_{a}\}]\), where \([U_{A},A]=0\), \(A=\sum_{a}a\Pi_{a}\), \(a\in\mathbb{R}\).
5. Invariant under unitary transformations which permute the indices of the elements of the incoherent basis: \(C_{\mathrm{KD}}[U_{\mathrm{p}}\varrho U_{\mathrm{p}}^{\dagger};\{\Pi_{a}\}]=C _{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]\), where \(U_{\mathrm{p}}\left|a\right\rangle=e^{i\theta_{a}}\left|\mu(a)\right\rangle\), \(\mu(a)\) is a permutation of the indices of the basis, and \(\theta_{a}\in\mathbb{R}\).
6. Nonincreasing under partial trace: \(C_{\mathrm{KD}}[\varrho_{12};\{\Pi_{a_{1}}\otimes\mathbb{I}_{2}\}]\geq C_{ \mathrm{KD}}[\varrho_{1};\{\Pi_{a_{1}}\}]\), where \(\varrho_{12}\) is the quantum state of the composite of subsystem \(1\) and \(2\), \(\varrho_{1}=\mathrm{Tr}_{2}\{\varrho_{12}\}\) is the quantum state of subsystem \(1\), \(\{|a_{1}\rangle\}\) is the incoherent basis of subsystem \(1\), and \(\mathbb{I}_{2}\) is the identity operator of subsystem \(2\).
7. Nonincreasing under decoherence operation, i.e., \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]\geq C_{\mathrm{KD}}[\varrho^{\prime};\{ \Pi_{a}\}]\), where \(\varrho^{\prime}=p\varrho+(1-p)\mathcal{D}(\varrho;\{\Pi_{a}\})\), \(0\leq p\leq 1\), and \(\mathcal{D}(\varrho;\{\Pi_{a}\}):=\sum_{a}\Pi_{a}\varrho\Pi_{a}\) is the dephasing operation which removes the off-diagonal terms of \(\varrho\) in the basis \(\{|a\rangle\}\).
Let us sketch and discuss the proofs of the above properties.
To establish property (i) of faithfulness, first note that if \(\varrho\) is an incoherent state so that \([\Pi_{a},\varrho]=0\) for all \(a\), we have \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]=0\) by definition. Conversely, let us suppose that \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]=0\). Then, from the definition, we must have \(\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\}=\left\langle b|[\Pi_{a},\varrho]|b\right\rangle/2i=0\) for all \(a\) and \(b\). This can only be true for all possible choices of \(\{|b\rangle\}\) if
\([\Pi_{a},\varrho]=0\) for all \(a\). This means that \(\{\Pi_{a}\}\) are the eigenprojectors of \(\varrho\), so that \(\varrho\) must be expressible as in Eq. (1), i.e., it is incoherent relative to the reference basis \(\{\left|a\right\rangle\}\).
Next, property (ii) of convexity shows that classical mixing \(\varrho=\sum_{k}p_{k}\varrho_{k}\) does not increase KD coherence, suggesting that it quantifies genuinely quantum information. This follows directly from the triangle inequality for the \(l_{1}\)-norm and the fact that \(p_{k}\geq 0\), i.e., \(C_{\rm KD}[\sum_{k}p_{k}\varrho_{k};\{\Pi_{a}\}]=\max_{\{\left|b\right\rangle \}}\sum_{a}\sum_{b}\left|{\rm Im}\{\left\langle b|\Pi_{a}\sum_{k}p_{k}\varrho_ {k}|b\right\rangle\}\right|\leq\sum_{k}p_{k}\max_{\{\left|b\right\rangle\}} \sum_{a}\sum_{b}\left|{\rm Im}\{\left\langle b|\Pi_{a}\varrho_{k}|b\right\rangle \}\right|=\sum_{k}p_{k}C_{\rm KD}[\varrho_{k};\{\Pi_{a}\}]\).
Property (iii) of unitary covariance can be established directly from the definition, i.e.,
\[C_{\rm KD}[U\varrho U^{\dagger};\{U\Pi_{a}U^{\dagger}\}] \tag{5}\] \[= \max_{\{\left|b\right\rangle\}}\sum_{a}\sum_{b}\left|{\rm Im}\{ \left\langle b|U\Pi_{a}U^{\dagger}U\varrho U^{\dagger}|b\right\rangle\}\right|\] \[= \max_{\{\left|b^{\prime}\right\rangle\}}\sum_{a}\sum_{b^{\prime} }\left|{\rm Im}\{\left\langle b^{\prime}|\Pi_{a}\varrho|b^{\prime}\right\rangle \}\right|\] \[= C_{\rm KD}[\varrho;\{\Pi_{a}\}],\]
where we have taken into account the fact that the unitary operator \(U\) induces a transformation between bases \(\{\left|b^{\prime}\right\rangle\}=\{U^{\dagger}\left|b\right\rangle\}\) of the same Hilbert space, so that \(\max_{\{\left|b^{\prime}\right\rangle\}}(\cdot)=\max_{\{\left|b\right\rangle \}}(\cdot)\). This property captures the intuition that simultaneously unitarily rotating both the incoherent basis and the quantum state in the Hilbert space should give the same value of coherence.
To establish property (iv), we first note that for any unitary operator \(U_{A}\) which commutes with \(A=\sum_{a}a\left|a\right\rangle\left\langle a\right|\), we have \(U_{A}\left|a\right\rangle=e^{i\theta_{a}}\left|a\right\rangle\), \(\theta_{a}\in\mathbb{R}\), so that
\[C_{\rm KD}[U_{A}\varrho U_{A}^{\dagger};\{\Pi_{a}\}] \tag{6}\] \[= \max_{\{\left|b\right\rangle\}}\sum_{a}\sum_{b}\left|{\rm Im}\{ \left\langle b|U_{A}U_{A}^{\dagger}\Pi_{a}U_{A}\varrho U_{A}^{\dagger}|b \right\rangle\}\right|\] \[= \max_{\{\left|b^{\prime}\right\rangle\}}\sum_{a}\sum_{b^{\prime} }\left|{\rm Im}\{\left\langle b^{\prime}|\Pi_{a}\varrho|b^{\prime}\right\rangle \}\right|\] \[= C_{\rm KD}[\varrho;\{\Pi_{a}\}],\]
where in the second line we have inserted the identity \(U_{A}U_{A}^{\dagger}=\mathbb{I}\), and in the third line we defined \(\{\left|b^{\prime}\right\rangle\}=\{U_{A}^{\dagger}\left|b\right\rangle\}\) and used the fact that \(\max_{\{\left|b^{\prime}\right\rangle\}}(\cdot)=\max_{\{\left|b\right\rangle\}} (\cdot)\). We note that such unitaries, which commute with \(A\), are covariant under the translation \(U=e^{-iA\theta}\) generated by \(A\) (taking \(\hbar=1\)), in the sense that their implementation followed by the translation yields the
same result when the order of the operations is reversed: \(e^{-iA\theta}U_{A}\varrho U_{A}^{\dagger}e^{iA\theta}=U_{A}e^{-iA\theta}\varrho e^{ iA\theta}U_{A}^{\dagger}\)[1].
Next, consider a unitary operator which permutes the elements of the incoherent basis, i.e., \(U_{\rm p}=\sum_{a}e^{i\theta_{a}}\left|\mu(a)\right\rangle\left\langle a\right|\), where \(\mu(a)\) is an index permutation. Such a permutation of the indices of the reference basis should not change the coherence relative to that basis, as claimed by property (v). To see this, first note that \(\{U_{\rm p}\Pi_{a}U_{\rm p}^{\dagger}\}=\{\Pi_{\mu(a)}\}=\{\Pi_{a}\}\). Noting this, we may proceed as
\[C_{\rm KD}[U_{\rm p}\varrho U_{\rm p}^{\dagger};\{\Pi_{a}\}] \tag{7}\] \[= \max_{\{|b\rangle\}}\sum_{a}\sum_{b}\left|{\rm Im}\{\langle b|U_{ \rm p}U_{\rm p}^{\dagger}\Pi_{a}U_{\rm p}\varrho U_{\rm p}^{\dagger}|b\rangle \}\right|\] \[= \max_{\{|b^{\prime}\rangle\}}\sum_{a}\sum_{b^{\prime}}\left|{\rm Im }\{\langle b^{\prime}|\Pi_{\mu(a)}\varrho|b^{\prime}\rangle\}\right|\] \[= \max_{\{|b^{\prime}\rangle\}}\sum_{a}\sum_{b^{\prime}}\left|{\rm Im }\{\langle b^{\prime}|\Pi_{a}\varrho|b^{\prime}\rangle\}\right|\] \[= C_{\rm KD}[\varrho;\{\Pi_{a}\}],\]
where we have inserted \(U_{\rm p}U_{\rm p}^{\dagger}=\mathbb{I}\) and defined \(\{|b^{\prime}\rangle\}=\{U_{\rm p}^{\dagger}\left|b\right\rangle\}\), and in the fourth line we have relabelled the sum over \(a\). We note that the set of \(U_{\rm p}\) for a given reference basis comprises all the incoherence-preserving unitaries, which is equivalent to the set of dephasing-covariant unitaries [1], i.e., those unitaries whose operation followed by the dephasing operation \(\mathcal{D}(\varrho;\{\Pi_{a}\})\) yields the same effect when the order of the operations is reversed.
Property (vi) captures the intuition that if two subsystems are correlated, ignoring one of them should not increase the coherence of the other. This can be shown as
\[C_{\rm KD}[\varrho_{12};\{\Pi_{a_{1}}\otimes\mathbb{I}_{2}\}] \tag{8}\] \[:= \max_{\{|b_{1},b_{2}\rangle\}}\sum_{a_{1}}\sum_{b_{1},b_{2}}\left| {\rm Im}\{\sum_{a_{2}}{\rm Pr}_{\rm KD}(a_{1},a_{2},b_{1},b_{2}|\varrho_{12}) \}\right|\] \[= \max_{\{|b_{1},b_{2}\rangle\}}\sum_{a_{1}}\sum_{b_{1},b_{2}}\left| {\rm Im}\{\langle b_{1},b_{2}|(\Pi_{a_{1}}\otimes\mathbb{I}_{2})\varrho_{12}| b_{1},b_{2})\}\right|\] \[\geq \max_{\{|b_{1},b_{2}\rangle\}}\sum_{a_{1}}\sum_{b_{1}}\left|{\rm Im }\{\sum_{b_{2}}\left\langle b_{1},b_{2}|(\Pi_{a_{1}}\otimes\mathbb{I}_{2}) \varrho_{12}|b_{1},b_{2}\right\rangle\}\right|\] \[= \max_{\{|b_{1}\rangle\}}\sum_{a_{1}}\sum_{b_{1}}\left|{\rm Im}\{ \langle b_{1}|\Pi_{a_{1}}\varrho_{1}|b_{1})\}\right|\] \[= C_{\rm KD}[\varrho_{1};\{\Pi_{a_{1}}\}],\]
where \(\varrho_{1}=\sum_{b_{2}}\left\langle b_{2}|\varrho_{12}|b_{2}\right\rangle={ \rm Tr}_{2}\{\varrho_{12}\}\). One can see that the equality is obtained when there
are no quantum or classical correlations in the quantum state, i.e., \(\varrho_{12}=\varrho_{1}\otimes\varrho_{2}\), by virtue of the fact that \(\langle b_{2}|\varrho_{2}|b_{2}\rangle\) is real and nonnegative for all \(b_{2}\), and \(\sum_{b_{2}}\langle b_{2}|\varrho_{2}|b_{2}\rangle=1\).
Finally, property (vii) can be shown as follows:
\[C_{\rm KD}[p\varrho+(1-p){\cal D}(\varrho;\{\Pi_{a^{\prime}}\}); \{\Pi_{a}\}] \tag{9}\] \[= pC_{\rm KD}[\varrho;\{\Pi_{a}\}]\leq C_{\rm KD}[\varrho;\{\Pi_{a }\}],\]
where we have used the fact that \([{\cal D}(\varrho;\{\Pi_{a^{\prime}}\}),\Pi_{a}]=0\) for all \(a\), together with \(p\geq 0\), to get the equality in the second line.
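These properties are easy to probe numerically. As one example (our sketch, assuming numpy), the key identity behind property (vii), namely that the dephased part contributes no imaginary part so that mixing in \(\mathcal{D}(\varrho;\{\Pi_{a}\})\) simply rescales every \(\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}\}\) by \(p\), can be checked for a random state and a random second basis:

```python
import numpy as np

d, p = 4, 0.3
rng = np.random.default_rng(2)

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)
rho_prime = p * rho + (1 - p) * np.diag(np.diag(rho))  # p rho + (1-p) D(rho)

B = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
for a in range(d):
    for b in range(d):
        lhs = (np.conj(B[a, b]) * (rho_prime[a, :] @ B[:, b])).imag
        rhs = p * (np.conj(B[a, b]) * (rho[a, :] @ B[:, b])).imag
        assert np.isclose(lhs, rhs)
print("Im Pr_KD scales exactly by p under partial dephasing, as in Eq. (9)")
```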
Let us discuss a few implications of the above definition of KD coherence. First, it is clear from the definition that the maximum KD coherence in a quantum state relative to all possible incoherent bases is obtained as the maximum of the \(l_{1}\)-norm of the imaginary part of the associated KD quasiprobability over all possible pairs of bases, i.e., \(\max_{\{|a\rangle\}}C_{\rm KD}[\varrho;\{\Pi_{a}\}]=\max_{\{|a\rangle\}}\max_{ \{|b\rangle\}}\sum_{a}\sum_{b}\left|{\rm Im}\{{\rm Pr}_{\rm KD}(a,b|\varrho) \}\right|\). Equivalently, the maximum of the \(l_{1}\)-norm of the imaginary part of the associated KD quasiprobability over all pairs of the defining bases encodes the maximum coherence in the state relative to all incoherent bases.
Next, since KD coherence is defined as the maximal incompatibility between the state and the incoherent basis, it is natural to expect that it captures, to some extent, the genuine quantum uncertainty of the basis in the quantum state. It is therefore instructive to compare the KD coherence relative to a basis with the quantum variance of the basis. Note that the quantum variance quantifies the total quantum uncertainty, which also includes the uncertainty arising from classical mixing. We show that the KD coherence \(C_{\rm KD}[\varrho;\{\Pi_{a}\}]\) is always less than or equal to the total sum of the square roots of the quantum variances (i.e., the quantum standard deviations) of the basis elements \(\{\Pi_{a}\}\) in the state \(\varrho\). To see this, we first have, from Eq. (4),
\[C_{\rm KD}[\varrho;\{\Pi_{a}\}] \tag{10}\] \[= \max_{\{|b\rangle\}}\sum_{a}\sum_{b}\left|{\rm Im}\Big{\{}\frac{ {\rm Tr}\{\Pi_{b}\Pi_{a}\varrho\}}{{\rm Tr}\{\Pi_{b}\varrho\}}\Big{\}}\right| {\rm Tr}\{\Pi_{b}\varrho\}\] \[\leq \sum_{a}\Big{[}\sum_{b_{*}}\Big{(}\Big{|}\frac{{\rm Tr}\{\Pi_{b_ {*}}\Pi_{a}\varrho\}}{{\rm Tr}\{\Pi_{b_{*}}\varrho\}}\Big{|}^{2}-{\rm Re} \Big{\{}\frac{{\rm Tr}\{\Pi_{b_{*}}\Pi_{a}\varrho\}}{{\rm Tr}\{\Pi_{b_{*}} \varrho\}}\Big{\}}^{2}\Big{)}{\rm Tr}\{\Pi_{b_{*}}\varrho\}\Big{]}^{1/2}\] \[\leq \sum_{a}\Big{[}\sum_{b_{*}}\frac{({\rm Tr}\{\Pi_{b_{*}}\Pi_{a} \varrho\})^{2}}{{\rm Tr}\{\Pi_{b_{*}}\varrho\}}-\big{(}\sum_{b_{*}}{\rm Re} \big{\{}{\rm Tr}\{\Pi_{b_{*}}\Pi_{a}\varrho\}\big{\}}\big{)}^{2}\Big{]}^{1/2},\]
where \(\{|b_{*}\rangle\}\) is the second basis which achieves the maximum, and we have made use of the Jensen inequality and the completeness relation for the second basis, i.e., \(\sum_{b_{*}}\Pi_{b_{*}}=\mathbb{I}\), to get the third and fourth lines. Next, applying the Cauchy-Schwarz inequality to the numerator in the first term on the right-hand side, i.e., \((\mathrm{Tr}\{\Pi_{b_{*}}\Pi_{a}\varrho\})^{2}=(\mathrm{Tr}\{(\Pi_{b_{*}}^{1/2} \Pi_{a}\varrho^{1/2})(\varrho^{1/2}\Pi_{b_{*}}^{1/2})\})^{2}\leq\mathrm{Tr}\{ \Pi_{b_{*}}\Pi_{a}\varrho\Pi_{a}\}\mathrm{Tr}\{\varrho\Pi_{b_{*}}\}\), and using the completeness relation \(\sum_{b_{*}}\Pi_{b_{*}}=\mathbb{I}\), we finally obtain
\[C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}] \leq \sum_{a}\left[\mathrm{Tr}\{\Pi_{a}^{2}\varrho\}-\mathrm{Tr}\{\Pi_ {a}\varrho\}^{2}\right]^{1/2} \tag{11}\] \[= \sum_{a}\Delta_{\Pi_{a}}[\varrho],\]
where \(\Delta_{\hat{O}}^{2}[\varrho]:=\mathrm{Tr}\{\hat{O}^{2}\varrho\}-(\mathrm{Tr} \{\hat{O}\varrho\})^{2}\) is the quantum variance of \(\hat{O}\) in the state \(\varrho\).
We proceed to show that the KD coherence of any quantum state \(\varrho\) relative to any reference basis \(\{\left|a\right\rangle\}\) is always less than or equal to the \(l_{1}\)-norm coherence of \(\varrho\) relative to the basis \(\{\left|a\right\rangle\}\), and that the two coincide for \(d=2\), i.e., for a single qubit. First, let us consider the general case \(d\geq 2\). From Eqs. (3) and (4), we have
\[C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}] \leq \max_{\{\left|b\right\rangle\}}\sum_{a}\sum_{b}\sum_{a^{\prime} \neq a}\left|\varrho_{aa^{\prime}}\right|\left|\left\langle b|a\right\rangle \right|\left|\left\langle a^{\prime}|b\right\rangle\right| \tag{12}\] \[= \sum_{a\neq a^{\prime}}\left|\varrho_{aa^{\prime}}\right|\max_{ \{\left|b\right\rangle\}}\sum_{b}\left|\left\langle b|a\right\rangle\right| \left|\left\langle a^{\prime}|b\right\rangle\right|.\]
On the other hand, using the Cauchy-Schwarz inequality we have \(\sum_{b}|\left\langle b|a\right\rangle||\left\langle a^{\prime}|b\right\rangle |\leq\left(\sum_{b}|\left\langle b|a\right\rangle|^{2}\sum_{b^{\prime}}|\left \langle a^{\prime}|b^{\prime}\right\rangle|^{2}\right)^{1/2}=1\), where we have made use of the completeness relation for the second basis, \(\sum_{b}\left|b\right\rangle\left\langle b\right|=\mathbb{I}\), and the equality is reached when the second basis \(\{\left|b\right\rangle\}\) and the incoherent basis \(\{\left|a\right\rangle\}\) satisfy \(|\left\langle a|b\right\rangle|=\frac{1}{\sqrt{d}}\) for all \(a,b\). Finally, upon inserting this into Eq. (12), we obtain
\[C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]\leq\sum_{a\neq a^{\prime}}\left|\varrho_ {aa^{\prime}}\right|=C_{l_{1}}[\varrho;\{\Pi_{a}\}], \tag{13}\]
as claimed. Hence, a non-vanishing KD coherence can be used to detect the \(l_{1}\)-norm quantum coherence. Moreover, since a vanishing KD coherence leads to a vanishing \(l_{1}\)-norm coherence (property (i)), it is a faithful detector.
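Both bounds are also easy to check numerically. Since the derivations of Eqs. (11) and (13) hold for every fixed second basis, with the maximization only tightening them, each randomly sampled basis \(\{|b\rangle\}\) must respect both bounds; a quick sketch (ours, assuming numpy):

```python
import numpy as np

d = 3
rng = np.random.default_rng(3)

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

C_l1 = np.abs(rho).sum() - np.trace(np.abs(rho))     # sum_{a != a'} |rho_aa'|
std_sum = sum(np.sqrt((rho[a, a] - rho[a, a] ** 2).real) for a in range(d))

for _ in range(5000):
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    B = np.linalg.qr(Z)[0]                           # a random second basis
    s = sum(abs((np.conj(B[a, b]) * (rho[a, :] @ B[:, b])).imag)
            for a in range(d) for b in range(d))
    assert s <= std_sum + 1e-12                      # Eq. (11)
    assert s <= C_l1 + 1e-12                         # Eq. (13)
print("Eqs. (11) and (13) hold for every sampled second basis")
```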
Let us show that the inequality of Eq. (13) is always saturated for a single qubit, i.e., a two-dimensional quantum system, in an arbitrary quantum state. Assume first, for simplicity, that the quantum state of the qubit is pure, so that it can in general be written as
\[\left|\psi\right\rangle=\psi_{0}\left|0\right\rangle+\psi_{1}\left|1\right\rangle =\cos\frac{\theta}{2}\left|0\right\rangle+\sin\frac{\theta}{2}e^{i\eta}\left| 1\right\rangle, \tag{14}\]
where \(0\leq\theta\leq\pi\) is the polar angle of the Bloch sphere, \(0\leq\eta\leq 2\pi\) is the azimuthal angle, and \(\{\left|0\right\rangle,\left|1\right\rangle\}\) are the eigenstates of the Pauli matrix \(\sigma_{z}\). The \(l_{1}\)-norm coherence of the quantum state \(\left|\psi\right\rangle\) with respect to the incoherent basis \(\{\left|a_{z}\right\rangle\}=\{\left|0\right\rangle,\left|1\right\rangle\}\) is thus given by \(C_{l_{1}}[\left|\psi\right\rangle\left\langle\psi\right|;\{\Pi_{a_{z}}\}]=2 \big{|}\psi_{0}\psi_{1}^{*}\big{|}=|\sin\theta|\).
Next, for the purpose of computing the KD coherence defined in Eq. (4), we express the second basis of the two-dimensional Hilbert space \(\{\left|b\right\rangle\}=\{\left|b+\right\rangle,\left|b-\right\rangle\}\) as:
\[\left|b(\alpha,\beta)+\right\rangle := \cos\frac{\alpha}{2}\left|0\right\rangle+\sin\frac{\alpha}{2}e^{i \beta}\left|1\right\rangle;\] \[\left|b(\alpha,\beta)-\right\rangle := \sin\frac{\alpha}{2}\left|0\right\rangle-\cos\frac{\alpha}{2}e^{ i\beta}\left|1\right\rangle, \tag{15}\]
with \(0\leq\alpha\leq\pi\) and \(0\leq\beta\leq 2\pi\). We note that, upon varying the angles \(\alpha\) and \(\beta\) over their whole ranges, one scans over all possible orthonormal bases of the two-dimensional Hilbert space. Using this parameterization for the second basis, the KD coherence relative to the basis \(\{\left|a_{z}\right\rangle\}=\{\left|0\right\rangle,\left|1\right\rangle\}\) can then be computed straightforwardly to give
\[C_{\rm KD}[\left|\psi\right\rangle\left\langle\psi\right|;\{\Pi_ {a_{z}}\}] \tag{16}\] \[= \max_{\left|b(\alpha,\beta)\right\rangle}\sum_{a_{z}}\sum_{b} \left|{\rm Im}\{\left\langle b|a_{z}\right\rangle\left\langle a_{z}|\psi \right\rangle\left\langle\psi|b\right\rangle\}\right|\] \[= \max_{\alpha,\beta}|\sin\theta\sin(\beta-\eta)\sin\alpha|\] \[= |\sin\theta|=C_{l_{1}}[\left|\psi\right\rangle\left\langle\psi \right|;\{\Pi_{a_{z}}\}].\]
Hence, for a two-dimensional pure state, the KD coherence relative to the incoherent basis \(\{\left|a_{z}\right\rangle\}\) is indeed equal to the \(l_{1}\)-norm quantum coherence with respect to the same basis.
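A brute-force scan over the parameterization of Eq. (15) reproduces this equality (our sketch): the grid maximum of \(\sum_{a,b}|\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}\}|\) matches \(|\sin\theta|\), with the maximizing angles at \(\alpha=\pi/2\) and \(\beta=\eta\pm\pi/2\):

```python
import numpy as np

theta, eta = 1.1, 2.3                      # arbitrary Bloch angles of |psi>
psi = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * eta)])
rho = np.outer(psi, psi.conj())

best, arg = 0.0, None
for al in np.linspace(0.0, np.pi, 181):
    for be in np.linspace(0.0, 2 * np.pi, 361):
        bp = np.array([np.cos(al / 2), np.sin(al / 2) * np.exp(1j * be)])
        bm = np.array([np.sin(al / 2), -np.cos(al / 2) * np.exp(1j * be)])
        s = sum(abs((np.conj(kb[a]) * (rho[a, :] @ kb)).imag)
                for a in range(2) for kb in (bp, bm))
        if s > best:
            best, arg = s, (al, be)

print(best, abs(np.sin(theta)))            # agree up to grid resolution
print(arg)                                 # alpha ~ pi/2, beta ~ eta - pi/2 here
```

The scan happens to find the maximizer \(\beta=\eta-\pi/2\) first; \(\beta=\eta+\pi/2\) gives the same value because of the absolute value.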
Let us discuss the geometrical meaning of the above calculation before generalizing the result to an arbitrary two-dimensional incoherent basis and an arbitrary mixed state. First, note that the maximizations over the two parameters \(\alpha,\beta\) characterizing the second basis \(\{\left|b(\alpha,\beta)\right\rangle\}=\{\left|b(\alpha,\beta)+\right\rangle, \left|b(\alpha,\beta)-\right\rangle\}\) are carried out independently of each other. The maximization over \(\alpha\), which parameterizes the amplitude of \(\left\langle a_{z}|b(\alpha,\beta)\pm\right\rangle\), is attained at \(\alpha=\pi/2\). This means that the basis \(\{\left|b(\alpha,\beta)\right\rangle\}\) must lie on the equator of the Bloch sphere, so that it is mutually unbiased with the incoherent basis \(\{\left|a_{z}\right\rangle\}=\{\left|0\right\rangle,\left|1\right\rangle\}\). Next, the maximization over \(\beta\), which parameterizes the phase of \(\left\langle a_{z}|b(\alpha,\beta)\pm\right\rangle\), is attained at \(\beta=\eta+\pi/2\). Combined together, the maximum is attained when the second basis is given by \(\{\left|b_{*}\right\rangle_{z}\}=\{\left|b_{*}+\right\rangle_{z},\left|b_{*}- \right\rangle_{z}\}\), where \(\left|b_{*}\pm\right\rangle_{z}=\frac{1}{\sqrt{2}}(\left|0\right\rangle\pm ie^{ i\eta}\left|1\right\rangle)\). Hence, the maximal basis
\(\left\{\left|b_{*}\right\rangle_{z}\right\}\) is orthogonal to the plane on which both the incoherent basis and the quantum state are lying. One thus finds that the maximal basis \(\left\{\left|b_{*}\right\rangle_{z}\right\}\) turns out to be also mutually unbiased with \(\left\{\left|\psi\right\rangle,\left|\psi\right\rangle^{\bot}\right\}\), where \(\left|\psi\right\rangle^{\bot}=\sin\frac{\theta}{2}\left|0\right\rangle-\cos \frac{\theta}{2}e^{i\eta}\left|1\right\rangle\) is the orthonormal partner of \(\left|\psi\right\rangle\). Moreover, note that the state \(\left|\psi\right\rangle\) reaches its maximal coherence relative to the basis \(\left\{\left|a_{z}\right\rangle\right\}\) when \(\theta=\pi/2\), so that it is mutually unbiased with both \(\left\{\left|a_{z}\right\rangle\right\}\) and \(\left\{\left|b_{*}\right\rangle_{z}\right\}\). Hence, in this case, the state, the incoherent basis, and the maximal second basis comprise three mutually unbiased bases of the two-dimensional Hilbert space.
The computation of KD coherence in Eq. (16) suggests the following generalization for the pure state of a single qubit relative to an arbitrary incoherent basis. Consider the quantum coherence in the state \(\left|\psi\right\rangle\) with respect to the incoherent orthonormal basis \(\left\{\left|a_{\vec{n}}\right\rangle\right\}=\left\{\left|\vec{n}+\right\rangle,\left|\vec{n}-\right\rangle\right\}\), the complete eigenbasis of the Pauli operator \(\sigma_{\vec{n}}\) along an arbitrary unit vector \(\vec{n}\), i.e., \(\sigma_{\vec{n}}=\vec{n}\cdot\vec{\sigma}\), where \(\vec{\sigma}=\left(\sigma_{x},\sigma_{y},\sigma_{z}\right)\). We first express the state as
\[\left|\psi\right\rangle=\psi_{\vec{n}+}\left|\vec{n}+\right\rangle+\psi_{\vec{ n}-}\left|\vec{n}-\right\rangle, \tag{17}\]
where \(\psi_{\vec{n}\pm}=\left\langle\vec{n}\pm\left|\psi\right\rangle\), so that the \(l_{1}\)-norm quantum coherence reads \(C_{l_{1}}[\left|\psi\right\rangle\left\langle\psi\right|;\left\{\Pi_{a_{\vec{n }}}\right\}]=2|\psi_{\vec{n}+}\psi_{\vec{n}-}^{*}|\), where \(\Pi_{a_{\vec{n}}}=\left|a_{\vec{n}}\right\rangle\left\langle a_{\vec{n}}\right|\). Let us show that this is equal to the KD coherence \(C_{\text{KD}}[\left|\psi\right\rangle\left\langle\psi\right|;\left\{\Pi_{a_{ \vec{n}}}\right\}]\). To do this, we shall use the property (iii), namely
\[C_{\text{KD}}[\left|\psi\right\rangle\left\langle\psi\right|;\left\{\Pi_{a_{ \vec{n}}}\right\}]=C_{\text{KD}}[U\left|\psi\right\rangle\left\langle\psi \right|U^{\dagger};\left\{U\Pi_{a_{\vec{n}}}U^{\dagger}\right\}], \tag{18}\]
where \(U\) is an arbitrary unitary operator. Let us further choose a unitary operator: \(U=\left|0\right\rangle\left\langle\vec{n}+\right|+\left|1\right\rangle\left\langle \vec{n}-\right|\), so that we have the following transformation of bases: \(U\left|\vec{n}+\right\rangle\left\langle\vec{n}+\right|U^{\dagger}=\left|0 \right\rangle\left\langle 0\right|\) and \(U\left|\vec{n}-\right\rangle\left\langle\vec{n}-\right|U^{\dagger}=\left|1 \right\rangle\left\langle 1\right|\), and the quantum state of Eq. (17) is transformed into
\[\left|\psi^{\prime}\right\rangle=U\left|\psi\right\rangle=\psi_{\vec{n}+} \left|0\right\rangle+\psi_{\vec{n}-}\left|1\right\rangle. \tag{19}\]
Taking all these into account, Eq. (18) thus becomes
\[C_{\text{KD}}[\left|\psi\right\rangle\left\langle\psi\right|; \left\{\Pi_{a_{\vec{n}}}\right\}] = C_{\text{KD}}[\left|\psi^{\prime}\right\rangle\left\langle\psi^{ \prime}\right|;\left\{\Pi_{a_{z}}\right\}] \tag{20}\] \[= 2|\psi_{\vec{n}+}\psi_{\vec{n}-}^{*}|=C_{l_{1}}[\left|\psi \right\rangle\left\langle\psi\right|;\left\{\Pi_{a_{\vec{n}}}\right\}],\]
as claimed. Here, in the second line we have used the previous result for the KD coherence relative to the basis \(\left\{a_{z}\right\}=\left\{\left|0\right\rangle,\left|1\right\rangle\right\}\), noting Eq. (19). Recalling the proof of property (iii) given in Eq. (5), the maximum is obtained when the second basis \(\left\{\left|b_{*}\right\rangle_{\vec{n}}\right\}\) is given by
\(\left|b_{*}+\right\rangle_{\vec{n}}=U^{\dagger}\left|b_{*}+\right\rangle_{z}= \frac{1}{\sqrt{2}}(\left|\vec{n}+\right\rangle+ie^{i\eta}\left|\vec{n}-\right\rangle)\) and \(\left|b_{*}-\right\rangle_{\vec{n}}=U^{\dagger}\left|b_{*}-\right\rangle_{z}= \frac{1}{\sqrt{2}}(\left|\vec{n}+\right\rangle-ie^{i\eta}\left|\vec{n}-\right\rangle)\), where \(\eta\) is the relative phase between \(\psi_{\vec{n}+}\) and \(\psi_{\vec{n}-}\).
Finally, one can generalize the above proof of the equality between the KD coherence \(C_{\rm KD}[\varrho;\{\Pi_{a}\}]\) and the \(l_{1}\)-norm coherence \(C_{l_{1}}[\varrho;\{\Pi_{a}\}]\) to a general density operator \(\varrho\) in the two-dimensional Hilbert space relative to an arbitrary reference basis \(\{\left|a\right\rangle\}\). First, taking \(\{\left|a_{z}\right\rangle\}=\{\left|0\right\rangle,\left|1\right\rangle\}\) as the incoherent basis, and using the expression of Eq. (15) for the second basis, one straightforwardly gets \(C_{\rm KD}[\varrho;\{\Pi_{a}\}]=2|\varrho_{01}|=C_{l_{1}}[\varrho;\{\Pi_{a}\}]\), where \(\varrho_{01}=\left\langle 0|\varrho|1\right\rangle\), and the maximum is obtained for the basis in Eq. (15) with \(\alpha=\pi/2\) and \(\beta=\pi/2-\varphi_{01}\), \(\varphi_{01}=\arg\{\varrho_{01}\}\). Using this result, one can then prove the equality between the KD coherence and the \(l_{1}\)-norm coherence for a general density operator relative to a general incoherent basis \(\{\left|\vec{n}+\right\rangle,\left|\vec{n}-\right\rangle\}\), by again using property (iii) of unitary covariance and choosing the unitary that transforms the incoherent basis \(\{\left|\vec{n}+\right\rangle,\left|\vec{n}-\right\rangle\}\) to the computational basis \(\{\left|0\right\rangle,\left|1\right\rangle\}\). Hence, for a single qubit, the KD coherence defined in Eq. (4) shares all the monotonicity properties of the \(l_{1}\)-norm coherence with respect to certain classes of incoherence-preserving quantum operations [2].
We further show that for a single qubit the inequality of Eq. (11) is also saturated for all pure states. First, without losing generality, let us take one of the elements of the incoherent basis along the positive \(z\)-axis of the Bloch sphere. The incoherent reference basis is thus given by \(\{\left|a_{z}\right\rangle\}=\{\left|0\right\rangle,\left|1\right\rangle\}\), the complete set of orthonormal eigenvectors of \(\sigma_{z}\). For our purpose, it is convenient to express the general state of the qubit as \(\varrho=\frac{1}{2}(\mathbb{I}+r_{x}\sigma_{x}+r_{y}\sigma_{y}+r_{z}\sigma_{z})\), where \(r^{2}=r_{x}^{2}+r_{y}^{2}+r_{z}^{2}\leq 1\). One then directly has
\[C_{\rm KD}[\varrho;\{\Pi_{a_{z}}\}] = \left|r_{x}-ir_{y}\right|=\sqrt{r^{2}-r_{z}^{2}} \tag{21}\] \[\leq \sqrt{1-r_{z}^{2}}=\sum_{a_{z}}\Delta_{\hat{\Pi}_{a_{z}}}[\varrho],\]
in accord with the inequality of Eq. (11). The equality is reached for pure states, where \(r^{2}=1\), as claimed. Hence, for a single qubit, KD coherence can indeed be seen as the genuinely quantum share of the total quantum uncertainty quantified by the quantum standard deviation.
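The same scan works for mixed states (our sketch): for an arbitrary Bloch vector, the grid maximum reproduces \(\sqrt{r^{2}-r_{z}^{2}}\) of Eq. (21) and stays below the standard-deviation bound \(\sqrt{1-r_{z}^{2}}\):

```python
import numpy as np

rx, ry, rz = 0.3, -0.5, 0.4                  # Bloch vector with r <= 1
rho = 0.5 * np.array([[1 + rz, rx - 1j * ry],
                      [rx + 1j * ry, 1 - rz]])

best = 0.0
for al in np.linspace(0.0, np.pi, 181):
    for be in np.linspace(0.0, 2 * np.pi, 361):
        bp = np.array([np.cos(al / 2), np.sin(al / 2) * np.exp(1j * be)])
        bm = np.array([np.sin(al / 2), -np.cos(al / 2) * np.exp(1j * be)])
        best = max(best, sum(abs((np.conj(kb[a]) * (rho[a, :] @ kb)).imag)
                             for a in range(2) for kb in (bp, bm)))

print(best, np.sqrt(rx**2 + ry**2))          # Eq. (21): sqrt(r^2 - r_z^2)
print(best <= np.sqrt(1 - rz**2))            # the bound of Eq. (11)
```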
Next, it is instructive to compare the KD coherence defined in Eq. (4) with a quantity defined as [19; 26]
\[\mathcal{N}[\mathrm{Pr}_{\rm KD}(a,b|\varrho)]:=\sum_{a,b}|\mathrm{Pr}_{\rm KD }(a,b|\varrho)|-1. \tag{22}\]
\(\mathcal{N}[\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)]\) quantifies the KD nonclassicality, i.e., the negativity and the nonreality of the KD quasiprobability \(\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\) defined over the bases \(\{|a\rangle\}\) and \(\{|b\rangle\}\), which has been argued to signify genuinely quantum behaviour in broad classes of quantum phenomena. It has been shown in Ref. [23] that it satisfies certain plausible requirements for a quantifier of KD nonclassicality. One finds in particular that the KD nonclassicality of Eq. (22) is nonincreasing under the decoherence operation, as is the KD coherence. An interesting observation is made in Ref. [19], where the authors consider a depolarizing model of decoherence to show that nonnegativity of the real part of the KD quasiprobability is not sufficient to guarantee a completely incoherent state.
Now, let us assume that KD coherence relative to the basis \(\{|a\rangle\}\) is vanishing, i.e., \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]=0\). Then, by the property (i) of faithfulness, we have \([\varrho,\Pi_{a}]=0\) for all \(a\). In this case, noting that \(\Pi_{a}^{2}=\Pi_{a}\), the KD quasiprobability relative to the basis \(\{|a\rangle\}\) and any other basis \(\{|b\rangle\}\) can be written as
\[\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)=\langle b|\Pi_{a}\varrho|b\rangle= \mathrm{Tr}\Big{\{}\Pi_{b}\frac{\Pi_{a}\varrho\Pi_{a}}{\mathrm{Tr}\{\Pi_{a} \varrho\}}\Big{\}}\mathrm{Tr}\{\Pi_{a}\varrho\}. \tag{23}\]
This is just the joint probability of getting the outcomes \((a,b)\) in the successive measurement of \(\{\Pi_{a}\}\) followed by the measurement of \(\{\Pi_{b}\}\), so it is always real and nonnegative. Hence, in this case, the KD nonclassicality vanishes, i.e., \(\mathcal{N}[\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)]=0\). One therefore concludes that a nonvanishing KD nonclassicality, i.e., \(\mathcal{N}[\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)]>0\), implies a nonvanishing KD coherence, i.e., \(C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]>0\). By symmetry, the former also implies \(C_{\mathrm{KD}}[\varrho;\{\Pi_{b}\}]>0\). The implication of this result is that the presence of negativity in the KD quasiprobability \(\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\), even when it is real, is sufficient to guarantee coherence relative to one of the defining bases, say \(\{|a\rangle\}\). This is so because one can always vary the other defining basis \(\{|b\rangle\}\) so that the KD quasiprobability becomes nonreal, giving a nonvanishing KD coherence.
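A short numerical check of this chain (ours): for a state diagonal in \(\{|a\rangle\}\), the KD quasiprobability is real and nonnegative for every choice of the second basis, so the nonclassicality of Eq. (22) vanishes:

```python
import numpy as np

d = 3
rng = np.random.default_rng(7)
rho = np.diag(rng.dirichlet(np.ones(d))).astype(complex)  # incoherent in {|a>}

for _ in range(100):
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    B = np.linalg.qr(Z)[0]                                # arbitrary second basis
    kd = np.array([[np.conj(B[a, b]) * (rho[a, :] @ B[:, b])
                    for b in range(d)] for a in range(d)])
    assert np.allclose(kd.imag, 0) and (kd.real > -1e-12).all()
    assert np.isclose(np.abs(kd).sum(), 1.0)              # N[Pr_KD] = 0, Eq. (22)
print("incoherent state: KD quasiprobability is real and nonnegative, N = 0")
```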
## IV Operational and statistical meaning
One of the important problems in the quantification of quantum coherence is to find a quantifier whose definition translates directly into a set of laboratory operations, without recourse to quantum state tomography. Such a set of laboratory operations is then said to give an operational meaning to the coherence quantifier thus defined. Fortunately,
there are several schemes to reconstruct the KD quasiprobability without first resorting to quantum state tomography, as elaborated in Ref. [23]. Two of them are summarized below, focusing on the relevant imaginary part of the KD quasiprobability: one is based on two successive projective measurements, proposed by Johansen [47], and the other is a direct reconstruction based on weak measurement with postselection [48; 49; 50], suggested by Lundeen et al. [51; 52; 53; 54]. These schemes for the reconstruction of the KD quasiprobability lend themselves to the operational interpretation of the KD coherence defined in Eq. (4).
Let us first discuss the method suggested by Johansen based on two successive projective measurements [47]. This is done by noting that the imaginary part of the KD quasiprobability can be expressed as
\[\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\}=\mathrm{ Im}\{\mathrm{Tr}\{\Pi_{b}\Pi_{a}\varrho\}\} \tag{24}\] \[= -\mathrm{Im}\{\mathrm{Tr}\{\Pi_{a}\Pi_{b}\varrho\}\}=\frac{1}{2 }\mathrm{Tr}\{(\varrho-\varrho_{a})\Pi_{b|a}^{\pi/2}\}.\]
Here \(\varrho_{a}=\Pi_{a}\varrho\Pi_{a}+(\mathbb{I}-\Pi_{a})\varrho(\mathbb{I}-\Pi _{a})\) is the state of the system after the binary measurement of \(\Pi_{a}\) without learning the outcomes, where \(\mathbb{I}-\Pi_{a}\) is the projector complementary to \(\Pi_{a}\), and \(\Pi_{b|a}^{\pi/2}=e^{i\Pi_{a}\pi/2}\Pi_{b}e^{-i\Pi_{a}\pi/2}\) is the new second basis after a selective rotation generated by the first basis. We note that while performing the selective rotation to obtain \(\Pi_{b|a}^{\pi/2}\) is operationally challenging, it can in principle be done. The KD coherence can thus be expressed, upon inserting Eq. (24) into Eq. (4), as
\[C_{\mathrm{KD}}[\varrho;\{\Pi_{a}\}]=\frac{1}{2}\max_{\{|b\rangle\}}\sum_{a,b} |\mathrm{Tr}\{[\varrho-\varrho_{a}]\Pi_{b|a}^{\pi/2}\}|. \tag{25}\]
Hence, to observe the KD coherence relative to the basis \(\{|a\rangle\}\), we need to measure the expectation values of \(\Pi_{b|a}^{\pi/2}\) in the states \(\varrho\) and \(\varrho_{a}\), compute the difference, and optimize over all possible choices of \(\{|b\rangle\}\). In this scheme, KD coherence therefore admits a statistical interpretation as the maximal state disturbance induced by the measurement \(\{\Pi_{a},\mathbb{I}-\Pi_{a}\}\) as observed in the expectation value of \(\{\Pi_{b|a}^{\pi/2}\}\).
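For a rank-one projector, \(e^{i\Pi_{a}\pi/2}=\mathbb{I}+(i-1)\Pi_{a}\), so the identity underlying Eq. (25) can be verified numerically; a sketch of ours, checking the absolute values that enter Eq. (25):

```python
import numpy as np

d = 3
rng = np.random.default_rng(8)

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)
B = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]

I = np.eye(d)
for a in range(d):
    Pi_a = np.zeros((d, d), dtype=complex); Pi_a[a, a] = 1.0
    rho_a = Pi_a @ rho @ Pi_a + (I - Pi_a) @ rho @ (I - Pi_a)
    U = I + (1j - 1) * Pi_a                   # exp(i (pi/2) Pi_a) for a projector
    for b in range(d):
        Pi_b = np.outer(B[:, b], B[:, b].conj())
        Pi_rot = U @ Pi_b @ U.conj().T        # selectively rotated second basis
        lhs = abs((B[:, b].conj() @ Pi_a @ rho @ B[:, b]).imag)
        rhs = 0.5 * abs(np.trace((rho - rho_a) @ Pi_rot))
        assert np.isclose(lhs, rhs)
print("|Im Pr_KD(a,b)| = |Tr{(rho - rho_a) Pi_{b|a}^{pi/2}}|/2 for all (a,b)")
```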
Let us proceed to discuss the direct reconstruction of KD quasiprobability via weak measurement with postselection proposed by Lundeen and co-workers [51; 52; 53; 54]. Consider the weak measurement of a Hermitian observable \(A\) without significantly perturbing the preselected state \(\varrho\), followed by a postselection on a state \(|\phi\rangle\) via a normal (i.e., strong) projective measurement. One then obtains the following weak value [48; 49; 50]:
\[A^{\mathrm{w}}(\phi|\varrho)=\frac{\langle\phi|A\varrho|\phi\rangle}{\langle \phi|\varrho|\phi\rangle}. \tag{26}\]
Note that the weak value \(A^{\rm w}(\phi|\varrho)\) may take real values outside the range of the eigenvalues of \(A\), and it can even be complex. Such values are called strange or anomalous weak values. The real and imaginary parts of \(A^{\rm w}(\phi|\varrho)\) can be inferred, respectively, from the average shifts of the position and momentum of the pointer of the measuring device [55; 56]. Noting this, the imaginary part of the KD quasiprobability of Eq. (2) can be directly observed by first weakly measuring \(\Pi_{a}\) on the preselected state \(\varrho\), then postselecting on \(|b\rangle\), inferring the imaginary part of the weak value, and multiplying by the probability of successful postselection, i.e.,
\[{\rm Im}\{{\rm Pr}_{\rm KD}(a,b|\varrho)\}\:=\:{\rm Im}\{\frac{\langle b|\Pi_{a }\varrho|b\rangle}{\langle b|\varrho|b\rangle}\}\:\langle b|\varrho|b\rangle= {\rm Im}\{\Pi_{a}^{\rm w}(b|\varrho)\}{\rm Pr}(b|\varrho). \tag{27}\]
The KD coherence \(C_{\rm KD}[\varrho;\{\Pi_{a}\}]\) of Eq. (4) can thus be obtained by taking the sum of the absolute values of Eq. (27) and maximizing over all possible choices of the postselection basis:
\[C_{\rm KD}[\varrho;\{\Pi_{a}\}]=\max_{\{|b\rangle\}}\sum_{a}\sum_{b}\big{|}{ \rm Im}\big{\{}\Pi_{a}^{\rm w}(b|\varrho)\big{\}}\big{|}{\rm Pr}(b|\varrho). \tag{28}\]
The above operational interpretation of the KD coherence in terms of the statistics of weak values suggests the following statistical interpretation inherited from the interpretation of the weak value. First, as argued in [57; 58; 59; 60], the imaginary part of the weak value \(A^{\rm w}(b|\varrho)\) defined in Eq. (26) can be interpreted as the strength of the error in an optimal estimate of \(A\) (or a real-deterministic c-valued quantity associated with \(A\) and \(\varrho\)[60]) based on information about \(\{b\}\) obtained from a projective measurement \(\{\Pi_{b}\}\), given prior information about preparation represented by \(\varrho\). With this in mind, \(C_{\rm KD}[\varrho;\{\Pi_{a}\}]\) obtained operationally in Eq. (28) can thus be interpreted as the maximum average absolute error of estimating the incoherent basis \(\{|a\rangle\}\), by varying the postselection basis \(\{|b\rangle\}\), given a preparation associated with the quantum state \(\varrho\).
Hence, the KD coherence \(C_{\rm KD}[\varrho;\{\Pi_{a}\}]\) devised in this work has a transparent meaning in terms of direct laboratory operations. It is clear from the above operational schemes for observing KD coherence that the resource-consuming procedure is the maximization over all possible second bases \(\{|b\rangle\}\). This classical optimization can be done via variational quantum circuits in a hybrid quantum-classical scheme. Let us note that, at least for a single qubit (a two-dimensional system), the method of computing, e.g., the \(l_{1}\)-norm coherence by first reconstructing the density matrix via state tomography is much simpler than the above operational schemes based either on two successive measurements or on weak measurement
with postselection. We emphasize, however, that the procedure of state tomography does not tell us the operational meaning of the \(l_{1}\)-norm coherence. By contrast, KD coherence translates directly into a set of laboratory operations, leading to its statistical meaning, which might give insight into its applications in quantum information processing.
Moreover, if one only aims to detect the presence of coherence of an unknown quantum state with respect to an incoherent basis \(\{|a\rangle\}\), then one may skip the operationally cumbersome maximization over the classical parameters. Namely, it is sufficient to find a second basis \(\{|b\rangle\}\) for which the \(l_{1}\)-norm of the imaginary part of the KD quasiprobability is nonvanishing, i.e., \(\sum_{a}\sum_{b}\left|\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\} \right|>0\), which, by the definition of Eq. (4), guarantees a nonvanishing KD coherence, and thus, by virtue of Eq. (13), a nonvanishing \(l_{1}\)-norm quantum coherence. Since \(\left|\mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(a,b|\varrho)\}\right|=\left| \mathrm{Im}\{\mathrm{Pr}_{\mathrm{KD}}(b,a|\varrho)\}\right|\), it also indicates coherence with respect to the basis \(\{|b\rangle\}\). The maximization over one of the two bases, i.e., over \(\{|b\rangle\}\) or \(\{|a\rangle\}\), defines the KD coherence with respect to the other basis.
Having expressed the KD coherence in terms of weak measurement with postselection as discussed above, it still makes sense operationally if the incoherent basis, given by the set of one-dimensional (rank-one) projectors \(\{\Pi_{a}\}\), is replaced by a more general measurement basis. This suggests the following generalization of the KD coherence. Consider a complete set of POVM elements, i.e., \(\{M_{x}\}\), \(M_{x}\geq 0\), \(\sum_{x}M_{x}=\mathbb{I}\). We then define the KD coherence with respect to the POVM basis as
\[C_{\mathrm{KD}}[\varrho;\{M_{x}\}] := \max_{\{|b\rangle\}}\sum_{x}\sum_{b}\left|\mathrm{Im}\big{\{} \left\langle b|M_{x}\varrho|b\right\rangle\big{\}}\right| \tag{29}\] \[= \max_{\{|b\rangle\}}\sum_{x}\sum_{b}\frac{1}{2}\big{|}\left\langle b |[M_{x},\varrho]|b\right\rangle\big{|}.\]
Note, however, that in this case a state is in general incoherent if \([M_{x},\varrho]=0\) for all \(x\). \(C_{\mathrm{KD}}[\varrho;\{M_{x}\}]\) reduces to Eq. (4) when \(\{M_{x}\}\) is a set of orthonormal one-dimensional projectors, but it also covers the case in which the rank of the projectors is larger than one, allowing the definition of coherence relative to a decomposition of the Hilbert space into subspaces of dimension larger than one, as well as the case in which the POVM operators are not orthogonal. See Ref. [61] for a different approach. Let us, for example, assume that the POVM is obtained by coarse-graining the incoherent basis, i.e., \(M_{\mathcal{A}}=\sum_{a\in\mathcal{A}}\Pi_{a}\), where the \(\mathcal{A}\) are disjoint subsets partitioning the index set \(\{a\}\). Such a coarse-graining arises naturally
if there is a degeneracy. Then, in this case, we have:
\[C_{\rm KD}[\varrho;\{M_{\cal A}\}] = \max_{\{|b\rangle\}}\sum_{\cal A}\sum_{b}\left|{\rm Im}\big{\{}\, \langle b|\sum_{a\in{\cal A}}\Pi_{a}\varrho|b\rangle\,\big{\}}\right| \tag{30}\] \[\leq \max_{\{|b\rangle\}}\sum_{a,b}\left|{\rm Im}\big{\{}\,\langle b| \Pi_{a}\varrho|b\rangle\,\big{\}}\right|\] \[= C_{\rm KD}[\varrho;\{\Pi_{a}\}].\]
Hence, the KD coherence is nonincreasing under coarse-graining of the incoherent basis.
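The last step above is just the triangle inequality, and it holds for every fixed second basis even before the maximization; a quick check (ours):

```python
import numpy as np

d = 4
rng = np.random.default_rng(10)

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)
B = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]

blocks = [(0, 1), (2, 3)]        # disjoint subsets A partitioning the indices
fine = coarse = 0.0
for b in range(d):
    terms = [np.conj(B[a, b]) * (rho[a, :] @ B[:, b]) for a in range(d)]
    fine += sum(abs(t.imag) for t in terms)                                # {Pi_a}
    coarse += sum(abs(sum(terms[a] for a in blk).imag) for blk in blocks)  # {M_A}
assert coarse <= fine + 1e-12
print("coarse-grained <= fine-grained:", coarse, "<=", fine)
```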
As a final note, the KD quasiprobability has been argued to be a central object in the study of quantum fluctuations arising in broad fields of quantum science [23]. This observation naturally suggests a possible application of the concept of KD coherence to characterize such quantum fluctuations. Here, we show that it can be used to characterize the linear response function. The exposition below follows that of Ref. [23]. Let us consider a unitary dynamics with the Hamiltonian \(H(t)=H_{0}-\lambda(t)A\), with \(A\) a perturbation and \(\lambda(t)\) nonzero only for \(t>0\). Then, in the linear response regime, we have \({\rm Tr}\{B(t)\varrho(t)\}-{\rm Tr}\{B(0)\varrho(0)\}\approx\int_{0}^{t}{\rm d }t^{\prime}\lambda(t^{\prime})\Phi_{AB}(t^{\prime},t)\), where \(\varrho(t)\) is the quantum state at time \(t\) and \(\Phi_{AB}(t^{\prime},t)\) is the linear response function, given by \(\Phi_{AB}(t^{\prime},t)=i{\rm Tr}\{[A(t^{\prime}),B(t)]\varrho(0)\}\), with \(O(t)=e^{iH_{0}t}Oe^{-iH_{0}t}\). Expressing \(A(t)=\sum_{a}a\Pi_{a(t)}\) and \(B(t)=\sum_{b}b\Pi_{b(t)}\), where \(\left|a(t)\right\rangle=e^{iH_{0}t}\left|a\right\rangle\) and \(\left|b(t)\right\rangle=e^{iH_{0}t}\left|b\right\rangle\), the linear response function can be written in terms of the imaginary part of the KD quasiprobability as
\[\Phi_{AB}(t^{\prime},t)=2\sum_{a,b}ab{\rm Im}\{{\rm Pr}_{\rm KD}(a(t^{\prime} ),b(t)|\varrho(0))\}. \tag{31}\]
It encodes the correlation between the observable \(B(t)\) and the perturbation. Taking the absolute value, and maximizing over all possible \(B\in\Lambda_{B}\) with the same nontrivial spectrum of eigenvalues, one thus obtains
\[\max_{B\in\Lambda_{B}}|\Phi_{AB}(t^{\prime},t)| \leq 2|a|_{*}|b|_{*}\max_{|b(t)\rangle}\sum_{a,b}\left|{\rm Im}\{{\rm Pr }_{\rm KD}(a(t^{\prime}),b(t)|\varrho(0))\}\right| \tag{32}\] \[= 2|a|_{*}|b|_{*}C_{\rm KD}[\varrho(0);\{\Pi_{a(t^{\prime})}\}],\]
where \(|a|_{*}\) and \(|b|_{*}\) are the maximum absolute eigenvalues of \(A\) and \(B\), respectively. Hence, the KD coherence in the initial state relative to the incoherent basis \(\{|a(t^{\prime})\rangle\}\) determines an upper bound to the absolute linear response function maximized over all \(B\) with a fixed spectrum. This means that a nonvanishing KD coherence is necessary for a nonvanishing linear response function.
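Eq. (31) itself is a simple identity that can be confirmed numerically (our sketch, with a randomly generated \(H_{0}\), observables, and initial state):

```python
import numpy as np

d = 3
rng = np.random.default_rng(11)

def rand_herm(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

H0, A, B = rand_herm(d), rand_herm(d), rand_herm(d)
rho0 = np.diag(rng.dirichlet(np.ones(d))).astype(complex)  # initial state
tp, t = 0.4, 1.7

def heisenberg(O, time):
    w, V = np.linalg.eigh(H0)
    U = V @ np.diag(np.exp(1j * w * time)) @ V.conj().T    # e^{i H0 time}
    return U @ O @ U.conj().T

At, Bt = heisenberg(A, tp), heisenberg(B, t)
Phi = (1j * np.trace((At @ Bt - Bt @ At) @ rho0)).real     # i Tr{[A(t'),B(t)] rho}

a_vals, Va = np.linalg.eigh(At)                            # eigenbasis {|a(t')>}
b_vals, Vb = np.linalg.eigh(Bt)                            # eigenbasis {|b(t)>}
Phi_KD = sum(2 * a_vals[i] * b_vals[j]
             * ((Vb[:, j].conj() @ Va[:, i])
                * (Va[:, i].conj() @ rho0 @ Vb[:, j])).imag
             for i in range(d) for j in range(d))
print(np.isclose(Phi, Phi_KD))                             # Eq. (31)
```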
## V Summary and Remarks
Given a quantum state and an incoherent basis, we have identified a quantity, the KD coherence, defined as the \(l_{1}\)-norm of the imaginary part of the associated KD quasiprobability defined over the incoherent basis and a second basis, maximized over all possible choices of the latter. It quantifies the failure of commutativity of the state with the incoherent basis, and satisfies certain desirable properties for a quantifier of coherence. It is upper bounded by the total sum of the quantum standard deviations, i.e., the quantum uncertainties, of the incoherent basis in the state. The KD coherence gives a lower bound to the \(l_{1}\)-norm quantum coherence, and for an arbitrary state of a single qubit the two yield equal values. We demonstrated that the KD coherence can be translated directly into laboratory operations, i.e., without recourse to quantum state tomography, in a couple of hybrid quantum-classical schemes, leading to its statistical meaning as the maximum disturbance induced by the measurement of the incoherent basis, or as the maximum mean absolute error in the estimation of the incoherent basis. Finally, we discussed the relevance of the KD coherence for characterizing the linear response function. We hope our results will initiate a program to use the nonclassicality of the KD quasiprobability, and the closely related concept of anomalous weak values, to access various nonclassical aspects encoded in the quantum state, such as asymmetry and quantum correlation. This might give better intuition and fresh insight into their roles as resources in quantum information processing, and in the wide areas of quantum science where the KD quasiprobability has been shown to play important roles [23].
###### Acknowledgements.
This work is partly funded by Institute for Research and Community Service, Bandung Institute of Technology, under the program of research assignment with the contract number: 2971/IT1.B07.1/TA.00/2021. It is also in part supported by the Indonesia Ministry of Research, Technology, and Higher Education through PDUPT research scheme with the contract number: 187/E5/PG.02.00.PT/2022 and 2/E1/KP.PTNBH/2019. The Authors would like to thank the anonymous Referees for constructive criticism and suggestions, and
Mohammad K. Agusta for useful discussion.
|
2306.00095 | Side-Channel VoIP Profiling Attack against Customer Service Automated
Phone System | In many VoIP systems, Voice Activity Detection (VAD) is often used on VoIP
traffic to suppress packets of silence in order to reduce the bandwidth
consumption of phone calls. Unfortunately, although VoIP traffic is fully
encrypted and secured, traffic analysis of this suppression can reveal
identifying information about calls made to customer service automated phone
systems. Because different customer service phone systems have distinct, but
fixed (pre-recorded) automated voice messages sent to customers, VAD silence
suppression used in VoIP will enable an eavesdropper to profile and identify
these automated voice messages. In this paper, we will use a popular enterprise
VoIP system (Cisco CallManager), running the default Session Initiation
Protocol (SIP) protocol, to demonstrate that an attacker can reliably use the
silence suppression to profile calls to such VoIP systems. Our real-world
experiments demonstrate that this side-channel profiling attack can be used to
accurately identify not only what customer service phone number a customer
calls, but also what following options are subsequently chosen by the caller in
the phone conversation. | Roy Laurens, Edo Christianto, Bruce Caulkins, Cliff C. Zou | 2023-05-31T18:14:38Z | http://arxiv.org/abs/2306.00095v1 | # Side-Channel VoIP Profiling Attack against Customer Service Automated Phone System
###### Abstract
In many VoIP systems, Voice Activity Detection (VAD) is often used on VoIP traffic to suppress packets of silence in order to reduce the bandwidth consumption of phone calls. Unfortunately, although VoIP traffic is fully encrypted and secured, traffic analysis of this suppression can reveal identifying information about calls made to customer service automated phone systems. Because different customer service phone systems have distinct, but fixed (pre-recorded) automated voice messages sent to customers, VAD silence suppression used in VoIP will enable an eavesdropper to profile and identify these automated voice messages. In this paper, we will use a popular enterprise VoIP system (Cisco CallManager), running the default Session Initiation Protocol (SIP) protocol, to demonstrate that an attacker can reliably use the silence suppression to profile calls to such VoIP systems. Our real-world experiments demonstrate that this side-channel profiling attack can be used to accurately identify not only what customer service phone number a customer calls, but also what following options are subsequently chosen by the caller in the phone conversation.
VoIP; side-channel attack; automated phone system
## I Introduction
Voice over Internet Protocol (VoIP) offers significant advantages compared to traditional circuit-switched voice networks. With a circuit-based call, a dedicated 64-Kbps fixed-bandwidth link is required regardless of how much of the call is speech and how much is silence. A VoIP call, on the other hand, packetizes the entire conversation. Therefore, it can suppress the packets of silence, a technique called Voice Activity Detection (VAD), with which up to 35 percent bandwidth savings can be obtained [1]. These bandwidth savings can then be used for other network applications, which makes VoIP more efficient compared to the circuit-based solution.
However, this silence suppression has unintended consequences with significant privacy implications. A cycle of voice traffic stream and silence creates a distinct pattern that can be identified and catalogued. This pattern exists even if the data stream itself is encrypted and the actual IP-phone endpoint is unknown. VoIP traffic uses a codec to encode/decode the voice, each codec uses a specific packet size and interval, and the presence of the cycle of voice traffic and silence cannot be obfuscated by encryption.
Normally, human conversations have enough variability in their speech pattern, speed, etc., to make silence analysis impractical as an attack vector, because even the same person will not say the same words in exactly the same way every time [2]. However, if the call is made to an automated customer service phone system (usually a toll-free 1-800 number), then it will be answered by an Interactive Voice Response (IVR) recording, which has a constant pattern of speech and silence due to its fixed, pre-recorded voice messages. Therefore, by profiling and cataloging the calls to such a customer service automated phone system, we can reliably identify whether subsequent calls are made to the number that we have profiled. Furthermore, we can even identify the subsequent options that the caller selects during the call using the same method.
In this paper, we set up a VoIP testbed and use it to collect and fingerprint the VoIP traffic of various popular customer service automated phone systems, such as Walmart, airlines, banks, insurance companies, etc. We demonstrate that these customer service automated phone systems have clearly distinguishable voice messages that make them very easy to profile and thus vulnerable to the presented side-channel profiling attack. Our real-world experiments show that an eavesdropper can accurately identify not only what customer service phone number a customer calls, but also what options are subsequently chosen by the caller in the phone conversation.
## II Related Work
### _VoIP Attack_
Research on VoIP attacks is mostly focused on the call setup stage of the protocol. The technical details of the attacks are different, but the methods are more or less the same. Ghafarian et al. [3] showed that VoIP protocols can be attacked with Denial of Service (DoS). They set up a VoIP environment and launched a DoS flood attack against the SIP server. As VoIP is deployed on top of the IP infrastructure, the security of other protocols such as DNS, DHCP, TLS/SSL, and routing protocols, among others, must also be implemented properly. Failure to do so affects VoIP critically. By targeting vulnerable protocols and signaling at the outset of call setup, an attacker can also reroute or intercept calls. Wang et al. [4] investigated a man-in-the-middle (MITM) attack, specifically VoIP call diversion to a bogus IVR or representative. Unlike our paper, all the attacks described in these papers assume that the VoIP messages are unencrypted, whereas our attack is feasible even for encrypted calls.
### _Traffic analysis_
Analysis of network traffic can reveal private information; for example, Alyami et al. [5] studied a privacy attack in which the profiling is performed using IoT devices' network traffic monitored from outside the network. With regard to voice conversations, the analysis can be divided into active (i.e., probing) attacks and passive (i.e., eavesdropping) attacks.
As an example of an active attack, Shintre et al. [6] send continuous probes to targets and analyze the response traffic to reveal which targets are calling one another. However, this will only work on a private network.
For passive attacks, there is plenty of work that looks at packet lengths, such as Wright et al. [7] and Dupasquier et al. [8], whereas Lella [9] looks at silence suppression, which is similar to our approach. But all of them focused on identifying words or phrases that were specifically created and spoken just for the test. In contrast, our paper targets real customer service phone systems, which has practical impact. We are also testing against hardware-based VoIP that is widely used in the real world, instead of app-based VoIP (e.g., WhatsApp) that cannot call ordinary phone numbers.
## III Threat Model and Testbed Overview
In this section, we will give a short introduction to VoIP, followed by an explanation of the characteristics of the attacker and the testbed that we built.
### _VoIP Primer_
VoIP, also known as IP Telephony, is the transmission of voice signals using the Internet Protocol (IP) over a data network, such as the Internet. IP phones running a VoIP service encode the incoming voice signal into a data stream for transmission, and decode an incoming voice data stream back into its original audio signal. There are various supported encoders/decoders (codecs) for VoIP [1], some of which are shown in Table I. All these codecs have a constant interval between voice payloads, which makes them susceptible to our attack.
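As a simple illustration of that arithmetic (ours, using the standard G.711 and G.729 parameters), the payload size per packet follows directly from the codec bit rate and the packetization interval:

```python
# Payload bytes per packet = bit rate (bps) * packetization interval (s) / 8.
# The 20 ms interval is the common default; the packet rate is its inverse.
for name, bitrate in [("G.711", 64000), ("G.729", 8000)]:
    interval = 0.020
    payload = bitrate * interval / 8
    print(f"{name}: {payload:.0f}-byte payload every {interval * 1000:.0f} ms "
          f"({1 / interval:.0f} packets/s)")
# G.711 -> 160 bytes, G.729 -> 20 bytes, both at a constant 50 packets/second
```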
In addition to a compatible codec, VoIP also needs a signaling protocol so that IP phones can dial one another. The current industry standard for this is the Session Initiation Protocol (SIP), an RFC standard from the Internet Engineering Task Force (IETF) for establishing sessions in an IP network [10]. SIP operates in a client-server model, with a SIP server facilitating signaling between VoIP phones that want to start a communication session. The SIP server also often serves as a VoIP gateway to the Public Switched Telephone Network (PSTN), so VoIP phones can call and communicate with traditional phones.
Since VoIP runs on top of an existing data network, the VoIP phones and the SIP server/gateway do not have to share the same physical location. In a corporate environment, a SIP server can be located at the headquarters while the IP phones in branch locations connect to it using an existing Virtual Private Network (VPN) data link, for example. There are also many companies offering cloud phone systems, where they host the SIP server functionality and the customer connects his/her IP phone to this server via the Internet. Although these voice data streams might be encrypted, the fixed size and constant interval of the stream, combined with the silence suppression (VAD), make this transmission susceptible to the attack outlined below.
### _Threat Model_
Our attack exploits the patterns of traffic streams and silence, so the attacker first needs to create a database that catalogues the distinct stream-and-silence characteristics of various customer service toll-free numbers (Fig 1). This can be done by the attacker making the calls himself and extracting the packet stream features of each call. Of course, such a database cannot completely cover all existing customer service phone numbers, so we have an 'unclassified' category to denote unknown numbers. However, even an unclassified entry can reveal private information, because we can identify whether calls are made to the same IVR system, and whether the same IVR options are chosen, even if we don't know what phone numbers are being called.
Once the database is built, the attacker can infer sensitive information about a VoIP phone call to an automated customer service phone system by eavesdropping on the traffic stream of such a call. The observed packet stream and silence pattern will be matched against known patterns to identify the phone number and the options chosen by the caller. The sniffing can happen anywhere along the path that the VoIP data stream traverses, so we can expect the packets to be encrypted. Furthermore, the stream will experience normal network conditions, such as packet loss, latency and jitter. We do not consider app-based calls (WhatsApp, FaceTime, etc.) because these apps can only call other users of the same app, and cannot be used to call a customer service number on the traditional phone network [11][12].
### _Testbed Overview_
Fig. 1: VoIP attack threat model
In order to make our test as realistic as possible, we use popular VoIP hardware and route the calls through the Internet. Therefore, the packet streams experience real-world Internet latency and jitter, which is reflected in the collected data (Fig 2). For the hardware, we use a Cisco 2911 [13] with CallManager Express [14] to act as the SIP server, the traditional phone line (PSTN) gateway, and an IPSec tunnel [15] endpoint. As we want to expose our calls to real-world network conditions, this encrypted tunnel connection is routed from the United States through Indonesia, for a total of 37 hops and an average round-trip time of 568 ms. We intentionally choose a long routing path to show that the attack is feasible even if the VoIP data stream experiences large latency, jitter and even data loss along the path. Finally, the IPSec tunnel is terminated at a Cisco 881 Router [16], which is also connected to the Cisco 7965 [17] IP Phone (Fig 3).
### _Wireshark Packet Capture_
We want to capture the packets associated with the call, but because we are capturing on the tunnel side, we cannot look inside the packets. The filter that we use in Wireshark therefore simply captures UDP packets of size 174 that are transmitted to the tunnel endpoints: <udp and ip dst "tunnel_ip" and length=174>.
The captured packets are then exported to a CSV (Comma Separated Values) file so they can be analyzed by our PHP script.
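A minimal sketch of this post-processing step (shown here in Python for illustration; our actual script is in PHP, and the "Time" column name is an assumption about the CSV export layout):

```python
# Minimal sketch: load the timestamps from the exported CSV and compute
# inter-packet gaps. The "Time" column name is an assumption.
import csv

def load_times(path):
    """Return the capture timestamps (in seconds) of the filtered packets."""
    with open(path, newline="") as f:
        return [float(row["Time"]) for row in csv.DictReader(f)]

def gaps(times):
    """Inter-packet gaps; a healthy G.729 stream shows ~0.020 s entries."""
    return [b - a for a, b in zip(times, times[1:])]
```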
### _Creating Profile Database_
In order to separate the speech fragments, we must first choose the threshold gap that the script considers "silence". In an ideal network, any gap greater than 20 ms is silence because the codec has a 20 ms interval. However, our attack is performed against packet streams in transit, so it
Fig. 4: Wireshark capture showing a spike in inter packet gap due to silence suppression
Fig. 5: Two profiling calls to the American Airlines phone system, showing silence gap spikes that reveal very consistent speech fragment durations between the silences
Fig. 3: Testbed equipment
Fig. 2: Testbed topology
must take into account possible adverse network conditions, such as latency, jitter, and packet loss. A threshold that is too small could incorrectly identify jitter and packet loss as "silence", whereas one that is too large might miss a real silence. In the end, we choose ten times the normal codec interval (10 x 20 ms = 200 ms) as the optimal value.
Once the fragments are identified, we need to measure their durations. We initially considered using the number of packets, but to make the measurement more robust, we instead use the time interval between the first and the last packet in a fragment. By doing this, the profile is able to tolerate packet losses in the middle of a fragment.
In our script, we also discard the first two small fragments because, based on our experiments, it could take up to two rings before the automated system picks up the call.
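A minimal Python sketch of this segmentation step (illustrative; `times` is assumed to be the list of packet timestamps from the capture):

```python
# Split the packet timestamps into speech fragments separated by silence
# gaps; the 200 ms threshold and the rule of discarding the first two
# fragments follow the text above.
SILENCE = 0.200  # 10 x the 20 ms codec interval

def fragment_durations(times, silence=SILENCE):
    frags, start = [], 0
    for i in range(1, len(times)):
        if times[i] - times[i - 1] > silence:  # silence gap found
            frags.append(times[start:i])
            start = i
    frags.append(times[start:])
    # duration = first-to-last packet time, tolerant to mid-fragment loss
    durations = [f[-1] - f[0] for f in frags if f]
    return durations[2:]  # drop the first two fragments (ring pickup)
```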
### _Phone Number Profile Database_
We run our profiler ten times for each phone number to collect the VoIP traffic. Table II summarizes the data collection results. For the first and second voice fragments of each phone number, the table shows the minimum, maximum, and median values over the ten observed durations.
We only collected profiles of the first two speech fragments because they are already enough to properly distinguish each of the phone numbers. In fact, most of the numbers can be identified just by looking at the first fragment. Of course, there might be overlap in fragment durations if more numbers are added, and in that case additional fragments might be needed.
The result is visualized in Figure 6. The height of each bar represents the range of durations collected for that voice segment. Since many of the fragments have a very consistent duration, their bars in the figure are correspondingly very thin. There are three exceptions: segment two of AA Credit, and segment one of Bank of America and United Ins. The case of AA Credit is unique because, after the first 5.3 s fragment, the call is transferred to another system which starts with several rings before it is answered. The variability is therefore due to the rings, not the speech itself. In the case of Bank of America and United Insurance, we believe the relatively long first speech fragment causes the variability. It is still distinct enough to be correctly matched to the right phone number.
Some profiles, such as Capital One and Geico Ins, seem to overlap in Figure 6. But if we zoom in on them, we can see that there is enough separation between them for accurate classification (Fig 7). For profiles where the first fragments do overlap, such as Southwest and AA Credit, we can still tell them apart by looking at the second fragment, which shows a clear difference (Fig 8).
The collected profiles also show that the second fragment actually has a stronger one-to-one correlation with a phone number than the first fragment. However, we must always start the identification process from the first fragment, because a caller might select an option (i.e., push a button) before the entire message is played by the automated phone system. Therefore, we might need to identify the call using just the first fragment.
Based on this, we create a simple profile database and classifier program. The database will contain a table with the
Fig. 8: An overlap in first fragment is resolved by evaluating the second fragment
Fig. 6: Visualization of the ranges of the first and the second voice fragments from the 12 automated phone systems that were profiled. Virtually all fragments have very consistent durations across the ten calls, hence the thin bars.
Fig. 7: Zoomed in visualization showing clear delineation between profiles that look similar in Fig 6
phone number, its first fragment duration range, and its second fragment duration range. Because each phone number has a unique profile that does not completely overlap another number, we can use a simplistic classifier and do not need any machine learning methods.
### _Option Selection Profiling_
Using the same data gathering method, we can also profile the options that a caller chooses during the call. This option selection profiling reveals more private information about the caller and is thus a more serious attack.
We perform this profiling on two companies. WalMart has multiple levels of options (Fig 9). An option is selected using touch-tone (DTMF) [20], except for the second-level option under "Orders", which uses voice recognition. For demonstration, we only show the profiling attack on the first-level option selection here (Table III). The Geico phone system (Table IV), in contrast, uses voice recognition entirely. For this reason, it sometimes asks for clarification when it is unsure of the option chosen by the caller (i.e., the "home?" and "claim?" rows). These clarification messages are difficult to profile, as we encountered them only a few times and were unable to reproduce them consistently. Also, some Geico options produce only a single-fragment response, so there is no second fragment to measure.
## V Evaluation
### _Phone Number Classification_
Because no speech profile completely overlaps between different companies, we can make our detection algorithm more robust. First, we widen the duration range of every fragment profile to at least 5 times the codec interval (5 x 20 ms = 100 ms) from its median. This means we can tolerate a loss of up to five packets and a jitter of up to 5 times the normal codec interval. If the measured range is already wider than that, we keep the wider value. For example, the second fragment duration of Delta Air (which has a median of 4.34 s) becomes 4.24 - 4.44 s. Second, we can use simple if/then matching between the fragment durations of the eavesdropped stream and each profile in our database, without machine learning and its associated complexity. On the other hand, in a real attack where the database has hundreds or even thousands of customer service phone number profiles, the attacker might need to rely on machine learning algorithms to classify accurately.
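A minimal Python sketch of this if/then matching (illustrative; only the Delta Air median of 4.34 s comes from our measurements, the min/max values are hypothetical placeholders):

```python
# Range classifier: widen every fragment range to at least +/- 100 ms
# around its median, then match the eavesdropped durations against it.
TOL = 0.100  # 5 x the 20 ms codec interval

# phone -> ((min1, max1, median1), (min2, max2, median2)), in seconds
PROFILES = {
    "Delta Air": ((3.10, 3.20, 3.15), (4.30, 4.38, 4.34)),  # hypothetical
}

def widen(lo, hi, med, tol=TOL):
    return min(lo, med - tol), max(hi, med + tol)

def classify(d1, d2, profiles=PROFILES):
    for phone, (f1, f2) in profiles.items():
        lo1, hi1 = widen(*f1)
        lo2, hi2 = widen(*f2)
        if lo1 <= d1 <= hi1 and lo2 <= d2 <= hi2:
            return phone
    return "Unclassified"
```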
As for the case of AA Credit, where the second fragment is a ringing tone, we manually change the profile for this fragment to 0.2 - 2 s. We do this because the ringing tone could be a short one, and a two-second tone is the standard ringing tone in the United States [21].
Our classifier is coded in PHP and goes through each phone profile to check whether the current eavesdropped stream has a pattern that fits that phone's profile. If it cannot find any match after all profiles are compared, the eavesdropped stream is labeled "Unclassified".
We run the test for five minutes for each company, which yields about 9-15 calls for each customer service phone number. The classification result is shown in Figure 10. There are two instances of incorrect identification, with Capital One and AA Credit. The Capital One error is due to a jitter of more than 100 ms, whereas the AA Credit error is due to a ringing tone that goes beyond the standard two-second tone. Using the standard measures of precision and recall [22], Capital One has 100% precision and 87% recall, whereas AA Credit has 100% precision and 92% recall. The remaining phones all have 100% precision and recall.
### _Option Selection Classification_
For each customer service phone number, because there are only a few options (mostly no more than 9) at any level for a caller to choose from, we can optimize our classifier further by evaluating only the first fragment in most cases. There is a small overlap between options 3 and 4 for Walmart, which is the only case where we need to look at the second fragment for classification. We are able to correctly identify each option chosen by the caller in the eavesdropped stream, as shown in
Fig. 10: Confusion matrix of phone number classification
Fig. 9: WalMart Customer Service Phone Options Tree
Figure 11 and Figure 12 for the Walmart and Geico phone numbers, respectively. In both cases, precision and recall are 100% for all options.
### _Limitation_
For the attack to work, the played messages of a phone number must not change between the profiling and the eavesdropping. Any change requires re-profiling the phone number in question. The attack also fails if the caller selects an option (i.e., pushes the touch-tone button) before the system finishes the first two speech fragments. In that case, the match must be performed on a partial fragment, leading to inaccuracies. Finally, if Voice Activity Detection (VAD) is disabled, there are no silence gaps and the attack becomes infeasible. However, this also eliminates the bandwidth savings associated with VAD.
## VI Conclusion
Our paper demonstrated that by exploiting the static speech patterns of automated customer service phone systems, an attacker can reveal private information about a caller, including which customer service phone number (and thus which company) the call is to, and even the subsequent options chosen by the caller. Furthermore, once the correct profile is obtained, the attack is very accurate, with many instances of 100% precision and recall.
For future work, we can study the same vulnerability in IVR systems where the personal information provided by a caller is repeated back by the automated phone system, in most cases to let the caller confirm the accuracy of his or her inputs. If the same profiling can be performed on such IVR responses, this could lead to a much more serious leakage of the caller's personal information, such as birth date, phone number, or account number.
|
2309.17387 | Geometric-algebraic approach to aqueous solutions of diprotic acids and
its buffer mixtures | A closed-form analytical expression for $\ce{[H3O+]}$ has been obtained for
aqueous solutions of diprotic acids and its soluble salts. This formula allows
to calculate the pH of aqueous solutions of diprotic acids, their buffer
solutions, and the titrations of these two by a strong base, from the values of
p$K_1$, p$K_2$, and the effective concentrations of the acid and the base,
$\bar{C}_\mathrm{a}$ and $\bar{C}_\mathrm{b}$ respectively. It is shown that a
strong base titration of an acid, or its buffer solutions, is always a linear
path in the $\bar{C}_\mathrm{a}$--$\bar{C}_\mathrm{b}$ plane, which allows a
simple analysis of the pH stability of buffer solutions. The mathematical
analysis of the equilibrium equations of the dissolution of a diprotic acid in
water and the physical constraints allowed to obtain two approximate equations
for the diprotic acids. One of the approximations is useful for acids with
$\mathrm{p}K_2-\mathrm{p}K_1\le\log_{10}4$, the other for acids with
$\mathrm{p}K_2-\mathrm{p}K_1\le-\log_{10}4$. | Juan C. Morales, Carlos A. Arango | 2023-09-29T16:48:06Z | http://arxiv.org/abs/2309.17387v1 | # Geometric-algebraic approach to aqueous solutions of diprotic acids and its buffer mixtures
###### Abstract
A closed-form analytical expression for [H\({}_{3}\)O\({}^{+}\)] has been obtained for aqueous solutions of diprotic acids and its soluble salts. This formula allows to calculate the pH of aqueous solutions of diprotic acids, their buffer solutions, and the titrations of these two by a strong base, from the values of p\(K_{1}\), p\(K_{2}\), and the effective concentrations of the acid and the base, \(\bar{C}_{\mathrm{a}}\) and \(\bar{C}_{\mathrm{b}}\) respectively. It is shown that a strong base titration of an acid, or its buffer solutions, is always a linear path in the \(\bar{C}_{\mathrm{a}}\)-\(\bar{C}_{\mathrm{b}}\) plane, which allows a simple analysis of the pH stability of buffer solutions. The mathematical analysis of the equilibrium equations of the dissolution of a diprotic acid in water and the physical constraints allowed to obtain two approximate equations for the diprotic acids. One of the approximations is useful for acids with p\(K_{2}-\mathrm{p}K_{1}\leq\log_{10}4\), the other for acids with p\(K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4\).
## Introduction
Diprotic acids are of central importance in biochemistry, physiology, and industrial and environmental chemistry. In biochemistry, several amino acids behave as diprotic acids with two dissociable protons: one on the \(\alpha\) amino group and one on the \(\alpha\) carboxyl group [1]. In physiology, the regulation of blood pH cannot be understood without considering the buffer made by carbonic acid, H\({}_{2}\)CO\({}_{3}\), and the bicarbonate ion, HCO\({}_{3}\)\({}^{-}\)[2]. In environmental chemistry, the current model for understanding ocean acidification is based on the aqueous chemical equilibrium between CO\({}_{2}\), H\({}_{2}\)CO\({}_{3}\), HCO\({}_{3}\)\({}^{-}\), and CO\({}_{3}\)\({}^{2-}\)[3].
A Bronsted diprotic acid is a chemical substance H\({}_{2}\)B that partially dissociates in water, producing hydronium ion, H\({}_{3}\)O\({}^{+}\), and the conjugate base, HB\({}^{-}\). This conjugate base further dissociates partially, producing the second conjugate base, B\({}^{2-}\). In the state of equilibrium, the concentrations of the chemical species are constant [4, 5]. The equilibrium concentrations of the chemical species are given by the equations of chemical equilibrium and the mass and charge balances [6]. The aqueous dissociation of a diprotic acid, and of its soluble salts, involves five chemical species and five mathematical relations between them; therefore, in principle, it is possible to obtain the equilibrium concentrations of all the chemical species by solving this system of equations. In practice, the system of equations involves nonlinear terms, making it difficult to obtain exact mathematical expressions for the equilibrium concentrations. For the dissociation of a diprotic acid and its soluble salts, algebraic manipulation of the system of equations gives a quartic equation for the concentration of H\({}_{3}\)O\({}^{+}\), [H\({}_{3}\)O\({}^{+}\)]. The equilibrium concentration of the hydronium ion is obtained by finding the roots of this quartic equation. Although there is a quartic formula that gives the explicit roots of a quartic equation, it is not practical to use due to its complexity. Using the quartic formulas gives four roots, each of which requires at least 57 mathematical operations to evaluate. Although this type of calculation is a simple task for modern computers, the formulas obtained from the quartic equation are not simplified, which causes accumulation of computational error. On the other hand, graphical and numerical solutions are easily obtained using numerical calculators and computer software [7]. Although the graphical-numerical approach is fast and efficient for calculating concentrations as functions of the pH, it has some disadvantages compared with an analytical closed-form solution. The analytical solution can be differentiated to study buffers and buffer stability, or can easily be composed with other functions to study titrations of acids and buffers by strong bases [8]. Another advantage of an analytical closed form is the possibility of analyzing mathematically the effect on the pH of parameters such as the concentrations and the acid dissociation constants. In this work it has been found that the constraint p\(K_{2}-\)p\(K_{1}\geq\log_{10}4\) on the p\(K\)s of the acid has an important effect on the nature of the roots of the quartic polynomial for [H\({}_{3}\)O\({}^{+}\)]. This constraint has been obtained previously by considering isomerization of the ionization products and a detailed equilibrium scheme in which the micro-equilibrium constants correspond to partial equilibria [9, 10, 11]. Direct observation of the experimental values of p\(K_{1}\) and p\(K_{2}\) of a large set of diprotic acids allowed us to find that several compounds, in particular the nitrogenous organics, follow the constraint p\(K_{2}-\)p\(K_{1}\leq-\log_{10}4\).
The main result of this paper is a closed-form analytical solution for [H\({}_{3}\)O\({}^{+}\)] for the full chemical equilibrium of the aqueous dissociation of a diprotic acid and its monobasic and dibasic salts. The use of effective acid and base concentrations allows a single mathematical expression to cover aqueous solutions of diprotic acids as well as buffer solutions of diprotic acids and their soluble salts. In this work it is shown how this unified approach to diprotic acids and their buffers allows one to study the pH stability of buffer solutions in relation to the equivalent acid solution.
This article is organized as follows. In the Theory and Methods section, the first subsection is dedicated to establishing the notation, the fundamental equations of chemical equilibrium, and the physical constraints. In this subsection a unified notation is introduced, allowing the same equations to be used for aqueous solutions of diprotic acids, buffers of diprotic acids, and titrations with strong bases. In the second subsection of Theory and Methods, a mathematical expression for [H\({}_{3}\)O\({}^{+}\)] is obtained and analyzed, and the complete expression for [H\({}_{3}\)O\({}^{+}\)] is shown. The final expressions for [H\({}_{3}\)O\({}^{+}\)] are written in algebraic terms using basic arithmetic operations and radicals (square and cube roots). These expressions can be used, with the help of computer algebra software, to obtain the pH without approximations. The reader interested in the mathematical details and the procedures used to obtain these equations is referred to the Appendix. The third and final subsection introduces exact expressions for the titration functions of the aqueous solution and the buffer solution of diprotic acids. The Results and Discussion section shows the results obtained using the expressions derived previously. Its first subsection shows that although most diprotic acids obey the condition \(\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4\), there are some acids that follow the condition \(\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4\). In the next subsection, the physical constraints of diprotic acids are used to obtain two approximations for [H\({}_{3}\)O\({}^{+}\)]; these approximations are used to obtain analytical expressions for the upper and lower limits of the pH. In the following subsection, we discuss the common approach of neglecting the second dissociation constant and show that at micro-molar concentrations this approach fails. The next subsection shows the use of the exact closed forms of the pH and the titration functions to analyze the neutralization of aqueous solutions of diprotic acids and their buffer mixtures. The differences between the exact expressions of this work and the approximate results of recent works are shown in detail for two cases: maleic acid and 1,8-Octanediamine. Finally, the last subsection of Results and Discussion presents an analysis of the pH stability of diprotic acid buffer solutions, in which the pH stability is analyzed as a parametric curve in the plane formed by the pH of the acid and the pH of the corresponding buffer solution.
## Theory and Methods
### Aqueous solutions of weak diprotic acids and their salts
The aqueous dissociation equilibrium of a diprotic weak acid H\({}_{2}\)B is given by the chemical equations
\[\mathrm{H_{2}B+H_{2}O} \Longleftrightarrow\mathrm{H_{3}O^{+}+HB^{-}}, \tag{1}\] \[\mathrm{HB^{-}+H_{2}O} \Longleftrightarrow\mathrm{H_{3}O^{+}+B^{2-}},\] (2) \[2\,\mathrm{H_{2}O} \Longleftrightarrow\mathrm{H_{3}O^{+}+OH^{-}}. \tag{3}\]
Relevant chemical species are H\({}_{3}\)O\({}^{+}\), OH\({}^{-}\), H\({}_{2}\)B, HB\({}^{-}\), and B\({}^{2-}\) with equilibrium molar concentrations [H\({}_{3}\)O\({}^{+}\)], [OH\({}^{-}\)], [H\({}_{2}\)B], [HB\({}^{-}\)], and [B\({}^{2-}\)], respectively. The equilibria displayed in equations (1)-(2) are effective equilibria since the two protons of H\({}_{2}\)B can dissociate separately and not necessarily consecutively [1, 9, 11].
A solution of the acid H\({}_{2}\)B is prepared in water at analytical molar concentration \(C_{\mathrm{a}}\). Once the system reaches chemical equilibrium, the concentrations of the chemical species are given by five physical conditions: the two weak acid dissociation constants \(K_{1}\) and \(K_{2}\), the water auto-ionization constant \(K_{\mathrm{w}}\), the electric neutrality, and the mass balance,
\[K_{1} =\frac{[\mathrm{H_{3}O^{+}}]}{C^{\circ}}\frac{[\mathrm{HB^{-}}]}{ C^{\circ}}\left(\frac{[\mathrm{H_{2}B}]}{C^{\circ}}\right)^{-1}, \tag{4}\] \[K_{2} =\frac{[\mathrm{H_{3}O^{+}}]}{C^{\circ}}\frac{[\mathrm{B^{2-}}]}{ C^{\circ}}\left(\frac{[\mathrm{HB^{-}}]}{C^{\circ}}\right)^{-1},\] (5) \[K_{\mathrm{w}} =\frac{[\mathrm{H_{3}O^{+}}]}{C^{\circ}}\frac{[\mathrm{OH^{-}}]}{ C^{\circ}},\] (6) \[[\mathrm{H_{3}O^{+}}] =[\mathrm{OH^{-}}]+[\mathrm{HB^{-}}]+2[\mathrm{B^{2-}}],\] (7) \[C_{\mathrm{a}} =[\mathrm{H_{2}B}]+[\mathrm{HB^{-}}]+[\mathrm{B^{2-}}], \tag{8}\]
respectively. The standard molar concentration is \(C^{\circ}=1\,\mathrm{M}\). The acid constants \(K_{1}\) and \(K_{2}\) are dimensionless, and their values typically range between \(10^{-10}\) and \(10^{-1}\). In this work, the biochemical standard state \(C^{\ast}=C^{\circ}\sqrt{K_{\rm w}}\) is used to define the dimensionless variables: \(x=[{\rm H}_{3}{\rm O}^{+}]/C^{\ast}\), \(y=[{\rm OH}^{-}]/C^{\ast}\), \(z_{0}=[{\rm H}_{2}{\rm B}]/C^{\ast}\), \(z_{1}=[{\rm HB}^{-}]/C^{\ast}\), \(z_{2}=[{\rm B}^{2-}]/C^{\ast}\), and the parameter \(c_{\rm a}=C_{\rm a}/C^{\ast}\). These definitions make the equilibrium constants \(k_{1}=K_{1}/\sqrt{K_{\rm w}}\), \(k_{2}=K_{2}/\sqrt{K_{\rm w}}\), and \(k_{\rm w}=1\). In terms of the new variables and constants, equations (4)-(8) are replaced by
\[k_{1} = \frac{xz_{1}}{z_{0}}, \tag{9}\] \[k_{2} = \frac{xz_{2}}{z_{1}},\] (10) \[k_{\rm w} = xy=1,\] (11) \[x = y+z_{1}+2z_{2},\] (12) \[c_{\rm a} = z_{0}+z_{1}+z_{2}. \tag{13}\]
The equations for electric neutrality (12) and mass balance (13) are explicitly affected by the presence of a strong base and of salts of the conjugate bases HB\({}^{-}\) and B\({}^{2-}\), _e.g._ NaOH, NaHB and Na\({}_{2}\)B respectively. If the dimensionless concentrations of the strong base and the salts are \(c_{\rm b}=[{\rm NaOH}]/C^{\ast}\), \(s_{1}=[{\rm NaHB}]/C^{\ast}\) and \(s_{2}=[{\rm Na}_{2}{\rm B}]/C^{\ast}\), the charge and mass balance equations are modified to
\[x+\bar{c}_{\rm b} = y+z_{1}+2z_{2}, \tag{14}\] \[\bar{c}_{\rm a} = z_{0}+z_{1}+z_{2}, \tag{15}\]
with effective concentrations \(\bar{c}_{\rm a}=c_{\rm a}+s_{1}+s_{2}\) and \(\bar{c}_{\rm b}=c_{\rm b}+s_{1}+2s_{2}\). These effective dimensionless variables are related to the effective molar concentrations by \(\bar{C}_{\rm a}=C^{\ast}\bar{c}_{\rm a}\) and \(\bar{C}_{\rm b}=C^{\ast}\bar{c}_{\rm b}\).
The use of \(y=1/x\), obtained from equation (11), in equations (9), (10), (14) and (15) gives a non-linear system \({\cal S}_{4}\) of four equations with four unknowns: \(x\), \(z_{0}\), \(z_{1}\) and \(z_{2}\).
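As a quick illustration of this change of variables, a minimal Python sketch (an illustration, not part of the derivation; the salt concentrations S1 and S2 are in molar units):

```python
# Minimal sketch of the scaled variables defined above. C* = C° sqrt(Kw)
# is 1e-7 M for C° = 1 M and Kw = 1e-14.
import math

KW = 1e-14
C_STAR = math.sqrt(KW)  # in M, since C° = 1 M

def dimensionless(K1, K2, Ca, Cb=0.0, S1=0.0, S2=0.0):
    """Map molar data (Ca acid, Cb strong base, S1 = NaHB, S2 = Na2B)
    to the scaled quantities k1, k2, ca_bar, cb_bar used in the text."""
    k1, k2 = K1 / math.sqrt(KW), K2 / math.sqrt(KW)
    ca_bar = (Ca + S1 + S2) / C_STAR
    cb_bar = (Cb + S1 + 2 * S2) / C_STAR
    return k1, k2, ca_bar, cb_bar
```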
### Mathematical expression for \(\,[{\rm H}_{3}{\rm O}^{+}]\)
Before obtaining the full solution of the non-linear system \({\cal S}_{4}\), it is useful to analyze the linear subsystem \({\cal S}_{3}\) made up of equations (9), (10) and (15). This subsystem can be easily solved to obtain the concentrations \(z_{0}\), \(z_{1}\), \(z_{2}\) in terms of \(x\). The linear system \({\cal S}_{3}\) can be expressed as \({\sf K}\mathbf{z}=\mathbf{c}\) with \({\sf K}={\sf K}(x)\) given by
\[{\sf K}=\begin{pmatrix}k_{1}&-x&0\\ 0&k_{2}&-x\\ 1&1&1\end{pmatrix}, \tag{16}\]
\(\mathbf{z}=(z_{0},z_{1},z_{2})^{\intercal}\), and \(\mathbf{c}=(0,0,\bar{c}_{\rm a})^{\intercal}\). Solving for \(\mathbf{z}\) gives
\[\mathbf{z}=\frac{\bar{c}_{\rm a}}{\det{\sf K}}\begin{pmatrix}x^{2}\\ k_{1}x\\ k_{1}k_{2}\end{pmatrix}, \tag{17}\]
with \(\det{\sf K}=x^{2}+k_{1}x+k_{1}k_{2}\) as the determinant of \({\sf K}\). It is convenient to write this determinant as \(\det{\sf K}=(x-\kappa_{1})(x-\kappa_{2})\) with
\[\kappa_{1,2}=\tfrac{k_{1}}{2}\left(-1\pm\sqrt{1-4\kappa}\right), \tag{18}\]
and \(\kappa=k_{2}/k_{1}\) as the ratio of the diprotic dissociation constants. These \(\kappa_{1,2}\) are related to the Simms constants [7, 12], \(g_{1,2}\), by \(g_{1,2}=-\kappa_{1,2}\).
Given the condition \(\kappa\leq 1/4\), _i.e._ \(k_{1}\geq 4k_{2}\), the roots \(\kappa_{1,2}\) are both non-positive real numbers, \(\kappa_{1,2}\leq 0\); otherwise these roots are a pair of complex conjugate numbers with negative real part, _i.e._ \(\kappa_{1}=\kappa_{2}^{*}\) and \({\rm re}(\kappa_{1})={\rm re}(\kappa_{2})<0\). The inequality \(k_{1}\geq 4k_{2}\) has been obtained previously by Adams in his analysis of polyprotic acid dissociations [9].
The solution of \({\cal S}_{3}\) gives the concentrations \(\mathbf{z}=(z_{0},z_{1},z_{2})^{\intercal}\) as functions of \(x\), \(k_{1}\), \(k_{2}\) and \(\bar{c}_{\rm a}\). Although \(\mathbf{z}\) does not depend explicitly on \(\bar{c}_{\rm b}\), it depends on it implicitly through \(x\). This
dependency is specified by using \(y=1/x\) in equation (14), which gives
\[x-\tfrac{1}{x}+\bar{c}_{\mathrm{b}}=-\mathbf{q}\cdot\mathbf{z}, \tag{19}\]
with \(\mathbf{q}=(0,-1,-2)^{\intercal}\) as the vector of electric charges of \(z_{0}\), \(z_{1}\), and \(z_{2}\). This equation bears some similarity to equation (41) in the work of Kalka [7]. However, unlike that article, in this paper closed analytic solutions for \(x\) are obtained instead of graphical or numerical solutions.
Multiplying equation (19) by \(x\det\mathsf{K}\), and expanding the scalar product, produces
\[\left(x-\kappa_{1}\right)\left(x-\kappa_{2}\right)\left(x^{2}+\bar{c}_{ \mathrm{b}}x-1\right)=\bar{c}_{\mathrm{a}}k_{1}x\left(x+2k_{2}\right), \tag{20}\]
which can be written
\[\left(x-\kappa_{1}\right)\left(x-\kappa_{2}\right)\left(x-\sigma_{1}\right) \left(x-\sigma_{2}\right)=\bar{c}_{\mathrm{a}}k_{1}x\left(x+2k_{2}\right), \tag{21}\]
with \(\sigma_{1,2}=\frac{1}{2}\left(-\bar{c}_{\mathrm{b}}\pm\sqrt{\bar{c}_{ \mathrm{b}}^{2}+4}\right)\). The roots \(\sigma_{1,2}\) are both real numbers with \(0<\sigma_{1}\leq 1\) and \(\sigma_{2}\leq-1\). The case of \(\bar{c}_{\mathrm{b}}=0\) gives \(\sigma_{1,2}=\pm 1\).
Expansion of equation (21) gives \(P=0\) with
\[P=x^{4}+c_{3}x^{3}+c_{2}x^{2}+c_{1}x+c_{0}, \tag{22}\]
and
\[c_{3} =\bar{c}_{\mathrm{b}}+k_{1}, \tag{23}\] \[c_{2} =-\left(1+k_{1}\left(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}-k_ {2}\right)\right),\] (24) \[c_{1} =-k_{1}\left(1+k_{2}\left(2\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{ b}}\right)\right),\] (25) \[c_{0} =-k_{1}k_{2}, \tag{26}\]
with \(c_{4}=1\).
Before finding the roots of the equation \(P=0\) it is helpful to analyze the nature of its roots, which is studied by considering the 5-tuple of its coefficients,
\[{\rm coef}[P]=\left(c_{4},c_{3},c_{2},c_{1},c_{0}\right), \tag{27}\]
and its signs
\[{\rm sgn}({\rm coef}[P])=\left({\rm sgn}\,c_{4},{\rm sgn}\,c_{3},{\rm sgn}\,c_ {2},{\rm sgn}\,c_{1},{\rm sgn}\,c_{0}\right). \tag{28}\]
It is straightforward to see that \({\rm sgn}\,c_{4}=+\), \({\rm sgn}\,c_{3}=+\), and \({\rm sgn}\,c_{0}=-\). The signs of \(c_{2}\) and \(c_{1}\) require a careful analysis. There are four possible cases for \(({\rm sgn}\,c_{2},{\rm sgn}\,c_{1})\): \((+,+)\), \((+,-)\), \((-,+)\), and \((-,-)\). The 5-tuple \({\rm sgn}({\rm coef}[P])\) can therefore have four possible outcomes: \((+,+,+,+,-)\), \((+,+,+,-,-)\), \((+,+,-,+,-)\), and \((+,+,-,-,-)\). These 5-tuples display one or three changes of sign along the sequence of elements. Descartes' rule of signs states that the number of positive roots of a polynomial \(P\) is either equal to the number of sign changes of \({\rm sgn}({\rm coef}[P])\), or less than it by an even number. The application of Descartes' rule to \(P\) gives either one or three positive roots. It can be proved that the polynomial \(P\) has only one positive real root by a careful analysis of equation (21). The left-hand side of equation (21) is a fourth-degree polynomial \(P_{L}(x)\) with only one positive root \(\sigma_{1}\), one negative root \(\sigma_{2}\), and two roots \(\kappa_{1,2}\) that can be either negative or a complex conjugate pair. The right-hand side of equation (21) is an upward parabola \(P_{R}(x)\) with roots at zero and \(-2k_{2}\). The coefficients of the quartic term of \(P_{L}(x)\) and the quadratic term of \(P_{R}(x)\) are both positive, therefore \(P_{L}(x)\) and \(P_{R}(x)\) must tend to infinity as \(x\) goes to positive or negative infinity. Since the quartic function always grows faster than the quadratic function, and since \(P_{L}(0)=-k_{1}k_{2}<0\), the polynomials \(P_{L}(x)\) and \(P_{R}(x)\) must be equal at only one positive \(x\).
In the Appendix it is shown that, using Ferrari's method [13, 14], the quartic equation \(P=0\)
can be written as an associated resolvent cubic equation \(R=0\) with
\[R=y^{3}-c_{2}y^{2}+\left(c_{1}c_{3}-4c_{0}\right)y+\left(4c_{0}c_{2}-c_{0}c_{3}^{ 2}-c_{1}^{2}\right). \tag{29}\]
The cubic equation \(R=0\) can be solved by Cardano's method [8], for which the change of variable \(y=\bar{y}+\frac{c_{2}}{3}\) is necessary to obtain a depressed cubic equation \(R_{\rm dc}=0\) with
\[R_{\rm dc}=\bar{y}^{3}+\bar{p}\bar{y}+\bar{q}, \tag{30}\]
where
\[\bar{p} =c_{1}c_{3}-\frac{c_{2}^{2}}{3}-4c_{0}, \tag{31}\] \[\bar{q} =\frac{8c_{0}c_{2}}{3}+\frac{c_{1}c_{2}c_{3}}{3}-\frac{2c_{2}^{3} }{27}-c_{1}^{2}-c_{0}c_{3}^{2}, \tag{32}\]
and the discriminant of \(P\) is \(\Delta=-4\bar{p}^{3}-27\bar{q}^{2}\)[13, 14].
The positive root of \(P=0\) is given by three cases depending on the sign of the \(\Delta\), and the sign of the functions \(\xi_{1,2}\),
\[\xi_{1,2}=-\frac{\bar{q}}{2}\pm\frac{1}{2}\sqrt{-\frac{\Delta}{27}}. \tag{33}\]
The quantities \(\Delta\), \(\xi_{1}\), and \(\xi_{2}\) are functions of the equilibrium constants, \(k_{1}\) and \(k_{2}\), and the effective concentrations, \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\). Explicitly, the positive root of \(P=0\) is given by
\[x=\begin{cases}x_{1},&\Delta>0,\\ x_{1}&\Delta<0,\;\xi_{1}>0,\;\xi_{2}>0,\\ x_{3},&\Delta<0,\;\xi_{1}<0,\;\xi_{2}<0,\\ x_{3},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}<0.\end{cases} \tag{34}\]
The roots \(x_{1}\) and \(x_{3}\) are:
\[x_{1}=\tfrac{1}{2}\left(-\left(\tfrac{\bar{c}_{\rm b}+k_{1}}{2}-t_ {1}\right)+\sqrt{\left(\tfrac{\bar{c}_{\rm b}+k_{1}}{2}-t_{1}\right)^{2}-2y_{1}+ \tfrac{(\bar{c}_{\rm b}+k_{1})y_{1}+2k_{1}(1+(2\bar{c}_{\rm a}-\bar{c}_{\rm b}) k_{2})}{t_{1}}}\right), \tag{35}\] \[x_{3}=\tfrac{1}{2}\left(-\left(\tfrac{\bar{c}_{\rm b}+k_{1}}{2}+t _{1}\right)+\sqrt{\left(\tfrac{\bar{c}_{\rm b}+k_{1}}{2}+t_{1}\right)^{2}-2y_{1 }-\tfrac{(\bar{c}_{\rm b}+k_{1})y_{1}+2k_{1}(1+(2\bar{c}_{\rm a}-\bar{c}_{\rm b })k_{2})}{t_{1}}}\right), \tag{36}\]
with \(y_{1}\) and \(t_{1}\):
\[t_{1}=\sqrt{1+\tfrac{1}{4}(\bar{c}_{\rm b}+k_{1})^{2}+k_{1}\left( \bar{c}_{\rm a}-\bar{c}_{\rm b}-k_{2}\right)+y_{1}}, \tag{37}\] \[y_{1}=\bar{y}_{1}-\tfrac{1+k_{1}(\bar{c}_{\rm a}-\bar{c}_{\rm b} -k_{2})}{3}, \tag{38}\]
and \(\bar{y}_{1}\):
\[\bar{y}_{1}=\begin{cases}\tfrac{2}{3}\sqrt{1+k_{1}Q_{1}+k_{1}^{2}Q_{2}}\cos \big{(}\tfrac{\theta}{3}\big{)},&\Delta>0,\\ \sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}>0,\\ -(\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|}),&\Delta<0,\;\xi_{1}<0,\;\xi_{2}<0, \\ \sqrt[3]{|\xi_{1}|}-\sqrt[3]{|\xi_{2}|},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}<0.\\ \end{cases} \tag{39}\]
The functions \(\theta\), \(Q_{1}\), and \(Q_{2}\) are given in the Appendix. For the most common case of a diprotic acid, with \(k_{1}>4k_{2}\), the concentration \(x\) obeys \(x=x_{1}\) for most concentrations.
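The closed form above can be cross-checked numerically: \(x\) is the unique positive root of the quartic \(P\) with coefficients (23)-(26). A minimal Python sketch using a generic polynomial root finder (a numerical alternative, not the method of this work):

```python
# Numerical cross-check of the closed-form root: build P from eqs.
# (23)-(26) and keep its unique positive real root.
import math
import numpy as np

def pH(K1, K2, ca_bar, cb_bar, KW=1e-14):
    k1, k2 = K1 / math.sqrt(KW), K2 / math.sqrt(KW)
    coeffs = [1.0,
              cb_bar + k1,                               # c3
              -(1.0 + k1 * (ca_bar - cb_bar - k2)),      # c2
              -k1 * (1.0 + k2 * (2 * ca_bar - cb_bar)),  # c1
              -k1 * k2]                                  # c0
    roots = np.roots(coeffs)
    x = max(r.real for r in roots
            if r.real > 0 and abs(r.imag) < 1e-8 * (1 + abs(r.real)))
    return 7.0 - math.log10(x)

# e.g. oxalic acid (K1 = 5.62e-2, K2 = 1.54e-4) at Ca = 1 mM:
# pH(5.62e-2, 1.54e-4, ca_bar=1e-3 / 1e-7, cb_bar=0.0)
```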
### Strong base titration of dissolutions of diprotic acids and its buffer mixtures
The titration by a strong base of an acid solution, or of an acid buffer solution, can be analyzed by using the effective concentrations \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\). Recall that the effective dimensionless concentrations are related to the effective molar concentrations by \(\bar{c}_{\rm a}=\bar{C}_{\rm a}/C^{\ast}\) and \(\bar{c}_{\rm b}=\bar{C}_{\rm b}/C^{\ast}\). A buffer solution is made of a volume \(V_{\rm a0}\) of an acid solution with analytical concentration \(C_{\rm a0}\), and volumes \(V_{10}\) and \(V_{20}\) of salt solutions with analytical concentrations \(C_{10}\) and \(C_{20}\), respectively. The total volume of the buffer solution is \(V_{\rm B0}=V_{\rm a0}+V_{10}+V_{20}\). The case of an acid solution is obtained by using \(V_{10}=0\) and \(V_{20}=0\), so that the volume of the buffer is \(V_{\rm B0}=V_{\rm a0}\). The buffer effective concentrations are given by
\[\bar{c}_{\rm a0} = \left(c_{\rm a0}V_{\rm a0}+s_{10}V_{10}+s_{20}V_{20}\right)/V_{\rm B 0},\ {\rm and} \tag{40}\] \[\bar{c}_{\rm b0} = \left(s_{10}V_{10}+2s_{20}V_{20}\right)/V_{\rm B0}. \tag{41}\]
These expressions for the effective concentrations are obtained from the analytical concentrations simply by using the scaling factor \(1/C^{\ast}\).
A volume \(V_{\rm b}\) of a strong base with analytical concentration \(C_{\rm b0}\) is added to the buffer of volume \(V_{\rm B0}\) and effective concentrations \(\bar{c}_{\rm a0}\) and \(\bar{c}_{\rm b0}\). The addition of the strong base changes the volume of the buffer to \(V_{\rm B}=V_{\rm B0}+V_{\rm b}\), and the effective concentrations to the titration effective concentrations
\[\bar{c}_{\rm a} = \left(c_{\rm a0}V_{\rm a0}+s_{10}V_{10}+s_{20}V_{20}\right)/V_{ \rm B},\ {\rm and} \tag{42}\] \[\bar{c}_{\rm b} = \left(c_{\rm b0}V_{\rm b}+s_{10}V_{10}+2s_{20}V_{20}\right)/V_{ \rm B}. \tag{43}\]
The use of the buffer effective concentrations, equations (40) and (41), in the titration effective concentrations, equations (42) and (43), gives
\[\frac{\bar{c}_{\rm a0}}{\bar{c}_{\rm a}}-1 = \frac{V_{\rm b}}{V_{\rm B0}},\ {\rm and} \tag{44}\] \[\left(\frac{c_{\rm b0}}{\bar{c}_{\rm b}}-1\right)\frac{V_{\rm b}} {V_{\rm B0}} = 1-\frac{\bar{c}_{\rm b0}}{\bar{c}_{\rm b}}. \tag{45}\]
The titration effective concentrations can be combined to obtain an equation for \(\bar{c}_{\rm b}\) in terms of \(\bar{c}_{\rm a}\), \(c_{\rm b0}\) and the buffer effective concentrations,
\[\bar{c}_{\rm b}=\frac{\bar{c}_{\rm a}}{\bar{c}_{\rm a0}}\left(\bar{c}_{\rm b0 }-c_{\rm b0}\right)+c_{\rm b0}. \tag{46}\]
This is the equation of a straight line with slope \(\left(\bar{c}_{\rm b0}-c_{\rm b0}\right)/\bar{c}_{\rm a0}\) and ordinate intercept \(c_{\rm b0}\). The slope of this straight line can be negative, zero, or positive, depending on the concentrations of the buffer, \(\bar{c}_{\rm b0}=s_{10}+2s_{20}\), and of the titrating base, \(c_{\rm b0}\).
In addition to the case of the titration of buffer solutions, this equation can be used for the titration of acid solutions. The case of an acid solution is obtained by taking \(\bar{c}_{\rm b0}=0\) and \(\bar{c}_{\rm a0}=c_{\rm a0}\), to obtain
\[\bar{c}_{\rm b}=c_{\rm b0}-\left(\frac{c_{\rm b0}}{c_{\rm a0}}\right)\bar{c}_{ \rm a}, \tag{47}\]
where \(\bar{c}_{\rm a}=c_{\rm a}\) and \(\bar{c}_{\rm b}=c_{\rm b}\). This is the equation of a straight line with slope \(-c_{\rm b0}/c_{\rm a0}\) and ordinate intercept \(c_{\rm b0}\).
Equations (46) and (47) describe the titrations as straight-line paths on the \((\bar{c}_{\rm a},\bar{c}_{\rm b})\) plane. The addition of the base, with concentration \(c_{\rm b0}\), increases \(\bar{c}_{\rm b}\) along a straight line, from \(\bar{c}_{\rm b0}\) to \(c_{\rm b0}\), while decreasing \(\bar{c}_{\rm a}\) from \(\bar{c}_{\rm a0}\) to zero. The contours of constant pH on the \((\bar{c}_{\rm a},\bar{c}_{\rm b})\) plane, together with the trajectory described by equation (46) or (47), give the full description of the titration experiment. However, it is more practical to describe the titration experiment as a function of the pH instead of the added strong base. For this, equation (19) is used, which relates \(x\), i.e. the pH, to the effective concentrations \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\). After some rearrangement, equation (19) gives
\[x-\tfrac{1}{x}+\bar{c}_{\rm b}=\frac{\bar{c}_{\rm a}k_{1}\left(x+2k_{2}\right) }{x^{2}+k_{1}x+k_{1}k_{2}}. \tag{48}\]
The use of the ratio between the effective dimensionless concentrations, \(\bar{n}=\bar{c}_{\rm b}/\bar{c}_{\rm a}\), in equation (48) gives
\[\bar{n}=\frac{k_{1}\left(x+2k_{2}\right)}{x^{2}+k_{1}x+k_{1}k_{2}}+\frac{1-x^ {2}}{x\bar{c}_{\rm a}}. \tag{49}\]
This equation has been reported previously by Kalka [7] and works well when \(\bar{c}_{\rm a}\) is constant. However, in a titration experiment the concentrations \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\) are not constant. The last term on the right-hand side of this equation depends on \(\bar{c}_{\rm a}\). This dependence can be eliminated using equation (46). For this purpose, equation (46) must be written in terms
of \(\bar{n}\),
\[\bar{n}=\bar{n}_{0}+c_{\rm b0}\left(\frac{1}{\bar{c}_{\rm a}}-\frac{1}{\bar{c}_{ \rm a0}}\right), \tag{50}\]
with \(\bar{n}_{0}=\bar{c}_{\rm b0}/\bar{c}_{\rm a0}\). Algebraic manipulation of equation (50) gives
\[\frac{1}{\bar{c}_{\rm a}}=\frac{\bar{n}-\bar{n}_{0}}{c_{\rm b0}}+\frac{1}{\bar {c}_{\rm a0}}, \tag{51}\]
which can be used in the last term of equation (49) to obtain an expression for \(\bar{n}\) in terms of the pH
\[\bar{n}=\left(\bar{n}_{0}-\frac{c_{\rm b0}}{\bar{c}_{\rm a0}}\right)\frac{P_{ \rm a0}}{P_{\rm b0}}, \tag{52}\]
where \(\bar{n}\geq\bar{n}_{0}\) and
\[P_{\rm a0} =P\left(x=10^{7-{\rm pH}},\bar{c}_{\rm a}=\frac{\bar{c}_{\rm a0}c _{\rm b0}}{c_{\rm b0}-\bar{c}_{\rm b0}},\bar{c}_{\rm b}=0\right), \tag{53}\] \[P_{\rm b0} =P\left(x=10^{7-{\rm pH}},\bar{c}_{\rm a}=0,\bar{c}_{\rm b}=c_{ \rm b0}\right), \tag{54}\]
with \(P\) the polynomial given by equation (22). The case of the acid titration is given by considering \(\bar{c}_{\rm b0}=0\) in equations (52)-(54).
The equivalence points for a diprotic acid occur at \(\bar{n}=1,2\) in equation (52). The first equivalence point occurs when the acid and base concentrations are equal, \(\bar{n}=1\); the second equivalence point occurs when the base concentration is twice the acid concentration, \(\bar{n}=2\). Since \(\bar{n}({\rm pH})\) must be a monotonically growing function of the pH, it must fulfill the condition \(\bar{n}^{\prime}({\rm pH})>0\).
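For illustration, a minimal Python sketch combining the titration line, equations (42)-(43), with the positive quartic root to generate a titration curve (an illustrative numerical sketch; volumes enter only through their ratios):

```python
# Sketch of a strong-base titration of an acid solution: each added
# volume Vb moves (ca_bar, cb_bar) along the titration line, and the pH
# is recovered from the quartic P at every point.
import math
import numpy as np

C_STAR = 1e-7  # M, the reference concentration C* = C° sqrt(Kw)

def x_pos(k1, k2, ca, cb):
    """Unique positive root of P, eqs. (22)-(26)."""
    r = np.roots([1, cb + k1, -(1 + k1 * (ca - cb - k2)),
                  -k1 * (1 + k2 * (2 * ca - cb)), -k1 * k2])
    return max(v.real for v in r
               if v.real > 0 and abs(v.imag) < 1e-8 * (1 + abs(v.real)))

def titration(K1, K2, Ca0, Cb0, Va0, Vb_values, KW=1e-14):
    k1, k2 = K1 / math.sqrt(KW), K2 / math.sqrt(KW)
    curve = []
    for Vb in Vb_values:
        V = Va0 + Vb                    # total volume after addition
        ca = (Ca0 * Va0 / V) / C_STAR   # eq. (42) with no salts
        cb = (Cb0 * Vb / V) / C_STAR    # eq. (43) with no salts
        curve.append((Vb, 7 - math.log10(x_pos(k1, k2, ca, cb))))
    return curve

# e.g. maleic acid (pK1 = 1.92, pK2 = 6.23): titration(10**-1.92,
#     10**-6.23, Ca0=1e-3, Cb0=1e-3, Va0=10.0,
#     Vb_values=[0.5 * i for i in range(81)])
```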
## Results and discussion
The validity of equations (34)-(39) has been tested by calculating the pH, at different effective concentrations \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\), for 180 diprotic acids with reported values of \({\rm p}K_{1}\) and \({\rm p}K_{2}\) [10], and comparing with the numerical solution. The average absolute error in the pH, \(\epsilon_{\rm pH}\), at millimolar concentrations \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\), is \(\epsilon_{\rm pH}\lesssim 10^{-5}\) pH units, with a standard deviation \(\lesssim 10^{-4}\) pH units. The small error in the calculated pH is caused mainly by the diprotic acids with \({\rm p}K_{2}-{\rm p}K_{1}<-\log_{10}4\).
### Aqueous dissolution of a diprotic acid
The case of a diprotic acid is given by using the conditions \(\bar{c}_{\rm b}=0\) and \(\bar{c}_{\rm a}=c_{\rm a}\) in equations (34)-(39). The condition \(k_{1}\geq 4k_{2}\), _i.e._,
\[{\rm p}K_{2}-{\rm p}K_{1}\geq\log_{10}4\approx 0.602, \tag{55}\]
makes the discriminant of \(P\) a positive number, \(\Delta>0\). This is the condition for a quartic equation with four distinct real roots. An inspection of the values of \({\rm p}K_{1}\) and \({\rm p}K_{2}\) of tabulated diprotic weak acids indicates that condition (55) is fulfilled by many diprotic weak acids [9, 10, 11]. Figure 1 displays, on the \({\rm p}K_{1}\)-\({\rm p}K_{2}\) plane, the region given by condition (55) as the light blue region above the blue line. It is clear in the Figure that most of the diprotic weak acids (open black circles) fulfill this condition; however, a simple visual inspection of Figure 1 shows that there are weak diprotic acids that fulfill the condition \({\rm p}K_{2}-{\rm p}K_{1}\leq-\log_{10}4\) (light red region). This condition can be expressed in terms of the acid constants as \(k_{2}/k_{1}\geq 4\), _i.e._ \(K_{2}/K_{1}\geq 4\). Diprotic acids in the light red region have \({\rm p}K_{1}>{\rm p}K_{2}\); examples are several nitrogenous organic compounds such as Piperazine, Quinine, Phenylbiguanide, \(L\)-Nicotine, \(p\)-Benzidine, Sulfamethazine, \(m\)-Phenylenediamine, \(p\)-Phenylenediamine, 1,2-Propanediamine, 1,3-Propanediamine, 1,4-Butanediamine, 1,6-Hexanediamine, 1,8-Octanediamine, _cis_-2,5-Dimethylpiperazine, _trans_-1,2-Cyclohexanediamine, _cis_-1,2-Cyclohexanediamine, and the alcohol 1,3-Diamino-2-propanol [10].
### _Concentrations \([\mathrm{H_{2}B}]\), \([\mathrm{HB^{-}}]\), and \([\mathrm{B^{2-}}]\) in terms of \([\mathrm{H_{3}O^{+}}]\)_
Equation (21) can be written to give an expression for \(\det\mathsf{K}\),
\[\det\mathsf{K}=\frac{\bar{c}_{a}k_{1}x\left(x+2k_{2}\right)}{\left(x-\sigma_{1 }\right)\left(x-\sigma_{2}\right)}. \tag{56}\]
This equation can be used in equation (17), to obtain
\[z_{0} =\frac{x\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{k_{1 }(x+2k_{2})}, \tag{57}\] \[z_{1} =\frac{\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{x+2k_{ 2}},\] (58) \[z_{2} =\frac{k_{2}\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{x \left(x+2k_{2}\right)}. \tag{59}\]
These concentrations are constrained to be positive numbers. Since \(0<\sigma_{1}\leq 1\) and \(\sigma_{2}\leq-1\), it is necessary to have \(x>\sigma_{1}\).
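A direct transcription of equations (57)-(59) (an illustrative Python sketch):

```python
# Species concentrations recovered from x alone, eqs. (57)-(59), in the
# dimensionless variables of the text.
import math

def speciation(x, k1, k2, cb_bar=0.0):
    s1 = (-cb_bar + math.sqrt(cb_bar**2 + 4)) / 2  # sigma_1
    s2 = (-cb_bar - math.sqrt(cb_bar**2 + 4)) / 2  # sigma_2
    common = (x - s1) * (x - s2) / (x + 2 * k2)
    return (x * common / k1,   # z0 = [H2B]/C*
            common,            # z1 = [HB-]/C*
            k2 * common / x)   # z2 = [B2-]/C*
```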
Figure 1: p\(K_{1}\)–p\(K_{2}\) plane for a set of diprotic acids in aqueous solution [10]. The light blue region is given by the condition \(\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4\), the light red region is given by the condition \(\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4\)
It is also possible to have a parametric dependence on \(\bar{c}_{\rm a}\), \(\bar{c}_{\rm b}\) and \(k_{2}\) by using equation (15), which gives as a result equations (58), (59), and
\[z_{0}=\bar{c}_{\rm a}-\frac{\left(x+k_{2}\right)\left(x-\sigma_{1}\right)\left( x-\sigma_{2}\right)}{x\left(x+2k_{2}\right)} \tag{60}\]
instead of (57). The case of a dissolution of the diprotic acid gives \(\sigma_{1}=1\) and \(\sigma_{2}=-1\), with \(\bar{c}_{\rm b}=0\) and \(\bar{c}_{\rm a}=c_{\rm a}\).
It is convenient to employ logarithmic scaling to describe concentrations and equilibrium constants of highly diluted solutions and weak acids. The p-function of Q is defined as
\[\begin{split}\rm{pQ}&=-\log_{10}a_{\rm Q}\\ &=-\log_{10}\frac{\gamma_{\rm Q}[Q]}{C^{\circ}},\end{split} \tag{61}\]
with \(a_{\rm Q}\) and \(\gamma_{\rm Q}\) as the activity and the activity coefficient of Q respectively [4]. Since equilibrium constants are dimensionless, it is possible to define the p-function of \(K\) as \(\rm{p}K=-\log_{10}K\)[6].
The case of weak acids and low concentrations allows the use of the ideal solution approximation, \(\gamma_{\rm Q}\approx 1\); hence the pH is given by

\[\begin{split}\rm{pH}&\approx-\log_{10}\frac{[H_{3} O^{+}]}{C^{\circ}}\\ &\approx-\log_{10}\frac{C^{\ast}x}{C^{\circ}}\\ &\approx 7-\log_{10}x.\end{split} \tag{62}\]
The p-functions for [H\({}_{2}\)B], [HB\({}^{-}\)] and [B\({}^{2-}\)] are given by \(\rm{pH}_{2}B\approx 7-\log_{10}z_{0}\), \(\rm{pH}B^{-}\approx 7-\log_{10}z_{1}\), and \(\rm{p}B^{2-}\approx 7-\log_{10}z_{2}\), respectively. The p-function \(\rm{pH}_{2}B\) can be expressed in two ways, either using \(z_{0}\) from equation (57) to obtain \(\rm{pH}_{2}B(k_{1},k_{2})\), or using \(z_{0}\) from equation (60) to get \(\rm{pH}_{2}B(c_{\rm a},k_{2})\).
Figure 2 displays the behaviour of \(\rm{pH}_{2}B\), \(\rm{pH}B^{-}\) and \(\rm{p}B^{2-}\) as functions of the pH for
different concentrations \(c_{\rm a}=2^{n}\) (\(n=0,1,\ldots,23\)) of oxalic acid, \({\rm H_{2}C_{2}O_{4}}\), which has \(K_{1}=5.62\times 10^{-2}\) and \(K_{2}=1.54\times 10^{-4}\)[10]. In this Figure the intersections between the \({\rm pH_{2}B}(k_{1},k_{2})\) curve (red) and the \({\rm pH_{2}B}(c_{\rm a},k_{2})\) curves (pink) are shown as labeled black points. These intersections give the pH for the different concentrations \(c_{\rm a}\).
### _Use of physical constraints of the system to obtain approximate expressions for \([{\rm H_{3}O^{+}}]\)_
For the diprotic acid, the combined use of equation (60) and the condition \(z_{0}>0\) gives the inequality \(P_{z_{0}}<0\), with \(P_{z_{0}}\) given by the monic cubic polynomial
\[P_{z_{0}}=x^{3}+\left(k_{2}-c_{\rm a}\right)x^{2}-\left(1+2c_{\rm a}k_{2} \right)x-k_{2}. \tag{63}\]
Although this polynomial goes to infinity as \(x\) goes to infinity, there are values of \(x\) for which the inequality \(P_{z_{0}}<0\) is satisfied. This can be seen by analyzing the 4-tuple of
coefficients,
\[\begin{split}\text{coef}[P_{z_{0}}]&=(a_{3},a_{2},a_{1},a_{0})\\ &=(1,k_{2}-c_{\text{a}},-(1+2c_{\text{a}}k_{2}),-k_{2})\,.\end{split} \tag{64}\]
The signs of (64) are given by
\[\begin{split}\text{sgn}[P_{z_{0}}]&=(\text{sgn}\,a_ {3},\text{sgn}\,a_{2},\text{sgn}\,a_{1},\text{sgn}\,a_{0})\\ &=(+,\pm,-,-)\,,\end{split} \tag{65}\]
Regardless of the value of \({\rm sgn}\,a_{2}\), there is only one change of sign in (65), from positive to negative; in this case Descartes' rule of signs gives that \(P_{z_{0}}\) must have only one positive root. This positive root is a function of \(k_{2}\) and \(c_{\text{a}}\), and gives the upper bound of \(x\). Using the method of Caicedo et al., the upper bound of \(x\) is given by
\[x_{\text{ub}}=\tfrac{2}{3}\sqrt{(k_{2}-c_{\text{a}})^{2}+6c_{\text{a}}k_{2}+3} \cos{(\theta_{z_{0}}/3)}-\frac{k_{2}-c_{\text{a}}}{3}, \tag{66}\]
with
\[p_{z_{0}} =-\tfrac{1}{3}(k_{2}-c_{\text{a}})^{2}-2c_{\text{a}}k_{2}-1, \tag{67}\] \[q_{z_{0}} =\tfrac{2}{27}(k_{2}-c_{\text{a}})^{3}+\tfrac{1}{3}(k_{2}-c_{\text {a}})(1+2c_{\text{a}}k_{2})-k_{2},\] (68) \[\Delta_{z_{0}} =-4p_{z_{0}}^{3}-27q_{z_{0}}^{2},\] (69) \[\theta_{z_{0}} =\arctan\left(-\frac{q_{z_{0}}}{2},\frac{\sqrt{\Delta_{z_{0}}}}{ 6\sqrt{3}}\right). \tag{70}\]
The use of Wolfram Mathematica makes it possible to prove that \(\Delta_{z_{0}}>0\) for \(c_{\text{a}}>0\) and \(k_{2}>0\). Since \(\Delta_{z_{0}}>0\), equation (70) gives \(0<\theta_{z_{0}}<\pi\), hence \(\cos{(\theta_{z_{0}}/3)}\geq 1/2\). Furthermore, the same software allows one to find that \(\lim_{c_{\text{a}}\to 0}x_{\text{ub}}=1\).
It was shown above that the dissociation constants of many diprotic acids are constrained by the condition \(k_{1}\geq 4k_{2}\). The use of equations (9) and (10) in the inequality \(k_{1}\geq 4k_{2}\) leads to the constraint \(z_{1}^{2}\geq 4z_{0}z_{2}\) between the concentrations of the acid and its conjugate bases. The use of equations (58), (59) and (60) in the inequality \(z_{1}^{2}\geq 4z_{0}z_{2}\) gives the inequality \(P_{z}>0\), with
\[P_{z}=x^{3}+2k_{2}x^{2}-(1+4c_{\rm a}k_{2})x-2k_{2}. \tag{71}\]
This polynomial \(P_{z}\) is the same polynomial as for a monoprotic weak acid with \(k_{\rm a}=2k_{2}\). Since \({\rm sgn}[P_{z}]=(+,+,-,-)\), Descartes' rule of signs indicates that \(P_{z}=0\) has only one positive root. Using the method of Caicedo et al., this root is given by
\[x_{\rm lb}=\tfrac{2}{3}\left(\sqrt{4k_{2}^{2}+3c_{\rm a}k_{2}+3}\cos\left( \theta_{z}/3\right)-k_{2}\right), \tag{72}\]
Figure 3: Upper (blue) and lower (green) bounds of the pH for oxalic acid (left) and 1,5-Pentanediamine (right) as functions of the molar concentration \(C_{\rm a}\). The orange curve represents the exact pH.
with
\[p_{z} =-\tfrac{4k_{2}^{2}}{3}-c_{\rm a}k_{2}-1, \tag{73}\] \[q_{z} =\tfrac{16k_{2}^{3}}{27}+\tfrac{2c_{\rm a}k_{2}^{2}}{3}-\tfrac{k_{2 }}{3},\] (74) \[\Delta_{z} =-4p_{z}^{3}-27q_{z}^{2},\] (75) \[\theta_{z} =\arctan\left(-\frac{q_{z}}{2},\frac{\sqrt{\Delta_{z}}}{6\sqrt{3} }\right). \tag{76}\]
The discriminant \(\Delta_{z}\) is a positive quantity; in fact, by using Wolfram Mathematica it can be shown that \(\Delta_{z}\geq 4\). Furthermore, using the same software it can be shown that \(\lim_{c_{\rm a}\to 0,k_{2}\to 0}x_{\rm lb}=1\), and that \(\lim_{c_{\rm a}\to 0,k_{2}\to\infty}x_{\rm lb}=1/\sqrt{2}\).
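Both bounds are trigonometric roots of cubic polynomials and are straightforward to evaluate; a minimal Python sketch of equations (66)-(70) and (72)-(76), writing the two-argument arctangent as atan2(y, x):

```python
# pH bounds from the physical constraints: x_ub bounds x from above
# (lower pH bound) and x_lb bounds x from below (upper pH bound).
import math

def _trig_root(a2, p, q):
    """Largest real root of the depressed cubic shifted by -a2/3,
    written in trigonometric form, assuming a positive discriminant."""
    delta = -4 * p**3 - 27 * q**2
    theta = math.atan2(math.sqrt(delta) / (6 * math.sqrt(3)), -q / 2)
    return 2 * math.sqrt(-p / 3) * math.cos(theta / 3) - a2 / 3

def x_upper(ca, k2):  # positive root of P_z0, eqs. (66)-(70)
    a2 = k2 - ca
    p = -a2**2 / 3 - 2 * ca * k2 - 1
    q = 2 * a2**3 / 27 + a2 * (1 + 2 * ca * k2) / 3 - k2
    return _trig_root(a2, p, q)

def x_lower(ca, k2):  # positive root of P_z, eqs. (72)-(76)
    a2 = 2 * k2
    p = -4 * k2**2 / 3 - ca * k2 - 1
    q = 16 * k2**3 / 27 + 2 * ca * k2**2 / 3 - k2 / 3
    return _trig_root(a2, p, q)

# For acids with k1 >= 4 k2:
# 7 - log10(x_upper(ca, k2)) <= pH <= 7 - log10(x_lower(ca, k2))
```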
The lower and upper bounds of the pH are obtained as \(7-\log_{10}x_{\rm ub}\) and \(7-\log_{10}x_{\rm lb}\), respectively. Figure 3 displays the lower and upper bounds of the pH as functions of the molar concentration \(C_{\rm a}\) for oxalic acid (left) and 1,5-Pentanediamine (right). In the case of oxalic acid, the exact pH and the lower pH bound (green) are nearly identical for concentrations \(C_{\rm a}<10^{-2}\,\)M. On the other hand, for 1,5-Pentanediamine the exact pH and the upper pH bound (blue) are nearly identical for concentrations \(C_{\rm a}<10^{-4}\,\)M. It is interesting to notice in both panels of Figure 3 that the upper bound pH curve (blue) overestimates the pH by a nearly constant difference for concentrations greater than \(10^{-2}\,\)M.
### Analysis of the dependence of the pH on p\(K_{2}\)
The pH is calculated by \(\mathrm{pH}=7-\log_{10}x\) with \(x\) given by equation (34). The partial derivatives
\[\delta\mathrm{pH}_{1} =\left(\frac{\partial\mathrm{pH}}{\partial\mathrm{p}K_{1}}\right) _{C_{\rm a},\mathrm{p}K_{2}}, \tag{77}\] \[\delta\mathrm{pH}_{2} =\left(\frac{\partial\mathrm{pH}}{\partial\mathrm{p}K_{2}}\right) _{C_{\rm a},\mathrm{p}K_{1}}, \tag{78}\]
measure how much the pH depends on p\(K_{1}\) or p\(K_{2}\), respectively.
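These derivatives can be approximated by central finite differences around the exact pH; a minimal Python sketch for \(\delta{\rm pH}_{2}\) (the step size h is an arbitrary choice of this sketch):

```python
# Finite-difference estimate of dpH/dpK2 at fixed Ca and pK1, using the
# unique positive quartic root for the pH (acid solution, cb_bar = 0).
import math
import numpy as np

def pH(pK1, pK2, Ca, KW=1e-14):
    k1, k2 = 10**-pK1 / math.sqrt(KW), 10**-pK2 / math.sqrt(KW)
    ca = Ca / math.sqrt(KW)  # C* = sqrt(Kw) in molar units (C° = 1 M)
    r = np.roots([1, k1, -(1 + k1 * (ca - k2)),
                  -k1 * (1 + 2 * ca * k2), -k1 * k2])
    x = max(v.real for v in r
            if v.real > 0 and abs(v.imag) < 1e-8 * (1 + abs(v.real)))
    return 7 - math.log10(x)

def dpH_dpK2(pK1, pK2, Ca, h=1e-4):
    return (pH(pK1, pK2 + h, Ca) - pH(pK1, pK2 - h, Ca)) / (2 * h)
```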
Figure 4 displays contours of constant \(\delta\)pH\({}_{2}\) as functions of p\(K_{1}\) and p\(K_{2}\) for different acid concentrations, \(C_{\rm a}\): (a) 0.1 M; (b) 0.01 M; (c) \(10^{-3}\) M; (d) \(10^{-6}\) M. Only the acids that fulfill the condition p\(K_{2}\geq\) p\(K_{1}+\log_{10}4\) are shown. This Figure also shows contours of constant pH as red curves, with call-outs indicating the value of the pH. In all the panels, the contours of the derivative \(\delta\)pH\({}_{2}\) are shown with contour shading from blue to yellow; the dark blue has \(0.01<\delta\)pH\({}_{2}<0.02\), the grey has \(0.02<\delta\)pH\({}_{2}<0.06\), the dark orange \(0.06<\delta\)pH\({}_{2}<0.1\), the orange \(0.1<\delta\)pH\({}_{2}<0.14\), the light orange \(0.14<\delta\)pH\({}_{2}<0.15\), the
Figure 4: Contours of constant pH (red lines) for different concentrations \(C_{\rm a}\): (a) 0.1 M; (b) 0.01 M; (c) \(10^{-3}\) M; (d) \(10^{-6}\) M. Contours of \(\delta\)pH\({}_{2}\) are shown in shading colors from blue to yellow: dark blue 0.01, grey 0.02, dark orange 0.06, orange 0.1, light orange 0.14, dark yellow 0.15, and yellow 0.16. The light blue region has \(\delta\)pH\({}_{2}<0.01\). The open circle markers are the values of (p\(K_{1},\)p\(K_{2}\)) for different weak diprotic acids [10].
dark yellow \(0.15<\delta{\rm pH}_{2}<0.16\), and the yellow \(0.16<\delta{\rm pH}_{2}\). The open circle markers in all the panels of Figure 4 are the values of \(({\rm p}K_{1},{\rm p}K_{2})\) for different diprotic weak acids [10]. The maximum value of \(\delta{\rm pH}_{2}\) is obtained by evaluating \(\delta{\rm pH}_{2}\) along the line \({\rm p}K_{2}={\rm p}K_{1}+\log_{10}4\). By doing this, a function \(\delta{\rm pH}_{2}({\rm p}K_{1},C_{\rm a})\) is obtained. The use of the function \({\sf NMaximize}\) of Wolfram Mathematica gives \(\max\left(\delta{\rm pH}_{2}({\rm p}K_{1},C_{\rm a})\right)\approx 0.17153\), regardless of the values of \({\rm p}K_{1}\) and \(C_{\rm a}\).
Panel (a) of Figure 4 shows that, for \(C_{\rm a}=0.1\,\)M, \({\rm p}K_{2}\) has a weak influence on the pH for \({\rm p}K_{1}>4\). This is evident from the fact that the contours of constant pH are practically vertical lines for \({\rm pH}>2.5\), and also from the fact that \(\delta{\rm pH}_{2}<0.01\) for the same values of the pH. In the same panel it can be observed that the strongest influence of \({\rm p}K_{2}\) on the pH, _i.e._ \(0.1<\delta{\rm pH}_{2}\lesssim 0.17153\), is seen in the regions with orange and yellow contour shading, and pH\(<1.5\). The pH contours in this region are curved instead of straight vertical lines. Panel (a) shows that the approximation of considering the pH independent of \({\rm p}K_{2}\) is very good for all the acids at a concentration \(C_{\rm a}=0.1\,\)M. The highest observed value of \(\delta{\rm pH}_{2}\) is about 0.17153 units of pH per unit of \({\rm p}K_{2}\). This value of \(\delta{\rm pH}_{2}\) indicates that a change of 0.5 units in \({\rm p}K_{2}\) would produce at most a change of about 0.08 units in the pH. This change in the pH is sufficiently small to be within the experimental error; hence, the heuristic approximation of considering the pH dependent only on \({\rm p}K_{1}\) is a good approximation at relatively high concentrations of the acid.
Panels (b) and (c) of Figure 4 are similar in shape to panel (a). It can be seen that for these concentrations the pH is insensitive to the value of \({\rm p}K_{2}\) for \({\rm p}K_{1}>5\) and \({\rm p}K_{1}>6\), for concentrations \(C_{\rm a}=10^{-2}\) (b) and \(C_{\rm a}=10^{-3}\) (c), respectively. The pH contours are insensitive to the \({\rm p}K_{2}\) value for pH\(>3.5\) and pH\(>4.5\) for concentrations \(C_{\rm a}=10^{-2}\) (b) and \(C_{\rm a}=10^{-3}\) (c), respectively. Panel (d), \(C_{\rm a}=10^{-6}\,\)M, shows evident differences with respect to panels (a) to (c). At this low acid concentration the region with \(5.73<{\rm pH}<6.5\) displays strong sensitivity to the value of \({\rm p}K_{2}\), with the strongest effect for pH\(\approx 5.9\) and \({\rm p}K_{1}\approx 6\).
Panels (a) to (c) of Figure 4 show that the pH contour with \(\mathrm{pH}=-\log_{10}C_{\mathrm{a}}\) is a straight line with negative slope. This line is a boundary between two regions: one with pH contours that are asymptotically independent of \(\mathrm{p}K_{2}\) (vertical lines) and another with pH contours that are asymptotically independent of \(\mathrm{p}K_{1}\) (horizontal lines). Although panel (d) does not display this straight line with negative slope, it is clear that there are also regions with asymptotic independence of \(\mathrm{p}K_{1}\) and \(\mathrm{p}K_{2}\).
### Strong base titration of diprotic acids and its buffer mixtures
Figure 5 displays the contours of constant pH for maleic acid (left panel) and 1,8-Octanediamine (right panel). Maleic acid has \(\mathrm{p}K_{1}=1.92\) and \(\mathrm{p}K_{2}=6.23\), that is, \(\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4\), whereas 1,8-Octanediamine has \(\mathrm{p}K_{1}=11\) and \(\mathrm{p}K_{2}=10.1\), so \(\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4\). Maleic acid is an example of a pH given uniquely by equation (35) with \(\bar{y}_{1}\) given by the first case of equation (39); therefore maleic acid displays only one region
Figure 5: Contours of constant pH on the concentrations plane (\(C_{\mathrm{a}},C_{\mathrm{b}}\)) for diprotic acids. Left: maleic acid, \(\mathrm{p}K_{1}=1.92\), \(\mathrm{p}K_{2}=6.23\); Right: 1,8-Octanediamine \(\mathrm{p}K_{1}=11\), \(\mathrm{p}K_{2}=10.1\). The red region is given for \(\Delta_{\mathrm{dc}}>0\), the green region for \(\xi_{1}<0\) and \(\xi_{2}<0\), and the blue \(\xi_{1}>0\) and \(\xi_{2}<0\). The titration line is shown in dashed on both figures. The equivalence points \(n=1,2\) are shown as cyan open circles. The half equivalence points \(n=1/2,3/2,5/2\) are shown as magenta open circles.
in the \((C_{\rm a},C_{\rm b})\) plane. On the other hand, 1,8-Octanediamine displays three regions on the \((C_{\rm a},C_{\rm b})\) plane: the red region is given by equation (35) with \(\bar{y}_{1}\) given by the first case of equation (39), the green region by equation (36) with \(\bar{y}_{1}\) given by the second case of equation (39), and the blue region by equation (36) with \(\bar{y}_{1}\) given by the third case of equation (39).
Figure 6 shows the titration curves obtained by adding a volume \(V_{\rm b}\) of a strong base with concentration \(C_{\rm b0}=1\,\)mM to a volume \(V_{\rm a0}=10\,\)ml of a \(C_{\rm a0}=1\,\)mM solution of maleic acid (left) and 1,8-Octanediamine (right). These titration curves are given by the pH along the dashed lines of Figure 5 for the case \(C_{\rm a0}=1\,\)mM and \(C_{\rm b0}=1\,\)mM.
The left panel of Figure 6 displays the typical titration curve of a diprotic acid with two equivalence points. The first equivalence point occurs at \(V_{\rm b}\approx 10\,\)ml with \({\rm pH}\approx 5\); the second occurs at \(V_{\rm b}\approx 20\,\)ml with \({\rm pH}\approx 8\). In contrast, the right panel of Figure 6 does not display equivalence points. The titration curve for 1,8-Octanediamine is obtained by joining three different titration curves: red, green and blue (from left to right). The initial solution of 1,8-Octanediamine has a pH slightly below 7; as the base is added, the pH grows rapidly, reaching a pH above 10. This behaviour is described by the red curve of Figure 6 (right). As the volume of added base increases, the pH grows from 10 to approximately 10.5, following the green curve. Finally, for \(V_{\rm b}>25\,\)ml the titration experiment follows the
Figure 6: Titration curves for 10 ml of diprotic acids at concentration \(C_{\rm a}=1\,\)mM using a volume \(V_{\rm b}\) of strong base with concentration \(C_{\rm b}=1\,\)mM. (Left) Maleic acid, \({\rm p}K_{1}=1.92\), \({\rm p}K_{2}=6.23\); (Right) 1,8-Octanediamine \({\rm p}K_{1}=11\), \({\rm p}K_{2}=10.1\). The red, green and blue curves are calculated using the first, third, and fourth case of equation (39), respectively.
blue curve reaching a final pH slightly above 10.5 at \(V_{\rm b}=40\,\)ml.
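These curves are easy to cross-check numerically. The Python sketch below reproduces a few points of the maleic acid titration by solving the charge-balance quartic \(P=0\) directly with `numpy.roots`, instead of the closed-form route of the Appendix. The coefficient expressions are our reconstruction of the scaled polynomial from the standard charge balance (with \(x=[\mathrm{H}^{+}]/\sqrt{K_{\rm w}}\), \(k_{i}=K_{i}/\sqrt{K_{\rm w}}\) and \(\bar{c}=C/\sqrt{K_{\rm w}}\)), so treat them as an assumption rather than a quotation from the paper.

```python
import numpy as np

kw = 1e-14                                        # ionic product of water
k1 = 10.0**-1.92 / np.sqrt(kw)                    # scaled K1 of maleic acid
k2 = 10.0**-6.23 / np.sqrt(kw)                    # scaled K2 of maleic acid
va0 = 10.0                                        # initial acid volume, ml
ca0 = cb0 = 1e-3 / np.sqrt(kw)                    # scaled 1 mM acid and base

for vb in (0.0, 10.0, 20.0, 40.0):                # ml of added strong base
    ca = ca0 * va0 / (va0 + vb)                   # dilution of the acid ...
    cb = cb0 * vb / (va0 + vb)                    # ... and of the added base
    coeffs = [1.0,
              k1 + cb,
              k1*k2 + cb*k1 - 1.0 - ca*k1,
              cb*k1*k2 - k1 - 2.0*ca*k1*k2,
              -k1*k2]
    x = max(r.real for r in np.roots(coeffs)      # the positive real root
            if r.real > 0 and abs(r.imag) < 1e-8 * abs(r))
    print(f"Vb = {vb:4.1f} ml  pH = {7.0 - np.log10(x):.2f}")
```

Under this scaling, pH \(=7-\log_{10}x\), so the printed values sample the left-panel curve of Figure 6.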
The use of equation (52) allows one to obtain the pH at the equivalence points for acid and buffer solutions. The equivalence points \(\bar{n}=1,2\) and the half-equivalence points \(\bar{n}=1/2,3/2\), as functions of the pH, are displayed in Figure 7 for maleic acid, p\(K_{1}=1.92\), p\(K_{2}=6.23\), and for 1,8-Octanediamine, p\(K_{1}=11\), p\(K_{2}=10.1\). It is seen in this Figure that the half-equivalence point \(\bar{n},\bar{n}_{\rm K}=3/2\) is the same for maleic acid, but not for 1,8-Octanediamine. This Figure also shows that the use of equation (49) gives an incorrect pH at the equivalence points \(\bar{n}=1,2\): the first equivalence point is shifted to more acidic pH values, whereas the second is shifted to more basic values.
### pH stability of buffer solutions
In the titration experiment a volume \(V_{\rm b}\) of a base solution, with concentration \(c_{\rm b0}\), is added to a volume \(V_{\rm B0}\) of a buffer solution with concentrations \(\bar{c}_{\rm a0}\) and \(\bar{c}_{\rm b0}\). The concentrations \(\bar{c}_{\rm a}\) and \(\bar{c}_{\rm b}\) as functions of \(V_{\rm b}\) are given by equations (42) and (43). Although equation (52) can be used to analyze the stability of a buffer solution, it is more convenient to use the
Figure 7: Titration functions \(\bar{n}\)(pH) and \(\bar{n}_{\rm K}\)(pH), obtained from equations (52) (green and red) and (49) (cyan and magenta), respectively. The first and second equivalence points occur at \(\bar{n},\bar{n}_{K}=1,2\); the half-equivalence points occur at \(\bar{n},\bar{n}_{\rm K}=1/2,3/2,5/2\). Maleic acid (red and magenta) displays the typical titration curve of a diprotic acid; 1,8-Octanediamine (green and cyan) does not display equivalence points.
parametric curve
\[\beta(V_{\rm b})=\left\{{\rm pH}_{\rm acid}\left(V_{\rm b}\right),{\rm pH}_{\rm buffer }\left(V_{\rm b}\right)\right\}, \tag{79}\]
where \({\rm pH}_{\rm acid}\left(V_{\rm b}\right)\) is the pH of the acid as a function of added base, and \({\rm pH}_{\rm buffer}\left(V_{\rm b}\right)\) is the pH of the buffer as a function of added base. Figure 8 displays \(\beta(V_{\rm b})\) for acid and buffer solutions prepared with the same number of moles of the acid, \(C_{\rm a}V_{\rm a0}=7.5\times 10^{-3}\) moles, and titrated with the same strong base, \(C_{\rm b0}=7.5\,{\rm mM}\). The buffer solutions of panel (a) are prepared by adding \(C_{10}V_{10}=2.5\times 10^{-3}\) moles of the monobasic salt only; the buffer solutions of panel (b) by adding \(C_{20}V_{20}=2.5\times 10^{-3}\) moles of the dibasic salt only; and the buffer solutions of panel (c) by adding \(C_{10}V_{10}=2.5\times 10^{-3}\) moles of the monobasic salt together with \(C_{20}V_{20}=2.5\times 10^{-3}\) moles of the dibasic salt. The three panels show four curves for different acids, all with \({\rm p}K_{1}=1\) and with \({\rm p}K_{2}=1\) (red), \({\rm p}K_{2}=4\) (green), \({\rm p}K_{2}=6\) (blue), and \({\rm p}K_{2}=8\) (cyan).
The pH stability of a buffer solution is achieved when the \(\beta(V_{\rm b})\) curve is horizontal, _i.e._ regardless of the change in the pH of the acid solution, the pH of the buffer remains stable. The red curves of panels (a) and (c) of Figure 8 display the best pH stability. These red curves are produced by acids with \({\rm p}K_{1}=1\) and \({\rm p}K_{2}=1\). The red \(\beta(V_{\rm b})\) curves of panels (a) and (c) display buffer stability at \({\rm pH}_{\rm buffer}\approx 3\) and \({\rm pH}_{\rm acid}>4\). The green \(\beta(V_{\rm b})\) curves of panels (a) and (c) display buffer stability at \({\rm pH}_{\rm buffer}\approx 4.7\) and \({\rm pH}_{\rm acid}>6\), for an acid
Figure 8: \(\beta(V_{\rm b})\) curves for buffer solutions of different acids prepared with: (a) only monobasic salt, (b) only dibasic salt, and (c) both monobasic and dibasic salts. All the acids have \({\rm p}K_{1}=1\) and: \({\rm p}K_{2}=1\) (red), \({\rm p}K_{2}=4\) (green), \({\rm p}K_{2}=6\) (blue), and \({\rm p}K_{2}=8\) (cyan).
with p\(K_{1}=1\) and p\(K_{2}=4\). The cyan curves of panels (b) and (c) of Figure 8 show that the dibasic salt produces pH stability at basic pH for acids with higher p\(K_{2}\).
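Equation (79) is straightforward to trace numerically once the two titration curves are available. The minimal sketch below assumes `ph_acid` and `ph_buffer` have already been sampled on a common \(V_{\rm b}\) grid; the function name and the stability tolerance are illustrative choices of ours, not quantities defined in the paper.

```python
import numpy as np

def beta_curve(ph_acid: np.ndarray, ph_buffer: np.ndarray, slope_tol: float = 0.05):
    """Trace beta(V_b) of equation (79) and flag its nearly horizontal stretches.

    ph_acid and ph_buffer are the two titration curves sampled on the same
    V_b grid (ph_acid is assumed monotonic). Points where
    |d(pH_buffer)/d(pH_acid)| < slope_tol are marked as pH-stable."""
    slope = np.gradient(ph_buffer, ph_acid)       # derivative along the curve
    stable = np.abs(slope) < slope_tol
    return np.column_stack([ph_acid, ph_buffer]), stable
```

A point is flagged as stable exactly where the \(\beta(V_{\rm b})\) curve is horizontal, which is the criterion used above to read Figure 8.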
## Declarations
### Ethical Approval
This work does not involve studies in humans and/or animals. There was no need for ethics committee approval.
### Competing interests
The authors declare no competing interests.
### Authors' contributions
Juan C. Morales made analytical and numerical calculations. Carlos A. Arango performed analytical and numerical calculations, wrote the manuscript, prepared the figures, and performed the analysis of results.
### Funding
This work has been financed by the OMICAS program, Project ID: FP44842-217-2018, and the internal research grants of Universidad Icesi. The OMICAS program acronym stands for "In-silico Multiscale Optimization of Sustainable Agricultural Crops", a member of the Scientific Colombia Ecosystem, sponsored by the World Bank, and the Colombian Ministries of Science, Technology and Innovation (Minciencias), Education, Industry and Tourism, and the ICETEX.
### Availability of data and materials
The data and Wolfram Mathematica codes used for this study are available from the corresponding author on request.
## Appendix
### Mathematical solution of \(P=0\)
#### The resolvent cubic equation
The solution of equation \(P=0\) by Ferrari's method requires finding its resolvent cubic equation [13, 14]. The standard procedure to obtain the resolvent cubic of a quartic equation begins by writing \(P=0\) in its equivalent form
\[\left(x^{2}+\tfrac{1}{2}c_{3}x\right)^{2}=\left(\tfrac{1}{4}c_{3}^{2}-c_{2} \right)x^{2}-c_{1}x-c_{0}. \tag{80}\]
The addition of a quantity \(y/2\) inside the squared term on the left-hand side, together with the corresponding compensation terms on the right-hand side, gives, after simplification,
\[\left(x^{2}+\tfrac{1}{2}c_{3}x+\tfrac{y}{2}\right)^{2}=\left(\tfrac{1}{4}c_{3} ^{2}-c_{2}+y\right)x^{2}+(\tfrac{1}{2}c_{3}y-c_{1})x-c_{0}+\tfrac{1}{4}y^{2}. \tag{81}\]
The right-hand side of this equation can be written as a complete square, that is, equation (81) can be written as
\[\left(tx+\frac{c_{3}y-2c_{1}}{4t}\right)^{2}=\left(\tfrac{1}{4}c_{3}^{2}-c_{2 }+y\right)x^{2}+(\tfrac{1}{2}c_{3}y-c_{1})x-c_{0}+\tfrac{1}{4}y^{2}, \tag{82}\]
with \(t=t(y)\neq 0\), given that \(t^{2}=\tfrac{1}{4}c_{3}^{2}-c_{2}+y\), and
\[\left(\frac{c_{3}y-2c_{1}}{4t}\right)^{2}=\tfrac{1}{4}y^{2}-c_{0}. \tag{83}\]
The expansion of equation (83) gives, after simplification, the resolvent cubic \(R=0\), with
\[R=y^{3}-c_{2}y^{2}+\left(c_{1}c_{3}-4c_{0}\right)y+\left(4c_{0}c_{2}-c_{0}c_{3}^{ 2}-c_{1}^{2}\right). \tag{84}\]
This equation has three roots \(y_{i}\), \(i=1,2,3\). The use of one of these roots in equations (81) and (82) gives
\[\left(x^{2}+\tfrac{1}{2}c_{3}x+\tfrac{y_{i}}{2}\right)^{2}=\left(t_{i}x+\frac{ c_{3}y_{i}-2c_{1}}{4t_{i}}\right)^{2}, \tag{85}\]
with \(i=1,2,3\) and \(t_{i}=t(y_{i})\). Each of these equations splits into two quadratic equations,
\[x^{2}+\left(\tfrac{1}{2}c_{3}-t_{i}\right)x+\tfrac{1}{2}y_{i}- \frac{c_{3}y_{i}-2c_{1}}{4t_{i}} =0, \tag{86}\] \[x^{2}+\left(\tfrac{1}{2}c_{3}+t_{i}\right)x+\tfrac{1}{2}y_{i}+ \frac{c_{3}y_{i}-2c_{1}}{4t_{i}} =0, \tag{87}\]
with \(i=1,2,3\). The roots of \(P=0\) satisfy these quadratic equations [13, 14]. The discriminants of the quartic equation, \(\Delta\), and of its resolvent cubic equation, \(\Delta_{\rm rc}\), are identical [13, 14]. The restriction \(k_{1}\geq 4k_{2}\) gives \(\Delta>0\), hence \(\Delta_{\rm rc}>0\) and the resolvent cubic equation (84) must have three distinct real roots. For the case \(\Delta_{\rm rc}<0\) the cubic \(R=0\) has one real root and two non-real complex conjugate roots [13, 14].
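The construction in equations (84)–(87) is easy to verify numerically. The short Python sketch below, using arbitrary test coefficients rather than chemical parameters, builds the resolvent cubic, takes one of its real roots, and checks that the two quadratics (86) and (87) together reproduce the four roots of \(P=0\).

```python
import numpy as np

# Illustrative quartic P = x^4 + c3 x^3 + c2 x^2 + c1 x + c0
# (arbitrary test numbers, not chemical parameters).
c3, c2, c1, c0 = 2.0, -5.0, 1.0, 3.0

# Resolvent cubic R = 0, equation (84).
y_roots = np.roots([1.0, -c2, c1*c3 - 4.0*c0,
                    4.0*c0*c2 - c0*c3**2 - c1**2])
y1 = max(r.real for r in y_roots if abs(r.imag) < 1e-9)   # a real root

t1 = np.sqrt(0.25*c3**2 - c2 + y1 + 0j)          # t(y)^2 = c3^2/4 - c2 + y
s = (c3*y1 - 2.0*c1) / (4.0*t1)                  # eq. (83): s^2 = y1^2/4 - c0

# Quadratics (86) and (87); together their roots are the roots of P = 0.
ferrari = np.concatenate([np.roots([1.0, 0.5*c3 - t1, 0.5*y1 - s]),
                          np.roots([1.0, 0.5*c3 + t1, 0.5*y1 + s])])
direct = np.roots([1.0, c3, c2, c1, c0])
print(np.allclose(np.sort_complex(ferrari), np.sort_complex(direct)))  # True
```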
#### Solution of the resolvent cubic equation \(R=0\)
The third order polynomial equation \(R=0\) can be solved by Cardano's method. The change of variable \(y=\bar{y}+\frac{c_{2}}{3}\) gives the depressed cubic equation \(R_{\rm dc}=0\), with
\[R_{\rm dc}=\bar{y}^{3}+\bar{p}\bar{y}+\bar{q}, \tag{88}\]
\[\bar{p} =c_{1}c_{3}-\frac{c_{2}^{2}}{3}-4c_{0}, \tag{89}\] \[\bar{q} =\frac{8c_{0}c_{2}}{3}+\frac{c_{1}c_{2}c_{3}}{3}-\frac{2c_{2}^{3}} {27}-c_{1}^{2}-c_{0}c_{3}^{2}, \tag{90}\]
and discriminant \(\Delta_{\rm dc}=-4\bar{p}^{3}-27\bar{q}^{2}\), which is equal to \(\Delta\) and \(\Delta_{\rm rc}\)[13, 14].
The use of Vieta's substitution, \(\bar{y}=\bar{z}-\frac{\bar{p}}{3\bar{z}}\), gives the polynomial equation
\[\bar{z}^{3}-\frac{\bar{p}^{3}}{27\bar{z}^{3}}+\bar{q}=0. \tag{91}\]
Multiplication of (91) by \(\bar{z}^{3}\) gives
\[\bar{z}^{6}+\bar{q}\bar{z}^{3}-\frac{\bar{p}^{3}}{27}=0, \tag{92}\]
which is equivalent to the quadratic equation \(\xi^{2}+\bar{q}\xi-\frac{\bar{p}^{3}}{27}=0\), in the variable \(\xi=\bar{z}^{3}\), with roots
\[\begin{split}\xi_{1,2}&=-\frac{\bar{q}}{2}\pm \sqrt{\frac{27\bar{q}^{2}+4\bar{p}^{3}}{108}}\\ &=-\frac{\bar{q}}{2}\pm\frac{1}{2}\sqrt{-\frac{\Delta_{\rm dc}}{2 7}}.\end{split} \tag{93}\]
The physical case of diprotic acids with \(k_{1}\geq 4k_{2}\)[9, 11] gives \(\Delta_{\rm dc}>0\) for \(\bar{c}_{\rm a}\geq 0\) and \(\bar{c}_{\rm b}\geq 0\), therefore \(\xi_{1,2}\) are a complex conjugate pair. On the other hand, for the less common case of diprotic acids with \(k_{1}<4k_{2}\) it is possible to have \(\Delta_{\rm dc}<0\), hence \(\xi_{1,2}\) are real conjugates on part of the \(\bar{c}_{\rm a}\)-\(\bar{c}_{\rm b}\) plane, with \(\xi_{1}>\xi_{2}\).
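A brief numerical sketch of equations (89), (90) and (93), again with arbitrary test coefficients, makes this sign discussion concrete: the pair \(\xi_{1,2}\) switches between complex conjugate and real according to the sign of \(\Delta_{\rm dc}\).

```python
import numpy as np

# Illustrative quartic coefficients (test numbers, not chemical parameters).
c3, c2, c1, c0 = 2.0, -5.0, 1.0, 3.0

p_bar = c1*c3 - c2**2/3.0 - 4.0*c0                          # eq. (89)
q_bar = (8.0*c0*c2/3.0 + c1*c2*c3/3.0
         - 2.0*c2**3/27.0 - c1**2 - c0*c3**2)               # eq. (90)
delta_dc = -4.0*p_bar**3 - 27.0*q_bar**2                    # discriminant of R_dc

# Roots of xi^2 + q_bar*xi - p_bar^3/27 = 0, eq. (93): a complex conjugate
# pair when delta_dc > 0, a real pair xi1 > xi2 when delta_dc < 0.
s = np.sqrt(-delta_dc/27.0 + 0j) / 2.0
xi1, xi2 = -q_bar/2.0 + s, -q_bar/2.0 - s
print(delta_dc, xi1, xi2)
```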
It is convenient to define \(\zeta=\xi_{1}\) and \(\zeta^{*}=\xi_{2}\) for the case \(k_{1}\geq 4k_{2}\) and \(\xi=\xi_{1}\) and \(\bar{\xi}=\xi_{2}\) for the case \(k_{1}<4k_{2}\), with \(\zeta^{*}\) and \(\bar{\xi}\) as the complex and real conjugate of \(\zeta\) and \(\xi\) respectively.
The use of the polar representation for \(\zeta\) gives \(\zeta=\|\zeta\|e^{i\theta}\) for the case \(k_{1}\geq 4k_{2}\), with
\[\|\zeta\| =\frac{1}{2}\sqrt{\bar{q}^{2}+\frac{\Delta_{\rm dc}}{27}}=\sqrt{ \frac{-\bar{p}^{3}}{27}}, \tag{94}\] \[\theta =\arctan{\left(-\frac{\bar{q}}{2},\frac{\sqrt{\Delta_{\rm dc}}}{6 \sqrt{3}}\right)}, \tag{95}\]
with \(\theta\in(0,\pi)\) as the angle between \(\zeta\) and the positive real axis on the Argand plane. The angle \(\theta\) is related to the trigonometric solution obtained by Nickalls for the roots of the cubic equation [15]. The polar representation of \(\xi\) and \(\bar{\xi}\), case \(k_{1}<4k_{2}\), gives \(\xi=|\xi_{1}|e^{i\theta_{1}}\) and \(\bar{\xi}=|\xi_{2}|e^{i\theta_{2}}\) with \(\theta_{1,2}=\frac{\pi}{2}(1-\mbox{sgn}\,\xi_{1,2})\), and \(\xi>\bar{\xi}\).
The roots \(\bar{y}\) of the depressed cubic equation \(R_{\rm dc}=0\) are given by Cardano's formula
\[\bar{y}=\alpha+\beta, \tag{96}\]
where \(\alpha=\sqrt[3]{\zeta}\) and \(\beta=\sqrt[3]{\zeta^{*}}\) for \(k_{1}\geq 4k_{2}\), and \(\alpha=\sqrt[3]{\xi}\) and \(\beta=\sqrt[3]{\bar{\xi}}\) for \(k_{1}<4k_{2}\). The cubic roots \(\alpha\) and \(\beta\) have three values each, \(\alpha_{n}\) and \(\beta_{n}\), with \(n=0,1,2\). The combined use of the three roots \(\alpha\) and the three roots \(\beta\) must give the three roots of \(R_{\rm dc}=0\).
The cubic roots \(\alpha\) and \(\beta\), for the case \(k_{1}\geq 4k_{2}\), are given by
\[\alpha_{n} = \sqrt[3]{\|\zeta\|}\exp{\left(i\left(\frac{\theta}{3}+\frac{2n \pi}{3}\right)\right)}, \tag{97}\] \[\beta_{n} = \sqrt[3]{\|\zeta\|}\exp{\left(i\left(-\frac{\theta}{3}+\frac{2n \pi}{3}\right)\right)}, \tag{98}\]
with \(n=0,1,2\), and
\[\sqrt[3]{\|\zeta\|}=\frac{1}{3}\sqrt{1+k_{1}Q_{1}+k_{1}^{2}Q_{2}}. \tag{99}\]
for which \(Q_{1}\) and \(Q_{2}\) are
\[Q_{1} = -3k_{2}\bar{c}_{\rm b}^{2}+(6\bar{c}_{\rm a}k_{2}+1)\bar{c}_{\rm b }+2\bar{c}_{\rm a}-14k_{2}, \tag{100}\] \[Q_{2} = k_{2}^{2}+(4\bar{c}_{\rm a}-\bar{c}_{\rm b})k_{2}+3+(\bar{c}_{ \rm a}-\bar{c}_{\rm b})^{2}. \tag{101}\]
The case \(k_{1}<4k_{2}\) has
\[\alpha_{n} = \sqrt[3]{|\xi_{1}|}\exp\left(i\left(\frac{\theta_{1}}{3}+\frac{2n\pi }{3}\right)\right)\!, \tag{102}\] \[\beta_{n} = \sqrt[3]{|\xi_{2}|}\exp\left(i\left(\frac{\theta_{2}}{3}+\frac{2n \pi}{3}\right)\right), \tag{103}\]
with \(n=0,1,2\).
Since for the case \(k_{1}\geq 4k_{2}\) the cubic equation \(R_{\rm dc}=0\) has three real roots, the addition of two cubic roots \(\alpha_{n}+\beta_{m}\) must give a real number. This is possible only if \(\mathop{\rm Im}(\alpha_{n})=-\mathop{\rm Im}(\beta_{m})\). There are only three possible combinations that fulfill this requirement:
\[\alpha_{0}+\beta_{0} = 2\sqrt[3]{\|\zeta\|}\cos\left(\frac{\theta}{3}\right), \tag{104}\] \[\alpha_{2}+\beta_{1} = -2\sqrt[3]{\|\zeta\|}\cos\left(\frac{\theta+\pi}{3}\right), \tag{105}\] \[\alpha_{1}+\beta_{2} = -2\sqrt[3]{\|\zeta\|}\sin\left(\frac{2\theta+\pi}{6}\right), \tag{106}\]
which are the roots of \(R_{\rm dc}=0\): \(\bar{y}_{1}\), \(\bar{y}_{2}\), and \(\bar{y}_{3}\), respectively, with \(\bar{y}_{1}>\bar{y}_{2}>\bar{y}_{3}\).
The case \(k_{1}<4k_{2}\) has only one real solution, and a complex conjugate pair. Since \(\xi>\bar{\xi}\), there are three possibilities:
* \(\theta_{1}=\theta_{2}=0\): the roots are \(\alpha_{i}+\beta_{i}\) with \(i=0,1,2\). The root \(\bar{y}_{1}=\alpha_{0}+\beta_{0}\) is the only real solution, \(\bar{y}_{1}=\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|}\).
* \(\theta_{1}=\theta_{2}=\pi\): the roots are \(\alpha_{i}+\beta_{i}\) with \(i=0,1,2\). The root \(\bar{y}_{1}=\alpha_{1}+\beta_{1}\) is the only real solution, \(\bar{y}_{1}=-(\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|})\).
* \(\theta_{1}=0\), \(\theta_{2}=\pi\): the roots are \(\alpha_{0}+\beta_{1}\), \(\alpha_{1}+\beta_{0}\), and \(\alpha_{2}+\beta_{2}\). The root \(\bar{y}_{1}=\alpha_{0}+\beta_{1}\) is the only real solution, \(\bar{y}_{1}=\sqrt[3]{|\xi_{1}|}-\sqrt[3]{|\xi_{2}|}\).
In summary, the solution \(\bar{y}_{1}\) of the depressed cubic equation is
\[\bar{y}_{1}=\begin{cases}\frac{2}{3}\sqrt{1+k_{1}Q_{1}+k_{1}^{2}Q_{2}}\cos\left( \frac{\theta}{3}\right),&\Delta_{\rm dc}>0,\\ \sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|},&\Delta_{\rm dc}<0,\;\xi_{1}>0,\;\xi_{2 }>0,\\ -(\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|}),&\Delta_{\rm dc}<0,\;\xi_{1}<0,\;\xi_ {2}<0,\\ \sqrt[3]{|\xi_{1}|}-\sqrt[3]{|\xi_{2}|},&\Delta_{\rm dc}<0,\;\xi_{1}>0,\;\xi_{2 }<0.\end{cases} \tag{107}\]
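In code, the case selection of equation (107) can be written as the small function below. It works with generic \(\bar{p}\) and \(\bar{q}\) and uses the generic form \(\sqrt[3]{\|\zeta\|}=\sqrt{-\bar{p}/3}\) instead of the acid-specific expression (99); this is a sketch of the selection logic, not the authors' code.

```python
import math

def y1_depressed(p_bar: float, q_bar: float) -> float:
    """Real root y1_bar of R_dc = 0, selected as in equation (107)."""
    delta_dc = -4.0*p_bar**3 - 27.0*q_bar**2
    if delta_dc > 0:
        # Three real roots; theta as in eq. (95), largest root 2*cbrt|zeta|*cos(theta/3).
        theta = math.atan2(math.sqrt(delta_dc)/(6.0*math.sqrt(3.0)), -q_bar/2.0)
        return 2.0*math.sqrt(-p_bar/3.0)*math.cos(theta/3.0)
    # delta_dc < 0: xi1 and xi2 are real, eq. (93), and y1_bar is the single real root.
    s = math.sqrt(-delta_dc/27.0)/2.0
    xi1, xi2 = -q_bar/2.0 + s, -q_bar/2.0 - s
    if xi1 > 0 and xi2 > 0:
        return xi1**(1.0/3.0) + xi2**(1.0/3.0)
    if xi1 < 0 and xi2 < 0:
        return -((-xi1)**(1.0/3.0) + (-xi2)**(1.0/3.0))
    return xi1**(1.0/3.0) - (-xi2)**(1.0/3.0)     # xi1 > 0, xi2 < 0
```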
The root \(y_{1}\) of the resolvent cubic equation \(R=0\) is given by
\[y_{1}=\bar{y}_{1}-\frac{1+k_{1}(\bar{c}_{\rm a}-\bar{c}_{\rm b}-k_{2})}{3}. \tag{108}\]
This root is substituted into the quadratic equations (86) and (87), with \(t_{1}=t(y_{1})\) given by
\[t_{1}=\sqrt{1+\frac{1}{4}(\bar{c}_{\rm b}+k_{1})^{2}+k_{1}\left(\bar{c}_{\rm a }-\bar{c}_{\rm b}-k_{2}\right)+y_{1}}. \tag{109}\]
The four roots of the quartic equation \(P=0\) are given by
\[x_{1,2}=\frac{1}{2}\left(-\left(\frac{\bar{c}_{\rm b}+k_{1}}{2}-t_{1}\right) \pm\sqrt{\left(\frac{\bar{c}_{\rm b}+k_{1}}{2}-t_{1}\right)^{2}-2y_{1}+\frac{ (\bar{c}_{\rm b}+k_{1})y_{1}+2k_{1}(1+(2\bar{c}_{\rm a}-\bar{c}_{\rm b})k_{2} )}{t_{1}}}\right), \tag{110}\] \[x_{3,4}=\frac{1}{2}\left(-\left(\frac{\bar{c}_{\rm b}+k_{1}}{2}+ t_{1}\right)\pm\sqrt{\left(\frac{\bar{c}_{\rm b}+k_{1}}{2}+t_{1}\right)^{2}-2y_{1}- \frac{(\bar{c}_{\rm b}+k_{1})y_{1}+2k_{1}(1+(2\bar{c}_{\rm a}-\bar{c}_{\rm b} )k_{2})}{t_{1}}}\right). \tag{111}\]
Only the roots \(x_{1}\) and \(x_{3}\) can have physical significance, and \(x\) is given by
\[x=\begin{cases}x_{1},&\Delta_{\rm dc}>0,\\ x_{1},&\Delta_{\rm dc}<0,\;\xi_{1}>0,\;\xi_{2}>0,\\ x_{3},&\Delta_{\rm dc}<0,\;\xi_{1}<0,\;\xi_{2}<0,\\ x_{3},&\Delta_{\rm dc}<0,\;\xi_{1}>0,\;\xi_{2}<0.\end{cases} \tag{112}\]
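Assembled end to end, equations (107)–(112) recover the physical root in a few lines. The sketch below reuses `y1_depressed` from the previous sketch; the quartic coefficients are our reconstruction of the scaled charge-balance polynomial defined earlier in the paper (\(c_{3}\), \(c_{2}\) and \(c_{1}\) can be read off equations (108)–(110), and \(c_{0}=-k_{1}k_{2}\) follows from the same charge balance), so the coefficient block should be treated as an assumption.

```python
import math

# Reuses y1_depressed from the sketch following equation (107).

def physical_root(ca: float, cb: float, k1: float, k2: float) -> float:
    """Physical root x of P = 0 via equations (108)-(112), for scaled
    (dimensionless) concentrations ca, cb and acidity constants k1, k2."""
    # Quartic coefficients: our reconstruction of the scaled charge-balance
    # polynomial; c3, c2, c1 match equations (108)-(110), c0 = -k1*k2.
    c3 = k1 + cb
    c2 = -(1.0 + k1*(ca - cb - k2))
    c1 = -k1*(1.0 + k2*(2.0*ca - cb))
    c0 = -k1*k2

    p_bar = c1*c3 - c2**2/3.0 - 4.0*c0                          # eq. (89)
    q_bar = (8.0*c0*c2/3.0 + c1*c2*c3/3.0
             - 2.0*c2**3/27.0 - c1**2 - c0*c3**2)               # eq. (90)
    delta_dc = -4.0*p_bar**3 - 27.0*q_bar**2

    y1 = y1_depressed(p_bar, q_bar) + c2/3.0                    # eq. (108)
    t1 = math.sqrt(0.25*c3**2 - c2 + y1)                        # eq. (109)
    num = c3*y1 - 2.0*c1

    if delta_dc > 0:
        pick_x1 = True                                          # eq. (112), case 1
    else:
        s = math.sqrt(-delta_dc/27.0)/2.0
        xi1, xi2 = -q_bar/2.0 + s, -q_bar/2.0 - s
        pick_x1 = xi1 > 0 and xi2 > 0                           # eq. (112), case 2
    if pick_x1:
        b = 0.5*c3 - t1                                         # root x1, eq. (110)
        return 0.5*(-b + math.sqrt(b**2 - 2.0*y1 + num/t1))
    b = 0.5*c3 + t1                                             # root x3, eq. (111)
    return 0.5*(-b + math.sqrt(b**2 - 2.0*y1 - num/t1))

# Example: maleic acid (pK1 = 1.92, pK2 = 6.23) with equal scaled acid and
# base concentrations (the first equivalence point, n = 1), Kw = 1e-14.
kw = 1e-14
k1 = 10.0**-1.92 / math.sqrt(kw)
k2 = 10.0**-6.23 / math.sqrt(kw)
c = 1e-3 / math.sqrt(kw)
print("pH =", 7.0 - math.log10(physical_root(c, c, k1, k2)))
```

Solving \(P=0\) directly with `numpy.roots` for the same parameters returns the same positive root, which is a convenient consistency check on the closed-form route.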
2307.16535 | Introducing and Interfacing with Cybersecurity -- A Cards Approach | Cybersecurity is an important topic which is often viewed as one that is inaccessible due to steep learning curves and a perceived requirement of needing specialist knowledge. With a constantly changing threat landscape, practical solutions such as best-practices are employed, but the number of critical cybersecurity-related incidents remains high. To address these concerns, the National Cyber Security Centre published a Cybersecurity Body of Knowledge (CyBOK) to provide a comprehensive information base used to advise and underpin cybersecurity learning. Unfortunately, CyBOK contains over 1000 pages of in-depth material and may not be easy to navigate for novice individuals. Furthermore, it does not allow for easy expression of various cybersecurity scenarios that such individuals may be exposed to. As a solution to these two issues, we propose the use of a playing cards format to provide introductory cybersecurity knowledge that supports learning and discussion, using CyBOK as the foundation for the technical content. Upon evaluation in two user studies, we found that 80% of the participants agreed the cards provided them with introductory knowledge of cybersecurity topics, and 70% agreed the cards provided an interface for discussing topics and enabled them to make links between attacks, vulnerabilities and defences. | Ryan Shah, Manuel Maarek, Shenando Stals, Lynne Baillie, Sheung Chi Chan, Robert Stewart, Hans-Wolfgang Loidl, Olga Chatzifoti | 2023-07-31T10:01:42Z | http://arxiv.org/abs/2307.16535v1

# Introducing and Interfacing with Cybersecurity - A Cards Approach
###### Abstract.
Cybersecurity is an important topic which is often viewed as one that is inaccessible due to steep learning curves and a perceived requirement of needing specialist knowledge. With a constantly changing threat landscape, practical solutions such as best-practices are employed, but the number of critical cybersecurity-related incidents remains high. To address these concerns, the National Cyber Security Centre published a Cybersecurity Body of Knowledge (CyBOK) to provide a comprehensive information base used to advise and underpin cybersecurity learning. Unfortunately, CyBOK contains over 1000 pages of in-depth material and may not be easy to navigate for novice individuals. Furthermore, it does not allow for easy expression of various cybersecurity scenarios that such individuals may be exposed to. As a solution to these two issues, we propose the use of a playing cards format to provide introductory cybersecurity knowledge that supports learning and discussion, using CyBOK as the foundation for the technical content. Upon evaluation in two user studies, we found that 80% of the participants agreed the cards provided them with introductory knowledge of cybersecurity topics, and 70% agreed the cards provided an interface for discussing topics and enabled them to make links between attacks, vulnerabilities and defences.
Keywords: security, playing cards, knowledge base, design
## 1. Introduction
Cybersecurity remains a fundamental concern to users of computer systems, with security often being overlooked due to its portrayal as a subject pertaining to issues of perceived technical difficulty, steep learning curves and a requirement of specialist knowledge and/or expertise (Hans and Schuster, 2010; Schuster, 2011; Schuster, 2011). While the security foundations of computer-based systems have improved over time, limiting the potential for, or mitigating the effects of, attacks arising from vulnerabilities requires the involvement of all users of these systems (e.g. the general population) and is a necessary step to improve the understanding of cybersecurity (Krishnan, 2017). Moreover, the increasing complexity and diversity of the threat landscape for cybersecurity (Hans and Schuster, 2010; Schuster, 2011; Schuster, 2011) further substantiates the need for improving the understanding of cybersecurity. In the domain of software engineering, practical solutions to achieve this include activities such as the documentation of vulnerabilities of computer systems and the updating of the respective knowledge bases. Open databases such as the Common Vulnerabilities and Exposures (CVE) (Ball et al., 2016) and the Common Weakness Enumeration (CWE) (Krishnan, 2017) have played a pivotal role in raising awareness of known vulnerabilities such that appropriate defensive measures can be developed or updated. While these reference databases are well maintained, they may still appear complex to the general population and may contribute to the already existing problems of inaccessibility and specialist requirements that surround the topic of cybersecurity. Because of this, several knowledge bases have been developed to inform and underpin cybersecurity education and training (Ball et al., 2016; Stals et al., 2017; Stals et al., 2018), which aim to address these issues at a high-school or higher-education level. Although they may be a useful learning resource for providing key cybersecurity knowledge, their primary purpose is to be used by those who are already knowledgeable in cybersecurity to develop further curricula to teach those who may have little-to-no knowledge of cybersecurity. Furthermore, among these knowledge bases, there may be some key topics which are not covered, and their format and density may not be perceived as accessible to novice users. Thus, this may
directly impact not only one's ability to understand key cybersecurity topics but also the ability to make links between these topics to capture real-world cybersecurity scenarios. Ultimately, the weaknesses of existing solutions regarding limitations of accessibility, steep learning curves and a perceived requirement of specialist knowledge/expertise must be ameliorated by a new solution that provides an answer to the following research questions. Specifically, can a new solution:
1. Provide introductory cybersecurity knowledge to novice users?
2. Provide material for expressing interpretation and documentation of key cybersecurity topics, which can support independent learning and self-efficacy?
3. Act as an index for the CyBOK knowledge base which provides an interface for discussion on key cybersecurity topics?
4. Provide links between key cybersecurity topics, allowing the generation of concepts which can capture various cybersecurity scenarios?
In this paper, we provide an answer to these research questions by proposing the use of a playing cards format as a medium to provide: introductory knowledge of key cybersecurity topics, acting as an index for the CyBOK knowledge base (Cyp, 2016; 2017); support independent learning and self-efficacy; and allow for links to be made between key cybersecurity topics to capture real-world scenarios. The novelty of this work is three-fold. We first present the design principles for the cybersecurity cards to address these limitations. Second, we provide an evaluation of the cards in a workshop with masters-level students to understand whether the cards satisfy the aforementioned provisions. The output of this evaluation is a second revised deck of the cybersecurity cards. Third, we carried out the same workshop but with a different demographic to the first, with participants at late primary and early secondary school level (ages ranging from 10 to 15 years old, mean 12.8 years).
The remainder of this paper is organised as follows. Section 2 provides background and related work, as well as the selection procedure we applied to the production of our cybersecurity cards using the CyBOK knowledge base and the limitations of other approaches. The design principles applied to the cybersecurity cards, as well as the initial implementation (Version 1), are described in Section 3. An evaluation of Version 1 of the cards is provided in Section 4. In Section 5, we present Version 2 of the cards as a result of the findings from the first evaluation, as well as a further evaluation of the second version of the cards in Section 6. In Section 7, we provide a discussion of the results from both evaluations and the paper concludes in Section 8.
## 2. Background
The need for practical and easy-to-learn cybersecurity learning material is a constant problem which stems from the evolving nature of cybersecurity and computing technologies as the number of connected users and devices scales. In recent years, the number of critical cybersecurity incidents has increased significantly, correlating with increasing numbers of online users during the Covid-19 pandemic, for example, as well as an increase in the adoption of various connected computer systems in day-to-day activities. Among these incidents, research shows that around 95% of cybersecurity breaches occur as a result of human error (Bah and others, 2017) and that organisations lack the sophistication, interest and/or knowledge to handle these threats (Bah and others, 2017; 2017).
It has been shown that those in cybersecurity careers require a set of skills, involving the abilities to carry out various tasks at any time in non-traditional environments, and adapt to the dynamic nature of these environments (Bah and others, 2017). In the domain of software engineering, basic cybersecurity training such as password best-practices and multi-factor authentication are employed for individuals to conform to, with the aim of alleviating concerns and mitigating the
potential for liabilities that arise as a result of cybersecurity-related incidents [11, 42, 52]. It has been identified that a large number of Android applications contain security-related code snippets copied and pasted from Stack Overflow, of which nearly 98% contained at least one insecure code snippet [23]. The value of security information depends strongly on its source [40], and reputable information sources are only useful so long as they are well understood and perceived as actionable [41, 47]. While sites such as Stack Overflow are reputable for providing actionable solutions, it is clear that the security of the solutions is not well understood. Novice individuals, such as those who write and/or deploy software code without formal software engineering training, may not fully comprehend the impact of not adhering to security best-practices.
To address this, various curricula guidelines and knowledge frameworks have been developed for cybersecurity, covering a range of fundamental topics from software and hardware security to networks and cyber-physical systems. The Joint Task Force (JTF) on Cybersecurity Education proposed a draft of curricular guidance on cybersecurity to support educational efforts in the USA [17]. They designed a framework model for a body of knowledge that covers six knowledge areas, spanned by several cross-cutting concepts, targeting specific disciplines and application areas that pertain to the demographic of cybersecurity professionals. The National Initiative for Cybersecurity Education (NICE) [38] is a cybersecurity workforce framework, developed by NIST in the USA, which aims to provide a foundation for describing and sharing information about knowledge, skills and abilities in cybersecurity to strengthen an organisation's cybersecurity. The National Cyber Security Centre (NCSC) in the UK proposed a Certified Master's Program that defines several pathways to address knowledge and skill gaps in cybersecurity education, which describe what topics must be covered and to what depth [5]. While all these frameworks tend to agree on the key cybersecurity topics that must be understood, they each place greater emphasis on a subset of topics. For example, NICE covers a wide range of key topics but has gaps, such as topics related to cyber-physical systems and human factors. The NCSC Certified Master's Program does not place much emphasis on attacks and defences, but in contrast focuses on key topics such as software security.
The Cybersecurity Body of Knowledge (CyBOK) is a knowledge base developed by the University of Bristol and funded by the NCSC. It was developed to encompass the wide variety of topics within the field of cybersecurity and to show that it also spans multiple disciplines. In practice, it has been successful in providing a framework for NCSC certified degrees and academic/professional training programmes [2]. CyBOK is decomposed into 21 knowledge areas (KAs) (as of version 1.1), each introduced by a reference document and a set of topics presented as a branch of the overall _Knowledge Tree_ (Figure 1) [3]. Each of these knowledge areas is organised into a hierarchy of between 3 and 5 categories that present as a tree of topics. For each KA in CyBOK, there are a number of chapters that form an encyclopedic collection of knowledge of key concepts that are based on state-of-the-art academic literature. These key concepts are known as _Topics_, with some _Topics_ decomposed further into a set of more specialised subjects (_Sub-Topics_). For example, the category of _Software Security_ in the _Software and Platform Security_ KA contains 4 overarching themes, split into 20 sub-topics (e.g. structured output generation vulnerabilities), each of which describes further specialised information (e.g. SQL injection).
It has been shown that, in comparison with other knowledge frameworks, CyBOK covers a wider range of knowledge areas and does not have gaps that are present within other frameworks [26]. While CyBOK facilitates a body of knowledge which attributes to the production of material for cybersecurity education and professional training, there are some weaknesses which may render it an inaccessible resource to more novice individuals such as those in the domain of software engineering who write or deploy code with no formal software engineering training. First, the links between meaning and relationships among topics and sub-topics vary across the entire Knowledge Tree, which
prevents easy expression of various cybersecurity scenarios. Second, the material across the CyBOK knowledge base and its indexing structure is not easy to traverse for novice users. Gonzalez et al. [25] show that it would be difficult for novice individuals to infer the links between various topics, given that some follow a single predominant theme while others span several topics themselves. Ultimately, to support novice users as well as those more experienced, key cybersecurity knowledge provided by knowledge bases such as CyBOK requires adequate presentation that can facilitate independent learning whilst also providing a suitable interface for discussion of various cybersecurity scenarios, making the links between meaning and relationships among topics.
Aside from knowledge frameworks, cybersecurity information has also been presented in other ways. Capture the Flag (CTF) activities provide a series of competitive exercises used to find vulnerabilities in computer systems and applications and have been shown to be a valuable learning tool [44; 45; 48]. Thomas et al. [47] propose the use of a collectible card game (CCG) as a means of teaching cybersecurity to high school students, given the cultural familiarity of card games across all age groups and their encouragement of competitive strategy and of mistake-making as a way of learning [49]. Anvik et al. [13] propose the use of a web-based card game for learning programming and cybersecurity concepts, using simple vocabulary to create ubiquitous learning experiences. Denning et al. [21] propose the use of a tabletop card game, Control-Alt-Hack, with the aim of providing awareness training for cybersecurity, arguing that playing card games can provide a reachable foundation for providing digestible cybersecurity information to large audiences. However, while these gamified approaches show various levels of success, there are limitations. First, many of these different approaches have different target personas and goals. Second, card game approaches such as Control-Alt-Hack [21] do not cover a broad range of key cybersecurity topics, such as those identified by knowledge frameworks such as CyBOK, and do not adequately highlight the links between vulnerabilities, attacks and defences. Specifically, attacks are typically highlighted first, which does not help users understand how attacks present themselves (opportunistic vulnerability targeting) and how to protect against them. Third, while CTF activities, for example, are beneficial in this aspect [45; 48], a key disadvantage pertains to novice users: competitions rely on technical expertise and the ability to traverse computer systems using various command-line tools and other bespoke applications [24], or require (at a minimum) a basic understanding of cybersecurity concepts in order to progress in finding vulnerabilities [37].
Figure 1: Partial View of CyBOK 1.1 Knowledge Tree [20]. The knowledge areas and topics that are highlighted show the subset of the CyBOK knowledge base that was selected due to the link to the domain of software engineering.
## 3. Cybersecurity cards - Version 1
To answer the research questions presented in Section 1, we propose a novel approach to represent key cybersecurity knowledge, utilising specially designed playing cards. In recent years, it has been shown that specially designed cards used as a tool for education contribute to positive outcomes in the space of learning, attitudes and critical thinking skills (Stein
(Figure 2), attack and vulnerability cards are encased in a square border, with attack cards filled with a solid red colour while vulnerability cards are signalled with a diagonal line separating white from red. The defence cards resemble an octagonal shape (akin to a shield) with a white fill colour. The diagonal separation of white and red in vulnerability cards aims to highlight vulnerabilities as the central focal point. General cards in the deck include: a title representing the KA topic; a type represented by a symbol (see 1); and a description related to the topic it represents. Detailed cards are assigned a unique identifier in the form \(a\_Bi\) in the top right corner 2, where \(a\) refers to the class of the Detailed card, \(B\) to the first letter of the General card it is categorised under, and \(i\) to its index number in the set of \(N\) Detailed cards for the category \(B\). The symbol for the Detailed card's category is made opaque in the center of the card, behind the description of the Detailed card. Attack cards also contain a description of the impact of the attack. Defence cards contain a target symbol to help further identify the vulnerability they aim to protect from attacks. Vulnerability cards also describe an attack vector with associated attack card identifiers, as well as the consequence of the vulnerability with associated defence card identifiers 3. The aim of the identifiers is to provide a means to make links between key cybersecurity topics by allowing users to capture various attack-defence scenarios that revolve around specific vulnerabilities (_RQ_4). In total, the deck comprises 124 cards: 30 vulnerability cards, 32 attack cards and 47 defence cards, each of which is categorised under one of the 15 General cards, plus the 15 General cards themselves.
#### 3.2.1. Relationships Between Cards
Given that cybersecurity stems from conflict between attackers and defenders targeting one or more vulnerabilities, creating a capturable _many-to-many relationship_ is essential when introducing cybersecurity concepts. Thus, the cybersecurity cards should represent this relationship, where multiple attacks can target multiple vulnerabilities that can, in turn, be mitigated or countered by multiple defences. While CyBOK does contain information about these relationships, it is hard to infer these from implicit references within the reference material. Thus, the cards aim to act as an index for the CyBOK knowledge base by providing a means for presenting these implicit relationships in a manner that supports independent learning and self-efficacy (_RQ2_, _RQ3_). In Figure 2, we can see that the vulnerability card "_Blind Trust of User Input_" is linked to a set of identifier codes 3 for a number of
Figure 2. Organisation and Design of Cybersecurity Cards (Version 1). General cards contain the icon of its corresponding class (1), with Detailed cards containing an identifier code (2) which is used to make links between attacks, defences and vulnerabilities (3). The arrows demonstrate the links between the presented attacks, vulnerabilities and defences.
related attacks and defences. This example shows a link to the _"Command Injection"_ and _"Memory Thief"_ attacks (_RQ4_). The first attack involves the execution of unauthorised commands on a system (which may be input by a user) and the second involves the stealing of confidential information from memory. In contrast, two defence examples are shown as links to the attacks and the vulnerability: _"Code Assertions"_ and _"Input Sanitisation"_, which involve monitoring code execution and sanitising user input to eliminate malicious code or escape characters, respectively.
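To make this many-to-many structure concrete, the toy Python snippet below models the linkage of Figure 2 as a small data structure. The card titles come from Figure 2, but the identifier strings are simplified stand-ins for the \(a\_Bi\) codes of the deck, chosen purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    ident: str
    kind: str                      # "attack" | "defence" | "vulnerability"
    title: str
    links: set[str] = field(default_factory=set)   # identifiers of linked cards

# The example linkage of Figure 2 (identifiers are illustrative).
deck = {c.ident: c for c in [
    Card("v_T1", "vulnerability", "Blind Trust of User Input",
         {"a_I1", "a_M1", "d_V1", "d_V2"}),
    Card("a_I1", "attack", "Command Injection"),
    Card("a_M1", "attack", "Memory Thief"),
    Card("d_V1", "defence", "Code Assertions"),
    Card("d_V2", "defence", "Input Sanitisation"),
]}

def scenario(vuln_id: str) -> dict[str, list[str]]:
    """Group the cards linked to a vulnerability into the attack and defence
    sides of a scenario, as a player would when laying out the cards."""
    linked = [deck[i] for i in deck[vuln_id].links]
    return {kind: sorted(c.title for c in linked if c.kind == kind)
            for kind in ("attack", "defence")}

print(scenario("v_T1"))
# {'attack': ['Command Injection', 'Memory Thief'],
#  'defence': ['Code Assertions', 'Input Sanitisation']}
```

Grouping a vulnerability's links in this way is essentially what a player does when laying out an attack–defence scenario around a single vulnerability card.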
## 4. Evaluation
The logical next step after designing the cybersecurity cards was to conduct a user evaluation to see if the participants could use the cards, and whether the cards provided a clear communication of these topics and an interface for discussing them.
### Methodology
Participants were recruited to take part in a workshop, advertised to university students, where they interacted with the cybersecurity cards in order to analyse and devise a cybersecurity scenario. Ethical approval was granted by the University ethics committee for the workshop recruitment and procedure before the study took place. A participant information sheet was provided via a link in the online registration form, as well as being used to obtain written consent from participants. Furthermore, the workshop was held in Spring 2022 and we had a special clause added to state we would follow local government guidelines on the Covid-19 protocol (Beng et al., 2022), which provided guidance on reducing risks from transmitting Covid-19, such as staying home if participants had symptoms or tested positive for Covid-19. In total, we managed to recruit 13 participants, but only 11 provided usable data. Upon registration for the workshop, we found that the 11 participants were masters-level students who were between 22 and 35 years old (mean 26.3 years). The participants were enrolled in a conversion masters program in computer science and came from either an engineering (6), mathematics (3), computing (1) or biology (1) background. When asked if they had any prior experience or skills with cybersecurity, only one participant mentioned they had experience with anomaly detection, while the others stated they had no prior experience or skills in cybersecurity. Further, when asked if they had any experience or skills with secure coding, 8 of the participants said they had none, 2 responded neutrally and 1 said they had some experience. We hypothesise that the low number of participants is attributable to the Covid-19 global pandemic, which left individuals with more dynamic priorities and commitments.
Table 1. Workshop structure showing phases, activities and durations (min).

| Phase | Activity | Rationale | Duration (min) |
| --- | --- | --- | --- |
| Prep | Participant information sheet | Inform participants | 5 |
| Prep | Informed consent form | Informed consent | 5 |
| Prep | Demographic questionnaire | Participant profiles | 5 |
| Activity | Introduction to workshop theme | Introduce theme | 10 |
| Activity | Introduction to cybersecurity cards, with experts roaming around the room providing help when needed | Familiarity with cards | 10 |
| Activity | Theme exploration – devising attack and defence scenarios | Cybersecurity trichotomy | 20 |
| Post | Cybersecurity questionnaire | Evaluate cards | 10 |
An overview of the workshop structure can be seen in Table 1. Participants were given the information sheet, consent form and demographic questionnaire to fill out in advance of the workshop, with copies of the consent form brought on the day as backups in case they forgot to send or bring theirs. When they arrived they were split into three groups (consisting of three or four participants). During the workshop, participants were first introduced to the theme of the workshop, which was _Code Security_ - the practice of developing software that embeds security and best-practices into the code. After this, they were given an introduction to the deck of cybersecurity cards, where they had time to explore and familiarise themselves with the various attacks, defences and vulnerabilities. Cybersecurity experts that were present roamed around the room and were able to answer any questions participants may have had when going through the deck. After this, participants used the cybersecurity cards to find vulnerabilities and devise attack and defence scenarios within the theme of the workshop. At the end of the workshop, each participant was individually asked to fill in a self-assessment questionnaire online (Appendix A) on how using the cards contributed to the themes described in Section 4.2. The rationale behind this questionnaire was to collect both quantitative and qualitative data regarding a participant's experience of interacting with the cybersecurity cards. The quantitative aspect of the questionnaire (Q1) makes use of a 7-point Likert scale, ranging from strongly disagree to strongly agree, to assess the performance and effort from participants while using the cards. The qualitative aspect (Q2-5) aims to understand motivations and thoughts behind using the cards. Questions 2 and 4 gave participants a series of options (checkboxes), each coupled with a follow-up question (Q3 and Q5 respectively) asking them to further elaborate on their choices. Two experienced postdoctoral researchers independently analysed and coded the qualitative responses, grouping them into themes, which were then discussed systematically and final agreements on codes were made in consensus. This analytical technique has been applied successfully in various bodies of work (Shi et al., 2019; Wang et al., 2020). The data from questionnaires is presented in italics within quotation marks, alongside a participant ID (e.g. W1P1 for this first workshop) where relevant.
### Results
An overview of the results from this evaluation can be seen in Figure 3. For subsequent discussion, the results from the questionnaires will initially follow three themes, noted by the title of each subsection.
#### 4.2.1. Providing a Knowledge Base
The first theme aimed at determining whether our cybersecurity cards provided users with introductory cybersecurity knowledge, whilst also supporting learning in a well-documented manner.
Figure 3. Results of Workshop 1. This figure shows an overview of the responses for Q1 in Appendix A. (1) Providing a knowledge base: (a)–(d); (2) Independent learning and self-efficacy: (e)–(f); (3) Interface for discussion: (g)–(h).
Providing a knowledge base looked into four aspects: providing knowledge of individual concepts, cybersecurity terminology, wider scope of topics, and the trichotomy of attacks, defences and vulnerabilities (items (**a-d**) in Question 1 of the questionnaire in Appendix A). Overall, we observed that our cards were able to effectively provide key cybersecurity knowledge, adapted from the CyBOK knowledge base, in an accessible manner. Interestingly, with regard to individual concepts and the relationships between attacks, defences and vulnerabilities, three participants strongly agreed with this. Furthermore, four participants also strongly agreed that the cards were able to provide them with knowledge about wider scope. For cybersecurity terminology, only two responded with somewhat disagree, which we believe may be due to the text on cards using too much technical terminology. It is clear that the cards have indeed achieved a positive result regarding providing users with knowledge of fundamental cybersecurity knowledge (_RQ1_), as well as providing links between topics that allows the generation of concepts which capture the relationships between attacks, vulnerabilities and defences in scenarios pertaining to the theme of _Code Security_ (_RQ4_).
#### 4.2.2. Independent Learning and Self-Efficacy
With regard to independent learning and self-efficacy (**e,f**), we found that 8 participants agreed the cards enabled them to undertake independent learning, with three strongly agreeing. Furthermore, we found that 9 participants agreed the cards provided them with access to key cybersecurity knowledge when one of the rotating experts was not present in the group. In both cases, only a single participant disagreed. Ultimately, the results demonstrate that our cards approach provides a means for expressing interpretation and documentation of key topics that supports independent learning and self-efficacy (_RQ2_).
#### 4.2.3. Providing an Interface for Discussion
The final theme looks at whether the cards provide users with an interface for discussing key cybersecurity topics (**g,h**). Interestingly, we found that more participants agreed that they could hold discussions on these topics with the expert (9) than with other participants in their group (7). However, while both cases show a majority in agreement, more participants remained neutral rather than disagreeing when discussing topics with others in their group. While we see a positive outcome regarding the cards providing an interface for the discussion of key cybersecurity topics (_RQ3_), it is important to determine what was not clear and why this was the case, as well as understanding how the cards could be improved.
#### 4.2.4. Understanding Drawbacks Of Using The Cards
The next part of the questionnaire involves questions that were designed to help uncover and understand any limitations of the cybersecurity cards, such that improvements can be made to better fulfil the goals of the research questions described in Section 3. The first step was to determine which category or subset of the cybersecurity cards the participants did not use and why this was the case. We found that the least used cards were the general and detailed defence and vulnerability cards. When asked why the selected cards were not used, one participant stated that _"general cards gave some idea about the content"_ (W1P6) and another stated that they were only _"engaged in attack and defence"_ (W1P11). With this said, however, most participants described that they had used all of the categories, for example stating they had used _"at least one card [from] each"_ (W1P1) or only _"concentrate[d] on a few cards"_ (W1P9). This suggests that while the cards which were said not to have been used may not have been critical to discussions about certain topics, they may have still been looked at and thought about. We then asked participants how the cards could be improved. We found that 5 participants recorded that the number of cards in the deck was too high. This may be due to the use of a physical medium and the number of categories and types of each General card, and is further suggested by 3 other participants stating that there were too many types or categories. Interestingly, 5 participants stated that both the colour coding of the types/categories was not clear, as well as the relationships between the cards, with one participant describing that _"Threat cards [were]
difficult to handle/understand"_ (W1P7). This may also correspond with the feeling that the number of cards was too high, but also with a perceived difficulty of understanding the links between the cards and the terminology used. In contrast to this, however, one participant mentioned that the _"Cards provides an entry point for more detailed scenarios of cybersecurity and helps to create relationship between attack and defense situations more clear"_ (W1P4). Interestingly, 3 participants recorded that the information on the cards was too abstract, with no participants stating that it was too detailed. Thus, this helps reinforce the understanding that the difficulty relating to understanding the cards may be linked to a lack of clarity on the relationships between them.
## 5. Cybersecurity cards - Version 2
Upon review of the results from the evaluation questionnaire, the cybersecurity cards were in places redesigned and the new deck is hereafter referred to as _Version 2_. As well as providing the subset of the Version 1 cards for viewing, the equivalent subset for Version 2 of the cards have also been made available and can be found online2. The full set of cards will be made publicly available under the CC BY-NC-SA Creative Commons license after the research project ends in January 2024.
Footnote 2: [https://anonymous.4open.science/r/cybersecurity_cards-9P00/](https://anonymous.4open.science/r/cybersecurity_cards-9P00/) (anonymous repository for double-blind reviewing purposes)
### Structural Redesign
One of the limitations that required addressing was the number of cards in the deck, which some participants felt was too high and _"difficult to handle and understand"_ (W1P7). In contrast with a standard deck of playing cards used for the likes of Solitaire and Poker, which has 52 cards, Version 1 of our cards has more than double this amount (109), resulting in it being perceived by participants as hard to handle. Thus, reducing the number of cards will likely be better received, whilst still meeting our requirements. In Version 2, the first major revision is a reduction of the number of cards in the deck. The first decision made with regard to this was the removal of General cards (types of attack, defence or vulnerability), given that participants specified these as the least used cards in the deck. In Version 2, General cards are replaced with a _glossary_ (Appendix B) that provides a description for each type, alongside a symbol associated with the type in the top left of the card to help improve readability and visibility (4 in Figure 4). We also merged some cards whose distinctions were too specific: cards with overlapping topics were identified, and sets of two or more similar attack cards were merged into a single, more general card. For example, the _Smudge Attack_, _Shoulder Surfing_ and _Social Engineering_ cards were merged into _Social Engineering_. This reduces the number of cards in Version 2 to a total of 70, made up of 20 attack, 20 vulnerability and 30 defence cards. By providing a glossary, users can refer to this sheet as a form of guidance when devising particular cybersecurity scenarios. Furthermore, we also added two new classes of vulnerability. First, _Human_ is split into _User_ and _Management_. The _User_ class captures the impact of vulnerabilities from careless or malicious users (e.g. insider attackers), while _Management_ captures bad management practices such as poorly implemented security policies. Second, we provide a _System_ vulnerability type which relates to computer infrastructure, to better capture attacks and defences that target the computer system.
In terms of general design (Figure 4), the red background is made slightly lighter, making it easier on the eyes, with the symbols watermarked in the background of the card also made lighter to maintain focus on the content of the card whilst still highlighting the card type. Further, the language of the text in the cards has been improved with the aim of improving clarity and reducing the technical jargon that some participants had struggled with. For example, "attack
vectors" and "consequences" were replaced with "related attacks" and "related defences" respectively. Finally, defence cards are now more akin to a stop sign to aid with better distinguishing them from the attack cards.
### Relationship Clarification
The next revision made in Version 2 of the cards is a clarification of the links between the cards to highlight relationships between various vulnerabilities, attacks and defences, which participants had described as inadequate. Figure 4 shows a comparison between Version 1 and 2 of the cards. In Version 1, identifier codes in the top right corner were used to distinguish between types of attacks, defences and vulnerabilities 1. The relationships between attacks, defences and vulnerabilities are clarified by linking attack and defence codes within the vulnerability cards 2. In Version 2, card identifiers are replaced with a symbol and card number related to each type, positioned in the top left corner 4. The card number is the number of the card within a type. For example, _Command / Data Injection_ is card number 2 for the _Injection_ attack type. The links between vulnerabilities follow a similar approach to Version 1, but with the codes replaced by pairs of symbols with ID numbers, and related vulnerabilities are detailed within attack and defence cards 3. Links to vulnerabilities are given to encourage users to use intuition to make links by learning from card content, rather than looking at explicit relationships.
Figure 4. Design Comparison Between Version 1 and 2. Version 1 identifier codes at the top right of each card (1) have been replaced by icons corresponding to each card's class and a unique class index number (3). Links between cards using codes (2) have been replaced with the corresponding icons in vulnerabilities (4), and links between the presented cards are demonstrated using the arrows.
Evaluation - Version 2
To evaluate the second version of the cards, we make use of the same workshop format and methodology (Table 1) as described in Section 4. In this second workshop, we recruited 23 participants with ages ranging from 10 to 15 years old (mean 12.8 years), from either senior primary school or early high school. When asked to describe any experience and/or skills they may already have had with cybersecurity, one participant stated they knew the basics, one stated that passwords are important, one stated they knew about hacking, and all other participants said they had either no experience (17) or were unsure (3). When asked about experience or skills with coding, two participants stated they had limited experience with Python programming, six stated they had used Scratch or other block-based visual programming tools, and the remainder (15) had no programming experience. Only 2 participants stated they had experience with secure coding, with one neutral and the remainder either disagreeing or strongly disagreeing with this. While the demographic of participants in this second workshop differs from the first workshop, with participants in this workshop being younger, their experience with cybersecurity and code security remains similar, and the activity within both workshops was the same, following the theme of _Code Security_. After the workshop, the participants were given the same questionnaire to answer as those received in the first workshop (Appendix A). For subsequent discussion, we will discuss the results of the evaluation of the second version of the cards under the same themes as for the first version, presented in Section 4.2. The data from questionnaires is presented in italics within quotation marks, alongside a participant ID (e.g. W2P6 for this second workshop) where relevant. An overview of the results for the second evaluation can be seen in Figure 5.
### Results
#### 6.1.1. Providing a Knowledge Base
For the first theme of determining whether the cards provided introductory cybersecurity knowledge that supports learning in a well-documented manner (\(RQ1\)), we look at the four items (**a-d**) in Question 1 of the questionnaire in Appendix A. For individual concepts **(a)**, 17 of the participants agreed that the cards provided them with knowledge of cybersecurity concepts, terminology and topics (6 somewhat agree, 4 agree and 7 strongly agree), with 3 participants scoring neutral and 3 somewhat disagreeing with this. For wider scope **(b)**, 17 participants agreed (3 somewhat agree, 6 agree and 8 strongly agree) that the cards provided them with this knowledge, with 2 disagreeing and 4 remaining neutral. For relationships between attacks, defences and vulnerabilities **(c)**, 16 agreed (6 somewhat agree, 3 agree and 7 strongly agree) that the cards facilitated the understanding of the links between cards that capture the vulnerability-attack-defence trichotomy present in cybersecurity (_RQ4_), with 4 neutral and 3 disagreeing. Finally, with regard to cybersecurity terminology **(d)**, 14 participants agreed (4 somewhat agree, 4 agree and 6 strongly agree) that the cards provided them with knowledge on terminology. Three participants disagreed with this and the remaining 6 were neutral.

Figure 5. Results of Workshop 2. This figure shows an overview of the responses for Q1 in Appendix A.
#### 6.1.2. Independent Learning and Self-Efficacy
The theme of independent learning and self-efficacy relates to items (**e, f**) in Question 1. The first item asked participants if the cards helped them undertake independent learning of cybersecurity **(e)**. We found that for the second version of the cards, 11 participants agreed (2 somewhat agree, 4 agree, 5 strongly agree), 7 were neutral and 5 disagreed (1 strongly disagreeing). For the second item **(f)**, we asked if the cards provided access to cybersecurity knowledge without the presence of an expert. We found that 13 participants agreed (3 somewhat agree, 3 agree and 7 strongly agree), with 5 participants neutral and 5 disagreeing (1 strongly disagreeing). These results suggest that the second version of our cards supports independent learning and self-efficacy (_RQ2_).
#### 6.1.3. Providing an Interface for Discussion
The final theme explored relates to the cards providing an interface for discussing key cybersecurity topics (_RQ3_) and relates to items (**g, h**) in Question 1. The first question related to this theme asked participants if the cards enabled them to discuss cybersecurity topics with an expert **(g)**. We found that 14 participants agreed with this (3 somewhat agree, 5 agree and 6 strongly agree), with 6 participants remaining neutral and 3 disagreeing (1 strongly disagreeing). The final question in this theme asked them if the cards enabled them to hold discussions on key cybersecurity topics with others in their group **(h)**. We found that 14 participants agreed with this (4 somewhat agree, 4 agree and 6 strongly agree), with 5 participants neutral and 4 disagreeing.
#### 6.1.4. Understanding Drawbacks of Version 2
As in the first workshop, the next three questions were designed to gather an understanding of any limitations of the design of the cards and whether any improvements could be made. In Question 2, we asked participants whether there were any categories or subsets of cards they did not use in the workshop. This question differs from the questionnaire employed for evaluating Version 1 because the General cards were removed and replaced with a Glossary. Thus, in Appendix A, we refer to _Question 2 (Version 2 Evaluation)_, which includes the Glossary as an option, as well as cards within a specific category for each of the classes. We found that the least used items were the Glossary (7 participants) and Race Condition (7) cards, with the next least used cards being Injection (6), Memory (6), Mitigation (6) and Education (6). Looking at Question 3 to understand why these types of cards (excluding the glossary) in particular were not used, some participants mentioned that they _"didn't need them"_ (W2P8), that they were _"unnecessary"_ (W2P20) or that they _"don't know"_ (W2P9). This was because some participants had decided to focus on a different aspect of cybersecurity. With regard to the glossary, some participants stated that _"nobody reads the glossary"_ (W2P10), suggesting that this was not needed and that, for some participants, the cards alone were enough to facilitate learning and understanding of key cybersecurity topics. Interestingly, some participants stated that they had _"used all the cards"_ (W2P4, W2P6), with some others giving a similar reason yet selecting many (if not all) checkboxes for all cards. In Question 4, we asked participants if there were any other improvements to the cards they would like to suggest. Most participants (14) said _"No"_ to this question; however, one participant suggested using _"simple wording"_, which may correlate with why certain cards (e.g. Race Condition or Memory attacks) were not used. In addition, three participants suggested making the cards _"easier to use"_ (W2P2, W2P14), although they did not elaborate on how this could be achieved.
## 7. Discussion
The aim of this research is to understand how a playing cards approach can provide clear communication of key cybersecurity topics and an interface for discussing them, leveraging the information from the CyBOK knowledge base. To investigate this, we were guided by the research questions proposed in Section 3. The subsequent discussion encompasses the evaluations of both versions of the cards and follows each of the research questions in turn.
#### RQ1: Do the cybersecurity cards provide introductory cybersecurity knowledge to novice users?
Cybersecurity is often overlooked as a subject due to issues such as perceived technical difficulty, steep learning curves and a requirement for specialist knowledge and/or expertise. In industry, for example, the lack of cybersecurity professionals has been linked to a lack of practical cybersecurity content within learning materials [(18)]. While CyBOK aims to rectify this learning gap, traversing the knowledge base and understanding the material requires prior knowledge, as evidenced by the primary usage of CyBOK in the development of higher education programmes [(26)]. Ultimately, this means it may not be considered accessible for novice individuals. In this work, we show that our cybersecurity cards achieved a positive result with regard to providing introductory knowledge of key cybersecurity topics to novice users. Specifically, we found that 82% of participants from the first workshop in higher education agreed with this (Section 4.2.1). In our second workshop, involving late-primary and secondary aged school children, we found that around 70% of them also agreed with this (Section 6.1.1). Both of these results are significant, as participants initially described themselves as having little-to-no cybersecurity experience.
#### RQ2: Do the cybersecurity cards provide material for expressing interpretation of key topics that supports independent learning and self-efficacy?
One of the concerns surrounding cybersecurity relates to the preconception of steep learning curves and a requirement for specialist knowledge and expertise [(15; 16)], a problem that is not adequately managed by the CyBOK knowledge base. From our evaluation of the cards, we found that most of our participants in higher education from the first workshop agreed they were able to understand key cybersecurity topics independently (Section 4.2.2). In the second workshop, we found that half of the primary and secondary participants also agreed with this (Section 6.1.2). It appears that removing the General cards and replacing them with a glossary may have impacted this, backed up by statements from the participants saying that nobody reads the glossary. In terms of self-efficacy, we found that participants in the first workshop agreed that, even without a cybersecurity expert present in the group, they were able to access cybersecurity knowledge solely using our cybersecurity cards. In the second workshop, we found that more participants agreed on this than on understanding topics independently.
#### RQ3: Do the cards act as an index for the CyBOK knowledge base, which provides an interface for discussion on key cybersecurity topics?
CyBOK has been shown to lack depth of cybersecurity knowledge and to be hard to traverse, particularly with respect to practical experience such as discussing key topics, which is essential for mastering cybersecurity skills [(26; 34)]. The use of playing card formats as an alternative approach for learning cybersecurity topics has been shown to be successful, but existing decks are either designed for those with existing cybersecurity knowledge [(44; 45; 48)] or do not leverage a peer-reviewed and well-established knowledge foundation such as CyBOK [(13; 21)]. This is important, as the value, actionability and perception of security information strongly depend on the source [(40; 41)]. While in this work we also propose the use of a playing cards format, we leverage a strong information foundation, and the intended use is not in the context of games, unlike other playing cards approaches. Furthermore, our cards have a single target persona: someone with little-to-no expertise in cybersecurity who writes software code. The first version of our cybersecurity cards showed that the majority of participants agreed they were able to discuss key topics with cybersecurity experts that periodically checked in on them, as well as with other members of their group (Section 4.2.3). For Version 2, we found similar results, with more participants strongly agreeing compared to those in higher education using the previous cards. While in both cases some participants disagreed with this, this may be due to reasons such as simply not wanting to discuss topics with the experts. Interestingly, in the second workshop, we found that more participants disagreed that the cards enabled them to hold discussions with others in their group. This could be due to a lack of interest in doing so, or potentially they simply did not know they could do that. In this work, we demonstrate that our playing cards approach can act as a suitable index for CyBOK and provides an interface for discussing key cybersecurity topics, even for novice or non-technical users.
#### RQ4: Do the cards provide links between key cybersecurity topics, allowing for the capture of various scenarios?
In the evaluation questionnaire, we asked participants whether the cards provided them with knowledge about the relationships between attacks, defences and vulnerabilities (Section 4.2.1). We found that the majority of participants agreed with this. This is further strengthened given that most participants in the first workshop also agreed the cards promoted discussion of key cybersecurity topics with both experts and other members in their groups (Section 4.2.3), with similar results seen in the second workshop (Section 6.1.3). While the CyBOK knowledge base does encapsulate the attack-defence-vulnerability trichotomy, it is difficult to identify whether some topics focus on a single predominant theme or span across various themes (Krishnan et al., 2017). Furthermore, it has been shown that, because of this difficulty, these links can only be identified from series of keywords that are meaningfully extracted via specialised algorithms such as topic model analysis (Krishnan et al., 2017; Krishnan et al., 2017). With regard to other learning approaches for cybersecurity, Capture The Flag (CTF) activities can aid in highlighting links between attacks, defences and vulnerabilities by prioritising the focus on finding vulnerabilities (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). However, the disadvantage of CTF approaches is that they require technical expertise (e.g. using command-line tools) in order to progress in finding vulnerabilities (Krishnan et al., 2017). In this work, we found that those with little-to-no cybersecurity or coding expertise can use our cards to make links across the cybersecurity trichotomy, whilst devising various cybersecurity scenarios in the domain of code security.
### Reported Limitations
From the evaluation of the first version of the cards, we identified some limitations from participant responses. First, some participants felt the overall number of cards was too high (124 cards in total) and the General cards were not used. In the second version, the deck size was reduced to improve physical handling, whilst still adhering to the primary goals, and the General cards were replaced by a glossary. In the second workshop, there were no further suggestions to reduce the size of the deck. Second, the layout and content of the first version of the cards were described as difficult to go through by participants in the first workshop. In Version 2, we improved the contrast of the red colour and the layout of card elements to improve clarity, as well as refining the terminology used to describe each of the cards. In the second workshop, none of the participants suggested any further design changes. Finally, we believed the readability issue in Version 1 of forming links from vulnerabilities was likely due to the identifier codes, and we instead replaced them with a symbol and index number.
### Future Work
A first point of future work would be to revisit the glossary, as some participants indicated that the glossary was not useful and in some cases not used at all. Given that the purpose of the glossary is to improve meaning and uniformity in the usage of technical terminology, one potential improvement is to refine the content of the glossary (e.g. the language). The importance of the glossary can also be better highlighted when using the cards. While the current approach of linking to the glossary using the class symbol (e.g. attack) may be suitable, better engagement with the glossary could be achieved by informing users of its importance when introducing the cards. In the case of digital cards, hyperlinks within cards (e.g. via the symbol) to the glossary could be made explicit for improved accessibility. With regard to the layout and written content, the language used for each of the cards can be further refined. Another design change would be to better visually distinguish the classes of cards. Previous work has studied whether different colours of warnings can affect one's perception of risk (Han et al., 2017; Zhang et al., 2018). For example, the same red colour could be used for attack cards, as red is typically associated with danger and a higher perceived relative amount of risk. An orange or amber colour is typically used for warnings and yellow for caution, which could be used to signal the vulnerability cards. The defence cards could be represented with a blue colour; for example, blue has represented defence in some areas of the military, such as signalling a friendly icon under the NATO APP-6/A affiliation standard (Zhou et al., 2018).
## 8. Conclusion
Cybersecurity is a complex subject area that is constantly changing due to the dynamic nature of vulnerabilities, attacks and defences. Many users who have little-to-no knowledge of cybersecurity are left vulnerable, and the key question of whether best practices are truly understood remains open. Existing knowledge bases such as CyBOK provide key cybersecurity information, but are typically designed to support the development of educational curricula or those with prior knowledge. In this work, we propose an approach leveraging a playing cards format with the goals of introducing cybersecurity topics to novice users, facilitating independent learning and building understanding of the various relationships found in the cybersecurity ecosystem. Upon evaluation, we found that our approach was successful in achieving these goals for a wide age group (10-35 years) of both non-technical users and those with some experience. Using the data from this evaluation, we designed a second version of these cards and further evaluated them, showing that they still meet the proposed requirements. Ultimately, our cybersecurity cards provide a comprehensive and effective tool that allows novice individuals to gain introductory knowledge of cybersecurity, while promoting understanding and independent learning of cybersecurity.
| 2309.15590 | Self-energy correction to energy levels of highly charged ions in a path integral formalism | Self-energy corrections to the energy levels of bound electrons are calculated in the framework of path integrals. We arrive at the full fermion propagator, using methods of functional integrals, in the form of Schwinger-Dyson equation (SDE). From the full fermion SDE, the self-energy corrected propagator is identified and the energy shift is obtained from the poles of the spectral function. The numerical calculations are performed using complex contour integrals and the B-spline representation of basis functions. We identify ions with Lamb shifts observable via modern mass spectrometric methods. | Sreya Banerjee, Zoltán Harman | 2023-09-27T11:45:18Z | http://arxiv.org/abs/2309.15590v1 |

# Self-energy correction to energy levels of highly charged ions in a path integral formalism
###### Abstract
Self-energy corrections to the energy levels of bound electrons are calculated in the framework of path integrals. We arrive at the full fermion propagator, using methods of functional integrals, in the form of the Schwinger-Dyson equation (SDE). From the full fermion SDE, the self-energy corrected propagator is identified and the energy shift is obtained from the poles of the spectral function. The numerical calculations are performed using complex contour integrals and the B-spline representation of basis functions. We identify ions with Lamb shifts observable via modern mass spectrometric methods.
## I Introduction
Radiative corrections to energy levels have been a focal point of theoretical and experimental studies with atoms and, more recently, highly charged ions (HCI). Experimental advances in the production of HCI and in the measurement of their properties with unprecedented accuracy (see e.g. Refs. [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]) call for versatile theoretical frameworks for the study of such systems. Calculations pertaining to radiative corrections in quantum electrodynamics (QED), most importantly the self-energy (SE) effect, have been rigorously developed by a multitude of authors [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. Historically, one of the first such studies was performed by Gell-Mann, Low and Sucher using the adiabatic \(S\)-matrix method [40; 41]. Later, a plethora of theoretical formalisms based on Green's functions were developed [42; 43; 44; 45; 46; 47; 48; 49].
In this Letter, we arrive at the SE corrected fermion propagator in the Furry picture using the method of functional integrals conceived by Feynman [50] and developed further by several authors, both in non-relativistic and relativistic quantum mechanics [51; 52; 53; 54; 55; 56; 57; 58; 59]. The SE correction to energy levels is evaluated for the first time in the path integral framework. We derive the dressed bound-fermion propagator using the Schwinger-Dyson equations (SDE). The derivation of the energy shift induced by the interaction of the bound fermion with its own electromagnetic field is performed by separating the expression into the so-called zero-, one-, and many-potential terms [25], each of which is defined using perturbative path integrals. The finite contributions of these individual terms are calculated numerically using complex contour integrals, extensively worked out in Refs. [25; 27; 34]. Computations are performed using the known B-spline representation of bound-electron basis states with existing numerical methods [60].
The introduction of functional methods in atomic physics is also motivated by the prospect of including non-electromagnetic interactions in precision theory. For example, hadronic vacuum polarization effects have been calculated by means of a quantum chromodynamic (QCD) Schwinger-Dyson approach [61; 62]. The improvement of experimental accuracy may in future necessitate the inclusion of such QCD corrections in atomic spectra [63; 64; 65]. Prospects of new-physics searches with low-energy atomic precision experiments (see e.g. [2; 66; 67; 68; 69; 70]) also suggest employing a versatile formalism enabling the description of various types of exchange bosons.
## II Schwinger-Dyson equation for the bound-fermion propagator
We begin by deriving the complete expression for the dressed bound-fermion propagator. This is accomplished by defining the SDE for the fermionic propagator using path integrals, in analogy to Ref. [71]. The QED Lagrangian is given as
\[\mathcal{L}_{\rm QED}(x)=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2\xi}(\partial^{\mu}A_{\mu})^{2}+\bar{\psi}(x)(i\not{D}-m)\psi(x)-\bar{\psi}(x)e\gamma^{\mu}A_{\mu}(x)\psi(x)\,, \tag{1}\]
where \(D_{\mu}=\partial_{\mu}+ie\mathcal{A}_{\mu}(x)\), \(\mathcal{A}_{\mu}\) is the field of the nucleus; \(A_{\mu}\) is the gauge-field operator for the photon field, \(\psi(x)\) is the field of the electron, \(m\) is its bare mass, \(e\) is the elementary charge, and \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}\) is the electromagnetic field operator or the curvature of the field. The \(\gamma^{\mu}\) are the usual Dirac matrices and \(\mu,\nu\in\{0,1,2,3\}\) represent the Lorentz indices. We apply the background field method [72; 73; 74], where we consider the external field to be the classical background field of the nuclear charge, which is gauge invariant, and the photon gauge field is treated as a fluctuation whose gauge has been fixed, as seen in the second term of Eq. (1), where \(\xi\) is the gauge-fixing coefficient. We do not need to concern ourselves with fixing the gauge for the external field, since the effective action in the presence or absence of a classical background field is equivalent, and hence the choice of gauge plays no role. We can also safely ignore this gauge-fixing term, since we are interested in deriving the electron propagator in the presence of an external field, and it does not contribute to the effective action of the theory.
The generating functional is constructed using the above Lagrangian, the Grassmann-valued sources \(\eta\) and \(\bar{\eta}\) for the fermion fields \(\bar{\psi}\) and \(\psi\), respectively, and the source \(J_{\mu}\) for the gauge field:
\[Z[\eta,\bar{\eta},J_{\mu}]=\int{\cal D}\bar{\psi}\,{\cal D}\psi\,{\cal D}A\,\exp\!\left\{i\int d^{4}x\left[{\cal L}_{\rm QED}+J_{\mu}(x)A^{\mu}(x)+\bar{\psi}(x)\eta(x)+\bar{\eta}(x)\psi(x)\right]\right\}, \tag{2}\]
where \({\cal D}\) represents the integral measure over all field configurations. To arrive at the SDE, we consider that the functional integral of a total derivative is zero, i.e.,
\[\int{\cal D}[\phi]\,\frac{\delta(\cdots)}{\delta\phi}=0\,,\]
where \(\frac{\delta}{\delta\phi}\) is the functional derivative with respect to \(\phi\), which represents any arbitrary field variable. For the electron propagator, the derivative is taken with respect to the fermion field \(\bar{\psi}(x)\),
\[\int{\cal D}[\bar{\psi}\psi A]\,\frac{\delta}{\delta\bar{\psi}(x)}\exp\left\{i\left[{\cal S}(\bar{\psi},\psi,A)+\int{\rm d}^{4}x\left(J_{\mu}(x)A^{\mu}(x)+\bar{\psi}(x)\eta(x)+\bar{\eta}(x)\psi(x)\right)\right]\right\}=0\,, \tag{3}\]
with \({\cal S}=\int{\rm d}^{4}x\,{\cal L}_{\rm QED}\) being the action. Eq. (3) can be rewritten as a differential equation for the generating functional \(Z\):
\[\left[\frac{\delta{\cal S}}{\delta\bar{\psi}(x)}\!\left(-i\frac{\delta}{\delta J_{\mu}},\,i\frac{\delta}{\delta\eta},\,-i\frac{\delta}{\delta\bar{\eta}}\right)+\eta(x)\right]Z[\eta,\bar{\eta},J_{\mu}]=0\,. \tag{4}\]
We take the functional derivative of the action \({\cal S}\) by implementing the Gâteaux derivative method [75; 76], and arrive at the differential equation
\[\left[\left(i\partial\!\!\!/-m-e\gamma^{\mu}{\cal A}_{\mu}-e\gamma^{\mu}(-i)\frac{\delta}{\delta J^{\mu}(x)}\right)(-i)\frac{\delta}{\delta\bar{\eta}(x)}+\eta(x)\right]Z[\eta,\bar{\eta},J_{\mu}]=0\,. \tag{5}\]
To obtain the two-point Green's function of the electron, we perform a further functional differentiation with respect to the source field \(\eta(y)\) and represent the result in terms of the generating functional of connected Green's functions \(W\) (\(Z=e^{W}\)):
\[e^{W[\eta,\bar{\eta},J_{\mu}]}\left[\delta(x-y)-\left(i\partial\!\!\!/-m-e\gamma^{\mu}{\cal A}_{\mu}-ie\gamma^{\mu}\frac{(-i)\,\delta W}{\delta J^{\mu}(x)}-ie\gamma^{\mu}(-i)\frac{\delta}{\delta J^{\mu}(x)}\right)S(x,y)\right]=0\,. \tag{6}\]
Here, \(S(x,y)\) is the bound-fermion propagator. We rewrite Eq. (6) in terms of classical fields, in analogy to [71]:
\[\delta(x-y)-\left(i\partial\!\!\!/-m-e\gamma^{\mu}{\cal A}_{\mu}-ie\gamma^{\mu}A_{\mu}-ie\gamma^{\mu}(-i)\frac{\delta}{\delta J^{\mu}(x)}\right)S(x,y)=0\,. \tag{7}\]
We obtain the equation
\[(-i)\frac{\delta S(x,y)}{\delta J^{\mu}(x)}=-i\int{\rm d}^{4}z\,\frac{\delta A_{\nu}(z)}{\delta J^{\mu}(x)}\frac{\delta}{\delta A_{\nu}(z)}\left(\frac{\delta^{2}\Gamma}{\delta\psi(x)\delta\bar{\psi}(y)}\right)^{-1}\,. \tag{8}\]
Using the expressions for the complete bound-electron propagator, photon propagator and the electron-photon vertex function \(\Gamma^{\mu}\), one obtains
\[(-i)\frac{\delta S(x,y)}{\delta J^{\mu}(x)}=-e\int{\rm d}^{4}z\,{\rm d}^{4}u\,{\rm d}^{4}w\,D_{\mu\nu}(x-z)\,S(x,w)\,\Gamma^{\nu}(w,u;z)\,S(u,y)\,. \tag{9}\]
Using Eqs. (8) and (9), setting the external source fields equal to zero, and considering that the nuclear field is a static scalar field, Eq. (7) reduces to
\[\delta(x-y)=(i\partial\!\!\!/-m-e\gamma^{0}{\cal A}_{0})S(x,y)+\int{\rm d}^{4}u\,\Sigma(x,u)S(u,y)\,, \tag{10}\]
where
\[\Sigma(x-y)=-ie^{2}\gamma^{\mu}\int{\rm d}^{4}z\,{\rm d}^{4}w\,D_{\mu\nu}(z-x) S(x,w)\Gamma^{\nu}(w,y;z)\,. \tag{11}\]
To obtain the SDE for the bound propagator in coordinate space, we multiply Eq. (10) throughout by the inverse propagator, yielding
\[S^{-1}(x,y)=(i\partial\!\!\!/-m-e\gamma^{0}{\cal A}_{0})\delta(x-y)-ie^{2}\gamma^{\mu}\int{\rm d}^{4}z\,{\rm d}^{4}w\,D_{\mu\nu}(z-x)\,S(x,w)\,\Gamma^{\nu}(w,y;z)\,. \tag{12}\]
This equation is pictorially represented in Fig. (1).

Figure 1: Diagrammatic representation of the Schwinger-Dyson equation for the electron propagator. The double line represents the propagator in the nuclear Coulomb field, the wave line represents a virtual photon, while the thick line depicts the full, dressed electron propagator.

## III Derivation of the self-energy shift

The second term on the r.h.s. of Eq. (12), given by Eq. (11), yields the SE corrected bound propagator; \(\Sigma\) is the well-known SE operator. Fourier transforming it with respect to the time variable and considering the electron-photon vertex operator \(\Gamma^{\nu}\) to _lowest order_, i.e. replacing it with the matrix \(\gamma^{\nu}\), we obtain
\[\Sigma(E)=-ie^{2}\int\frac{\mathrm{d}\omega}{2\pi}\gamma^{\mu}S( \mathbf{x},\mathbf{w};E-\omega)D_{\mu\nu}(\mathbf{z}-\mathbf{x};\omega)\gamma^ {\nu}. \tag{13}\]
Using this SE operator and Feynman rules [77], we can construct the SE corrected Green's function between points \(i\) and \(f\):
\[G(\mathbf{x}_{f},E_{f};\mathbf{x}_{i},E_{i})\sim ie^{2}\!\left(\frac{i}{2\pi}\right)^{2}\int d^{3}\mathbf{z}_{1}\int d^{3}\mathbf{z}_{2}\int d\omega\,S(\mathbf{x}_{f},\mathbf{z}_{2};E_{f})\,e\gamma^{\mu}\,S(\mathbf{z}_{2},\mathbf{z}_{1};\eta)\,e\gamma^{\nu}\,S(\mathbf{z}_{1},\mathbf{x}_{i};E_{i})\,D_{\mu\nu}(\mathbf{z}_{1}-\mathbf{z}_{2};\omega)\,\delta(E_{f}-E_{i})\,. \tag{14}\]
A Green's function, e.g. that in Eq. (14), can also be given in the spectral representation
\[G(\mathbf{x}_{f},\mathbf{x}_{i},E)=\sum_{n}\frac{\phi_{n}(\mathbf{x}_{f}) \bar{\phi}_{n}(\mathbf{x}_{i})}{E-E_{n}(1-i\varepsilon)} \tag{15}\]
in terms of the perturbed states \(\phi_{n}\), where we have made the replacement \(E=E_{i}=E_{f}\), and \(\varepsilon\) is infinitesimally small. The spectral function \(G_{a}(E)\equiv\bra{a}G(E)\ket{a}\) for a given atomic reference state \(\ket{a}\), which contains the perturbed eigenenergies \(E_{n}\) of the basis states \(\phi_{n}\), has a pole around the perturbed eigenenergy of state \(\ket{a}\):
\[G_{a}(E)\approx\frac{C_{a}}{E-E_{a}}\,, \tag{16}\]
where the constant \(C_{a}\) is the residue term. While in the case of the path integral treatment of the quantum mechanical H atom the poles can be seen directly in the analytical expression of the Green's function [78; 55], here the energies can be determined from the poles using complex contour integration [43; 44; 79; 80]. Considering a small contour \(\Gamma\) which surrounds an isolated pole at the bound-state energy \(E_{a}\), one easily obtains
\[\frac{1}{2\pi i}\oint_{\Gamma}dE\,E\,G_{a}(E)=E_{a}C_{a}\,,\quad \text{and} \tag{17}\]
\[\frac{1}{2\pi i}\oint_{\Gamma}dE\,G_{a}(E)=C_{a}\,. \tag{18}\]
The ratio of the above equations gives us the level energy
\[E_{a}=\frac{\frac{1}{2\pi i}\oint_{\Gamma}dE\,E\,G_{a}(E)}{\frac{1}{2\pi i} \oint_{\Gamma}dE\,G_{a}(E)}\,. \tag{19}\]
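To illustrate Eq. (19), the following minimal numerical sketch recovers the pole of a model spectral function of the form of Eq. (16); the pole position, residue and contour parameters are hypothetical values chosen only for demonstration:

```python
import numpy as np

# Model spectral function G_a(E) = C_a / (E - E_a), cf. Eq. (16);
# pole position and residue below are illustrative values only.
E_a_true, C_a = 1.25, 0.7
G = lambda E: C_a / (E - E_a_true)

# Parametrize a small circular contour Gamma enclosing the assumed pole.
center, radius, n = 1.2, 0.2, 2000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
E = center + radius * np.exp(1j * theta)
dE = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n)

num = np.sum(E * G(E) * dE) / (2j * np.pi)  # (1/2pi i) oint E G_a(E) dE = E_a C_a
den = np.sum(G(E) * dE) / (2j * np.pi)      # (1/2pi i) oint   G_a(E) dE = C_a
print((num / den).real)                     # ~ 1.25, the pole position E_a
```

In the actual calculation, \(G_{a}(E)\) is not known in closed form but is assembled from the perturbation expansion, and the contour is chosen such that it encloses only the reference-state pole.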
The energy shift with respect to the unperturbed (Dirac) energy is
\[\Delta E_{a}=\frac{\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\, \Delta G_{a}(E)}{1+\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta G_{a}(E)}\,, \tag{20}\]
where
\[\Delta E_{a}\equiv\Delta E_{a}^{(1)}+\Delta E_{a}^{(2)}+\ldots\,,\] \[\Delta G_{a}\equiv\Delta G_{a}^{(1)}+\Delta G_{a}^{(2)}+\ldots\,,\]
have been expanded in a perturbation series in terms of the fine-structure constant \(\alpha\); here \(\Delta E\equiv E-E_{a}^{(0)}\), and \(\Delta G_{a}=G_{a}(E)-\Delta G_{a}^{(0)}\), with \(\Delta G_{a}^{(0)}=\frac{1}{E-E_{a}^{(0)}}\) being the zeroth-order contribution. We expand Eq. (20) in a geometric series and obtain
\[\Delta E_{a}^{(1)}=\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\, \Delta G_{a}^{(1)}(E)\,. \tag{21}\]
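For clarity, the first step of this geometric expansion reads

\[\Delta E_{a}=\left[\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta E\,\Delta G_{a}(E)\right]\left[1-\frac{1}{2\pi i}\oint_{\Gamma}dE\,\Delta G_{a}(E)+\ldots\right],\]

and retaining the terms of first order in \(\alpha\) reproduces Eq. (21).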
We can now construct the function in Eq. (16) from the Green's function in Eq. (14) and consider a single state with eigenenergy \(E_{a}\),
\[G_{a}(E)=\left(\frac{i}{2\pi}\right)^{2}\frac{1}{(E-E_{a})^{2}}\int d\omega\sum_{n}\frac{\langle an|\,I(\omega)\,|na\rangle}{E-\omega-E_{n}(1-i\varepsilon)}\,, \tag{22}\]
where we have introduced the photon exchange operator [34] \(I(\mathbf{z}_{1}-\mathbf{z}_{2};\omega)=e^{2}\alpha^{\mu}\alpha^{\nu}D_{\mu\nu}(\mathbf{z}_{1}-\mathbf{z}_{2};\omega)\) with \(\alpha^{\mu}=\gamma^{0}\gamma^{\mu}\). Thus, using Eqs. (17)-(21), we obtain the SE correction to the bound-state energy level as
\[\Delta E_{a}^{(1)}=\bra{a}\Sigma(E_{a})-\delta m\gamma^{0}\ket{a}\,, \tag{23}\]
where \(\bra{a}\Sigma(E)\ket{b}=\frac{i}{2\pi}\int d\omega\sum_{n}\frac{\langle an|I(\omega)|nb\rangle}{E-\omega-E_{n}(1-i\varepsilon)}\); \(\ket{an}\) denotes a two-electron tensor product state and \(\delta m\) is the mass counter-term.
Eq. (23) is, however, fraught with divergences; following Refs. [34; 27], we separate the SE shift into zero-, one-, and many-potential terms:
\[\bra{a}\Sigma(E_{a})\ket{a}=\bra{a}\Sigma^{(0)}(E_{a})\ket{a}+\bra{a}\Sigma^{(1)}(E_{a})\ket{a}+\bra{a}\Sigma^{(2+)}(E_{a})\ket{a}\,. \tag{24}\]
The individual terms in Eq. (24) can be written in terms of the Coulomb-Dirac Green's functions [78] and the photon propagator, using perturbative path integrals [52; 54; 81]. The zero-potential term is given as
\[\bra{a}\Sigma^{(0)}(E_{a})\ket{a}=2i\alpha\int\mathrm{d}\omega\int\mathrm{d}^{3}\mathbf{r}_{1}\,\mathrm{d}^{3}\mathbf{r}_{2}\,\psi_{a}^{\dagger}(\mathbf{r}_{2})\,\alpha^{\mu}\,G^{(0)}(E_{a}-\omega)\,\alpha^{\nu}\,D_{\mu\nu}(\omega)\,\psi_{a}(\mathbf{r}_{1})\,, \tag{25}\]
where \(G^{(0)}(E_{a}-\omega)\) is the free electron Green's function.
Similarly, the one-potential term can be expressed as
\[\bra{a}\Sigma^{(1)}(E_{a})\ket{a}=2i\alpha\int\mathrm{d}\omega\int\mathrm{d}^{3}\mathbf{r}_{1}\,\mathrm{d}^{3}\mathbf{r}_{2}\,\psi_{a}^{\dagger}(\mathbf{r}_{2})\,\alpha^{\mu}\,G^{(1)}(E_{a}-\omega)\,\alpha^{\nu}\,D_{\mu\nu}(\omega)\,\psi_{a}(\mathbf{r}_{1})\,, \tag{26}\]
where the Green's function for a single interaction of the electron with the nuclear potential \(V\), expressed in terms of the free Green's function, is [52; 54; 81]
\[G^{(1)}(\mathbf{r}_{2},\mathbf{r}_{1};E_{a}-\omega)=\left[\int\mathrm{d}^{3}\mathbf{x}_{1}\,V(\mathbf{x}_{1})\right]\left[\prod_{i=0}^{1}G^{(0)}(\mathbf{x}_{i+1},\mathbf{x}_{i};E_{a}-\omega)\right] \tag{27}\]
with \(\mathbf{r}_{2}=\mathbf{x}_{2}\) and \(\mathbf{r}_{1}=\mathbf{x}_{0}\). Following the same methodology, the many-potential term is given by
\[\left\langle a\right|\Sigma^{(2+)}(E_{a})\left|a\right\rangle=2i\alpha\int\mathrm{d}\omega\int\mathrm{d}^{3}\mathbf{r}_{1}\,\mathrm{d}^{3}\mathbf{r}_{2}\,\psi_{a}^{\dagger}(\mathbf{r}_{2})\,\alpha^{\mu}\,G^{(2+)}(E_{a}-\omega)\,\alpha^{\nu}\,D_{\mu\nu}(\omega)\,\psi_{a}(\mathbf{r}_{1})\,. \tag{28}\]
The Green's function for the many-potential term between the coordinates \(\mathbf{r}_{3}\) and \(\mathbf{r}_{4}\), as seen in Fig. (2), summed over the number \(n\) of insertions of the nuclear interaction up to \(n\to\infty\), can be given in terms of the free Green's function as
\[G(\mathbf{r}_{4},\mathbf{r}_{3};E_{a}-\omega)=\sum_{n=0}^{\infty}\Bigg\{\left[\prod_{j=1}^{n}\int\mathrm{d}^{3}\mathbf{x}_{j}\,V(\mathbf{x}_{j})\right]\left[\prod_{i=0}^{n}G^{(0)}(\mathbf{x}_{i+1},\mathbf{x}_{i};E_{a}-\omega)\right]\Bigg\}\,, \tag{29}\]
with \(\mathbf{r}_{3}=\mathbf{x}_{0}\) and \(\mathbf{r}_{4}=\mathbf{x}_{n+1}\).
This effectively gives us the exact Dirac-Coulomb Green's function [78].
The energy shift due to the many-potential contribution can thus be written as
\[\left\langle a\right|\Sigma^{(2+)}(E_{a})\left|a\right\rangle=2i\alpha\int\mathrm{d}\omega\int\mathrm{d}^{3}\mathbf{r}_{1}\,\mathrm{d}^{3}\mathbf{r}_{2}\,\mathrm{d}^{3}\mathbf{r}_{3}\,\mathrm{d}^{3}\mathbf{r}_{4}\,\psi_{a}^{\dagger}(\mathbf{r}_{2})\,\alpha^{\mu}\,G^{(0)}(\mathbf{r}_{2},\mathbf{r}_{4};E_{a}-\omega)\,V(\mathbf{r}_{4})\,G(\mathbf{r}_{4},\mathbf{r}_{3};E_{a}-\omega)\,V(\mathbf{r}_{3})\,G^{(0)}(\mathbf{r}_{3},\mathbf{r}_{1};E_{a}-\omega)\,\alpha^{\nu}\,D_{\mu\nu}(\omega)\,\psi_{a}(\mathbf{r}_{1})\,. \tag{30}\]
Using the spectral representation for the free and bound-electron propagators [27], the energy shift is given as
\[\left\langle a\right|\Sigma^{(2+)}(E_{a})\left|a\right\rangle=\frac{i}{2\pi}\int\mathrm{d}\omega\sum_{\alpha,\beta,i}\frac{\left\langle i\right|V\left|\beta\right\rangle\left\langle a\beta\right|I(\omega)\left|\alpha a\right\rangle\left\langle\alpha\right|V\left|i\right\rangle}{(E_{a}-\omega-\epsilon_{\alpha}\varepsilon^{+})(E_{a}-\omega-\epsilon_{i}\varepsilon^{+})(E_{a}-\omega-\epsilon_{\beta}\varepsilon^{+})}\,, \tag{31}\]
where \(\left|\alpha\right\rangle\) and \(\left|\beta\right\rangle\) are free-electron states, \(\left|i\right\rangle\) denotes bound-electron states, and we use the notation \(\varepsilon^{+}=(1-i\varepsilon)\).
Following [34; 27], we introduce frequency-dependent effective basis functions
\[\left|\phi_{i}^{(\pm)}(\omega)\right\rangle=\sum_{\alpha}\frac{\left\langle \alpha\right|V\left|i\right\rangle}{\omega-\epsilon_{\alpha}(1\mp i \varepsilon)}\left|\alpha\right\rangle\,, \tag{32}\]
which simplify Eq. (31) to the form
\[\left\langle a\right|\Sigma^{(2+)}(E_{a})\left|a\right\rangle=\frac{i}{2\pi} \int\mathrm{d}\omega\sum_{i}\frac{\left\langle a\phi_{i}^{(-)}\right|I(\omega )\left|\phi_{i}^{(+)}a\right\rangle}{E_{a}-\omega-\epsilon_{i}(1-i \varepsilon)}\,. \tag{33}\]
One can further reduce Eq. (33) by expanding the numerator inside the integral on the r.h.s. in partial waves [27] such that the angular integrations can be performed analytically, yielding the reduced many-potential term expressed with generalized Slater integrals [34]. The zero- and one-potential terms with the well-known loop functions are regularized in momentum space following Ref. [34], and the numerical calculations for the zero-, one-, and many-potential terms are performed using a B-spline representation of basis states [82; 83; 84], implementing the dual kinetic balance approach [85].
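As a rough illustration of the last step, the sketch below constructs a clamped radial B-spline basis and its overlap matrix; the grid, spline degree and cavity radius are hypothetical parameters, and this is not the actual implementation of Refs. [82; 83; 84; 85]:

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative radial B-spline basis of the kind used to represent
# bound-electron states in a finite cavity (all parameters are made up).
k = 5                                    # spline degree
r_max, n_intervals = 60.0, 40            # cavity radius (a.u.) and radial intervals
breakpoints = r_max * np.linspace(0.0, 1.0, n_intervals + 1) ** 3  # dense near r = 0
t = np.concatenate(([0.0] * k, breakpoints, [r_max] * k))          # clamped knot vector

n_basis = len(t) - k - 1
r = np.linspace(0.0, r_max, 2000)
dr = r[1] - r[0]
basis = np.empty((n_basis, r.size))
for i in range(n_basis):
    c = np.zeros(n_basis)
    c[i] = 1.0
    basis[i] = BSpline(t, c, k)(r)       # i-th basis function B_i(r)

S = basis @ basis.T * dr                 # overlap matrix S_ij ~ int B_i(r) B_j(r) dr
print(n_basis, S.shape)
```

Diagonalizing the Dirac Hamiltonian in such a basis yields a discrete pseudo-spectrum over which sums such as those in Eqs. (31)-(33) can be carried out.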
## IV Numerical results
As a test of our method, numerical results for the SE shift in H-like HCI are shown in Table 1 and compared to Refs. [87; 27; 88]. Our results are generally in good agreement with existing tabulations. We used the model of a homogeneously charged spherical nucleus, with root-mean-square nuclear radii from Ref. [86]. The differences with respect to other works originate from differences in the nuclear radii and in the charge distribution models used.
Figure 2: Diagrammatic representation of the many-potential Green's function. Single lines represent a free propagator, while wave lines terminated by a cross represent the nuclear Coulomb field.

With the increase of the atomic number, QED effects are boosted to the regime where they are well observable with novel mass spectrometric methods with uncertainties on the 1-eV level or below [4; 5; 89; 90]. Such ions will allow, for the first time, the test of QED via measuring the mass difference of the H-like ion and the bare nucleus, directly yielding the electronic binding energy by exploiting the energy-mass equivalence relation. It is not only the \(1s\) hydrogenic ground state which features well observable QED effects; excited states also possess sizeable radiative shifts. E.g., the Lamb shift of the \(2s\) state approximates the SE correction to the binding energy of a Li-like ion, which can be spectrometrically determined by measuring the mass difference of the Li- and He-like ions in their ground states. In our approach presented here, electron interaction effects are neglected, which, for heavy ions, is a justified first approximation. The formalism may, however, be extended in future to many-electron systems by allowing for the exchange of photons between electrons. (We note that the Li-like sequence has been extensively studied in the framework of other formalisms [91; 92; 93].) Similarly, the SE results given for the \(2p_{1/2}\) and \(2p_{3/2}\) states (with the subscripts denoting the total angular momentum \(j\)) approximate the radiative shift of the binding energy of the valence electron in the B- and N-like sequences, respectively. Very heavy ions in these charge states still feature QED corrections accessible via mass spectrometry.
## V Summary
We have developed an alternative formalism for the evaluation of the SE shift of atomic energy levels. The Green's function for the SE corrected bound electron is extracted from the Schwinger-Dyson equation derived using the functional integral technique. We avoid the operator formalism and present an elegant and intuitive framework that preserves all the symmetries of the system and treats the background nuclear field non-perturbatively. This formalism can be extended to study higher-order radiative corrections and energy shifts in many-electron systems. The functional methods developed in this work can be naturally generalized to various hypothetical gauge bosons, enabling the detailed study of new-physics effects in atomic spectra.
## VI Acknowledgements
Supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 273811115 - SFB 1225.
| 2309.09807 | Efficient Concept Drift Handling for Batch Android Malware Detection Models | The rapidly evolving nature of Android apps poses a significant challenge to static batch machine learning algorithms employed in malware detection systems, as they quickly become obsolete. Despite this challenge, the existing literature pays limited attention to addressing this issue, with many advanced Android malware detection approaches, such as Drebin, DroidDet and MaMaDroid, relying on static models. In this work, we show how retraining techniques are able to maintain detector capabilities over time. Particularly, we analyze the effect of two aspects in the efficiency and performance of the detectors: 1) the frequency with which the models are retrained, and 2) the data used for retraining. In the first experiment, we compare periodic retraining with a more advanced concept drift detection method that triggers retraining only when necessary. In the second experiment, we analyze sampling methods to reduce the amount of data used to retrain models. Specifically, we compare fixed sized windows of recent data and state-of-the-art active learning methods that select those apps that help keep the training dataset small but diverse. Our experiments show that concept drift detection and sample selection mechanisms result in very efficient retraining strategies which can be successfully used to maintain the performance of the static Android malware state-of-the-art detectors in changing environments. | Molina-Coronado B., Mori U., Mendiburu A., Miguel-Alonso J | 2023-09-18T14:28:18Z | http://arxiv.org/abs/2309.09807v1 |

# Efficient Concept Drift Handling for Batch Android Malware Detection Models
###### Abstract
The rapidly evolving nature of Android apps poses a significant challenge to static batch machine learning algorithms employed in malware detection systems, as they quickly become obsolete. Despite this challenge, the existing literature pays limited attention to addressing this issue, with many advanced Android malware detection approaches, such as Drebin, DroidDet and MaMaDroid, relying on static models. In this work, we show how retraining techniques are able to maintain detector capabilities over time. Particularly, we analyze the effect of two aspects in the efficiency and performance of the detectors: 1) the frequency with which the models are retrained, and 2) the data used for retraining. In the first experiment, we compare periodic retraining with a more advanced concept drift detection method that triggers retraining only when necessary. In the second experiment, we analyze sampling methods to reduce the amount of data used to retrain models. Specifically, we compare fixed sized windows of recent data and state-of-the-art active learning methods that select those apps that help keep the training dataset small but diverse. Our experiments show that concept drift detection and sample selection mechanisms result in very efficient retraining strategies which can be successfully used to maintain the performance of the static Android malware state-of-the-art detectors in changing environments.
## 1 Introduction
Drift, which refers to the phenomenon where the statistical properties of the data being analyzed change over time, can be caused by data drift and/or concept drift. Data drift refers to changes which occur in the distribution of the input data over time, whereas concept drift or model drift is caused by changes in the relationship between the input data and the outcome of models, i.e., the conditional probability distribution of the class variable given the input Gama et al. (2014). Even if both drift types are interesting and deserve analysis, it has been demonstrated that concept drift is an urgent issue in Android malware detection since it causes the trained static machine learning (ML) models to experience a steady decrease of their performance over time Pendlebury et al. (2019), Molina-Coronado et al. (2023), Chen et al. (2023). In this sense, in the rest of this paper, whenever we mention the term drift, we will refer to concept drift.
It is evident that the Android application ecosystem has an evolving nature because, for example, new types of malware appear or new software features are added to the development framework Molina-Coronado et al. (2023). However, most current anti-malware research solutions for Android rely on batch ML algorithms Liu et al. (2020). Under laboratory conditions, these algorithms have demonstrated extraordinary malware detection rates with low numbers of false positives, which make them a promising solution against malware Ucci et al. (2019). However, batch ML algorithms are designed for static environments. They are used to train models offline on large datasets of labeled samples of malicious and benign apps, which are then used to enable accurate detection of new, previously unseen malware. Therefore, detectors that rely on these algorithms quickly become obsolete and lose effectiveness due to concept drift Gama et al. (2014), Bayram et al. (2022).
In recent years, concept drift management methods have emerged as a promising solution to the challenges posed by drift in non-stationary applications Lu et al. (2019) and in a variety of domains, including fault diagnosis Zliobaite
et al. (2016), credit card fraud detection Blazquez-Garcia et al. (2021), network intrusion detection Molina-Coronado et al. (2020), and game recommender systems Al-Ghossein et al. (2021). Concept drift management methods can be classified into two major groups: (1) retraining, which consists of replacing old models with new ones trained on the latest available data, and (2) incremental algorithms, which continuously update models as new data arrives. While incremental solutions are specific learning algorithms, retraining offers the advantage of being an agnostic approach that can be applied to any ML-based detector.
For Android malware detection, several researchers have proposed adaptive solutions to overcome the challenges posed by concept drift, either relying on incremental algorithms Narayanan et al. (2017); Xu et al. (2019) or on retraining procedures Karbab and Debbabi (2021); Guerra-Manzanares and Bahsi (2022). These algorithms propose completely novel detection approaches and ignore the relevance of most available state-of-the-art Android malware detectors, which rely on static analysis of code to extract the features that represent the apps and leverage batch ML algorithms to perform detection Liu et al. (2020). At this point, it remains an open question whether these existing static detectors can be enhanced and adapted to changing scenarios using simple retraining mechanisms, avoiding the need to develop new detectors.
The successful implementation of retraining on existing detectors hinges upon a series of critical implementation decisions. These decisions involve establishing a retraining policy that determines when, and with what data, the model retraining and replacement operations are performed Webb et al. (2016). An inadequate retraining policy may result in unnecessary, too frequent, or insufficient retraining operations that render the model unable to adapt to changes in the distribution of the data Baena-Garcia et al. (2006); Bifet and Gavalda (2007). Equally crucial is the selection of representative data reflecting current trends in the distribution without forgetting recurring or stable patterns. New data has to be continuously stored, analyzed (sometimes manually) and labeled prior to being used for retraining Android malware detectors. Moreover, as the volume of the new incoming data increases, the storage, labeling efforts and computing requirements for retraining also increase proportionally Tam et al. (2017).
The purpose of this paper is to investigate the potential of retraining as a valid approach to enhance state-of-the-art batch Android malware detectors. Indeed, we focus on retraining existing detectors and analyze techniques that reduce the cost of retraining. Particularly, we focus on two critical aspects: (1) the frequency of retraining and (2) the data used for this operation. Since the factors that cause drift and, thus, model aging could be diverse and variable, model performance is monitored to trigger an update procedure whenever a degradation of the performance is observed. Regarding the training set used for model updates, we propose strategies to keep its size small and reduce the cost of labeling new data, thus minimizing the cost of retraining supervised models. Through a comprehensive set of experiments, we demonstrate that retraining offers a practical solution to address concept drift in solutions that use batch ML algorithms for Android malware detection.
The rest of this paper is organized as follows. Section 2 analyzes the literature related to the present work. Section 3 introduces batch Android malware detection and how retraining can easily be applied to achieve model evolution. Then, in the next two sections, we focus on the specific methods that we analyze in this paper to determine the retraining frequency (Section 4) and the data used for retraining (Section 5). Section 6 presents our experimental setup, introduces the three state-of-the-art batch Android malware detectors used in our experiments and describes the evaluation procedure followed for the analysis. Section 7 presents the obtained results and, finally, we discuss the main findings of our work, future research lines and conclude this paper in Section 8.
## 2 Related Work
Learning in evolving environments requires defining two main aspects: 1) the mechanism used to update the model and 2) the data used to update the model. In this section, we briefly review the related proposals in the area of Android Malware detection, considering these two axes.
### Adaptive Malware Detectors
As mentioned, the first decisive aspect when building a classifier in environments with drift is the mechanism used for adapting the model. Indeed, among the proposed adaptive Android malware detectors, we can find incremental learning algorithms that update their models with each data point, or retraining approaches that train new models and replace the existing ones.
In a recent work, Guerra-Manzanares and Bahsi (2022) propose the use of a pool of batch RandomForest classifiers and an anomaly detection model fed with system call features. Detection is performed by majority voting over the outputs of the models. Whenever the models in the pool disagree, the anomaly detector is used to decide the class of the sample. In order to enable model adaptation, true labels are assumed to be known, and the worst-performing RandomForest model in the pool and the anomaly detector are retrained at fixed time chunks. In Narayanan et al. (2017), the use of an incremental learning detector that leverages contextual API call information as the feature set is proposed. The model is updated with every incoming sample; however, it assumes that the true label of every sample is known in real time. The detector proposed in Karbab and Debbabi (2021) uses a pool of Convolutional Neural Networks (CNN) fed with sequences of method, object and field names invoked in the code. Retraining is performed at fixed time chunks, using only samples for which the predictions are sufficiently reliable, so that labels obtained by majority voting of the pool are assumed to be accurate. In each retraining round, the entire pool of CNN models is replaced. In DroidEvolver Xu et al. (2019), a pool of incremental linear models is presented. For updates, models whose decisions show low agreement with the rest of the models in the pool are adapted. For labeling the data, the approach uses pseudo labels obtained through majority voting of model decisions.
As mentioned in the introduction, these approaches are completely novel detectors, which do not leverage any previously published state-of-the-art batch detector, at least not directly. In this sense, the difference between these works and our proposal is that we attempt to directly use the existing research, applying model-agnostic retraining policies to enhance or maintain its performance when concept drift is present. Additionally, these proposals present issues related to the labeling of samples. For instance, using pseudo labels computed from model decisions has been shown to cause model contamination over time Kan et al. (2021), while obtaining true labels incurs a cost that is often overlooked.
### Out-of-Distribution Samples
The second aspect that must be taken into account when using retraining policies in drifting environments is the selection of data used for retraining. This data must be representative of the current concept, but the cost of labeling this data and retraining the model is proportional to the amount of data we use in this process. In this sense, some data selection strategies have been proposed in the Android literature.
The most common approach is to use the confidence of the current model in the prediction of a new sample as a way to analyze whether this new sample has been generated by the same probability distribution or not Yang et al. (2021). The confidence for a new sample can be measured by analyzing the consensus of several classifiers when predicting its class. In Xu et al. (2019) and Zhang et al. (2020), low-confidence samples (those on which models disagree the most) are used to update the models. Contrary to these approaches, and despite it being potentially detrimental to the adaptation ability of models, in Narayanan et al. (2017) and Karbab and Debbabi (2021) low-confidence data is treated as noisy and discarded from the update process to avoid model contamination when using pseudo labels. Similarly, Barbero et al. (2022) presents a decision rejection framework which aims to keep model decisions accurate over time by discarding unreliable model decisions for drifting samples. The framework presents a non-conformity measure which identifies drifting samples with respect to a set of reference samples used to train the model.
Other authors have proposed using specific models based on clustering ideas. Yang et al. (2021) uses a neural network based on contrastive learning to group samples into either goodware or a specific malware family. A sample is identified as drifting if it lies far from all the identified groups in a certain retraining step. This proposal has been recently improved in Chen et al. (2023) using a hierarchical contrastive learning classifier that ranks samples according to the fitness of the CL embedding and the prediction score of the classifier. The aim is to provide a more robust drifting sample selection in unbalanced scenarios.
All these OOD (out-of-distribution) selection proposals focus on identifying the best samples to increase the detection ability of models. However, none of them can be directly used in a simple retraining framework that is model agnostic (i.e., built on top of any detector). Additionally, they are general approaches that do not leverage the particular behavior of the Android environment to design specific sample selection strategies. In this paper we will analyze CL approaches and uncertainty sampling as model-agnostic retraining policies, and an ad-hoc sample selection method specifically designed for this problem.
## 3 Preliminary Concepts
We have discussed how most of the published literature on Android malware detection has ignored concept drift, despite it being a foundational feature of the problem. This section briefly describes how malware detection is typically performed using batch ML algorithms, as well as how these state-of-the-art detectors can be integrated into a retraining pipeline.
### Batch Malware Detection
Typically, the Android malware detection process using batch ML consists of three main phases: a preprocessing stage, a training phase and a prediction phase Ucci et al. (2019). This process is depicted in Figure 1. To begin with (preliminary step), a set of apps is required, and two tasks must be carried out. First, all the apps must be labeled. The labeling process consists of analyzing the code, metadata, and application behavior to identify any suspicious activity or known malware signatures, tagging the applications in the dataset as goodware or malware. Additionally, in this preprocessing stage, apps are examined, extracting the features indicative of their functionality and representing them in a structured manner. Examples of these features include permissions, function names, strings in the code, etc. Once the app labels and their features are obtained, in the training phase, ML algorithms help determine the most characteristic patterns of goodware and malware. As a result of this training stage, a ML model capable of predicting the class label (goodware or malware) of new apps is obtained. Finally, the prediction phase consists of extracting the features identified during the training phase from a new incoming app. Afterwards, these features are fed into the previously trained ML model so that it determines whether the app is goodware or malware.
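As a minimal, hypothetical sketch of these three phases (the feature names and the choice of classifier are illustrative and not tied to any particular published detector), the pipeline maps onto a few lines of scikit-learn:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Preprocessing phase: each app is reduced to a dict of extracted static
# features; labels are 0 = goodware, 1 = malware (toy data, for illustration).
apps = [{"perm:SEND_SMS": 1, "api:getDeviceId": 1},
        {"perm:INTERNET": 1, "api:onCreate": 1}]
labels = [1, 0]

vectorizer = DictVectorizer()                      # structured feature matrix
X = vectorizer.fit_transform(apps)

model = RandomForestClassifier().fit(X, labels)    # training phase

# Prediction phase: extract the same features from a new app and classify it.
new_app = {"perm:SEND_SMS": 1, "perm:INTERNET": 1}
print(model.predict(vectorizer.transform([new_app])))
```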
### Retraining for Batch Malware Detectors
Retraining mechanisms consider a detector as a black box tool. This means that any existing batch detector can be integrated into the retraining process without modification. Figure 2 depicts how the retraining mechanism can be integrated into any existing detector. In order for new models to correctly represent the current data distribution, the training data has to be continuously updated with representative apps. Since Android malware detectors rely on supervised algorithms, these apps must be labeled. Retraining is signaled by a supervisor. Whenever the signal is raised, a new model is trained to replace the old one. This involves preprocessing all (or some) apps in the dataset to extract their features, training the new model with this information, and replacing the old model.
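The black-box nature of this scheme can be made explicit in a few lines. A minimal sketch, reusing the illustrative `train_detector` and `predict` helpers from above, where `supervisor` is any callback that decides when to raise the retraining signal:

```python
def run_with_retraining(initial_data, stream, supervisor):
    # The detector is a black box: any batch detector exposing train/predict
    # can be plugged in without modification.
    data = list(initial_data)
    model = train_detector(data)
    for batch in stream:                     # batches of newly labeled apps
        preds = [predict(model, app) for app in batch]
        data.extend(batch)                   # keep the dataset representative
        if supervisor(model, batch, preds):  # retraining signal raised
            model = train_detector(data)     # replace the old model
    return model
```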
A very simple retraining policy is to activate the update process at fixed time intervals, for example, once a month. It can also be triggered whenever a certain number of new labeled apps become available, e.g., when 10 000 new apps have been identified. Nonetheless, the most effective strategy would be to trigger retraining whenever a drift is detected. The supervisor can monitor the performance of the model, or measure the degree of dissimilarity between training apps and incoming apps. In the following sections, we investigate the impact of some of these retraining strategies, as well as the impact of different retraining data management policies on the efficiency of batch Android malware detectors. Particularly, we focus on two mechanisms: fixed-period retraining and using a monitor that identifies changes to trigger updates. In addition, we explore three approaches for managing retraining data: a forgetting mechanism that discards old apps, three active learning methods that select highly-relevant data, and a sample selection technique that removes uninformative data.

Figure 1: Diagram of the batch learning process. (1) Preprocessing phase: a structured feature set and a label is obtained for each app; (2) Training phase: the structured and labeled training dataset is used to generate a model using ML algorithms; (3) Prediction phase: the generated model is used during the prediction phase to determine the class of new apps.
## 4 Retraining Frequency
In this section, we discuss the two different retraining policies mentioned above: (1) scheduling the update operation periodically and (2) using a change monitor that triggers the update when the performance of the detector drops.
### Fixed Period Retraining
A naive update policy is to retrain Android malware detectors in batches with a fixed periodicity: weekly, monthly, or any other interval. This method has several advantages, including ease of implementation and predictability. By following a fixed schedule, the system can regularly retrain the model to keep it up-to-date with the latest malware trends and behavioral patterns. However, this approach also has some limitations, because the rate of change of the data distribution might not be uniform or periodical. Due to the unpredictability of changes in Android data, choosing a fixed update frequency may be suboptimal. If the time between updates is long, the model may miss malware that has appeared and lasted for a short period of time. On the other hand, a high retraining frequency would eliminate this problem, but would result in unnecessary costs if changes in the data distribution are slow Gama et al. (2004).
### Change Detection Mechanisms
An alternative update policy to fixed period retraining is to use a change detection mechanism which monitors the current data or the performance of the model, triggering an update round only when there is evidence of change.
For this purpose, in this paper we consider the Page-Hinkley (PH) test Page (1954), Hinkley (1970), a popular (and easy to implement) drift detection algorithm that detects changes by monitoring the performance of the model. The PH test has several advantages over other change detection methods. First, it is non-parametric and does not make any assumptions about the underlying data distribution. Secondly, it is computationally efficient and requires minimal memory, which makes it suitable for monitoring high-speed data streams. Finally, it is also robust to outliers and can detect gradual changes in the data distribution Bifet and Gavalda (2007).
\[C_{n}=\begin{cases}0&\text{if }n=1\\ \min\left(0,\,C_{n-1}+(A_{mean_{n}}-\bar{x}_{n-1})\right)&\text{if }n>1\end{cases} \tag{1}\]
\[\bar{x}_{n-1}=\frac{\sum_{t=1}^{n-1}A_{mean_{t}}}{n-1} \tag{2}\]
Figure 2: Diagram of the batch learning process with retraining. The supervisor monitors when changes take place; once a change is detected, the data is updated to reflect the current trend and a model retraining signal is raised. This trains a new model with the updated data, which replaces the old model.
\[PH_{n}=\begin{cases}1&\text{if }\lambda+C_{n}<0\\ 0&\text{if }\lambda+C_{n}\geq 0\end{cases} \tag{3}\]
The PH test is applied as follows: it periodically (or whenever a certain batch of new instances are obtained) monitors a test value calculated based on the performance of the model, in our case, measured by the \(A_{mean}\) (see Table 1). Specifically, at each instant \(n\), the PH method computes the CUSUM (\(C_{n}\)) of the deviations between the current performance value (\(A_{mean_{n}}\)) and the mean of the performance values obtained in all the previous periodic checks (see Equations 1 and 2). If the CUSUM of the deviations falls below a pre-defined \(\lambda\) threshold (see Equation 3), the PH test signals a change (\(PH_{n}=1\)) that triggers a model update at instant \(n\). Note that a higher tolerance value may result in a lower rate of false alarms, but also in a lower performance, as updates can be delayed. When a change is detected, the values used for the test are reset. This means that the instant \(n\), at which the test flags the change, is set as the starting point (0) for subsequent calculations of the test.
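A minimal sketch of this one-sided PH variant, implementing Equations 1–3 exactly as stated (one \(A_{mean}\) value per monitoring period, reset on alarm):

```python
class PageHinkley:
    """One-sided PH test on A_mean drops, following Equations 1-3."""

    def __init__(self, lam=0.02):
        self.lam = lam        # tolerance threshold (lambda)
        self.reset()

    def reset(self):
        self.n = 0            # periods observed since the last alarm
        self.cum = 0.0        # CUSUM of deviations, C_n
        self.total = 0.0      # sum of past A_mean values (for the mean)

    def update(self, a_mean):
        self.n += 1
        if self.n > 1:
            past_mean = self.total / (self.n - 1)                 # Eq. 2
            self.cum = min(0.0, self.cum + (a_mean - past_mean))  # Eq. 1
        self.total += a_mean
        if self.lam + self.cum < 0:                               # Eq. 3
            self.reset()      # change flagged: restart the test
            return True       # raise the retraining signal
        return False
```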
## 5 Data Used for Retraining
The effectiveness and efficiency of retraining also depends on the data used to update the models. In this section, we analyze the use of fixed-size sliding windows and active learning methods such as uncertainty sampling, contrastive learning OOD methods and a problem-specific sample selection strategy.
### Sliding Windows
In this approach, a fixed-size sliding window is used to select the \(m\) most recent instances for retraining. The window moves forward whenever new data becomes available, and instances that fall within the window are stored and subsequently used for retraining, while older apps are discarded. We depict this policy in Figure 3. Its implementation is straightforward, but it may have some drawbacks. First, it assumes that the \(m\) most recent apps are sufficiently representative to generate a good model, which may not always be true: behaviours of discarded apps may reappear later. Secondly, this method does not consider the characteristics of the instances within the window. A common feature of the Android app environment is the presence of majority groups, that is, apps that are nearly identical and appear in large quantities. Not considering the characteristics of the last \(m\) apps might result in datasets where some types of apps are over-represented while others are largely underrepresented, thus leading to biased models which ignore the minority samples Goncalves Jr et al. (2014).
Figure 3: Sliding window (dotted line) of size \(m=5\) which is used to train the model at each instant. Colored cubes represent apps with different behavioral patterns whose predominance may vary over time. A model trained at time \(t\) may be biased towards the “yellow” behavior, while being unable to recognize the “blue” behavior.
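A minimal sketch of the policy, assuming a stream of labeled batches and the illustrative `train_detector` helper from Section 3:

```python
from collections import deque

def sliding_window_retrain(stream, m=1000):
    # Keep only the m most recent labeled apps; older apps are discarded
    # automatically as the window moves forward.
    window = deque(maxlen=m)
    for batch in stream:
        window.extend(batch)
        yield train_detector(list(window))
```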
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(TPR=\frac{TP}{P}\) & \(TNR=\frac{TN}{N}\) & \(A_{mean}=\frac{TPR+TNR}{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Metrics used in this paper to assess the performance of detectors. TPR = True Positive Rate. TNR = True Negative Rate.
### Uncertainty Sampling
Uncertainty sampling is a technique commonly proposed in the active learning literature to reduce labeling efforts and improve the learning ability of models Yang et al. (2021). The method measures the reliability or uncertainty of the decisions provided by models for samples. The degree of uncertainty of a sample is computed as the complementary of the absolute difference of the class (goodware and malware) probabilities returned by a model. Since low-confidence decisions yield similar probability values for both classes, the uncertainty value will be high (close to 1), whereas samples where one class probability dominates the other will obtain low uncertainty values, close to 0. The method assumes that the samples with the highest uncertainty are the most representative of changes and better candidates for learning a new model. Therefore, two alternative criteria can be used to build the training dataset: (1) set a fixed number \(n\) of samples to select, or (2) set a minimum uncertainty value to select the samples. Finally, the selected samples are added to the samples used for the previous retraining period and a new model is built with all this data.
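A minimal sketch of both selection criteria, assuming the detector exposes class probabilities (as scikit-learn models do via `predict_proba`); `extract_features` is the illustrative helper from Section 3:

```python
def uncertainty(model, app):
    # Complementary of the absolute difference of the two class probabilities:
    # close to 1 when the model is undecided, close to 0 when it is confident.
    p_goodware, p_malware = model.predict_proba([extract_features(app)])[0]
    return 1.0 - abs(p_goodware - p_malware)

def select_uncertain(model, incoming, previous_data, n=None, threshold=None):
    scored = [(uncertainty(model, a), a) for a in incoming]
    if n is not None:        # criterion (1): fixed number of samples
        chosen = [a for _, a in sorted(scored, key=lambda s: -s[0])[:n]]
    else:                    # criterion (2): minimum uncertainty value
        chosen = [a for u, a in scored if u >= threshold]
    return previous_data + chosen   # append to the previous retraining set
```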
### Contrastive Learning OOD
More advanced sampling mechanisms use Contrastive-Learning (CL) schemes that rely on an encoder-decoder architecture to identify drifting samples. The CL model is trained to generate similar embedded representations (or embeddings) for samples of the same class or malware family, whereas the embeddings for samples of different classes (malware vs. goodware) or malware families are dissimilar. Since the CL encoder identifies the characteristics in the training data that help to separate the samples that pertain to different classes, CL sampling methods measure the dissimilarity of new samples according to how their embedding differs from those of the training data Yang et al. (2021); Chen et al. (2023). Similarly to uncertainty sampling methods, a ranking is constructed based on the dissimilarity measure and samples are selected by: (1) selecting the \(n\) most dissimilar samples, or (2) setting a threshold over the dissimilarity measure as the minimum sample selection criterion. Afterwards, the selected samples are appended to the training samples of the previous retraining period.
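CADE and HICL depend on trained contrastive encoders, which the sketch below does not reproduce; as a simplified stand-in, it only illustrates the dissimilarity-ranking step, measuring the distance of new embeddings to the nearest training-class centroid:

```python
import numpy as np

def select_ood(encoder, train_X, train_y, new_X, n):
    # encoder() stands in for the trained CL model mapping feature vectors
    # to embeddings; this sketch does not construct it.
    Z = encoder(train_X)
    centroids = [Z[train_y == c].mean(axis=0) for c in np.unique(train_y)]
    Z_new = encoder(new_X)
    # Dissimilarity: distance to the nearest training-class centroid.
    dissim = np.array([min(np.linalg.norm(z - c) for c in centroids)
                       for z in Z_new])
    return np.argsort(-dissim)[:n]  # indices of the n most dissimilar samples
```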
### Problem Specific Sample Selection
As mentioned previously, in the context of Android malware detection, apps with some specific features may be more prevalent than others. Recurring malware that fades away and resurfaces is also a reality. To exemplify this, Figure 4 depicts the distribution of goodware and malware into known and unknown behaviors for every quarter between January of 2013 and December of 2019. In this context, an app behavior is represented by a particular set of API call frequencies extracted from its code. We assume that apps with similar behavior present equivalent API call frequencies in their code. The exact process for the computation of known and unknown behaviors is explained in more detail later in this section. Green slashed bars represent the proportion of samples in each period that contain similar behaviors to apps observed in previous periods (known), whereas grey dotted bars represent the proportion of samples whose behavior has not been observed previously (unknown). As can be seen in Figure 4, the appearance of unknown app behaviors from one period to another confirms the existence of data drift in the Android application ecosystem, which can cause model degradation Chen et al. (2023). It also shows that the incidence of drift is variable (for example, the differences in malware between 2015Q2 and 2015Q3). In this regard, goodware tends to present more novel patterns over time, whereas malware frequently exhibits more known behaviors that have been observed in preceding periods. Indeed, the fluctuations observed for the malware follow a common infection pattern. Each time a new form of infection emerges, the apps (samples) exploiting this method will be initially classified as unknown (see, for example, 2015Q1). Then, the infection mechanism becomes popular as new malware apps use it. This is shown, for example, by the increase of known groups in the periods subsequent to 2015Q1. This popularity keeps increasing until the infection pattern is detected and a new form or a variation of the original exploitation mechanism is developed.
The over-representation of malware with known behaviors during most periods can lead to biased detectors when retraining ML models, as algorithms are designed to optimize performance metrics and may focus solely on these majority groups Zhao et al. (2021). Hence, using all the data for training can hinder the ability of detectors to accurately distinguish minority (unknown or new) malware. With the aim of improving the effectiveness of the adaptation mechanism and producing more reliable malware detectors, we propose the use of an ad-hoc sample selection approach for Android malware detection. This technique ensures that the retraining data is diverse and informative Molina-Coronado et al. (2023). It involves filtering out uninformative or duplicated apps, controlling the size of the dataset, and reducing the labeling costs and training complexity of ML algorithms.
Particularly, in this work we propose a sample selection method using the continuous clustering process described in a previous work Molina-Coronado et al. (2023) and initially proposed in Portnoy (2001). For this algorithm, and
based on previous findings, we represent the apps as a vector of frequencies of their Android API calls. Then, sample selection is carried out in two phases: the calibration phase and the online phase.
The objective of the calibration phase is to find the different behavioral groups present in the training data, for both malware and goodware. To do so, the apps in the training set are chronologically ordered by their publication date and sequentially assigned to their closest cluster. This assignment is only performed if the sample lies within a predefined \(\epsilon\) radius from the cluster's representative; otherwise, a new cluster is created with the sample as the representative. We assume that samples within a group contain similar code patterns and thus, that each cluster represents a particular behavioral pattern. Note that cluster representatives are maintained throughout the process. The Euclidean distance is used to measure the similarity between every pair of samples. Once all the apps are clustered, we label the clusters as goodware or malware according to the class label of the representative app of that cluster. Within this calibration phase, we compute the average number of apps in all the behavioral clusters found (\(k\)). Then, we only keep the most recent \(k\) components (apps) from each cluster. In this way, we try to keep the training set both small (keeping only a few samples of a given behavior) and diverse (keeping samples of all the different behaviors detected).
During the online phase (concept drift handling), the algorithm assigns each new incoming sample to its closest cluster if it meets the admission condition (the sample is within the \(\epsilon\) radius of the representative). If the cluster already contains \(k\) samples, the oldest sample in the cluster is replaced by the new one. This process can be seen as a multi-window approach in which a sliding window of size \(k\) is maintained for each of the behavioral clusters. If a sample cannot be associated with an existing cluster, a new cluster is created with that sample as its representative. At the end of the clustering process, we compute the isolation level of clusters as the average Euclidean distance between the cluster representative and the representatives of other clusters. For labeling, we select the representatives of the \(l_{b}\) most isolated clusters, being \(l_{b}\) a labeling budget parameter. Finally, the retraining dataset is constructed by appending to the samples used on the previous retraining period, the \(k\) most recent samples of each labeled cluster. Note that the apps that are assigned to a cluster are automatically labeled with the class label of the cluster representative. This avoids labeling many apps.
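A minimal sketch of the clustering core shared by both phases, assuming apps are given as API-call frequency vectors; the isolation-based labeling-budget step is omitted and the names `eps` and `k` follow the text:

```python
import numpy as np
from collections import deque

class BehaviorClusters:
    """Incremental clustering used in both the calibration and online phases."""

    def __init__(self, eps, k):
        self.eps, self.k = eps, k
        self.reps = []      # fixed representatives, one per cluster
        self.members = []   # per-cluster sliding window of the k newest apps
        self.labels = []    # class label of each cluster's representative

    def add(self, x, label=None):
        # Assign x to the closest cluster if within the eps radius of its
        # representative (Euclidean distance); otherwise open a new cluster.
        if self.reps:
            dists = [np.linalg.norm(x - r) for r in self.reps]
            i = int(np.argmin(dists))
            if dists[i] <= self.eps:
                self.members[i].append(x)  # deque drops the oldest if full
                return self.labels[i]      # auto-label with the cluster class
        self.reps.append(x)
        self.members.append(deque([x], maxlen=self.k))
        self.labels.append(label)          # needs an expert label
        return label
```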
## 6 Experimental Framework
This section describes the experimental set-up and the methodology followed to evaluate the different adaptation mechanisms analysed in this work.
### Dataset
Figure 4: Diversity of the dataset throughout the evaluation period (2013-2019). The bars represent the percentage of apps captured in the indicated period that are very similar to apps that have already appeared in previous periods (“Known”), or apps that exhibit new behaviors (“Unknown”).

In our experiments, we use the dataset presented in Molina-Coronado et al. (2023). This dataset consists of eight years of malware and goodware sorted on a monthly basis, from January 2012 to December 2019. In the preprocessing step, class labels are assigned based on the number of VirusTotal detections (VTD) Kantchelian et al. (2015). Apps with a VTD value equal to 0 are tagged as goodware, and apps with a VTD value greater than or equal to 7 are tagged as malware. The remaining apps (those with a VTD value between 1 and 6) are discarded. This labelling methodology is common in the Android malware literature Zhu et al. (2020), Salem et al. (2019). The instructions to download the dataset are available in our GitLab repository1.
Footnote 1: [https://gitlab.com/serralba/concept_drift](https://gitlab.com/serralba/concept_drift)
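The labeling rule itself is a one-liner per app (a minimal sketch with the thresholds stated above):

```python
def vtd_label(vtd):
    # 0 detections -> goodware; 7 or more -> malware; 1-6 -> discarded.
    if vtd == 0:
        return "goodware"
    if vtd >= 7:
        return "malware"
    return None  # ambiguous sample, excluded from the dataset
```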
Once this is done, we split this dataset into two separate subsets: one for training and one for evaluation. The training dataset consists of 100 monthly samples of each class, goodware and malware, between January 2012 and December 2012. Note that malware detection on Android is a highly unbalanced problem where malware actually accounts for about 10% of the apps Pendlebury et al. (2019). However, the training dataset is compiled offline and, thus, can be constructed using an unrealistically balanced ratio between the two classes. In contrast, in order to mimic a real situation, the remaining 7 years of data (from January 2013 to December 2019), used for model evaluation purposes, will consist of 10 malware and 100 goodware samples per month over this period, which are obtained by randomly sampling apps from the original dataset.
### Batch Malware Detectors
For the purpose of this paper, we rely on three state-of-the-art malware detectors, Drebin, DroidDet and MaMaDroid, that, according to a recent comparison Molina-Coronado et al. (2023), are the best performing batch Android malware detectors published to date. These detectors were not originally conceived to cope with concept drift and they all rely on features extracted through static analysis of APK2 files to represent the apps, and on batch models for detection. In this section we briefly describe their detection mechanisms:
Footnote 2: Android Application Package, i.e., the file format used by Android to distribute applications.
**Drebin**Arp et al. (2014) uses a full set of features extracted from APKs, including hardware components, permissions, application components, intent filters, strings, and a restricted set of API calls. It uses a linear SVM model fed with this data to perform malware detection.
**DroidDet**Zhu et al. (2018) relies on data obtained exclusively from the app code to detect malware. More specifically, it uses a filtered set of requested and required permissions, intent filters and API calls. After the first extraction of all possible values, the relevance of the features is calculated to eliminate those that are not informative. The most relevant features are finally used for model generation using the RotationForest algorithm.
**MaMaDroid**Onwuzurike et al. (2019) constructs a Markov chain of the API calls found in the app code. The Markov chain represents the transition frequency between each API pair. Actually, the package to which an API call belongs is used as a higher level abstraction to reduce the number of final features. The RandomForest algorithm is used to identify malware with this information.
### Parameter Settings
The baseline (original) detectors have been trained using the default parameters reported in their respective works. For all configurations, the evaluation and (possible) retraining process is set to quarterly intervals. This choice is a compromise between obtaining an adequate visualization of the results and restricting the number of new models, since training the models has a significant experimental cost. In addition, for the change detection method, we set the \(\lambda\) threshold for the PH test to 0.02 based on preliminary experiments (see Appendix A.1), as a trade-off between detection performance and the number of retraining steps.
In relation to the methods proposed for selecting the data used for retraining, sliding windows of 100, 1000, and 2000 apps are considered in the experimentation. For the problem-specific sample selection strategy, based on preliminary tests (see Appendix A.2), we set the \(\epsilon\) radius (the maximum distance allowed to consider any sample as part of an existing cluster) to 0.01. The \(k\) value (the average number of apps in a cluster), has been calculated in the calibration phase, taking a value of 2. For CL methods (CADE Yang et al. (2021) and HICL Chen et al. (2023)), we use the original implementation and the parameters that reported the best results in their respective papers. Additionally, since uncertainty and CL methods require setting a criterion for selecting the samples to be labeled and appended to the retraining dataset, either by taking the \(n\) most uncertain samples or by setting a threshold over the uncertainty measure, we set a labeling budget similar to the average number of samples to be labeled with the problem-specific sample selection method.
### Evaluation Framework and Metrics
First, all models are trained offline (batch) using the apps in the training dataset (balanced and with data from Jan. 2012 to Dec. 2012). For evaluation purposes, we consider non-overlapping windows of three-month periods. Therefore, the evaluation dataset is divided into 28 time-ordered subsets, each one covering one quarter.
For the evaluation of the original version of the detectors (pure batch scenario without retraining), the model used is always the same, i.e., the one obtained in the offline phase. For those scenarios incorporating concept drift management approaches, model update procedures are subsequently carried out with a subset of recent apps. Note that we assume that, when a model is updated, the true labels of the samples used to train the new model are known. As the incoming data are chronologically sorted, we can evaluate the degree of concept drift, as well as the effectiveness of the measures implemented to address it. This approach is common in the concept drift literature Gama et al. (2014).
In two separate experiments we analyze and compare the effect of: (1) the policies to trigger the updates, and (2) the data used for retraining. In the first experiment, periodic vs. change detection mechanisms are compared. In this experiment, the dataset used for training the models grows in each retraining round since all incoming samples are incorporated to the dataset for retraining. In the second experiment, where the different data selection mechanisms are studied, the models are retrained at each trimester with the corresponding selection of data (windows of fixed size, uncertainty samples, OOD samples or cluster representatives). As a baseline for this second experiment, we also consider the model retrained periodically each trimester using all the data available.
Due to the large amount of data that is available for training, and in order to avoid imbalance between malware and goodware when retraining the models, in each retraining round goodware is downsampled to reach a balanced ratio between the classes for all the possible combinations and methods except those using CL or uncertainty sampling. Specifically, when the training dataset is constructed using the problem-specific sample selection method, once the clustering has been carried out and the goodware and malware samples are obtained, the goodware is downsampled to reach a balanced dataset. Note that this is only done for training, whereas for evaluation the original unbalanced data is used.
Finally, in all the experiments and for each model, we measure its performance as the average of the TPR and TNR, known as the \(A_{mean}\) value (see Table 1). The \(A_{mean}\) is a popular performance metric in the ML literature for unbalanced scenarios and, contrary to the F1 score, the \(A_{mean}\) considers and equally weights the accuracy of models on both positive (malware) and negative (goodware) samples.
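For reference, the metric of Table 1 computed from confusion-matrix counts (a minimal sketch):

```python
def a_mean(tp, fn, tn, fp):
    # Equally weights accuracy on malware (TPR) and on goodware (TNR),
    # which makes it robust to the 10:1 class imbalance of the test data.
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return (tpr + tnr) / 2
```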
## 7 Experimental Results
This section shows the results of the different retraining configurations tested for the state-of-the-art malware detection models: Drebin, DroidDet and MaMaDroid. Code implementations for all these mechanisms are available in our GitLab repository3.
Footnote 3: [https://gitlab.com/serralba/concept_drift](https://gitlab.com/serralba/concept_drift)
### Analysis of the Effect of the Retraining Frequency
The results when retraining the detectors at fixed periods and with change detection are shown in Figure 5. The lines in the figures represent the \(A_{mean}\) performance of the models over the evaluation period. In particular, the red lines show the performance of the detectors when a periodic retraining approach is applied. The blue lines represent the performance of detectors implementing the change detection mechanism based on the PH test. The vertical dotted blue lines represent the points at which the PH test has triggered a drift alarm and, thus, a retraining and model replacement operation has been performed. For comparison purposes, we also include the performance of the (original) batch model which is trained only once, at the beginning. This is represented by a dashed orange line.
As can be seen, the orange lines show a decreasing trend over time for all models, confirming the existence of concept drift. The benefits of using retraining as an adaptation mechanism to counteract the effect of concept drift in batch malware detectors are readily apparent from the figures. For all adaptive solutions, the performance of the models is kept stable over time. In fact, the retraining variants of DroidDet (see Figure 5(b)) show an overall performance improvement with respect to the static version of 15%, while for Drebin and MaMaDroid this performance increases by 23% and 16%, respectively (see Figures 5(a) and 5(c)).
Overall, when comparing the two retraining configurations, the figures indicate that applying a change detection mechanism has a minimal cost in performance (\(A_{mean}\)), with an average reduction of 2.3% for all detectors. Conversely, the change detection method requires a much smaller number of retraining operations compared to retraining at fixed
periods. In fact, as can be seen, the change detector successfully triggers a drift alarm when the performance of the detectors decreases. For DroidDet, eight rounds of retraining and model replacement are required, as shown by the blue dotted lines in Figure 5(b), which contrasts with the 28 operations performed with fixed-period retraining. With equivalent detection performance indicators, only seven drift alarms are triggered in MaMaDroid (see Figure 5(c)) and Drebin (see Figure 5(a)).
### Analysis of the Effect of the Retraining Data
Table 2 shows the average \(A_{mean}\) performance of the detectors using different data management policies when periodically retraining the models. It is notable that the use of a data management policy achieves performance values very similar to, or even better than, the baseline configuration (the one using all available data) in most cases. Among the tested configurations, using the problem-specific sample selection approach for retraining with a labeling budget of 70% of the incoming samples seems to be the best approach, followed by the HICL and uncertainty methods with a similar labeling budget. In general, except for CADE, the results do not show significant differences among active learning methods, and even using smaller datasets with 45% of labeling effort, the performance indicators remain very similar to, or even outperform, the baseline that uses all the data.
Figure 5: Evolution of the performance of malware detectors for the period 2013-2019, for different policies to trigger retraining. Dotted, blue vertical lines indicate a model change triggered by the concept drift detection mechanism.

Figure 6 shows the results for each individual detector over the entire evaluation period. For clarity of the results, we only selected the best sliding window policy, contrastive learning OOD method, and problem-specific sample selection configuration. The red lines represent the baseline, which uses all available data for retraining; the green lines represent the performance when considering a sliding window of size 1000 for retraining; the orange lines represent the performance using the HICL method with a labeling budget of 70%; and the blue lines show the performance of the problem-specific sample selection mechanism with a 70% labeling budget. Using active learning mechanisms to select samples for retraining results in improved performance values with respect to the fixed-size configuration for all methods except DroidDet, with the problem-specific method yielding slightly better \(A_{mean}\) values than the HICL method in most evaluation rounds.
Beyond detection performance, the effort required to label the samples used in each round of retraining is also an important factor in measuring efficiency. Considering that a total of 330 apps arrive in each retraining round (300 goodware and 30 malware), the labeling requirements for the strategies "Last 1000" and "Last 2000" are similar to those of the baseline method, as they involve labeling all new arriving samples before retraining. In contrast, the "Last 100" strategy requires labeling only about 30% of the incoming samples in each evaluation round. Active learning methods require labeling only 45% of the incoming samples to obtain performance values equivalent to the baseline and to the last 1000 and 2000 sliding window policies. With a lower labeling budget, the problem-specific and HICL methods obtain very similar performance on average. These results demonstrate how detection models benefit from the use of incremental clustering to label samples and reduce the size of the training data. As a potential drawback, note that this process can lead to labelling errors. In this regard, our experiments showed that only 0.05% of the samples are mislabeled by the method, a rate insufficient to negatively impact the detection ability of the ML algorithms.
For the interested reader, we also include the analysis of the combination of change detection and different sample selection methods, as well as the results obtained with different parameter configurations of the proposed methods in Appendix B.
## 8 Conclusions
In this paper, we have shown that retraining is an effective mechanism for dealing with concept drift in batch Android malware detectors, and that it is straightforward to incorporate into existing detectors without modifying their design. Specifically, our experiments show that this update mechanism helps maintain high detection rates, with an average performance improvement of 20% compared to the original versions of the detectors. Regarding the two retraining alternatives tested, there are no significant performance differences between periodic retraining and the PH-based change detection approach. However, using a supervision mechanism based on the PH test was shown to decrease the number of retraining rounds by 75% on average, dramatically reducing the computational effort required to maintain model performance over time.
Additionally, we have demonstrated that the sample selection strategy used for retraining also influences the success of detectors. On one hand, employing a sample selection policy instead of using all available data for retraining reduces the cost of model generation since the complexity of machine learning algorithms is highly dependent on the size
\begin{table}
\begin{tabular}{l|c|c|c|c|c} & **Drebin** & **DroidDet** & **MaMaDroid** & **Avg. Perf.** & **\% Labels Req.** \\ \hline
**All data** & 0.88 & 0.79 & 0.74 & 0.80 & 100\% \\ \hline
**Last 1000** & 0.85 & 0.80 & 0.72 & 0.79 & 100\% \\ \hline
**Last 2000** & 0.87 & 0.79 & 0.72 & 0.79 & 100\% \\ \hline \hline
**Last 100** & 0.75 & 0.69 & 0.68 & 0.70 & 30\% \\ \hline \hline
**Problem-Specific** & 0.88 & 0.78 & 0.82 & 0.82 & 70\% \\ \hline
**CADE** Yang et al. (2021) & 0.83 & 0.76 & 0.74 & 0.77 & 70\% \\ \hline
**HICL** Chen et al. (2023) & 0.88 & 0.78 & 0.75 & 0.80 & 70\% \\ \hline
**Uncertainty** & 0.86 & 0.79 & 0.75 & 0.80 & 70\% \\ \hline \hline
**Problem-Specific** & 0.86 & 0.76 & 0.78 & 0.80 & 45\% \\ \hline
**CADE** Yang et al. (2021) & 0.77 & 0.74 & 0.72 & 0.74 & 45\% \\ \hline
**HICL** Chen et al. (2023) & 0.87 & 0.77 & 0.75 & 0.80 & 45\% \\ \hline
**Uncertainty** & 0.83 & 0.80 & 0.74 & 0.79 & 45\% \\ \hline \hline
**Problem-Specific** & 0.84 & 0.73 & 0.76 & 0.78 & 15\% \\ \hline
**CADE** Yang et al. (2021) & 0.69 & 0.75 & 0.70 & 0.71 & 15\% \\ \hline
**HICL** Chen et al. (2023) & 0.86 & 0.75 & 0.74 & 0.78 & 15\% \\ \hline
**Uncertainty** & 0.83 & 0.72 & 0.73 & 0.76 & 15\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average \(A_{mean}\) performance throughout the evaluation period for different sample selection policies using fixed period retraining. The right column refers to the percentage of samples in the buffer that need to be labelled for retraining.
of the training data Hastie et al. (2009). Sliding window policies, such as selecting the last 1000 and 2000 samples, helped reduce retraining complexity while maintaining labeling effort and performance values similar to the baseline. The benefits of using active learning techniques, such as uncertainty sampling, HICL, or the problem-specific sample selection method, are undeniable. These techniques result in better detection performance for models and require reduced labeling effort for retraining. Among the active learning methods, the proposed problem-specific strategy exhibited minimal performance degradation under tighter labeling budgets compared to the other alternatives.
In general, the choice of a specific sample selection strategy and retraining policy will depend on the requirements of the target scenario for the detector. Change detection is a suitable method in most scenarios, especially in cases where the cost of generating models is high. One advantage over periodic retraining is that it requires fewer retraining operations to keep models up-to-date, consequently reducing the need for labeling new samples. Labeling new samples is often costly, and in many cases, it is performed manually by human experts. In this context, the application of a sample selection mechanism is also desirable. Larger sliding window sizes require labeling all incoming data, which may not be feasible, especially in online scenarios. Shorter windows, on the other hand, lead to rapid forgetting and may cause model overfitting. Active learning approaches such as contrastive learning OOD methods (CADE and HICL), problem-specific sample selection, or uncertainty sampling are particularly useful because they reduce the number of samples that need to be labeled without compromising detection performance. Additionally, they do not include a forgetting mechanism, which helps mitigate the impact of reappearing application behaviors. However, it is worth noting that CADE and HICL involve higher costs since they require generating a new CL model at each retraining step for sample selection, whereas the cost of the problem-specific sample selection method can be considered negligible because clusters are updated incrementally, involving a one-pass process over the data.

Figure 6: Evolution of the performance of malware detectors with periodic retraining for the period 2013-2019. Red lines represent the performance when using all the data available, i.e., the baseline. Green, blue, and orange lines represent, respectively, the performance of models retrained with the 1000 most recent samples, with the problem-specific strategy, and with the HICL selection method, the latter two with labeling budgets of 70%.
As for future work, we propose exploring more complex sliding window mechanisms, such as adapting the size of the sliding window as a function of the distribution dynamics. This mechanism could be useful for dealing with applications that manifest themselves in different ways, e.g., periodically or recurrently. Similarly, more advanced sample selection policies can be explored. These could include, for example, selective forgetting since, in the current configurations, the set of samples grows continuously.
## CRediT authorship contribution statement
**Borja Molina-Coronado:** Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing - Original Draft, Writing - Review & Editing **Usue Mori:** Conceptualization, Methodology, Writing - Review & Editing **Alexander Mendiburu:** Conceptualization, Methodology, Writing - Review & Editing **Jose Miguel-Alonso:** Conceptualization, Methodology, Writing - Review & Editing
## Acknowledgments
This work has received support from the following programs: PID2019-104966GB-I00AEI (Spanish Ministry of Science and Innovation), IT-1504-22 (Basque Government), KK-2021/00095 and KK-2021/00065 (Elkartek projects SIGZE and ALUSMART supported by the Basque Government). Borja Molina-Coronado holds a predoctoral grant (ref. PRE_2021_2_0230) from the Basque Government.
|
2309.16927 | Ergodicity in some families of Nevanlinna Functions | We study Nevanlinna functions f that are transcendental meromorphic functions
having N asymptotic values and no critical values. In [KK] it was proved that
if the orbits of all the asymptotic values have accumulation sets that are
compact and on which f is a repeller, then f acts ergodically on its Julia set.
In this paper, we prove that if some, but not all of the asymptotic values have
this property, while the others are prepoles, the same holds true. This is the
first paper to consider this mixed case. | Tao Chen, Yunping Jiang, Linda Keen | 2023-09-29T01:55:19Z | http://arxiv.org/abs/2309.16927v2 | # Ergodicity in Some Families of Nevanlinna Functions
###### Abstract.
We study _Nevanlinna functions_\(f\) that are transcendental meromorphic functions having \(N\) asymptotic values and no critical values. In [KK] it was proved that if the orbits of all the asymptotic values have accumulation sets that are compact and on which \(f\) is a repeller, then \(f\) acts ergodically on its Julia set. In this paper, we prove that if some, but not all of the asymptotic values have this property, while the others are prepoles, the same holds true. This is the first paper to consider this mixed case.
2010 Mathematics Subject Classification: Primary: 37F10, 30F05; Secondary: 30D05, 37A30.

This research is partially supported by gifts from the Simons Foundation (#523341 and #942077) and PSC-CUNY awards. It was also supported by the National Science Foundation under Grant No. 1440140, while the third author was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the spring semester 2022.
We begin with the particularly simple example of meromorphic functions with two asymptotic values and no critical values. There are partial results on the ergodicity question for this family: Let \(\lambda,\mu\in\mathbb{C}\), and
\[f=\frac{\lambda e^{z}-\mu e^{-z}}{e^{z}-e^{-z}},\]
where \(\lambda,\mu\) are \(f\)'s two asymptotic values. Keen and Kotus [KK] have shown that if the accumulation sets of both \(\lambda\) and \(\mu\) are compact, and \(f\) is a repeller on this set, then the Julia set is \(\widehat{\mathbb{C}}\) and \(f\) is ergodic. By way of contrast, Skorulski [S1, S2] has shown that if there exist natural numbers \(p\) and \(q\) such that \(f^{p}(\lambda)=f^{q}(\mu)=\infty\), then the Julia set is \(\widehat{\mathbb{C}}\) and \(f\) is non-ergodic. (See also [CJK].)
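To see that \(\lambda\) and \(\mu\) are indeed the asymptotic values of this \(f\), note that \(e^{z}\) dominates both numerator and denominator as \(\operatorname{Re}z\to+\infty\), while \(e^{-z}\) dominates as \(\operatorname{Re}z\to-\infty\), so that

\[\lim_{\operatorname{Re}z\to+\infty}f(z)=\lambda\quad\text{and}\quad\lim_{\operatorname{Re}z\to-\infty}f(z)=\mu.\]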
Weiyuan Qiu asked one of the authors what happens in the remaining case where one asymptotic value lands on a repelling cycle, and the other is a prepole. In answering his question, we were able to prove a more general result for the full family of functions with finitely many asymptotic values and no critical values, so-called "Nevanlinna functions". Our main theorem is
**Main Theorem.** _If \(f\) is a Nevanlinna function with \(N\) asymptotic values of which \(0<K<N\) are prepoles, and if the \(\omega\)-limit sets of the remaining \(N-K\) are compact repellers, then the Julia set is \(\widehat{\mathbb{C}}\) and \(f\) is ergodic._
**Remark 1.1**.: _Our proof of this theorem implies that for these Nevanlinna functions, the measure of the radial Julia set is positive._
The case \(K=0\) was analyzed in [KK]. For the case \(K=N\), we have the following conjecture which we are still working on and will report on in a future paper.
**Conjecture 1.** _When \(K=N\), the action of \(f\) on its Julia set \(\widehat{\mathbb{C}}\) is not ergodic._
The proof of our theorem depends on generalizations of some lemmas in [KK]. After an introductory section in which we give the basic definitions and properties of Nevanlinna functions, we state and prove these lemmas and apply them to the proof of the theorem.
**Acknowledgement:** We would like to thank Professor Janina Kotus for her helpful comments and suggestions and for pointing out Skorulski's papers [S1, S2] to us.
## 2. Preliminaries
In this section, we recall some of the basic theory of transcendental meromorphic functions which we will need. Such a function, \(f:\mathbb{C}\to\widehat{\mathbb{C}}\) is holomorphic except at the set of poles, \(\{f^{-1}(\infty)\}\), and is a local homeomorphism everywhere except at the set \(S_{f}\) of singular points. In this paper, we will be interested in those functions for which \(\#S_{f}\) is finite and will assume this throughout. For such functions, the singular values are of two types:
Let \(v\) be a singular value and let \(V\) be a neighborhood of \(v\). Then
* If, for some component \(U\) of \(f^{-1}(V)\), there is a \(u\in U\) such that \(f^{\prime}(u)=0\), then \(u\) is a _critical point_ and \(v=f(u)\in V\) is the corresponding _critical value_, or
* If, for some component \(U\) of \(f^{-1}(V)\), \(f:U\to V\setminus\{v\}\) is a universal covering map then \(v\) is a _logarithmic asymptotic value_. We will drop the descriptor logarithmic and call such values asymptotic values.
At regular, or non-singular points, meromorphic functions are local homeomorphisms. The Koebe distortion theorems give estimates on the behavior of these functions at regular points. Many proofs exist in the standard literature on conformal mapping. (See e.g. [A], Theorem 5.3.) Since we will use them repeatedly below we state them here without proof.
**Theorem 1** (Koebe Distortion Theorem).: _Let \(f:D(z_{0},r)\to\mathbb{C}\) be a univalent function, then for any \(\eta<1\),_
1. \(|f^{\prime}(z_{0})|\dfrac{\eta r}{(1+\eta)^{2}}\leq|f(z)-f(z_{0})|\leq|f^{ \prime}(z_{0})|\dfrac{\eta r}{(1-\eta)^{2}}\)_,_ \(z\in D(z_{0},\eta r)\)_,_
2. _If_ \(T(\eta)=\dfrac{(1+\eta)^{4}}{(1-\eta)^{4}}\)_,_ \(\dfrac{|f^{\prime}(z)|}{|f^{\prime}(w)|}\leq T(\eta)\)_, for any_ \(z,w\in D(z_{0},\eta r)\)_._
**Theorem 2** (Koebe \(1/4\) Theorem).: _Let \(f:D(z_{0},r)\to\mathbb{C}\) be a univalent function, then_
\[D\Big(f(z_{0}),\frac{r|f^{\prime}(z_{0})|}{4}\Big)\subset f(D(z_{0},r)).\]
Meromorphic functions with finitely many critical points and finitely many asymptotic values can be characterized by their Schwarzian derivatives.
**Definition 1**.: _If \(f(z)\) is a meromorphic function, its Schwarzian derivative is_
\[S(f)=(\dfrac{f^{\prime\prime}}{f^{\prime}})^{\prime}-\dfrac{1}{2}(\dfrac{f^{ \prime\prime}}{f^{\prime}})^{2}.\]
The Schwarzian differential operator satisfies the condition
\[S(f\circ g)=\big(S(f)\circ g\big)\,(g^{\prime})^{2}+S(g)\]
from which it is easy to deduce that if \(f\) is a Mobius transformation, \(S(f)=0\), so that \(f\circ g\) and \(g\) have the same Schwarzian derivative.
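For instance, for \(f(z)=\frac{az+b}{cz+d}\) with \(ad-bc\neq 0\), one computes \(f^{\prime\prime}/f^{\prime}=-2c/(cz+d)\), so that

\[S(f)=\Big(\frac{-2c}{cz+d}\Big)^{\prime}-\frac{1}{2}\Big(\frac{-2c}{cz+d}\Big)^{2}=\frac{2c^{2}}{(cz+d)^{2}}-\frac{2c^{2}}{(cz+d)^{2}}=0.\]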
In [N], Chap. XI, §3, Nevanlinna, using a technique he calls rational approximation, shows how, given a finite set of points in the plane and finite or infinite branching data for these points, to construct a meromorphic function whose topological covering properties are determined by this data. The function is defined up to Mobius transformations. He proves
**Theorem 3**.: _The Schwarzian derivative of a meromorphic function with finitely many critical points and finitely many asymptotic values is a rational function. If there are no critical points, it is a polynomial. Conversely, if a meromorphic function has a rational Schwarzian it has finitely many critical points and finitely many asymptotic values._
In the literature, meromorphic functions with polynomial Schwarzian are called _Nevanlinna functions_. These are the focus of this paper.
In order to prove our results, we will need estimates on the asymptotic behavior of the poles and residues of Nevanlinna functions. These are well known and there is an extensive literature; see e.g. [H], Chap. 5, [L], Chap. 4, or [C] for details. Here, we state those properties of the functions that we will need and sketch their derivation. We begin by recalling the connection between Nevanlinna functions and the second order differential equation,
\[w^{\prime\prime}+P(z)w=0, \tag{1}\]
where \(P(z)\) is a polynomial of degree \(m\). The solutions of (1) are holomorphic and form a two dimensional linear space. It is easy to verify that if \(w_{1},w_{2}\) are linearly independent solutions of (1) then \(f=w_{1}/w_{2}\) is meromorphic, has \(N=m+2\) asymptotic values and satisfies \(S(f)=2P\). The following is a summary of the properties of solutions of (1) and \(S(f)=2P\).
Using the Liouville transformation
\[W(Z)=P(z)^{1/4}w(z),\,Z=\int^{z}P(s)^{1/2}ds,\]
we obtain a new equation in \(W\) of the form
\[W^{\prime\prime}(Z)+(1-F(Z))W(Z)=0,\ \ \ \text{where}\ \ F(Z)=\frac{1}{4}\frac{P^{\prime\prime}(z)}{P(z)^{2}}-\frac{5}{16}\frac{P^{\prime}(z)^{2}}{P(z)^{3}}. \tag{2}\]
Let \(\mathcal{S}\) be the sector of the \(Z\) plane defined by \(|Z|>R\) and \(|\arg Z|<\pi-\delta\) for some large \(R\) and some \(\delta\) in \((0,\pi)\); there, \(F(Z)=O(1/Z^{2})\) and there are linearly independent solutions with the asymptotic expressions

\[W_{1}(Z)=e^{iZ}(1+O(1/|Z|))\ \ \text{and}\ \ W_{2}(Z)=e^{-iZ}(1+O(1/|Z|)). \tag{3}\]
From these we obtain two linearly independent "principal solutions"
\[w_{i}(z)=P(z)^{-1/4}W_{i}(Z),\,\,i=1,2,\]
of the original second order equation, equation (1), defined in a sector \(S\) of the \(z\) plane satisfying,
\[S=\{z\,|\,|z|>R,\ |\arg z-\theta_{0}|<\frac{2\pi}{N}-\delta^{\prime}\}\]
where \(R\) is a large constant, \(\delta^{\prime}\) is a small constant depending on \(\delta\), \(a\) is the leading coefficient of \(P(z)\) and \(\theta_{0}\) is a solution of \(\arg a+(2\pi/N)\theta_{0}=0\mod 2\pi\). The rays \(z=te^{i\theta_{0}},t>0\) are called _Julia rays_. Note that there are \(N\) Julia rays, equally spaced in the plane. Each sector contains the Julia ray \(te^{i\theta_{0}}\) and is contained between the rays \(te^{i(\theta_{0}+2\pi/N)}\) and \(te^{i(\theta_{0}-2\pi/N)}\), \(t>0\). See figure 1.
The following lemma can be proved using standard techniques (see e.g. [L], Lemma 4.3.6). We omit the proof.
**Lemma 4**.: _The function \(Z=Z(z)\) satisfies_
\[Z(z)=\frac{2a^{\frac{1}{2}}}{N}z^{\frac{N}{2}}(1+o(1))\ \ \text{as}\ z\to\infty\ \text{in}\ S\]
_and, for each Julia ray \(te^{i\theta_{k}}\), \(k=1,\ldots,N\), \(Z\) is univalent on a sector of the \(z\)-plane, \(S_{k}=\{z\,|\,|z|>R_{1},\,|\arg z-\theta_{k}|<\frac{2\pi}{N}-\delta^{\prime \prime}\}\) where \(R_{1}>R\) and \(\delta^{\prime\prime}>\delta^{\prime}\). Moreover, \(Z\) maps \(S_{k}\) onto a region containing the sector \(\mathcal{T}=\{Z\,|\,|Z|>R_{2},|\arg Z|<\pi-\sigma\}\) contained in \(\mathcal{S}\), where \(R_{2}\) is large and \(\sigma>N\delta/2\)._
The asymptotic expressions in equation (3) show that each of the rays \(Z=\pm it+O(1/t),\ t>R\), is an asymptotic path for the respective (distinct) asymptotic value of \(W_{i}\). Their images under the inverse branches of \(Z\) are thus asymptotic paths in the asymptotic tracts for the principal solutions \(w_{i}\). Therefore the function \(f=w_{1}/w_{2}\) has \(N\) asymptotic values and, moreover, the asymptotic values in adjacent tracts are distinct.
Equation (3) also shows that the \(w_{i}\) have no zeros in the sectors where they are defined but that, for any \(A,B\in\mathbb{C}^{*}\), the equation \(Aw_{1}-Bw_{2}=0\) has infinitely many zeros. We next show that these zeroes accumulate along the Julia rays and are thus contained in a subsector of \(S\) of width \(2\delta^{\prime}\) containing the Julia ray. This will imply that the same is true for the poles of
\[f=\frac{Aw_{1}-Bw_{2}}{Cw_{1}-Dw_{2}},A,B,C,D\in\mathbb{C}^{*}\]
and the zeros of \(f(z)-z_{0}\) for any \(z_{0}\in\mathbb{C}\).
**Proposition 5**.: _If \(A,B\) are non-zero constants and \(w=Aw_{1}-Bw_{2}\) then \(w=0\) has infinitely many solutions \(s_{j}\). Label them so that \(\cdots\leq|s_{j}|\leq|s_{j+1}|\leq\cdots\). Then, along each Julia ray \(te^{i\theta_{k}}\), \(k=1,\ldots,N\), \(|s_{j}|=\mathrm{O}(|j|^{2/N})\) and \(\lim_{j\to\infty}|\arg s_{j}-\theta_{k}|=0\)._
Proof.: Set \(G(z)=\frac{1}{2i}\log\frac{W_{1}}{W_{2}}\). By lemma 4, if \(z\in S\) is sufficiently large, there is a constant \(c_{0}\) such that
\[G(z)=Z(z)+\mathrm{o}(1)\sim\frac{2c_{0}^{\frac{1}{2}}}{N}z^{\frac{N}{2}}(1+ \mathrm{o}(1)).\]
Furthermore, the zeroes \(s_{j}\) of \(w\), \(j\in\mathbb{Z}\), satisfy
\[2iG(s_{j})=\log(\frac{B}{A})+2j\pi i.\]
If \(z\in S\), so that \(|\arg Z|<\pi-\sigma\), there is a constant \(c_{1}\) such that for each zero \(s_{j}\) of \(w\), \(|s_{j}|\sim c_{1}|j\pi|^{2/N}\) and \(\arg G(s_{j})\sim o(1)\). Hence as \(j\to\infty\), the real part dominates so that \(\arg s_{j}\sim\theta_{0}\); that is, there is a constant \(c_{2}\) such that \(|s_{j}|\sim c_{2}|j|^{2/N}\) and, as \(j\to\infty\), \(|\arg s_{j}-\theta_{k}|<\mu\) for some small \(\mu\); moreover, \(\mu\to 0\) as \(R\to\infty\).

Figure 1. Principal solutions in sectors of the \(z\)-plane
Write the function \(f\) as:
\[f(z)=\frac{aw_{1}+bw_{2}}{cw_{1}+dw_{2}}=\frac{aW_{1}+bW_{2}}{cW_{1}+dW_{2}}= \frac{ag(Z)+b}{cg(Z)+d} \tag{4}\]
where \(ad-bc=1\) and \(g(Z)=W_{1}/W_{2}\sim e^{2iZ}\) so that \(g^{\prime}(Z)\sim 2ig(Z)\).
**Proposition 6**.: _Let \(f\) be as in equation (4). Denote the poles of \(f\) and their respective residues by \(s_{j}\) and \(r_{j}\), and assume the poles are labelled so that \(\cdots\leq|s_{j}|\leq|s_{j+1}|\leq\cdots\). Then_
\[r_{j}=\frac{1}{2i}\big{(}\frac{a}{c}-\frac{b}{d}\big{)}P(s_{j})^{-\frac{1}{2} }\sim c_{2}\cdot s_{j}^{-\frac{N-2}{2}},\]
_for some constant \(c_{2}\)._
The relation between the residues and the poles is thus
\[|r_{j}|\sim c_{2}|s_{j}|^{-\frac{N-2}{2}}\sim c_{3}|j|^{-\frac{N-2}{N}}. \tag{5}\]
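The second equivalence is simply the substitution of the estimate \(|s_{j}|\sim c_{1}|j|^{2/N}\) from Proposition 5:

\[|s_{j}|^{-\frac{N-2}{2}}\sim\big(c_{1}|j|^{\frac{2}{N}}\big)^{-\frac{N-2}{2}}=c_{1}^{-\frac{N-2}{2}}\,|j|^{-\frac{N-2}{N}}.\]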
**Proposition 7**.: _As above, let \(f\) be a solution of \(S(f)=2P\) where \(P\) is a polynomial of degree \(N-2\). For any \(z_{0}\in\mathbb{C}\), denote its preimages by \(p_{j}\) and label them so that \(\cdots\leq|p_{j}|\leq|p_{j+1}|\leq\cdots\). Then there exists a constant \(c>0\) such that \(|f^{\prime}(p_{j})|\sim c|j|^{(N-2)/N}\)._
Proof.: Let
\[g(z)=\frac{1}{f(z)-z_{0}};\]
then \(S(g)=S(f)=2P\). If \(p_{j}\) are solutions of \(f(z)=z_{0}\), they are poles of \(g(z)\) so that proposition 6 implies that \(|\mathrm{res}(g,p_{j})|\sim c_{3}|j|^{-(N-2)/N}\). Furthermore, since \(g(z)=1/(f(z)-z_{0})\), a simple computation shows that
\[|f^{\prime}(p_{j})|=\frac{1}{|\mathrm{res}(g,p_{j})|}\sim c|j|^{\frac{N-2}{N}}\]
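Explicitly, since \(f\) has no critical points, \(p_{j}\) is a simple zero of \(f(z)-z_{0}\), so near \(p_{j}\),

\[g(z)=\frac{1}{f^{\prime}(p_{j})(z-p_{j})}\big(1+\mathrm{O}(z-p_{j})\big)\quad\text{and hence}\quad\operatorname{res}(g,p_{j})=\frac{1}{f^{\prime}(p_{j})}.\]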
The zeros of \(f\) are the poles of \(1/f\) and vice versa. Both are determined by the zeros of the function \(w\) in the above proposition. Since for any \(z_{0}\in\mathbb{C}\), \(f\) and \(f-z_{0}\) have the same Schwarzian, it follows from the above propositions that the zeros of \(f-z_{0}\) have the same asymptotic behavior as those of \(f\). In particular, they grow at the same rate.
## 3. The Main Theorem
Let \(\mathcal{F}_{N}\) be the set of Nevanlinna functions with \(N\) asymptotic values. For \(f\in\mathcal{F}_{N}\) and \(i=1,\ldots,N\) denote the asymptotic values by \(\lambda_{i}\) and the corresponding asymptotic tracts by \(T_{i}\). Assume there is an integer \(K\), \(1\leq K<N\), and integers \(p_{i}\geq 0\), \(i=1,\ldots,K\), such that
\[f^{p_{i}}(\lambda_{i})=\infty,\ i=1,\ldots,K.\]
If \(\lambda_{i}=\infty\), \(p_{i}=0\). This can happen for at most \(N/2\) asymptotic values and the asymptotic tracts of these infinite asymptotic values must be separated by the asymptotic tract of a finite asymptotic value. Also assume that for each \(i=K+1,\cdots,N\), the accumulation set, \(\omega(\lambda_{i})\), of the orbit of \(\lambda_{i}\), is a compact repeller; that is, there exists a \(\kappa>1\) such that for each \(z\in\omega(\lambda_{i})\), there exists an \(n=n(z)\), such that \(|(f^{n})^{\prime}(z)|>\kappa\). Note that this implies that these asymptotic values are finite.
Define
\[I=I(f)=\{z\in\mathbb{C}\ |\ f^{n}(z)\to\infty\},\]
and
\[L=L(f)=\{z\in\mathbb{C}\ |\ \omega(z)=\cup_{i=K+1}^{N}\omega(\lambda_{i})\}.\]
The proof of the main theorem depends on the following theorems:
**Theorem 8**.: _The set \(I\) is of measure zero._
**Theorem 9**.: _The set \(L\) is of measure zero._
Versions of these theorems are proved in [KK] under the assumption that all of the asymptotic values accumulate on a compact repeller.
### The measure of the set \(I\)
For each \(1\leq i\leq K\), denote the orbit of the prepole asymptotic value \(\lambda_{i}\) by
\[Orb(\lambda_{i})=\{\lambda_{i},f(\lambda_{i}),\cdots,f^{p_{i}-1}(\lambda_{i} ),\infty\}.\]
If \(\lambda_{i}=\infty\) for some \(i\), \(Orb(\lambda_{i})=\{\infty\}\).
Let \(S=\{1,2,\cdots,K\}\). Since there are \(2^{K}-1\) distinct non-empty subsets of \(S\), label them \(S_{l}\), \(l=1,\ldots,2^{K}-1\), and denote the collection by \(\Sigma\). For any \(S_{l}\), define
\[Orb_{l}=\cup_{i\in S_{l}}Orb(\lambda_{i}).\]
For \(S_{l}\in\Sigma\), where \(l=1,\ldots,2^{K}-1\), define
\[I_{l}=I_{l}(f)=\{z\in\mathbb{C}\ |\ \omega(z)=Orb_{l}\}.\]
**Theorem 10**.: _Each of the sets \(I_{l}\) is of measure zero._
The proof of this theorem depends on the next two lemmas.
Fix \(R\gg 0\), and let \(\mathcal{A}_{R}=\{z\in\mathbb{C}\,:\,|z|>R\}\). For each \(1\leq i\leq K\), denote the points of \(f^{-1}(\lambda_{i})\) by \(b_{ij}\), \(j\in\mathbb{Z}\).
Because \(\lambda_{i}\) is a prepole of order \(p_{i}\), one component of \(f^{-p_{i}}(\mathcal{A}_{R})\) is the topological disk \(D_{i}\) punctured at \(\lambda_{i}\). Therefore the set of components of \(f^{-1}(D_{i})\) consists of the asymptotic tract \(T_{i}\) of \(\lambda_{i}\) and the topological disks \(V_{ij}\) punctured at \(b_{ij}\). For each \(S_{l}\in\Sigma\) and \(z\in\cup_{i\in S_{l}}(\cup_{j}V_{ij}\cup T_{i})\), define the map \(\sigma_{l}(z)=f^{p_{i}+1}(z)\).
**Lemma 11**.: _If \(z\in\cup_{i=1}^{K}T_{i}\), then_
\[|\sigma_{l}^{\prime}(z)|>\frac{|\log|\sigma_{l}(z)|-\log R|}{4\pi}\cdot\frac{| \sigma_{l}(z)|}{|z|}\]
Proof.: Since \(f^{p_{i}}:D_{i}\to A_{R}\) is conformal and \(f:T_{i}\to D_{i}\) is a universal covering, it follows that \(\sigma_{l}:T_{i}\to A_{R}\) is also a universal covering. The rest of the proof follows along the lines of the corresponding proof in [Lyu].
Consider \(H_{R}=\log\mathcal{A}_{R}\), the right half plane with real part greater than \(\log R\), and let \(\mathcal{U}_{l}=\log(\cup_{i\in S_{l}}T_{i})\). Then \(\mathcal{U}_{l}\subset H_{R}\) and consists of infinitely many disjoint simply connected components \(U_{im}\), \(i\in S_{l},m\in\mathbb{Z}\); moreover, there is an \(\epsilon_{im}>0\), depending on \(R\), such that each \(U_{im}\) is fully contained inside a strip of height \(2\pi-\epsilon_{im}\). Because there are at most \(K\) sets \(U_{im}\), the sum of their heights is less than \((2\pi K/N-\epsilon_{R})\) where \(\epsilon_{R}=\sum\epsilon_{im}\) depends on \(R\) and \(S_{l}\).
For each \(U_{im}\) there is a conformal map \(F_{im}:U_{im}\to H_{R}\) such that \(\exp\circ F_{im}=\sigma_{l}\circ\exp\) on \(U_{im}\); that is, \(F_{im}\) is a lift of \(\sigma_{l}\) under the exponential, and the corresponding diagram commutes.
For each point \(z_{0}\in\cup T_{i}\), denote the lifts of \(z_{0}\) and \(\sigma_{l}(z_{0})\) by \(w_{0}\in U_{im}\) and \(w_{1}\in H_{R}\) respectively. Note that \(\Re w_{1}=\log|\sigma_{l}(z)|\). Consider \(D=D(w_{1},\Re w_{1}-\log R)\) and its preimage under \(F_{im}\). By the \(1/4\) Koebe theorem, its preimage contains a disk of radius
\[\frac{\Re w_{1}-\log R}{4|F_{im}^{\prime}(w)|}.\]
As the width of each strip is less than \(2\pi\),
\[|F_{im}^{\prime}(w)|\geq\frac{\Re w_{1}-\log R}{4\pi}.\]
The lemma now follows from the chain rule.
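For completeness, the chain-rule step can be spelled out as follows: on \(T_{i}\) we have \(\sigma_{l}=\exp\circ F_{im}\circ\log\), so that, with \(w_{0}=\log z_{0}\) and \(\Re w_{1}=\log|\sigma_{l}(z_{0})|\),
\[\sigma_{l}^{\prime}(z_{0})=\frac{\sigma_{l}(z_{0})}{z_{0}}\,F_{im}^{\prime}(w_{0}),\qquad\text{whence}\qquad|\sigma_{l}^{\prime}(z_{0})|\geq\frac{\Re w_{1}-\log R}{4\pi}\cdot\frac{|\sigma_{l}(z_{0})|}{|z_{0}|},\]
which is the stated bound.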
Theorem 8 now follows from Theorem 10, since \(I\subset\cup_{S_{l}\in\Sigma}I_{l}\) and \(\Sigma\) is finite.
The next lemma is the analog of lemma 11 in the case that \(S_{l}\in\Sigma\) and \(z\in\cup_{i\in S_{l}}(\cup_{j}V_{ij})\). Fix \(i\in S_{l}\) and, suppressing the index \(i\) for readability, denote the zeros of \(f(z)-\lambda_{i}\) by \(b_{j}\). Note that by proposition 7, \(|f^{\prime}(b_{j})|\sim c|j|^{(N-2)/N}\).
**Lemma 12**.: _There exists a neighborhood \(V^{\prime}_{j}\) of \(b_{j}\) and a constant \(b>0\) such that \(V^{\prime}_{j}\subset\overline{V^{\prime}_{j}}\subset V_{j}\),_
\[V^{\prime}_{j}\subset D(b_{j},\frac{b}{|j|^{\frac{N-2}{N}}R}),\]
_and for \(z\in V^{\prime}_{j}\) and for some constant \(B>0\),_
\[|\sigma^{\prime}_{l}(z)|>BR|j|^{\frac{N-2}{N}}.\]
Proof.: For each \(i\in S_{l}\), denote the pole \(f^{p_{i}-1}(\lambda_{i})\) by \(s_{i}\). Then expanding \(f\) at \(s_{i}\),
\[f(z)=\frac{r_{i}}{z-s_{i}}(1+\phi_{i}(z))\]
where \(r_{i}\) is the residue of \(f\) at \(s_{i}\), and \(\phi_{i}\) is analytic at \(s_{i}\). Consider the annular region \(\mathcal{A}_{2R}\subset\mathcal{A}_{R}\), and denote by \(g\) the branch of \(f^{-1}\) such that \(g(\mathcal{A}_{R})\) is a punctured neighborhood of \(s_{i}\). Set \(h(z)=g(1/z):D(0,1/R)\to\mathbb{C}\), so that \(0\) is a removable singularity with \(h(0)=s_{i}\), and let \(U=h(D(0,1/2R))=g(\mathcal{A}_{2R})\cup\{s_{i}\}\). Then \(h\) is conformal and \(h^{\prime}(0)=r_{i}\). The Koebe distortion theorem applied to \(h\) proves that for any \(z\in D(0,1/2R)\),
\[|h(z)-h(0)|\leq|h^{\prime}(0)|\frac{\frac{1}{2}\cdot\frac{1}{R}}{(1-\frac{1}{2})^{2}}=\frac{2|r_{i}|}{R}.\]
The Koebe \(1/4\) theorem applied to \(h\) on \(D(0,1/2R)\) proves that \(D(s_{i},|r_{i}|/8R)\subset U\). Combining these gives
\[D(s_{i},\frac{|r_{i}|}{8R})\subset U\subset D(s_{i},\frac{2|r_{i}|}{R}).\]
Therefore for any \(z\in U\),
\[|f^{\prime}(z)|=|-\frac{r_{i}(1+\phi_{i}(z))}{(z-s_{i})^{2}}+\frac{r_{i}\phi^{ \prime}_{i}(z)}{z-s_{i}}|\geq|\frac{r_{i}}{z-s_{i}}|\geq\frac{R}{2}. \tag{6}\]
Since \(f\) has no critical points, the disk \(D(s_{i},4|r_{i}|/R)\) at the pole \(s_{i}\) is mapped univalently by the respective branches of \(f^{-p_{i}}\) onto neighborhoods of the points \(b_{j}=f^{-p_{i}}(s_{i})\). Let \(V^{\prime}_{j}\) be the component of \(f^{-p_{i}}(U)=f^{-(p_{i}+1)}(\mathcal{A}_{2R})\) at \(b_{j}\). It is obvious that \(\overline{V^{\prime}_{j}}\subset V_{j}\), a component of \(f^{-(p_{i}+1)}(\mathcal{A}_{R})\). Since
\[U\subset D(s_{i},\frac{2|r_{i}|}{R})\subset D(s_{i},\frac{4|r_{i}|}{R}),\]
the Koebe distortion theorem implies that for any \(z,w\in V^{\prime}_{j}\),
\[\frac{|(f^{p_{i}})^{\prime}(z)|}{|(f^{p_{i}})^{\prime}(w)|}\leq T(\frac{1}{2}). \tag{7}\]
By Proposition 7, for some \(c_{1}>0\), \(|f^{\prime}(b_{j})|\sim c_{1}|j|^{(N-2)/N}\). Since \(f\) is univalent on the orbit of \(\lambda_{i}\), there exists \(c_{2}>0\) such that \(|(f^{p_{i}})^{\prime}(b_{j})|\sim c_{2}|j|^{(N-2)/N}\) and thus
\[|(f^{p_{i}})^{\prime}(z)|>c_{2}T(\tfrac{1}{2})^{-1}|j|^{\frac{N-2}{N}},\text{ for any }z\in V_{j}^{\prime}.\]
Since \(U\subset D(s_{i},2|r_{i}|/R)\), this implies
\[V_{j}^{\prime}\subset D(b_{j},\frac{2T(\frac{1}{2})|r_{i}|}{c_{2}|j|^{\frac{N- 2}{N}}R}).\]
This, combined with equation (6) also implies that for all \(z\in V_{j}^{\prime}\),
\[|\sigma_{l}^{\prime}(z)|\geq\frac{c_{2}R|j|^{\frac{N-2}{N}}}{2T(1/2)}\]
and thus completes the proof.
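For the reader's convenience, this last combination is simply the chain rule for \(\sigma_{l}=f\circ f^{p_{i}}\): for \(z\in V_{j}^{\prime}\) we have \(f^{p_{i}}(z)\in U\), so that
\[|\sigma_{l}^{\prime}(z)|=|f^{\prime}(f^{p_{i}}(z))|\,|(f^{p_{i}})^{\prime}(z)|\geq\frac{R}{2}\cdot\frac{c_{2}}{T(\frac{1}{2})}\,|j|^{\frac{N-2}{N}},\]
which gives the constant \(B=c_{2}/(2T(\frac{1}{2}))\) in the statement of the lemma.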
### Proof of Theorem 10
Let \(E=\{z\ |\ \sigma_{l}^{n}(z)\to\infty\text{ as }n\to\infty\}\); then
\[I_{l}(f)=\cup_{n=0}^{\infty}f^{-n}(E).\]
To prove theorem 10, it suffices to show that the measure of the set \(E\) is zero. We assume not and obtain a contradiction. Let \(z_{0}\) be a Lebesgue density point of the set \(E\), and let \(z_{n}=\sigma_{l}^{n}(z_{0})\). As \(z_{n}\to\infty\), without loss of generality, we may assume that for each \(n\), \(|z_{n+1}|\geq|z_{n}|\geq R\). Set \(\mathcal{A}_{r,s}=\{z\ |\ r<|z|<s\}\).
Since by hypothesis \(K<N\), it follows that \(\cup_{i=K+1}^{N}T_{i}\neq\emptyset\) but \((\cup_{i=K+1}^{N}T_{i})\cap\sigma_{l}^{-1}(\mathcal{A}_{R})=\emptyset\). Therefore, for any \(s>r>R\), there is a \(\tau>0\) such that
\[\frac{m(\mathcal{A}_{r,s}\cap\sigma_{l}^{-1}(\mathcal{A}_{r,s}))}{m(\mathcal{A}_{r,s})}<1-\tau.\]
Note that if \(K=N\), the asymptotic tracts fill up \(\sigma_{l}^{-1}(\mathcal{A}_{R})\). The proof of non-ergodicity for \(K=N=2\) in [CJK] uses this fact and lends support to conjecture 1.
The proof of theorem 10 is in two parts, depending on the orbit of \(z_{0}\).
**Part 1:** Assume that for all \(n\), \(z_{n}\in\cup_{i=1}^{K}T_{i}\). This part of the proof depends on lemma 11 and uses the notation in that lemma.
As in the lemma, for \(z_{n}\in T_{i}\), set \(w_{n}=\log z_{n}\in U_{im}\) and \(r_{n}=\Re w_{n}\). Then \(F_{im}^{-1}:H_{R}\to U_{im}\) is the inverse branch such that \(F_{im}^{-1}(w_{n})=w_{n-1}\). The function \(F_{im}^{-1}\) is univalent in the disk \(D(w_{n},r_{n}-\log R)\). By lemma 11, it follows that
\[|(F_{im}^{-1})^{\prime}(w_{n})|\leq\frac{4\pi}{r_{n}-\log R}.\]
Note that it may be that \(z_{n-1}\in T_{j}\), \(i\neq j\), \(i,j\in S_{l}\), and similarly, \(w_{n-1}\) may be in a different \(U_{jm}\). For the sake of readability, we will ignore these indices and write \(U\) for whichever \(U_{im}\) is meant and write \(F^{-1}\) for whichever inverse branch is meant.
Next, consider the disk \(D(w_{n},r_{n}/4)\). First note that since \(U\) does not intersect any of the preimages \(f^{-1}(T_{i})\), \(i=K+1,\ldots,N\), there exists a \(\tau^{\prime}>0\) such that
\[\frac{m(D(w_{n},\frac{r_{n}}{4})\cap U)}{m(D(w_{n},\frac{r_{n}}{4}))}<1-\tau^{\prime}.\]
Moreover, for \(w\in D(w_{n},r_{n}/4)\), the Koebe distortion theorem implies that
\[|F^{-1}(w)-F^{-1}(w_{n})|\leq\frac{4\pi}{r_{n}-\log R}\cdot\frac{\eta(r_{n}- \log R)}{(1-\eta)^{2}},\]
where
\[\eta=\frac{r_{n}}{4(r_{n}-\log R)}<\frac{1}{2}.\]
Therefore,
\[F^{-1}(D(w_{n},\frac{r_{n}}{4}))\subset D(w_{n-1},d)\text{ where }d=8\pi.\]
For each \(1\leq k\leq n-1\), \(F^{-1}\) is univalent in the disk \(D(w_{k},2d)\) and \(|(F^{-1})^{\prime}(w_{k})|\leq 1/8\). For each \(k\), \(1\leq k\leq n-1\), the Koebe \(1/4\) theorem implies that
\[F^{-1}(D(w_{k},d))\subset D(w_{k-1},\frac{d}{2}).\]
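Spelled out, each further application of \(F^{-1}\) halves the radius (by the derivative bound and distortion on \(D(w_{k},2d)\)), so that
\[F^{-n}\big(D(w_{n},\tfrac{r_{n}}{4})\big)\subset F^{-(n-1)}\big(D(w_{n-1},d)\big)\subset F^{-(n-2)}\big(D(w_{n-2},\tfrac{d}{2})\big)\subset\cdots\subset D\big(w_{0},2^{-n+1}d\big),\]
which is the containment used next.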
Next, iterate \(F^{-1}\) and set \(B_{n}=F^{-n}(D(w_{n},r_{n}/4))\subset D(w_{0},2^{-n+1}d)\). Now since the iterated function \(F^{-n}\) is univalent on \(D(w_{n},\Re w_{n}-\log R)\), apply Koebe distortion again to get
\[D(w_{0},t\rho_{n})\subset B_{n}\subset D(w_{0},\rho_{n})\]
where \(t\) is independent of \(n\), and \(\rho_{n}\) is the radius of the smallest disk centered at \(w_{0}\) containing \(B_{n}\). It follows that \(\rho_{n}\leq 2^{-n+1}d\) which, in turn, implies that \(\rho_{n}\to 0\) as \(n\to\infty\).
Part (ii) of the Koebe Distortion Theorem applied to \(F^{-n}\) implies there exists a \(\tau^{\prime\prime}\) such that
\[\frac{m(B_{n}\cap E)}{m(B_{n})}\leq 1-T(\tfrac{1}{2})^{-2}\tau^{\prime\prime}\]
for all \(n\). In other words, the Lebesgue density of the point \(w_{0}\) is less than \(1\), which contradicts the assumption that \(w_{0}\) is a density point.
**Part 2:** Now consider a subsequence \(z_{n_{k}}\in\cup_{i\in S_{l}}(\cup_{j}V_{j})\). Let \(W=\sigma_{l}^{-1}(\mathcal{A}_{2R})\) and let \(V_{j}\) be the component of \(W\) such that \(z_{n_{k}}\in V_{j}^{\prime}\subset V_{j}\) for some \(n_{k}\).
There is at least one asymptotic tract \(T_{i}\), \(i\in\{K+1,\ldots,N\}\), such that \(W\cap T_{i}=\emptyset\); thus there exists a \(\tau^{\prime\prime\prime}>0\) such that
\[\frac{m(\mathcal{A}_{2R}\cap W)}{m(\mathcal{A}_{2R})}<1-\tau^{\prime\prime \prime}.\]
By lemma 12,
\[V_{j}^{\prime}\subset D(b_{j},\frac{b}{R|j|^{\frac{N-2}{N}}}),\]
and for any \(z,w\in V_{j}^{\prime}\),
\[|\sigma_{l}^{\prime}(z)|\geq BR|j|^{\frac{N-2}{N}}\text{ and }\frac{|(f^{p_{i}})^{\prime}(z)|}{|(f^{p_{i}})^{\prime}(w)|}\leq T(\frac{1}{2}).\]
Therefore,
\[\frac{m(V_{j}^{\prime}\cap\sigma_{l}^{-1}(W))}{m(V_{j}^{\prime})}<1-T(\frac{1 }{2})^{-2}\tau^{\prime\prime\prime}.\]
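Here, and again in the density estimates below, the distortion bound enters through the following standard observation: if \(g\) is univalent on a set \(V\) of finite measure with \(|g^{\prime}(z)|/|g^{\prime}(w)|\leq T\) for all \(z,w\in V\), then, since \(m(g(A))=\int_{A}|g^{\prime}|^{2}\),
\[\frac{m(g(A))}{m(g(V))}\geq T^{-2}\,\frac{m(A)}{m(V)}\qquad\text{for measurable }A\subset V.\]
This is how the factor \(T(\frac{1}{2})^{-2}\) arises above, applied to a suitable inverse branch of \(\sigma_{l}\) with \(A\) the complement of \(W\).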
Without loss of generality, assume that \(|z_{n+1}|\geq|z_{n}|>>R\) for all \(n\). Then the above inequality, together with lemmas 11 and 12, shows that \(|\sigma_{l}^{\prime}(z_{n})|>M>1\) for all \(n\).
Let \(B_{n_{k}}=\sigma_{l}^{-n_{k}}(V_{j}^{\prime})\). Then
\[B_{n_{k}}\subset D(z_{0},M^{-n_{k}}\frac{b}{R|j|^{\frac{N-2}{N}}}).\]
Since \(\sigma_{l}^{-n_{k}}\) is univalent on \(V_{j}\supset V_{j}^{\prime}\), this implies
\[D(z_{0},t\rho_{n_{k}})\subset B_{n_{k}}\subset D(z_{0},\rho_{n_{k}})\]
where \(t\) is independent of \(n_{k}\), and \(\rho_{n_{k}}\) is the radius of the smallest disk centered at \(z_{0}\) containing \(B_{n_{k}}\). It follows that \(\rho_{n_{k}}\to 0\) as \(n_{k}\to\infty\).
Applying part (ii) of the Koebe Distortion Theorem,
\[\frac{m(B_{n_{k}}\cap E)}{m(B_{n_{k}})}\leq 1-T(\tfrac{1}{2})^{-4}\tau^{\prime\prime\prime}\]
for all \(n_{k}\) which implies that the Lebesgue density of the point \(z_{0}\) is less than \(1\). This contradicts the assumption that \(z_{0}\) is a density point and completes the proof of theorem 10.
### Proof of Theorem 9
Proof.: To prove that \(m(L)=0\), note first that by assumption \(\Omega=\cup_{i=K+1}^{N}\omega(\lambda_{i})\) is a finite union of compact repellers, so it is again a compact repeller. This implies that the orbits of the non-prepole asymptotic values do not merely accumulate on \(\Omega\), but actually land on it. The proof in [KK] assumes \(K=0\). Although our proof is similar to that one, here we modify it to take the prepole asymptotic values into account.
Let \(\mathcal{K}_{\epsilon}=\{z\,|\,dist(z,\Omega)<\epsilon\}\). We claim there is an \(\epsilon>0\) and an integer \(M>0\) such that if \(y=\lambda_{i}\), \(i=K+1,\ldots,N\), \(n>M\) and \(f^{n}(y)\in\mathcal{K}_{\epsilon/2}\), then \(f^{n}(y)\in\Omega\).
If \(\Omega\) is finite, the claim is obviously true, so assume it is not. By the compactness assumption, there are no prepoles in \(\Omega\) and there are constants \(\kappa>1\) and \(\epsilon>0\) such that \(|(f^{n})^{\prime}(w)|\geq\kappa\) for some \(n\) and all \(w\in\mathcal{K}_{\epsilon}\), and thus for all \(w\in\overline{\mathcal{K}_{\epsilon/2}}\). By the forward invariance of \(\Omega\) and this expansion property, \(\overline{\mathcal{K}_{\epsilon/2}}\subset f^{n}(\overline{\mathcal{K}_{\epsilon/2}})\). Let \(g\) be the inverse branch of \(f^{n}\) reversing this inclusion. Then set
\[A_{0}=\overline{\mathcal{K}_{\frac{\epsilon}{2}}}\setminus g(\overline{\mathcal{K}_{\frac{\epsilon}{2}}})\quad\text{ and }\quad A_{m}=g^{m}(A_{0}),\ m\in\mathbb{N}.\]
These disjoint annuli are nested and, since the inverse branches are univalent, have the same moduli. Therefore, if for some \(n\), \(f^{n}(y)\in\overline{\mathcal{K}}_{\epsilon}\setminus\Omega\), by compactness, there are subsequences of its iterates that converge both to points in \(\overline{A_{0}}\) and to points in \(\Omega\). This is a contradiction because these sets are disjoint and the claim is proved.
Choose \(\epsilon\) as above and set
\[\mathcal{L}=\cap_{n\geq 0}f^{-n}(\mathcal{K}_{\frac{\epsilon}{2}}).\]
A point \(z\) belongs to \(\mathcal{L}\) if and only if its full forward orbit stays in \(\mathcal{K}_{\epsilon/2}\). We will show \(m(\mathcal{L}\setminus\Omega)=0\). Since \(L\subset\cup_{n=0}^{\infty}f^{-n}(\mathcal{L})\) and \(m(\Omega)=0\), this will imply \(m(L)=0\).
Suppose \(m(\mathcal{L}\setminus\Omega)>0\) and let \(z_{0}\) be a density point of \(\mathcal{L}\setminus\Omega\). Since \(\Omega\) is compact, a subsequence \(z_{k}=f^{n_{k}}(z_{0})\) converges to a point \(y_{0}\in\overline{\mathcal{K}_{\epsilon/2}}\subset\mathcal{K}_{\epsilon}\). Denote the respective inverse branches by \(g_{k}\). Set \(D_{k}=D(z_{k},\epsilon/4)\); then \(D_{k}\subset\mathcal{K}_{\epsilon}\) and \(g_{k}\) is univalent on \(D_{k}\). Applying Koebe distortion we obtain
\[\frac{m(g_{k}(D_{k})\cap\mathcal{L})}{m(g_{k}(D_{k}))}\to 1\ \text{ and }\ \frac{m(D_{k}\cap f^{n_{k}}(\mathcal{L}))}{m(D_{k})}\to 1.\]
Finally, let \(U\) be an open set with compact closure contained in \(\mathbb{C}\setminus\overline{\mathcal{K}_{\epsilon/2}}\). Since the Julia set is the whole sphere, there is an integer \(M\) such that \(f^{M}(D_{k})\supset\overline{U}\), so that \(m(f^{M+n_{k}}(\mathcal{L})\cap U)>0\). For all \(k\in\mathbb{N}\), however, \(f^{k}(\mathcal{L})\subset\mathcal{K}_{\epsilon/2}\), so that \(f^{M+n_{k}}(\mathcal{L})\cap U=\emptyset\). This contradiction shows \(m(\mathcal{L}\setminus\Omega)=0\), and hence \(m(L)=0\).
### Proof of the main theorem
**Theorem 13**.: _If \(f\) is a Nevanlinna function with \(1\leq K<N\) prepole1 asymptotic values and \(N-K\) asymptotic values that accumulate on a compact repeller, then \(f\) acts ergodically on its Julia set._
Footnote 1: If infinity is an asymptotic value, we consider it a “prepole of order \(0\)”.
Proof.: Let \(A\) be an \(f\)-invariant subset of the Julia set with positive measure. We will show that \(A=\widehat{\mathbb{C}}\) up to a set of measure zero. Let \(z_{0}\) be a Lebesgue density point of \(A\) and denote its orbit by \(z_{n}=f^{n}(z_{0})\), \(n=0,1,\ldots.\) We proved above that the measure of each of the sets \(I\), \(I_{l}\) and \(L\) is zero. Since these three sets together contain all points whose orbits accumulate on \(\cup_{i=1}^{N}\omega(\lambda_{i})\), we assume that \(z_{0}\) is not among them.
By the above, the orbit of the density point \(z_{0}\) of \(A\) has an accumulation point \(y\in\mathbb{C}\setminus\cup_{i=1}^{N}\omega(\lambda_{i})\). Recall that \(\Omega=\cup_{i=K+1}^{N}\omega(\lambda_{i})\). Hence there is an \(\epsilon>0\) so that \(2\epsilon=dist(y,\Omega)>0\). Thus, there is a subsequence \(\{n_{j}\}\) in \(\mathbb{N}\) such that \(z_{n_{j}}\to y\) as \(j\to\infty\) and \(dist(z_{n_{j}},\Omega)\geq\epsilon\).
Let \(B_{j}=B(z_{n_{j}},\epsilon)\) and \(V_{j}=B(z_{n_{j}},\epsilon/2)\). Let \(g_{j}\) be the branch of \(f^{-n_{j}}\) that sends \(z_{n_{j}}\) to \(z_{0}\); it is a univalent function on \(B_{j}\). Let \(U_{j}=g_{j}(V_{j})\). All of the inverse branches of \(f\) are contracting with respect to the hyperbolic metric on \(\mathbb{C}\setminus\Omega\); this implies that \(g_{j}^{\prime}\to 0\) on \(V_{j}\) as \(j\) goes to \(\infty\), which in turn implies that the diameter of \(U_{j}\) tends to \(0\). Since \(g_{j}\) is univalent on \(B_{j}\), the Koebe distortion theorem shows that \(U_{j}\) is almost a disk. Since \(z_{0}\) is a density point of \(A\),
\[\lim_{j\to\infty}\frac{m(A\cap U_{j})}{m(U_{j})}=1.\]
Applying Koebe distortion again, since \(A\) is an invariant subset, we get
\[\lim_{j\to\infty}\frac{m(A\cap V_{j})}{m(V_{j})}=1.\]
This and the fact that \(V_{j}\) approaches \(B_{y}=B(y,\epsilon/2)\) as \(j\) goes to \(\infty\) together imply that \(B_{y}\subset A\) up to a set of measure zero. Since \(A\) is an invariant subset of the Julia set \(J\), \(f^{n}(B_{y})\subset A\subset J\). Since \(A\) is in the Julia set, \(B_{y}\) is also in the Julia set and \(f^{n}(B_{y})\) expands to cover \(\widehat{\mathbb{C}}\) as \(n\) goes to \(\infty\). Therefore, \(A=\widehat{\mathbb{C}}\) up to a zero-measure set and the proof of Theorem 13 is complete.
**Remark 3.1**.: _The assumption that the \(\omega\)-limit sets of the non-prepole asymptotic values are compact repellers says the Julia set is the whole sphere and gives us the expansion we need to prove our theorem. We could replace this by assuming that the Julia set is the sphere and that the orbits of the non-prepole asymptotic values are bounded. Then the main theorem of [GKS] implies the expansion we need exists._
**Remark 3.2**.: _Another application of the results in [GKS] and [RVS, Theorem 1.1] to the Nevanlinna functions \(f\) of our main theorem is that \(f\) supports no invariant line field._
**Remark 3.3**.: _Finally, the results in [KU] applied to the Nevanlinna functions of our main theorem prove that \(f\) has a \(\sigma\)-finite ergodic conservative \(f\)-invariant measure absolutely continuous with respect to the Lebesgue measure._ |
2304.00175 | Well-posedness and qualitative properties of quasilinear degenerate
evolution systems | We analyze nonlinear degenerate coupled PDE-PDE and PDE-ODE systems that
arise, for example, in the modelling of biofilm growth. One of the equations,
describing the evolution of a biomass density, exhibits degenerate and singular
diffusion. The other equations are either of advection-reaction-diffusion type
or ordinary differential equations. Under very general assumptions, the
existence of weak solutions is proven by considering regularized systems,
deriving uniform bounds, and using fixed point arguments. Assuming additional
structural assumptions we also prove the uniqueness of solutions.
Global-in-time well-posedness is established for Dirichlet and mixed boundary
conditions, whereas, only local well-posedness can be shown for homogeneous
Neumann boundary conditions. Using a suitable barrier function and comparison
theorems we formulate sufficient conditions for finite-time blow-up or uniform
boundedness of solutions. Finally, we show that solutions of the degenerate
parabolic equation inherit additional global spatial regularity if the
diffusion coefficient has a power-law growth. | Koondanibha Mitra, Stefanie Sonner | 2023-03-31T23:36:01Z | http://arxiv.org/abs/2304.00175v1 | # Well-posedness and qualitative properties of quasilinear degenerate evolution systems
###### Abstract
We analyze nonlinear degenerate coupled PDE-PDE and PDE-ODE systems that arise, for example, in the modelling of biofilm growth. One of the equations, describing the evolution of a biomass density, exhibits degenerate and singular diffusion. The other equations are either of advection-reaction-diffusion type or ordinary differential equations. Under very general assumptions the existence of weak solutions is proven by considering regularized systems, deriving uniform bounds and using fixed point arguments. Assuming additional structural assumptions we also prove the uniqueness of solutions.
Global-in-time well-posedness is established for Dirichlet and mixed boundary conditions, whereas, only local well-posedness can be shown for homogeneous Neumann boundary conditions. Using a suitable barrier function and comparison theorems we formulate sufficient conditions for finite-time blow-up or uniform boundedness of solutions. Finally, we show that solutions of the degenerate parabolic equation inherit additional global spatial regularity if the diffusion coefficient has a power-law growth.
**Keywords:** degenerate diffusion \(\bullet\) biofilm models \(\bullet\) quasilinear parabolic systems \(\bullet\) PDE-ODE systems \(\bullet\) well-posedness \(\bullet\) regularity \(\bullet\) finite time blow up
**MSC:** 35K65, 35K59, 35A01, 35A02, 35B44, 35B45, 35B50
###### Contents
* 1 Introduction
* 2 Problem formulation
* 2.1 Preliminaries
* 2.2 Assumptions on the data
* 2.3 Weak solutions
* 3 Well-posedness for Dirichlet and mixed boundary conditions
* 3.1 A regularized problem
* 3.2 Existence of solutions of the degenerate parabolic problem
* 3.3 A contraction argument for proving Theorem 3.2
* 3.4 A fixed point argument for proving Theorem 3.1
* 4 Homogeneous Neumann boundary conditions
* 4.1 Existence of weak solutions
* 4.1.1 Well-posedness of backward Euler time-discretizations
* 4.1.2 Interpolations in time
* 4.1.3 Proof of Lemma 3.1
* 4.2 Finite time blow-up
* 5 Spatial regularity of the biomass density
* 5.1 Some auxiliary functions
* 5.2 Boundedness of \(M\) in \(L^{2}(0,T;H^{r}(\Omega))\)
## 1 Introduction
This paper investigates the well-posedness and qualitative properties of weak solutions of a wide class of quasilinear parabolic systems where one of the equations exhibits degenerate and singular diffusion. We also consider couplings of such degenerate parabolic equations with ordinary differential equations (ODEs). Our work is motivated by models describing the growth of spatially heterogeneous biofilms in dependence on growth-limiting substrates. The models are either formulated as systems of partial differential equations (PDEs) or as coupled PDE-ODE systems, e.g. see [6, 7]. Their characteristic and challenging features are the degenerate and singular diffusion effects in the equation for the biomass density and the nonlinear coupling of this equation to additional ODEs and/or PDEs for the substrates.
Let \(\Omega\subset\mathbb{R}^{d}\), \(d\in\mathbb{N}\), be a bounded Lipschitz domain and \(T>0\). We denote the parabolic cylinder by \(Q:=\Omega\times(0,T]\). Throughout this study, for a fixed \(k\in\mathbb{N}\), \(j\in\{1,\ldots,k\}\) will denote an integer, and \(\vec{w}=(w_{1},\ldots,w_{k})\) a \(k\)-dimensional vector. We consider the following problem in \(Q\),
\[\partial_{t}M =\nabla\cdot[D_{0}(M)\nabla M]+f_{0}(M,\vec{S}), \tag{1.1a}\] \[\partial_{t}S_{j} =\nu_{j}\nabla\cdot[D_{j}(M,\vec{S})\nabla S_{j}+\mathbf{v}_{j}S_{j}] +f_{j}(M,\vec{S}), \tag{1.1b}\]
for \(j=1,\ldots,k\), where \(M:Q\to\mathbb{R}\) denotes the biomass density and the vector-valued function \(\vec{S}:Q\to\mathbb{R}^{k}\) the substrate concentrations. The biomass density \(M\) is normalized with respect to the maximum biomass density and hence takes values in \([0,1)\). The biomass diffusion coefficient \(D_{0}:[0,1)\to[0,\infty)\) is degenerate: it satisfies \(D_{0}(0)=0\) and \(\lim_{m\nearrow 1}D_{0}(m)=\infty\). We remark, however, that large parts of our analysis are also valid for non-degenerate functions \(D_{0}\). The diffusion coefficients of the substrates \(D_{j}:[0,1]\times\mathbb{R}^{k}\to[0,\infty)\) are non-degenerate, i.e. they are bounded from above and below by positive constants. The constants \(\nu_{j}\geq 0\) will be referred to as the mobility coefficients of the substrates. It is important to point out that the
case of immobilized substrates (\(\nu_{j}=0\)) is included in our setting which leads to a coupling of Equation (1.1a) with ODEs in (1.1b). Moreover, \(\boldsymbol{v}_{j}:Q\to\mathbb{R}^{d}\) is a given flow-field. Finally, the reaction terms \(f_{0},\,f_{j}:\mathbb{R}^{k+1}\to\mathbb{R}\) describe the complex interplay between the substrates and biomass.
In biofilm modelling applications, it is important to allow for mixed Dirichlet-Neumann or homogeneous Neumann boundary conditions for \(M\). To this end, we divide the boundary \(\partial\Omega\) into two disjoint parts \(\Gamma_{1}\) and \(\Gamma_{2}\) that are both Lipschitz boundaries. We complement (1.1a)-(1.1b) with the following initial and boundary conditions for \(M\) and \(\vec{S}\),
\[M(0) =M_{0},\quad\vec{S}(0)=\vec{S}_{0}, \tag{1.1c}\] \[M|_{\Gamma_{1}} =h_{0},\quad[\nabla M\cdot\boldsymbol{\hat{n}}]|_{\Gamma_{2}}=0, \quad\nu_{j}S_{j}|_{\partial\Omega}=\nu_{j}h_{j}, \tag{1.1d}\]
where \(\boldsymbol{\hat{n}}\) denotes the outward unit normal to \(\partial\Omega\) and \(M_{0}:\Omega\to[0,1)\), \(\vec{S}_{0}:\Omega\to\mathbb{R}^{k}\), \(h_{0}:\Gamma_{1}\to[0,1)\) and \(h_{j}:\partial\Omega\to\mathbb{R}\) are given. We remark that the case \(\Gamma_{1}=\emptyset\) is allowed in our setting which corresponds to homogeneous Neumann boundary conditions for \(M\). The case \(\Gamma_{2}=\emptyset\) is also included which corresponds to Dirichlet boundary conditions for \(M\). Note that in (1.1c) we do not prescribe boundary conditions for immobilized substrates \(S_{j}\), i.e. if \(\nu_{j}=0\) for some \(j\in\{1,\ldots,k\}.\) To simplify the presentation of our results we assume Dirichlet boundary conditions for the substrates, but the analysis remains valid if we impose mixed boundary conditions for the substrates, see Remark 2.4.
In models for biofilm growth, the actual biofilm is described by the region where \(M\) is positive,
\[\Omega^{+}(t)=\{x\in\Omega:M(t,x)>0\}.\]
Due to the degeneracy of the biomass diffusion coefficient, \(D_{0}(0)=0\), there is a sharp interface between the biofilm and the surrounding region, and the interface propagates at a finite speed. The additional singularity in the diffusion coefficient, \(\lim_{m\nearrow 1}D_{0}(m)=\infty\), ensures that the biomass density does not exceed its maximum value, i.e. \(M\) remains bounded by a constant strictly less than \(1\).
In Figure 1 typical situations modelled by (1.1) are sketched for biofilm colonies depending on a single substrate. In the left figure the substrate is dissolved in the spatial domain \(\Omega\) and transported by diffusion and convection. The biofilm colony grows into the aqueous phase. In the right figure the substrate is immobilized and contained in the spatial domain \(\Omega\). The bacteria consume and degrade the substrate, a biofilm front develops and propagates through the substratum.
A system of the form (1.1) with a single dissolved substrate \(S=S_{1}\), i.e. \(k=1\) and \(\nu_{1}>0\), was first proposed in [7] to model biofilm growth in an aqueous medium. In this case,
\[D_{0}(M) =d_{2}\frac{M^{a}}{(1-M)^{b}}, D_{1}(M,S) =d_{1}, \tag{1.2a}\] \[f_{0}(M,S) =k_{3}\frac{SM}{k_{4}+S}-k_{2}M, f_{1}(M,S) =-k_{1}\frac{SM}{k_{4}+S}, \tag{1.2b}\]
for some constants \(k_{1},k_{2},k_{3},k_{4},d_{1},d_{2}>0\) and \(a,b\geq 1\), and \(\mathbf{v_{1}}\) is a given flow field. An ODE-PDE system of the form (1.1) with a single substrate was used in [6] to model cellulolytic biofilms degrading an immobilized cellulose material. In this case, the functions \(D_{0},f_{0}\) and \(f_{1}\) are as in (1.2) and \(D_{1}\equiv 0\), \(\mathbf{v_{1}}\equiv 0\).
Figure 1: Schematic figures illustrating biofilm growth in dependence of a single nutrient \(S\) in an aqueous medium (a) and in an immobilized medium (b). The biofilm is represented by the region where \(M(\mathbf{x},t)>0\), which is separated from the surrounding region by a sharp interface. Nutrients are consumed by bacteria resulting in the production of biomass. The parts of the boundary where homogeneous Dirichlet and Neumann conditions are specified are also marked in the diagrams, \(\Gamma_{1}\) (Dirichlet) in blue and \(\Gamma_{2}\) (Neumann) in red.
(a) PDE-PDE systems [7]: The biofilm colonies grow in a liquid containing substrates. The substrates diffuse and are transported by a flow field \(\mathbf{v}\). The diffusion coefficient of the substrate might depend on \(M\), i.e. it differs inside and outside the biofilm.
(b) PDE-ODE systems [6]: The bacteria degrade and consume an immobilized medium which is the case, e.g. for cellulolytic biofilms. The biofilm colony propagates consuming the immobile cellulose, leaving in its wake a region of low substrate concentrations.
The existence of weak solutions of scalar nonlinear degenerate parabolic equations such as (1.1a) was shown in the seminal papers [1, 2], however, for bounded diffusion coefficients \(D_{0}\). Uniqueness of solutions was proven in [21] using \(L^{1}\)-contraction. The existence of weak solutions for the biofilm model [7] with the diffusion coefficients and reaction functions in (1.2) and \(\mathbf{v_{1}}\equiv 0\) was proven in [9] under the assumption of homogeneous Dirichlet boundary conditions for \(M\), i.e. \(\Gamma_{2}=\emptyset\) and \(h_{0}\equiv 0\). The existence of the global attractor for the generated semigroup in \(L^{1}(\Omega)\) was also shown. The well-posedness theory was generalized in [15] where more general functions \(D_{0}\), \(f_{0}\) and \(f_{1}\) and mixed Dirichlet-Neumann boundary conditions were considered. The Hölder continuity of solutions was studied in [14].
Several extensions and variations of the single species biofilm growth model [7] have been proposed and analyzed. Most works are simulation studies and only a few analytical results have been obtained. The well-posedness of multi-substrate biofilm models with \(k>1\), \(\nu_{j}>0\) in (1.2), appearing in antibiotic disinfection and quorum sensing applications, was established in [25, 10]. A PDE-ODE system with an immobile substrate, i.e. \(k=1\) and \(\nu_{1}=0\), was proposed and numerically studied in [6]. The simulations reproduced many experimentally observed features of cellulolytic biofilms. The existence and stability of travelling wave solutions for this model were shown in [19], but the well-posedness of the model remained an open problem. Many examples of semilinear coupled PDE-ODE models appearing in biology are discussed in [20, Chapter 13]. For a PDE-ODE model for hysteretic flow through porous media with a diffusion coefficient \(D_{0}\) depending on both \(M\) and \(\vec{S}\), the existence of solutions was shown in [18].
We aim to develop a unifying solution theory for a large class of systems with degenerate diffusion that is motivated by models for biofilm growth, but the analysis is not limited to these applications. In fact, we expect that such models can also be used, e.g. to describe cancer cell invasion or the spread of wildfires. In our paper we extend previous well-posedness results in the following directions:
(a) _Well-posedness results for PDE-PDE systems:_ our results extend the theory developed for systems with one substrate in [15] to systems with an arbitrary number of substrates \(k\in\mathbb{N}\). Moreover, the existence of weak solutions is proven for a broad class of diffusion coefficients \(D_{0}\) and \(D_{j}\) and reaction terms \(f_{j}\), and flow-fields \(\boldsymbol{v}_{j}\) are allowed, which has not been the case in earlier works.
(b) _Well-posedness of PDE-ODE systems (\(\nu_{j}=0\)):_ The well-posedness of PDE-ODE systems of the form (1.1) with a degenerate and/or singular diffusion coefficient \(D_{0}\) has been an open problem. The theory we develop applies to the cellulolytic biofilm model [6] and implies its local well-posedness.
(c) _Mixed as well as homogeneous Neumann conditions for \(M\):_ Global well-posedness is shown for mixed Dirichlet-Neumann boundary conditions and a local well-posedness result is established assuming homogeneous Neumann boundary conditions for \(M\). Moreover, apart from well-posedness results we also analyze qualitative properties such as boundedness or blow-up of solutions.
(d) _Global spatial regularity of M:_ We further show that under certain porous medium type growth conditions on \(D_{0}\) close to zero, the biomass concentration \(M\) inherits some global spatial regularity.
The outline of our paper is as follows: In Section 2 we introduce notation, state our assumptions on the data and introduce the concept of weak solutions. In Section 3 we prove global well-posedness for systems with Dirichlet or mixed Dirichlet-Neumann boundary conditions for \(M\). In Section 4 we establish local well-posedness for systems with homogeneous Neumann conditions for \(M\). We also derive criteria ensuring finite-time blow-up of the model and discuss some important examples. In Section 5 we show that even in the degenerate case, the biomass density \(M\) possesses some global spatial regularity.
## 2 Problem formulation
In this section, we introduce notation and a suitable functional framework. We state the properties of the coefficient functions and the boundary and initial data for system (1.1) that will be assumed throughout the paper. Moreover, we introduce weak solutions of the problem.
### Preliminaries
Functional setting: Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded Lipschitz domain. The boundary \(\partial\Omega\) is divided into two regular open subsets \(\Gamma_{1}\) and \(\Gamma_{2}\) that are both Lipschitz boundaries and such that \(\partial\Omega=\overline{\Gamma}_{1}\cup\overline{\Gamma}_{2}\) and \(\Gamma_{1}\cap\Gamma_{2}=\emptyset\), e.g. see [22]. We denote by \((\cdot,\cdot)\) and \(\|\cdot\|\) the \(L^{2}(\Omega)\) inner product and norm. The norm of any other Banach space \(V\) will be denoted by \(\|\cdot\|_{V}\). For \(1\leq p\leq\infty\), let \(W^{1,p}(\Omega)\) denote the Sobolev space of functions \(u\in L^{p}(\Omega)\) such that the weak derivative \(\nabla u\) exists and \(\nabla u\in(L^{p}(\Omega))^{d}\). For \(r\in(0,1)\) and \(p\in[1,\infty)\), the Sobolev-Slobodeckij space \(W^{r,p}(\Omega)\) is the set of functions \(u\in L^{p}(\Omega)\) such that
\[\|u\|_{W^{r,p}(\Omega)}:=\|u\|_{L^{p}(\Omega)}+\left(\int_{\Omega}\int_{\Omega}\frac{|u(\mathbf{x})-u(\mathbf{y})|^{p}}{|\mathbf{x}-\mathbf{y}|^{d+rp}}d\mathbf{x}d\mathbf{y}\right)^{\frac{1}{p}}<\infty. \tag{2.1}\]
We define \(H^{r}(\Omega):=W^{r,2}(\Omega)\) for \(r\in(0,1]\). Let \(H^{1}_{0}(\Omega)\) denote the closure of \(C^{\infty}_{c}(\Omega)\) in \(H^{1}(\Omega)\), which is equipped with the norm \(\|u\|_{H^{1}_{0}(\Omega)}:=\|\nabla u\|\). Similarly, we define
\[\mathcal{H}^{1}:=\{u\in H^{1}(\Omega):\;\text{tr}(u)=0\text{ on }\Gamma_{1}\}\quad\text{with the norm}\quad\|u\|_{\mathcal{H}^{1}}:=\|u\|_{H^{1}(\Omega)}. \tag{2.2a}\]
The dual spaces of \(H^{1}_{0}(\Omega)\) and \(\mathcal{H}^{1}\) are defined as
\[H^{-1}:=(H^{1}_{0}(\Omega))^{*}\quad\text{ and }\quad\mathcal{H}^{-1}:=(\mathcal{H}^{1})^{*}. \tag{2.2b}\]
Observe that
\[\text{if }\Gamma_{1}=\emptyset\text{ then }\mathcal{H}^{1}=H^{1}(\Omega),\qquad\text{if }\Gamma_{2}=\emptyset\text{ then }\mathcal{H}^{1}=H^{1}_{0}(\Omega). \tag{2.2c}\]
Let \(\langle\cdot,\cdot\rangle\) denote the duality pairing of \(\mathcal{H}^{1}\) and \(\mathcal{H}^{-1}\). The duality pairing of any other Sobolev space \(V\) will be denoted by \(\langle\cdot,\cdot\rangle_{V,V^{*}}\).
Finally, we introduce the following Bochner spaces that are important for our analysis:
\[\mathcal{W} :=L^{\infty}(0,T;L^{\infty}(\Omega))\cap H^{1}(0,T;\mathcal{H}^{-1 })\cap C([0,T];L^{2}(\Omega)), \tag{2.3a}\] \[\mathcal{X} :=L^{2}(0,T;\mathcal{H}^{1})\cap H^{1}(0,T;\mathcal{H}^{-1}),\] (2.3b) \[\mathcal{Y} :=L^{\infty}(0,T;H^{1}(\Omega))\cap H^{1}(0,T;L^{2}(\Omega)),\] (2.3c) \[\mathcal{Z} :=C([0,T];(L^{2}(\Omega))^{k}). \tag{2.3d}\]
Note that we have the continuous embedding \(\mathcal{X}\hookrightarrow C([0,T];L^{2}(\Omega))\).
Inequalities: Note that the Poincaré inequality, i.e. \(\|u\|\leq C_{\Omega}\|\nabla u\|\) for \(u\in H^{1}_{0}(\Omega)\), where \(C_{\Omega}>0\) denotes the Poincaré constant, also holds for functions \(u\in\mathcal{H}^{1}\) if \(\Gamma_{1}\neq\emptyset\).
We recall Young's inequality stating that for any \(\sigma>0\) one has
\[ab\leq\frac{1}{2\sigma}a^{2}+\frac{\sigma}{2}b^{2}\qquad\forall a,b\in\mathbb{ R}. \tag{2.4}\]
We will also frequently use Gronwall's Lemma stating that if \(u,\,a,\,b\in C(\mathbb{R})\) are non-negative, then \(u(t)\leq a(t)+\int_{0}^{t}u(\varrho)\,b(\varrho)\,\mathrm{d}\varrho\) implies that
\[u(t)\leq a(t)+\int_{0}^{t}a(\varrho)\,b(\varrho)e^{\int_{\varrho}^{t}b(\tau)d\tau}\,\mathrm{d}\varrho \tag{2.5a}\]
for all \(t>0\); and the discrete counterpart of the Gronwall Lemma: let \(\{u_{n}\}_{n\in\mathbb{N}}\), \(\{a_{n}\}_{n\in\mathbb{N}}\), \(\{b_{n}\}_{n\in\mathbb{N}}\) be non-negative sequences such that \(u_{n}\leq a_{n}+\sum_{k=1}^{n-1}b_{k}u_{k}\). Then
\[u_{n}\leq a_{n}+\sum_{k=1}^{n-1}a_{k}b_{k}\exp\Big(\sum_{k<j<n}b_{j}\Big). \tag{2.5b}\]
Finally, for a convex \(\eta\in C(\mathbb{R}^{+})\) with \(\eta(0)=0\) we will use Jensen's inequality and the super-additivity property:
\[\text{Jensen's inequality:}\qquad\eta\left(\tfrac{1}{|\Omega|}\int_{\Omega}|f|\right)\leq\tfrac{1}{|\Omega|}\int_{\Omega}\eta(|f|)\qquad\text{ for }f\in L^{1}(\Omega); \tag{2.6a}\]
\[\text{Super-additivity:}\qquad\eta(a)+\eta(b)\leq\eta(a+b)\qquad\text{ for all }a,\,b\geq 0. \tag{2.6b}\]
Further notation: We denote by \([\cdot]_{+}\) and \([\cdot]_{-}\) the positive and negative parts of functions, i.e. \([\cdot]_{+}:=\max\{\cdot,0\}\) and \([\cdot]_{-}:=\min\{\cdot,0\}\), respectively. By \(C>0\) we refer to an unspecified constant in the estimates that may vary in each occurrence and from line to line. Finally, the notation
\[a\lesssim b\quad\text{ implies that }a\leq Cb\quad\text{for some constant }C>0 \tag{2.7}\]
which does not depend on a parameter \(\varepsilon>0\) (to be specified later).
### Assumptions on the data
We specify the hypotheses on the data associated with (1.1).
* **(P1)** The diffusion coefficient \(D_{0}:[0,1)\to[0,\infty)\) is a continuous function that is strictly increasing in \([0,\epsilon_{0})\) for some \(\epsilon_{0}\in(0,1]\), and satisfies \[D_{0}(0)=0,\ \lim_{m\nearrow 1}D_{0}(m)=\infty\ \text{ and }\ \ D_{0}(m)>0\text{ for all }m\in(0,1).\]
The primitive of \(D_{0}\), expressed by the Kirchhoff transform function \(\Phi:[0,1)\to[0,\infty)\), \(\Phi(m)=\int_{0}^{m}D_{0}(\varrho)\,\mathrm{d}\varrho\), satisfies \[\lim_{m\nearrow 1}\Phi(m)=\infty.\] (2.8)
* **(P2)** The diffusion coefficients \(D_{j}:[0,1]\times\mathbb{R}^{k}\to[D_{\min},D_{\max}]\), \(j=1,\ldots,k\), with constants \(0<D_{\min}<D_{\max}<\infty\), are Lipschitz continuous with respect to both variables.
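To illustrate the singular behaviour required in (P1) and (2.8), consider the prototypical biofilm coefficient from (1.2) with, say, \(a=b=1\) and \(d_{2}=1\) (these exponents are chosen here only to make the integral explicit). Then
\[\Phi(m)=\int_{0}^{m}\frac{\varrho}{1-\varrho}\,\mathrm{d}\varrho=\int_{0}^{m}\Big(\frac{1}{1-\varrho}-1\Big)\mathrm{d}\varrho=-\log(1-m)-m\;\longrightarrow\;\infty\quad\text{as }m\nearrow 1,\]
so (2.8) holds; for general \(a,b\geq 1\) the singular factor \((1-m)^{-b}\) forces \(\Phi(m)\to\infty\) in the same way.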
**Remark 2.1** (Biofilm models).: _In models for biofilm growth, see e.g. [6, 9], the diffusion coefficient \(D_{0}\) is given by the function in (1.2), and the diffusion coefficient \(D_{1}\) for the (single) substrate is assumed to be constant. These functions satisfy all assumptions in (P1)-(P2)._
**Remark 2.2** (Generalizations of the assumptions (P1)-(P2)).: _Our analysis can be extended to systems where the diffusion coefficient \(D_{0}\) is piecewise constant, non-degenerate, and/or has a porous media type degeneracy, e.g., \(D_{0}(m)=m^{a}\), for some constant \(a>1\). To keep the analysis uniform and self-contained, we only analyze the case (P1) which is more involved and arises in models for biofilm growth._
_We could also allow for degenerate diffusion coefficients \(D_{j}\) if they only depend on the substrate \(S_{j}\). Some additional assumptions are required to cover this case which are discussed in Corollary 3.2.1._
For the flow-field and reaction terms we make the following assumptions:
* **(P3)** The flow-field satisfies \(\boldsymbol{v}_{j}\in(L^{\infty}(Q))^{d}\), \(j=1,\ldots,k\).
* **(P4)** The functions \(f_{0},\,f_{j}\in C([0,1]\times\mathbb{R}^{k})\) are uniformly Lipschitz continuous. They can be extended to uniformly Lipschitz continuous functions on \(\mathbb{R}^{k+1}\) which (to simplify notation) we will also denote by \(f_{0},\,f_{j}\). The constant \(C_{L}\geq 0\) is the maximum of the Lipschitz constants of \(f_{0},f_{1},\ldots,f_{k}\). Moreover, \(f_{0}(0,\vec{s})\geq 0\) for all \(\vec{s}\in\mathbb{R}^{k}\).
* **(P4*)** There exists a non-negative and locally Lipschitz continuous function \(f_{\max}\in C(\mathbb{R})\) such that \(f_{0}(\cdot,\vec{s})\leq f_{\max}(\cdot)\) for all \(\vec{s}\in\mathbb{R}^{k}\).
**Remark 2.3** (Assumptions (P4)-(P4*)).: _Assumption (P4) admits reaction functions \(f_{0}(\cdot,\vec{s})\), \(f_{j}(\cdot,\vec{s})\) (for \(\vec{s}\in\mathbb{R}^{k}\)) that have superlinear growth with respect to their first argument as long as they are Lipschitz continuous within the interval \([0,1]\). This is because the physically relevant solutions satisfy \(M\in[0,1]\). However, before proving the upper bound for \(M\) (in Lemma 3.2), we need the functions \(f_{0}\) and \(f_{j}\) to be defined in \(\mathbb{R}^{k+1}\) (in Lemma 3.1) which is why we introduce the extensions._
_Assumption (P4*) is needed to derive the \(L^{\infty}\) bound for the solution \(M\). This is important in our setting since physically relevant solutions take values in \([0,1)\), otherwise, the models are not valid. Such \(L^{\infty}\) bounds may not be required in other applications, for example for porous medium-type equations. We therefore explicitly state in all theorems and lemmas where this assumption is required and where it can be omitted._
_Under additional assumptions, the analysis can be generalized also to systems with reaction functions that depend on \(\boldsymbol{x}\) and \(t\), e.g. see [15]. To simplify the presentation of our results, we omit this dependency here._
_Finally, we remark that the condition \(f_{0}(\cdot,\vec{s})\leq f_{\max}(\cdot)\) can be relaxed to \(f_{0}(\cdot,\vec{s})\leq g(|\vec{s}|)\,f_{\max}(\cdot)\) for some \(g\in C(\mathbb{R}^{+})\). Then, for the proofs to go through, we need uniform \(L^{\infty}\) bounds on the solution \((M,\vec{S})\) which can be established for a certain class of functions \(f_{j}\). For an example, we refer to Corollary 3.2.1._
For the boundary and initial data we assume the following properties:
* **(P5)** The initial data \(M_{0}\in L^{\infty}(\Omega)\) satisfies \[\underline{M}:=\operatorname{ess}\inf_{\mathbf{x}\in\Omega}\{M_{0}\}\geq 0\quad\text{and}\quad\overline{M}:=\operatorname{ess}\sup_{\mathbf{x}\in\Omega}\{M_{0}\}<1.\] The initial data \(\vec{S}_{0}\in(L^{\infty}(\Omega))^{k}\) satisfies \[\underline{S}:=\min_{1\leq j\leq k}\operatorname{ess}\inf_{\mathbf{x}\in\Omega}\{S_{0,j}\}>-\infty\quad\text{and}\quad\overline{S}:=\max_{1\leq j\leq k}\operatorname{ess}\sup_{\mathbf{x}\in\Omega}\{S_{0,j}\}<\infty.\]
* **(P6)** The Dirichlet boundary data \(h_{0}:\Gamma_{1}\to[\underline{M},\overline{M}]\) is such that there exists \(h_{0}^{e}\in H^{1}(\Omega)\) satisfying \(h_{0}^{e}|_{\Gamma_{1}}=h_{0}\) in a trace sense. If \(\Gamma_{1}=\emptyset\) then set \(h_{0}^{e}\equiv 0\). For the Dirichlet data \(h_{j}:\partial\Omega\to[\underline{S},\overline{S}]\) there also exist functions \(h_{j}^{e}\in H^{1}(\Omega)\) such that \(h_{j}^{e}|_{\partial\Omega}=h_{j}\) in a trace sense.
**Remark 2.4** (Assumption (P6)).: _Observe that, under the assumptions in (P6), it is always possible to choose the extensions to \(\Omega\) such that \(h_{0}^{e}\in[\underline{M},\overline{M}]\) and \(h_{j}^{e}\in[\underline{S},\overline{S}]\) a.e. in \(\Omega\). For example, consider \(\bar{h}^{e}=\min\{h_{0}^{e},\overline{M}\}\in H^{1}(\Omega)\). Then \(\bar{h}^{e}\leq\overline{M}\) a.e. and \(\bar{h}^{e}=h_{0}\) on \(\Gamma_{1}\). This choice will implicitly be used in the proofs that follow. Similar arguments apply to the boundary conditions \(h_{j}\)._
_To keep notations simple we only consider Dirichlet boundary conditions for the substrates \(\vec{S}\). Mixed Dirichlet-/Neumann boundary conditions with different divisions of the boundary depending on \(j\) can also be assumed without major modifications in the subsequent arguments, see e.g. [15]._
### Weak solutions
We introduce the following notion of weak solutions.
**Definition 1** (Weak solution).: _The pair \((M,\vec{S})\) with \(M\in\mathcal{W}\) (see (2.3)), \(\Phi(M)\in L^{2}(0,T;H^{1}(\Omega))\), \(S_{j}\in H^{1}(0,T;H^{-1}(\Omega))\cap C([0,T];L^{2}(\Omega))\), and \(\nu_{j}S_{j}\in L^{2}(0,T;H^{1}(\Omega))\), \(j=1,\ldots,k,\) is a weak solution of (1.1) provided that \(M\) is bounded in \([0,1)\) a.e. in \(Q\), \(M(0)=M_{0}\) and \(\vec{S}(0)=\vec{S}_{0}\) a.e. in \(\Omega\), \(\Phi(M)=\Phi(h_{0})\) on \(\Gamma_{1}\) and \(\nu_{j}S_{j}=\nu_{j}h_{j}\) on \(\partial\Omega\) in the trace sense, and for all \(\varphi\in L^{2}(0,T;\mathcal{H}^{1})\), \(\vec{\zeta}\in L^{2}(0,T;(H^{1}_{0}(\Omega))^{k})\), we have_
\[\int_{0}^{T}\langle\varphi,\partial_{t}M\rangle+\int_{0}^{T}( \nabla\Phi(M),\nabla\varphi)=\int_{0}^{T}(f_{0}(M,\vec{S}),\varphi), \tag{2.9a}\] \[\int_{0}^{T}\langle\zeta_{j},\partial_{t}S_{j}\rangle_{H^{1}_{0},H^{-1}}+ \nu_{j}\int_{0}^{T}(D_{j}(M,\vec{S})\nabla S_{j}+\mathbf{v}_{j}\,S_{j},\nabla\zeta _{j})=\int_{0}^{T}(f_{j}(M,\vec{S}),\zeta_{j}). \tag{2.9b}\]
**Remark 2.5**.: _In Definition 1 we take \(\nu_{j}\,S_{j}\in L^{2}(0,T;H^{1}(\Omega))\) instead of taking \(S_{j}\in L^{2}(0,T;H^{1}(\Omega))\). This is required since \(\nu_{j}=0\) is allowed in our setting, and in this case, \(S_{j}\) might not possess any spatial regularity. Similarly, we see that \(\Phi(M)\), and not \(M\), possesses spatial regularity. Therefore, the traces are also only defined for functions with sufficient spatial regularity._
## 3 Well-posedness for Dirichlet and mixed boundary conditions
In this section, we prove the well-posedness of weak solutions for the case when \(\Gamma_{1}\) has non-zero measure, i.e. \(M\) either satisfies Dirichlet boundary condition or mixed Dirichlet-Neumann boundary condition. The main results of this section are stated in the following theorems.
**Theorem 3.1** (Existence and boundedness).: _Let (P1)-(P6) and (P4*) be satisfied, and \(\Gamma_{1}\) have non-zero measure. Then, there exists a weak solution \((M,\vec{S})\) of (1.1) in the sense of Definition 1. Furthermore, a constant \(\delta\in(0,1)\) exists such that \(0\leq M\leq 1-\delta\) a.e. in \(Q\)._
**Theorem 3.2** (Uniqueness).: _Let the assumptions of Theorem 3.1 hold. In addition, for each \(j\in\{1,\ldots,k\}\), assume that either \(\nu_{j}=0\) or the diffusion coefficient \(D_{j}\) depends only on \(S_{j}\). Then, a unique weak solution \((M,\vec{S})\) of (1.1) exists in the sense of Definition 1._
The proof of Theorem 3.2 is based on a contraction argument which, along with the existence of solutions, guarantees uniqueness. However, a different argument based on Schauder's fixed point theorem is required for proving the existence of solutions in the more general setting of Theorem 3.1.
For the proof of these theorems, we initially focus on the first equation in (2.9) for a given \(\vec{S}\). In Section 3.1 we consider a non-degenerate approximation of (1.1a) and then discuss existence (Lemma 3.1) and boundedness (Lemma 3.2) of solutions. In Section 3.2, we pass the regularization parameter to zero to show the existence of weak solutions of the original problem (Lemma 3.3). We then consider the coupled system. In Section 3.3, the \(L^{1}\) contraction principle (Lemma 3.4) is applied to prove Theorem 3.2, and in Section 3.4, a Schauder argument (Lemma 3.5) is used to prove Theorem 3.1.
We remark that Lemmas 3.1-3.5 hold for all boundary conditions, including homogeneous Neumann boundary conditions. The proof of Lemma 3.1 is postponed to Section 4, due to complications arising from the homogeneous Neumann condition. Lemmas 3.2-3.5 are proven in the general case.
### A regularized problem
We introduce the following regularization of the Kirchhoff transform \(\Phi\): for \(\varepsilon>0\), let \(\Phi_{\varepsilon}\in C^{1}(\mathbb{R})\) be a non-degenerate approximation of \(\Phi\) satisfying
\[\varepsilon\leq\Phi_{\varepsilon}{}^{\prime}\leq\varepsilon^{-1}\quad\text{ and }\quad\lim_{\varepsilon\to 0}\Phi_{\varepsilon}(m)=\Phi(m)\quad\text{ for all }m\in[0,1). \tag{3.1}\]
A specific choice of \(\Phi_{\varepsilon}\) that will be used in the sequel and in Section 5 is
\[\Phi_{\varepsilon}(m):=\int_{0}^{m}\min\left\{\max\{\varepsilon,D_{0}(\varrho)\},\varepsilon^{-1}\right\}\,\mathrm{d}\varrho. \tag{3.2}\]
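A direct check shows that this choice indeed satisfies (3.1): by construction \(\Phi_{\varepsilon}^{\prime}(m)=\min\{\max\{\varepsilon,D_{0}(m)\},\varepsilon^{-1}\}\in[\varepsilon,\varepsilon^{-1}]\), and, for fixed \(m\in[0,1)\), the coefficient \(D_{0}\) is bounded on \([0,m]\), so the upper truncation is inactive for small \(\varepsilon\) and
\[|\Phi_{\varepsilon}(m)-\Phi(m)|\leq\int_{0}^{m}|\max\{\varepsilon,D_{0}(\varrho)\}-D_{0}(\varrho)|\,\mathrm{d}\varrho\leq\varepsilon\,m\;\longrightarrow\;0\quad\text{as }\varepsilon\to 0.\]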
Then, recalling the functional spaces defined in (2.3), the following lemma holds.
**Lemma 3.1** (Existence for a regularized problem).: _Let (P1)-(P6) hold. Let \(\vec{s}\in\mathcal{Z}\) be given and \(\varepsilon\in(0,1)\) be sufficiently small. Then there exists a unique \(M_{s,\varepsilon}\in\mathcal{X}+h_{0}^{e}\) which satisfies \(M_{s,\varepsilon}(0)=M_{0}\), and for all \(\varphi\in L^{2}(0,T;\mathcal{H}^{1})\),_
\[\int_{0}^{T}\langle\varphi,\partial_{t}M_{s,\varepsilon}\rangle+\int_{0}^{T}( \nabla\Phi_{\varepsilon}(M_{s,\varepsilon}),\nabla\varphi)=\int_{0}^{T}(f_{0} (M_{s,\varepsilon},\vec{s}),\varphi). \tag{3.3}\]
_Moreover, \(M_{s,\varepsilon}\in C([0,T];L^{2}(\Omega))\), and for all \(t\in[0,T]\) we have_
\[\|M_{s,\varepsilon}(t)\|^{2}+\int_{0}^{T}\left[\|\nabla\Phi_{ \varepsilon}(M_{s,\varepsilon})\|^{2}+\|\partial_{t}M_{s,\varepsilon}\|^{2}_ {\mathcal{H}^{-1}}\right]\] \[\lesssim 1+\int_{0}^{T}\left(\|\vec{s}\|^{2}+\|\Phi_{\varepsilon}(M _{s,\varepsilon})\|^{2}\right)+\left(1+\|\Phi_{\varepsilon}^{\prime}(M_{s, \varepsilon})\|_{L^{\infty}(Q)}\right)\|h_{0}^{e}\|^{2}_{H^{1}(\Omega)}. \tag{3.4}\]
_Furthermore, if \(M_{0}\in H^{1}(\Omega)\) in (P5), and \(M_{0}|_{\Gamma_{1}}=h_{0}\) in the trace sense then, in addition, it holds that_
\[\|\nabla\Phi_{\varepsilon}(M_{s,\varepsilon}(t))\|^{2}+\int_{0}^{T }\int_{\Omega}\frac{|\partial_{t}\Phi(M_{s,\varepsilon})|^{2}}{\Phi_{ \varepsilon}^{\prime}(M_{s,\varepsilon})}\] \[\lesssim\|\nabla\Phi_{\varepsilon}(M_{0})\|^{2}+\|\Phi_{ \varepsilon}^{\prime}(M_{s,\varepsilon})\|_{L^{\infty}(Q)}\left[1+\int_{0}^{T }\|\vec{s}\|^{2}\right]. \tag{3.5}\]
The proof of Lemma 3.1 is postponed to Section 4.1. For \(\Gamma_{1}\) having non-zero measure, the existence of \(M_{s,\varepsilon}\in\mathcal{X}+h_{0}^{e}\) follows immediately from [1] since \(\Phi_{\varepsilon}^{\prime}\) satisfies the uniform ellipticity condition. However, the result in [1] does not cover the case of homogeneous Neumann conditions, i.e. the case \(\Gamma_{1}=\emptyset\).
The assumption (P4*) is not required in Lemma 3.1, but it is needed for the next result.
**Lemma 3.2** (Boundedness for the regularized problem).: _In addition to the hypothesis of Lemma 3.1 we assume that (P4*) holds. Let \(M_{s,\varepsilon}\in\mathcal{X}+h_{0}^{e}\) denote the solution in Lemma 3.1. Moreover, let \(\hat{M}\in C^{1}(\mathbb{R}^{+})\) denote the solution of the integral equation_
\[\hat{M}(t)=\overline{M}+\int_{0}^{t}f_{\max}(\hat{M}(\varrho))\,\mathrm{d} \varrho,\qquad t\in[0,T].\]
* _Then,_ \(0\leq M_{s,\varepsilon}(t)\leq\hat{M}(t)\) _a.e. in_ \(\Omega\) _for all_ \(t\in[0,T]\)_._
* _If, in addition,_ \(\Gamma_{1}\) _has non-zero measure, then a constant_ \(\delta\in(0,1)\) _exists such that_ \[0\leq M_{s,\varepsilon}(t)\leq 1-\delta\ \text{ a.e. in }\Omega\text{ for all }t\in[0,T].\]
Observe that the above lemma implies that \(M_{s,\varepsilon}\in L^{\infty}(0,T;L^{\infty}(\Omega))\) and the family \(M_{s,\varepsilon}\) is uniformly bounded with respect to \(\varepsilon>0\) in \(L^{\infty}(0,T;L^{\infty}(\Omega))\).
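As an illustration (not needed in the sequel), consider the reaction terms (1.2): since \(s/(k_{4}+s)\leq 1\) for \(s\geq 0\), one admissible choice in (P4*) — for the physically relevant range \(m\in[0,1]\), \(s\geq 0\), together with a suitable Lipschitz extension elsewhere — is \(f_{\max}(m)=k_{3}[m]_{+}\), for which the integral equation is solved explicitly by
\[\hat{M}(t)=\overline{M}\,e^{k_{3}t},\qquad t\geq 0,\]
so the comparison function grows at most exponentially and, in particular, never blows up in finite time.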
Proof.: The existence of \(\hat{M}\) follows from the Picard-Lindelöf Theorem since \(f_{\max}\) is locally Lipschitz continuous by (P4*). Moreover, it satisfies \(\partial_{t}\hat{M}=f_{\max}(\hat{M})\) and therefore, \(\hat{M}\geq\overline{M}\) since \(f_{\max}\) was assumed to be non-negative in (P4*).
**(Step 1) \(\mathbf{M_{s,\varepsilon}}\geq 0\):** Inserting the test function \(\varphi=[M_{s,\varepsilon}]_{-}\) in (3.3) implies that
\[\int_{0}^{T}\big{[}\partial_{t}\left(\tfrac{1}{2}\|[M_{s,\varepsilon}]_{-}\|^{2 }\right)+\varepsilon\|\nabla[M_{s,\varepsilon}]_{-}\|^{2}\big{]}\stackrel{{ (\text{\sc P4})}}{{\leq}}C_{L}\int_{0}^{T}\|[M_{s,\varepsilon}]_{-}\|^{2}.\]
Since \([M_{s,\varepsilon}(0)]_{-}=[M_{0}]_{-}=0\), we have \(\|[M_{s,\varepsilon}]_{-}(T)\|=0\) using Gronwall's Lemma (2.5a).
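In more detail, the same computation on \([0,t]\) for \(t\in[0,T]\), after dropping the non-negative gradient term, gives
\[\tfrac{1}{2}\|[M_{s,\varepsilon}]_{-}(t)\|^{2}\leq\tfrac{1}{2}\|[M_{0}]_{-}\|^{2}+C_{L}\int_{0}^{t}\|[M_{s,\varepsilon}]_{-}\|^{2}=C_{L}\int_{0}^{t}\|[M_{s,\varepsilon}]_{-}\|^{2},\]
so (2.5a) with \(a\equiv 0\) yields \([M_{s,\varepsilon}]_{-}\equiv 0\), i.e. \(M_{s,\varepsilon}\geq 0\) a.e. in \(Q\).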
**(Step 2) \(\mathbf{M_{s,\varepsilon}}\leq\hat{\mathbf{M}}\):** Inserting the test function \(\varphi=[M_{s,\varepsilon}-\hat{M}]_{+}\in L^{2}(0,T;\mathcal{H}^{1})\) in (3.3) we obtain
\[\int_{0}^{T}\langle[M_{s,\varepsilon}-\hat{M}]_{+},\partial_{t}M _{s,\varepsilon}\rangle=\int_{0}^{T}\langle[M_{s,\varepsilon}-\hat{M}]_{+}, \partial_{t}[M_{s,\varepsilon}-\hat{M}]\rangle+\int_{0}^{T}\langle[M_{s, \varepsilon}-\hat{M}]_{+},\partial_{t}\hat{M}\rangle\] \[\qquad=\int_{0}^{T}\partial_{t}\left(\frac{1}{2}\|[M_{s, \varepsilon}-\hat{M}]_{+}\|^{2}\right)+\int_{0}^{T}(\partial_{t}\hat{M},[M_{s,\varepsilon}-\hat{M}]_{+})\] \[\qquad\stackrel{{(\text{\sc P5})}}{{=}}\frac{1}{2} \|[M_{s,\varepsilon}-\hat{M}]_{+}(T)\|^{2}+\int_{0}^{T}(\partial_{t}\hat{M},[ M_{s,\varepsilon}-\hat{M}]_{+}), \tag{3.6a}\] \[\int_{0}^{T}(\nabla\Phi_{\varepsilon}(M_{s,\varepsilon}),\nabla[M_{s, \varepsilon}-\hat{M}]_{+})=\int_{0}^{T}(\Phi_{\varepsilon}{}^{\prime}(M_{s, \varepsilon})\nabla M_{s,\varepsilon},\nabla[M_{s,\varepsilon}-\hat{M}]_{+}) \geq 0, \tag{3.6b}\]
and for the reaction term we obtain
\[\int_{0}^{T}(f_{0}(M_{s,\varepsilon},\vec{s}),[M_{s,\varepsilon}- \hat{M}]_{+})=\int_{0}^{T}(f_{0}(M_{s,\varepsilon},\vec{s})-f_{0}(\hat{M},\vec {s})+f_{0}(\hat{M},\vec{s}),[M_{s,\varepsilon}-\hat{M}]_{+})\] \[\stackrel{{(\text{\sc P4}),(\text{\sc P4}*)}}{{\leq}}C _{L}\int_{0}^{T}\|[M_{s,\varepsilon}-\hat{M}]_{+}\|^{2}+\int_{0}^{T}f_{\max}( \hat{M})[M_{s,\varepsilon}-\hat{M}]_{+}. \tag{3.6c}\]
Combining the estimates in (3.6) it follows that
\[\frac{1}{2}\|[M_{s,\varepsilon}-\hat{M}]_{+}(T)\|^{2}+\int_{0}^{T}(\partial_{ t}\hat{M}-f_{\max}(\hat{M}),[M_{s,\varepsilon}-\hat{M}]_{+})\leq C_{L}\int_{0}^{T} \|[M_{s,\varepsilon}-\hat{M}]_{+}\|^{2}. \tag{3.7}\]
The second term is zero by the definition of \(\hat{M}\). Hence, using Gronwall's Lemma (2.5a) we have the result.
**(Step 3) \(\mathbf{M_{s,\varepsilon}}\leq\mathbf{1}-\delta\):** This is a generalization of Proposition 6 in [9] to the case of mixed or homogeneous Neumann boundary conditions, see also the proof of Theorem 2.7 in [15]. For \(f_{\max}(\cdot)\) introduced in (P4*), let \(\hat{u}\in\mathcal{H}^{1}\) solve the elliptic problem
\[(\nabla\hat{u},\nabla\varphi)=(\hat{C},\varphi)\quad\text{for all }\varphi\in \mathcal{H}^{1},\quad\text{where }\hat{C}:=\max_{0\leq t\leq T}f_{\max}(\hat{M}(t)). \tag{3.8}\]
The existence of a unique weak solution \(\hat{u}\) directly follows from the Lax-Milgram Lemma. If \(d=1\) (one space-dimension), then we immediately have \(\hat{u}\in L^{\infty}(\Omega)\) from Morrey's inequality [11, Chapter 5]. Hence, let \(d\geq 2\). Set \(q=2d/(d-2)\) for \(d>2\), and \(q>2\) for \(d=2\). Then for \(m\geq 0\), inserting the test function \(\varphi=[\hat{u}-m]_{+}\in\mathcal{H}^{1}\) and denoting \(A(m):=\{\mathbf{x}\in\Omega:\hat{u}(\mathbf{x})>m\}\) we have the estimates,
\[\|\nabla[\hat{u}-m]_{+}\|^{2} \leq\hat{C}\,\|[\hat{u}-m]_{+}\|_{L^{1}(\Omega)},\] \[\|[\hat{u}-m]_{+}\|_{L^{1}(\Omega)} \leq|A(m)|^{1-\frac{1}{q}}\|[\hat{u}-m]_{+}\|_{L^{q}(\Omega)}\leq C^{\prime}|A(m)|^{1-\frac{1}{q}}\|\nabla[\hat{u}-m]_{+}\|,\]
where the last inequality follows from the Sobolev inequality [11, Chapter 5]. Hence, we have \(\|[\hat{u}-m]_{+}\|_{L^{1}(\Omega)}\leq{C^{\prime}}^{2}\hat{C}|A(m)|^{\gamma}\), where \(\gamma=2-\frac{2}{q}>1\). Thus, following the steps of [13, Lemma 7.3] we conclude that \(\hat{u}\in L^{\infty}(\Omega)\). Hence, using the comparison principle [21], one has \(\Phi_{\varepsilon}(M_{s,\varepsilon}(t))\leq\hat{u}+\bar{M}<\infty\) a.e. in \(\Omega\) for all \(\varepsilon>0\) and \(t>0\). In view of (2.8), this uniform bound on \(\Phi_{\varepsilon}(M_{s,\varepsilon})\) keeps \(M_{s,\varepsilon}\) away from \(1\), which concludes the proof.
**Remark 3.1** (Generalization to \(\vec{s}\in(L^{2}(Q))^{k}\)).: _Although Lemmas 3.1 and 3.2, and the following Lemma 3.3, assume \(\vec{s}\in\mathcal{Z}\) to simplify the presentation, the results remain valid for all \(\vec{s}\in(L^{2}(Q))^{k}\) as evident from the a-priori estimates (3.4)-(3.5). This observation will become important in Lemma 3.5 which provides the setting for the proof of Theorem 3.1._
### Existence of solutions of the degenerate parabolic problem
**Lemma 3.3** (Existence for the degenerate problem).: _Let (P1)-(P6) and (P4*) hold. Let \(\vec{s}\in\mathcal{Z}\) be given and let \(0<T^{*}\leq\infty\) denote the time, independent of \(\vec{s}\) and \(\varepsilon>0\), such that the solutions \(M_{s,\varepsilon}\in\mathcal{X}+h_{0}^{e}\) of (3.3) remain bounded in \([0,1)\) for all \(t<T^{*}\). Let \(T<T^{*}\). Then there exists a unique \(M_{s}\in\mathcal{W}\) with \(\Phi(M_{s})\in L^{2}(0,T;H^{1}(\Omega))\) satisfying \(M_{s}(0)=M_{0}\), \(\Phi(M_{s})=\Phi(h_{0})\) on \(\Gamma_{1}\) in the trace sense, and_
\[\int_{0}^{T}\langle\varphi,\partial_{t}M_{s}\rangle+\int_{0}^{T}(\nabla\Phi( M_{s}),\nabla\varphi)=\int_{0}^{T}(f_{0}(M_{s},\vec{s}),\varphi), \tag{3.9}\]
_for all \(\varphi\in L^{2}(0,T;\mathcal{H}^{1})\). Moreover, \(0\leq M_{s}<1-\delta\) a.e. in \(Q\) for some constant \(\delta\in(0,1)\)._
Proof.: In this proof, we will first assume that
\[M_{0}\in H^{1}(\Omega). \tag{3.10}\]
This constraint will later be dropped. Lemma 3.2 and the assumption \(T<T^{*}\), imply the existence of \(\bar{\delta}\in(0,1)\) such that
\[M_{s,\varepsilon}\in[0,1-\bar{\delta}],\ \ \text{and consequently},\ \ \phi_{\varepsilon}:=\Phi_{\varepsilon}(M_{s,\varepsilon})\in[0,\Phi(1-\bar{ \delta})]\ \ \text{a.e. in}\ Q, \tag{3.11}\]
for small \(\varepsilon>0\). The shorthand \(\phi_{\varepsilon}\) will be used to denote \(\Phi_{\varepsilon}(M_{s,\varepsilon})\) for the rest of the proof.
**(Step 1) Convergence of \(\mathbf{M_{s,\varepsilon}}\) and \(\phi_{\varepsilon}\), assuming (3.10):** Taking (3.11) into account which implies that \({\Phi_{\varepsilon}}^{\prime}(M_{s,\varepsilon})\) is bounded above independent of \(\varepsilon\), we conclude from (3.5) in Lemma 3.1 that \(\phi_{\varepsilon}\) is uniformly bounded in \(\mathcal{Y}\) (see (2.3)). Observe that \(\mathcal{Y}\subset H^{1}(Q)\). Using the compact embedding \(H^{1}(Q)\hookrightarrow\hookrightarrow L^{2}(Q)\), we conclude that there exists \(\phi\in H^{1}(Q)\) and a subsequence \(\phi_{\varepsilon}\) such that for \(\varepsilon\to 0\),
\[\phi_{\varepsilon}\rightharpoonup\phi\ \text{weakly in}\ H^{1}(Q), \tag{3.12a}\] \[\phi_{\varepsilon}\to\phi\ \text{strongly in}\ L^{2}(Q). \tag{3.12b}\]
Setting
\[M_{s}:=\Phi^{-1}(\phi)\in L^{\infty}(0,T;L^{\infty}(\Omega)) \tag{3.13}\]
we claim that for \(\varepsilon\to 0\),
\[\partial_{t}M_{s,\varepsilon}\rightharpoonup\partial_{t}M_{s}\text{ weakly in }L^{2}(0,T;\mathcal{H}^{-1}(\Omega)), \tag{3.14a}\] \[M_{s,\varepsilon}\to M_{s}\text{ strongly in }L^{2}(Q). \tag{3.14b}\]
To see this, we consider a convex strictly increasing function \(\eta\in C^{1}([0,1))\) such that
\[\eta=\Phi\text{ in }[0,\epsilon_{0}),\quad\text{ and }\quad\eta\circ\Phi^{-1}\in\operatorname{Lip}(\mathbb{R}^{+}), \tag{3.15}\]
where \(\epsilon_{0}\in(0,1)\) was fixed in (P1). Recall that \(D_{0}\) is strictly increasing in \([0,\epsilon_{0})\), and therefore, \(\Phi\) is convex in \([0,\epsilon_{0})\). For \(\varrho>\epsilon_{0}\), \(\Phi^{\prime}(\varrho)=D_{0}(\varrho)\) is bounded away from \(0\), implying that \((\Phi^{-1})^{\prime}(\varrho)\) is bounded for \(\varrho>\Phi(\epsilon_{0})\). Hence, it is always possible to find such a function \(\eta\).
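For instance, one admissible choice (any function satisfying (3.15) works equally well) is

\[\eta(\varrho):=\begin{cases}\Phi(\varrho),&\varrho\in[0,\epsilon_{0}),\\ \Phi(\epsilon_{0})+D_{0}(\epsilon_{0})\,(\varrho-\epsilon_{0}),&\varrho\in[\epsilon_{0},1),\end{cases}\]

which is convex and increasing since \(\Phi^{\prime}=D_{0}\) is increasing on \([0,\epsilon_{0})\), and for which \(\eta\circ\Phi^{-1}\) is Lipschitz since \((\Phi^{-1})^{\prime}\) is bounded on \((\Phi(\epsilon_{0}),\infty)\).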
Using (3.11) and that \(\eta\) is strictly increasing and convex, we obtain
\[\begin{split}\eta\left(\frac{1}{|Q|}\int_{Q}|M_{s,\varepsilon}-M_{s}|\right)&\leq\frac{1}{|Q|}\int_{Q}\eta\left(|M_{s,\varepsilon}-M_{s}|\right)\leq\frac{1}{|Q|}\int_{Q}|\eta(M_{s,\varepsilon})-\eta(M_{s})|\\ &=\frac{1}{|Q|}\int_{Q}\left|(\eta\circ\Phi^{-1})(\Phi(M_{s,\varepsilon}))-(\eta\circ\Phi^{-1})(\phi)\right|\\ &\lesssim\int_{Q}|\Phi(M_{s,\varepsilon})-\phi_{\varepsilon}|+\int_{Q}|\phi_{\varepsilon}-\phi|\ \longrightarrow\ 0\quad\text{as }\varepsilon\to 0.\end{split}\]

Here, the first inequality is Jensen's inequality, the second one uses that \(\eta\) is convex and increasing with \(\eta(0)=0\) (so that \(\eta(|a-b|)\leq|\eta(a)-\eta(b)|\)), and the last step uses the Lipschitz continuity of \(\eta\circ\Phi^{-1}\) from (3.15), the uniform convergence \(\Phi_{\varepsilon}\to\Phi\) on \([0,1-\bar{\delta}]\) (see (3.2) and (3.11)), and the strong convergence (3.12b). Since \(\eta\) is continuous and strictly increasing with \(\eta(0)=0\), this shows that \(M_{s,\varepsilon}\to M_{s}\) in \(L^{1}(Q)\), and by the uniform boundedness (3.11) also in \(L^{2}(Q)\), which is (3.14b). Moreover, (3.4) implies that \(\partial_{t}M_{s,\varepsilon}\) is uniformly bounded in \(L^{2}(0,T;\mathcal{H}^{-1}(\Omega))\); hence, along a further subsequence, \(\partial_{t}M_{s,\varepsilon}\) converges weakly, and the limit is identified with \(\partial_{t}M_{s}\) using (3.14b). This proves (3.14a).
Finally, to show the continuity of \(M_{s}\) in time, we observe that for \(\tau>0\) one has
\[\|M_{s}(t+\tau)-M_{s}(t)\|^{2}=(M_{s}(t+\tau)-M_{s,\varepsilon}(t+ \tau),M_{s}(t+\tau)-M_{s}(t))\] \[\quad+(M_{s,\varepsilon}(t+\tau)-M_{s,\varepsilon}(t),M_{s}(t+ \tau)-M_{s}(t))+(M_{s,\varepsilon}(t)-M_{s}(t),M_{s}(t+\tau)-M_{s}(t)).\]
The first and last term on the right side are arbitrarily small for all sufficiently small \(\varepsilon\) due to the weak convergence of \(M_{s,\varepsilon}(t)\) in \(L^{2}(\Omega)\). The term in the middle vanishes as \(\tau\) tends to zero since \(M_{s,\varepsilon}\in C([0,T];L^{2}(\Omega))\). This proves that \(M_{s}\in C([0,T];L^{2}(\Omega))\).
Using (3.12),(3.14) we can now pass to the limit in (3.3) and conclude that \(M_{s}\) is a solution of (3.9). This completes the proof for the case \(M_{0}\in H^{1}(\Omega)\).
**(Step 2) Existence for \(\mathbf{M_{0}\in L^{1}(\Omega)}\) satisfying (P5):** We first show that for a given \(M_{0}\in L^{1}(\Omega)\) and \(\mu>0\), there exists \(M_{0}^{\mu}\in H^{1}(\Omega)\) such that
\[0\leq M_{0}^{\mu}\leq\bar{M}<1\text{ a.e. in }Q,\text{ and }\|M_{0}^{\mu}-M_{0}\|_{L^{1}(\Omega)}\leq\mu. \tag{3.18}\]
The existence of \(\tilde{M}_{0}^{\mu}\in C_{c}^{\infty}(\mathbb{R}^{d})\) such that \(\|\tilde{M}_{0}^{\mu}-M_{0}\|_{L^{1}(\Omega)}\leq\frac{1}{2}\mu\) follows from the fact that \(C_{c}^{\infty}(\mathbb{R}^{d})\) is dense in \(L^{1}(\mathbb{R}^{d})\), see Theorem 4.3 in [4]. Define \(M_{0}^{\mu}=\min\{\bar{M},\tilde{M}_{0}^{\mu}\}\in H^{1}(\Omega)\), where \(\bar{M}\) was defined in (P5). Then \(M_{0}^{\mu}\) satisfies (3.18) since
\[\|M_{0}^{\mu}-M_{0}\|_{L^{1}(\Omega)} \leq\|M_{0}^{\mu}-\tilde{M}_{0}^{\mu}\|_{L^{1}(\Omega)}+\|\tilde {M}_{0}^{\mu}-M_{0}\|_{L^{1}(\Omega)}\] \[=\|[\tilde{M}_{0}^{\mu}-\bar{M}]_{+}\|_{L^{1}(\Omega)}+\|\tilde{M }_{0}^{\mu}-M_{0}\|_{L^{1}(\Omega)}\] \[\overset{(P5)}{\leq}\|\tilde{M}_{0}^{\mu}-M_{0}\|_{L^{1 }(\Omega)}+\|\tilde{M}_{0}^{\mu}-M_{0}\|_{L^{1}(\Omega)}\leq\mu.\]
Now, let \(M_{s}^{\mu}\in\mathcal{W}\) be the weak solution corresponding to the initial data \(M_{s}^{\mu}(0)=M_{0}^{\mu}\in H^{1}(\Omega)\), \(\mu>0\), which exists by Step 1. Consider a sequence \(\{\mu_{n}\}_{n\in\mathbb{N}}\subset\mathbb{R}^{+}\) converging to zero. Then, the \(L^{1}\)-contraction result in [21] implies that there exists a constant \(C>0\) independent of \(\mu\) such that
\[\|(M_{s}^{\mu_{m}}-M_{s}^{\mu_{n}})(t)\|_{L^{1}(\Omega)}\leq C\|M_{0}^{\mu_{m} }-M_{0}^{\mu_{n}}\|_{L^{1}(\Omega)} \tag{3.19}\]
for all \(m,n\in\mathbb{N}\), \(t\in[0,T]\). Note that an \(L^{1}\)-contraction result also holds for homogeneous Neumann boundary conditions, see [3]. Hence, \(\{M_{s}^{\mu_{n}}\}_{n\in\mathbb{N}}\) is a Cauchy sequence in \(L^{1}(\Omega)\), and since \(M_{s}^{\mu_{n}}\in L^{\infty}(\Omega)\) is uniformly bounded with respect to \(\mu_{n}\) (see Lemma 3.2), it is also a Cauchy sequence in \(L^{2}(\Omega)\). Since \(M_{s}^{\mu_{n}}\in C([0,T];L^{2}(\Omega))\), we conclude that there exists \(M_{s}\in C([0,T];L^{2}(\Omega))\) such that
\[\|M_{s}^{\mu_{n}}-M_{s}\|_{C([0,T];L^{2}(\Omega))}\to 0\quad\text{ as }n\to\infty.\]
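The passage from the \(L^{1}\)- to the \(L^{2}\)-Cauchy property used above rests on the elementary interpolation bound

\[\|f\|_{L^{2}(\Omega)}^{2}\leq\|f\|_{L^{1}(\Omega)}\,\|f\|_{L^{\infty}(\Omega)},\]

applied to \(f=(M_{s}^{\mu_{m}}-M_{s}^{\mu_{n}})(t)\), together with the uniform \(L^{\infty}\)-bound from Lemma 3.2.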
The uniform boundedness of \(M_{s}^{\mu_{n}}\) in \(\mathcal{W}\) and of \(\Phi(M_{s}^{\mu_{n}})\) in \(L^{2}(0,T;H^{1}(\Omega))\) follows directly from (3.4), Lemma 3.1. The strong convergence \(M_{s}^{\mu_{n}}\to M_{s}\) and the uniform \(L^{\infty}\)-boundedness away from \(1\) (the singular point of \(\Phi\)) imply that \(\Phi(M_{s}^{\mu_{n}})\) also converges strongly to \(\Phi(M_{s})\) in \(L^{2}(\Omega)\) for all \(t\in[0,T]\). Hence, similarly to before, passing to the limit \(n\to\infty\) it follows that \(M_{s}\) solves (3.9).
### A contraction argument for proving Theorem 3.2
We first show the existence of a unique weak solution under the additional assumptions stated in Theorem 3.2 compared to Theorem 3.1. The assumptions in Theorem 3.2 demand that \(D_{j}\) depends only on \(S_{j}\), i.e. \(D_{j}:\mathbb{R}\to[D_{\min},D_{\max}]\), unless \(\nu_{j}=0\), in which case \(D_{j}\) can be defined in the same way without loss of generality. Hence, similar to (2.8), we introduce the function
\[\Phi_{j}(S):=\int_{0}^{S}D_{j}(\varrho)\,\mathrm{d}\varrho,\qquad\text{for }j \in\{1,\ldots,k\}. \tag{3.20}\]
Observe that due to (P2), \(\Phi_{j}\) is Lipschitz continuous and strictly increasing.
For a given \(\vec{s}\in\mathcal{Z}\) let \(M_{s}\in\mathcal{W}\) be the corresponding solution in Lemma 3.3. Define the operator \(\mathfrak{A}:\mathcal{Z}\to\mathcal{Z}\) such that for all \(j\in\{1,\ldots,k\}\), \(\mathfrak{A}(\vec{s})_{j}\) satisfies \(\nu_{j}\,\mathfrak{A}(\vec{s})_{j}\in L^{2}(0,T;H^{1}(\Omega))\), \(\mathfrak{A}(\vec{s})_{j}\in H^{1}(0,T;H^{-1}(\Omega))\), and for all \(\zeta_{j}\in L^{2}(0,T;H^{1}_{0}(\Omega))\),
\[\int_{0}^{T}\left[\langle\zeta_{j},\partial_{t}\mathfrak{A}( \vec{s})_{j}\rangle_{H^{1}_{0},H^{-1}}+\nu_{j}\,(\nabla\Phi_{j}(\mathfrak{A}( \vec{s})_{j})+\boldsymbol{v}_{j}\,\mathfrak{A}(\vec{s})_{j},\nabla\zeta_{j}) \right]=\int_{0}^{T}(f_{j}(M_{s},\vec{s}),\zeta_{j}), \tag{3.21a}\]
with
\[\mathfrak{A}(\vec{s})_{j}(0)=S_{0,j}\quad\text{and}\quad\nu_{j}\,\mathfrak{A}(\vec{s})_{j}=\nu_{j}h_{j}\ \text{ on }\partial\Omega\ \text{ in the trace sense}. \tag{3.21b}\]
To prove Theorem 3.2 we need the following lemma.
**Lemma 3.4** (\(L^{1}\)-contraction property of \(\mathfrak{A}\)).: _Under the assumptions of Theorem 3.2, define \(\Phi_{j}:\mathbb{R}\to\mathbb{R}\) by (3.20). Assume that \(T<T^{*}\) for \(T^{*}>0\) introduced in Lemma 3.3. Then the operator \(\mathfrak{A}:\mathcal{Z}\to\mathcal{Z}\), introduced in (3.21), is well-defined. Moreover, there exists a strictly increasing function \(\mathfrak{C}\in C^{1}(\mathbb{R}^{+})\) with \(\mathfrak{C}(0)=0\) such that for all \(t\in[0,T]\) and \(\vec{s}_{1},\,\vec{s}_{2}\in\mathcal{Z}\),_
\[\int_{0}^{t}\|\mathfrak{A}(\vec{s}_{1})-\mathfrak{A}(\vec{s}_{2})\|_{(L^{1}( \Omega))^{k}}\leq\mathfrak{C}(t)\int_{0}^{t}\|\vec{s}_{1}-\vec{s}_{2}\|_{(L^{1 }(\Omega))^{k}}.\]
Proof.: Since \(f_{j}(M_{s},\vec{s})\in C([0,T];L^{2}(\Omega))\), and \(D_{j}\) is bounded from above and below by a positive constant by (P2), the existence and regularity results in [1] imply that \(\mathfrak{A}(\vec{s})_{j}\) is well-defined for \(\nu_{j}>0\) (similar to Lemma 3.1). If \(\nu_{j}=0\), then \(\mathfrak{A}(\vec{s})_{j}\) is simply the solution of an ODE with known right hand side. From (3.9), using the \(L^{1}\)-contraction result in [3, 21] and the Lipschitz continuity of \(f_{0}\), it follows that for all \(t\in[0,T]\),
\[\begin{split}\|(M_{s_{1}}-M_{s_{2}})(t)\|_{L^{1}(\Omega)}&\leq\int_{0}^{t}\|f_{0}(M_{s_{1}},\vec{s}_{1})-f_{0}(M_{s_{2}},\vec{s}_{2})\|_{L^{1}(\Omega)}\\ &\overset{(P4)}{\leq}C_{L}\int_{0}^{t}\|\vec{s}_{1}-\vec{s}_{2}\|_{(L^{1}(\Omega))^{k}}+C_{L}\int_{0}^{t}\|M_{s_{1}}-M_{s_{2}}\|_{L^{1}(\Omega)}.\end{split} \tag{3.22}\]
Applying Gronwall's Lemma (2.5a) we conclude that
\[\|(M_{s_{1}}-M_{s_{2}})(t)\|_{L^{1}(\Omega)}\leq C_{L}\exp(C_{L}\,t)\int_{0}^{ t}\|\vec{s}_{1}-\vec{s}_{2}\|_{(L^{1}(\Omega))^{k}}. \tag{3.23}\]
We now apply the \(L^{1}\)-contraction principle to (3.21) and use the Lipschitz continuity of \(f_{j}\) and the previous estimate to get
\[\begin{split}\|(\mathfrak{A}(\vec{s}_{1})_{j}-\mathfrak{A}(\vec{s}_{2})_{j})(t)\|_{L^{1}(\Omega)}&\leq\int_{0}^{t}\|f_{j}(M_{s_{1}},\vec{s}_{1})-f_{j}(M_{s_{2}},\vec{s}_{2})\|_{L^{1}(\Omega)}\\ &\leq C_{L}\int_{0}^{t}\|\vec{s}_{1}-\vec{s}_{2}\|_{(L^{1}(\Omega))^{k}}+C_{L}\int_{0}^{t}\|M_{s_{1}}-M_{s_{2}}\|_{L^{1}(\Omega)}\\ &\leq C_{L}(1+C_{L}\,t\,\exp(C_{L}\,t))\int_{0}^{t}\|\vec{s}_{1}-\vec{s}_{2}\|_{(L^{1}(\Omega))^{k}}.\end{split} \tag{3.24}\]
Note that this estimate also holds for the case \(\nu_{j}=0\). Hence, setting \(\mathfrak{C}(t)=k\,C_{L}\,t\,(1+C_{L}\,t\,\exp(C_{L}\,t))\) the result follows.
Proof of Theorem 3.2.: Choosing \(T>0\) small enough such that \(\mathfrak{C}(T)<1\) the existence of a unique weak solution \((M,\vec{S})\) of (1.1) follows from Lemma 3.4 and Banach's fixed point theorem. Since Lemma 3.2 implies that \(T^{*}=\infty\) provided that \(\Gamma_{1}\) has a non-zero measure, the argument can be repeated and solutions can be patched together to cover the interval \([0,T]\) for an arbitrary \(T>0\), thus concluding the proof.
### A fixed point argument for proving Theorem 3.1
In this section, we use Schauder's fixed point theorem to prove the existence of solutions for general diffusion coefficients \(D_{j}\) satisfying (P2). Since the case of ODE-PDE couplings, where \(\nu_{j}=0\), is already covered by the previous section, here we assume that \(\nu_{j}>0\). As in the previous section, we define the map \(\mathfrak{B}:(L^{2}(Q))^{k}\to(L^{2}(Q))^{k}\) such that for all \(j\in\{1,\ldots,k\}\), \(\mathfrak{B}(\vec{s})_{j}\in L^{2}(0,T;H^{1}(\Omega))\cap H^{1}(0,T;H^{-1}( \Omega))\), and for all \(\zeta_{j}\in L^{2}(0,T;H^{1}_{0}(\Omega))\),
\[\int_{0}^{T}[\langle\zeta_{j},\partial_{t}\mathfrak{B}(\vec{s})_{j}\rangle_{H^{1}_{0},H^{-1}}+\nu_{j}(D_{j}(M_{s},\vec{s})\nabla\mathfrak{B}(\vec{s})_{j}+\boldsymbol{v}_{j}\,\mathfrak{B}(\vec{s})_{j},\nabla\zeta_{j})]=\int_{0}^{T}(f_{j}(M_{s},\mathfrak{B}(\vec{s})),\zeta_{j}), \tag{3.25a}\]
with
\[\mathfrak{B}(\vec{s})_{j}(0)=S_{0,j},\quad\text{and}\quad\mathfrak{B}(\vec{s})_{j}=h_{j}\ \text{ on }\partial\Omega\ \text{ in the trace sense}. \tag{3.25b}\]
**Lemma 3.5** (Schauder criteria for \(\mathfrak{B}\)).: _Let \(\nu_{j}>0\) for all \(j\in\{1,\ldots,k\}\). Then under the assumptions of Lemma 3.3, the operator \(\mathfrak{B}:(L^{2}(Q))^{k}\to(L^{2}(Q))^{k}\) introduced in (3.25) is well-defined, continuous, compact, and \(\|\mathfrak{B}(\vec{s})\|_{\mathcal{Z}}\) is bounded for all \(\vec{s}\in(L^{2}(Q))^{k}\)._
Proof.: **(Step 1): Well-posedness, boundedness and compactness.** Recalling Remark 3.1, we have existence and boundedness of weak solutions \(M_{s}\in\mathcal{W}\) for any \(\vec{s}\in(L^{2}(Q))^{k}\). We observe that \(D_{j}\) satisfies the ellipticity condition by (P2), \(\boldsymbol{v}_{j}\in(L^{\infty}(\Omega))^{d}\), \(f_{j}(\cdot,s)\) is Lipschitz continuous, and for \(h_{j}^{e}\) in (P6), we have
\[|f_{j}(M_{s},h_{j}^{e})|\stackrel{{(P4)}}{{\leq}}C(1+|h_{j}^{e}| +|M_{s}|)\in L^{2}(\Omega)\text{ for a constant }C>0.\]
Consequently, \(\mathfrak{B}(\vec{s})_{j}\in L^{2}(0,T;H^{1}(\Omega))\cap H^{1}(0,T;H^{-1}( \Omega))\) is well-defined. This is also evident from Schaefer's fixed point argument [11, Chapter 9] using the a-priori estimate obtained by
inserting \(\zeta_{j}=\mathfrak{B}(\vec{s})_{j}-h_{j}^{e}\) in (3.25). For the first term this yields,
\[\int_{0}^{T}\langle\mathfrak{B}(\vec{s})_{j}-h_{j}^{e},\partial_{t}\mathfrak{B}( \vec{s})_{j}\rangle_{H_{0}^{1},H^{-1}}=\frac{1}{2}\left[\|\mathfrak{B}(\vec{s }(T))_{j}-h_{j}^{e}\|^{2}-\|S_{0,j}-h_{j}^{e}\|^{2}\right],\]
and for the diffusion and convection terms we obtain,
\[\nu_{j}\int_{0}^{T}(D_{j}(M_{s},\vec{s})\nabla\mathfrak{B}(\vec{s})_{j},\nabla(\mathfrak{B}(\vec{s})_{j}-h_{j}^{e}))=\frac{\nu_{j}}{2}\int_{Q}D_{j}(M_{s},\vec{s})\left[|\nabla\mathfrak{B}(\vec{s})_{j}|^{2}-|\nabla h_{j}^{e}|^{2}+|\nabla(\mathfrak{B}(\vec{s})_{j}-h_{j}^{e})|^{2}\right],\]

\[\left|\nu_{j}\int_{0}^{T}(\boldsymbol{v}_{j}\,\mathfrak{B}(\vec{s})_{j},\nabla(\mathfrak{B}(\vec{s})_{j}-h_{j}^{e}))\right|\leq\frac{\nu_{j}D_{\min}}{4}\int_{0}^{T}\|\nabla(\mathfrak{B}(\vec{s})_{j}-h_{j}^{e})\|^{2}+\frac{\nu_{j}\|\boldsymbol{v}_{j}\|_{L^{\infty}(\Omega)}^{2}}{D_{\min}}\int_{0}^{T}\|\mathfrak{B}(\vec{s})_{j}\|^{2},\]

where the last estimate follows from the Cauchy-Schwarz and Young inequalities. The source term is estimated using the Lipschitz continuity of \(f_{j}\) from (P4) and the boundedness of \(M_{s}\). Combining these estimates with the ellipticity \(D_{j}\geq D_{\min}\) from (P2) and applying Gronwall's Lemma (2.5a), we obtain bounds on \(\|\mathfrak{B}(\vec{s})_{j}\|_{C([0,T];L^{2}(\Omega))}\) and \(\|\nabla\mathfrak{B}(\vec{s})_{j}\|_{L^{2}(Q)}\), and in turn, via (3.25a), on \(\|\partial_{t}\mathfrak{B}(\vec{s})_{j}\|_{L^{2}(0,T;H^{-1}(\Omega))}\), all independent of \(\vec{s}\). In particular, \(\|\mathfrak{B}(\vec{s})\|_{\mathcal{Z}}\) is bounded, and the compactness of the embedding \(L^{2}(0,T;H^{1}(\Omega))\cap H^{1}(0,T;H^{-1}(\Omega))\hookrightarrow\hookrightarrow L^{2}(Q)\) [23] implies that \(\mathfrak{B}\) is compact.

**(Step 2) Continuity:** Let \(\{\vec{s}^{i}\}_{i\in\mathbb{N}}\subset(L^{2}(Q))^{k}\) converge to some \(\vec{s}^{\star}\) in \((L^{2}(Q))^{k}\), and let \(M_{s}^{i},\,M_{s}^{\star}\in\mathcal{W}\) denote the solutions of (3.9) corresponding to \(\vec{s}^{i}\) and \(\vec{s}^{\star}\) (see Remark 3.1). Then by the
\(L^{1}\)-contraction result (3.23) we have that \(\|(M^{i}_{s}-M^{\star}_{s})(t)\|_{L^{1}(\Omega)}\to 0\) for all \(t\in(0,T]\). Since, \(M^{i}_{s}\) are bounded in \(L^{\infty}(\Omega)\) (Lemma 3.3), one further has that
\[\|\vec{s}^{i}-\vec{s}^{\star}\|_{(L^{2}(Q))^{k}}+\|M^{i}_{s}-M^{\star}_{s} \|_{C([0,T];L^{2}(\Omega))}\to 0\ \ \text{as}\ i\to\infty. \tag{3.26}\]
Observe that, since \(\mathfrak{B}(\vec{s}^{\star})_{j}\in L^{2}(0,T;H^{1}(\Omega))\), for any given \(\varepsilon>0\) there exists \(s^{\varepsilon,\star}_{j}\in C^{\infty}(\bar{Q})\) such that
\[\|\mathfrak{B}(\vec{s}^{\star})_{j}-s^{\varepsilon,\star}_{j}\|_{L^{2}(0,T;H^ {1}(\Omega))}\leq\varepsilon/4D_{\max}. \tag{3.27}\]
We consider the difference of \(\mathfrak{B}(\vec{s}^{i})_{j}\) and \(\mathfrak{B}(\vec{s}^{\star})_{j}\) by subtracting two versions of (3.25). First, we split up the diffusion term,
\[\int_{0}^{T}(D_{j}(M^{i}_{s},\vec{s}^{i})\nabla\mathfrak{B}(\vec{ s}^{i})_{j}-D_{j}(M^{\star}_{s},\vec{s}^{\star})\nabla\mathfrak{B}(\vec{s}^{ \star})_{j},\nabla\zeta_{j})\] \[=\int_{0}^{T}(D_{j}(M^{i}_{s},\vec{s}^{i})\nabla(\mathfrak{B}( \vec{s}^{i})_{j}-\mathfrak{B}(\vec{s}^{\star})_{j})+(D_{j}(M^{i}_{s},\vec{s}^ {i})-D_{j}(M^{\star}_{s},\vec{s}^{\star}))\nabla\mathfrak{B}(\vec{s}^{\star})_ {j},\nabla\zeta_{j}), \tag{3.28a}\]

and use Holder's inequality to estimate the second term as follows,

\[\begin{split}&\|(D_{j}(M^{i}_{s},\vec{s}^{i})-D_{j}(M^{\star}_{s},\vec{s}^{\star}))\nabla\mathfrak{B}(\vec{s}^{\star})_{j}\|_{L^{2}(Q)}\\ &\quad\leq\|(D_{j}(M^{i}_{s},\vec{s}^{i})-D_{j}(M^{\star}_{s},\vec{s}^{\star}))\nabla s^{\varepsilon,\star}_{j}\|_{L^{2}(Q)}+\|(D_{j}(M^{i}_{s},\vec{s}^{i})-D_{j}(M^{\star}_{s},\vec{s}^{\star}))\nabla(\mathfrak{B}(\vec{s}^{\star})_{j}-s^{\varepsilon,\star}_{j})\|_{L^{2}(Q)}\\ &\quad\leq\|s^{\varepsilon,\star}_{j}\|_{C^{1}(\bar{Q})}\|D_{j}(M^{i}_{s},\vec{s}^{i})-D_{j}(M^{\star}_{s},\vec{s}^{\star})\|_{L^{2}(Q)}+2D_{\max}\|\nabla(\mathfrak{B}(\vec{s}^{\star})_{j}-s^{\varepsilon,\star}_{j})\|_{L^{2}(Q)}\overset{(3.26),(3.27)}{\leq}\varepsilon,\end{split} \tag{3.28b}\]

for all \(i\geq i_{\varepsilon,1}\), where \(i_{\varepsilon,1}\in\mathbb{N}\) is large enough. Here we used the Lipschitz continuity of \(D_{j}\), see (P2). Furthermore, (3.26) implies for \(i\geq i_{\varepsilon,2}\), where \(i_{\varepsilon,2}\in\mathbb{N}\) is large enough, that

\[\|f_{j}(M^{i}_{s},\mathfrak{B}(\vec{s}^{i}))-f_{j}(M^{\star}_{s},\mathfrak{B}(\vec{s}^{\star}))\|_{L^{2}(\Omega)}\overset{(P4),(3.26)}{\lesssim}\varepsilon+\|\mathfrak{B}(\vec{s}^{i})-\mathfrak{B}(\vec{s}^{\star})\|_{(L^{2}(\Omega))^{k}}. \tag{3.28c}\]

Hence, defining \(\eth s^{i}_{j}:=\mathfrak{B}(\vec{s}^{i})_{j}-\mathfrak{B}(\vec{s}^{\star})_{j}\), it follows from (3.25) that for \(i\geq\max\{i_{\varepsilon,1},i_{\varepsilon,2}\}\),

\[\begin{split}&\left|\int_{0}^{T}[\langle\zeta_{j},\partial_{t}\,\eth s^{i}_{j}\rangle_{H^{1}_{0},H^{-1}}+\nu_{j}(D_{j}(M^{i}_{s},\vec{s}^{i})\nabla\eth s^{i}_{j}+\boldsymbol{v}_{j}\,\eth s^{i}_{j},\nabla\zeta_{j})]\right|\\ &\quad\overset{(3.28)}{\lesssim}\int_{0}^{T}\left[\sum_{j=1}^{k}\|\eth s^{i}_{j}\|\|\zeta_{j}\|+\varepsilon\|\zeta_{j}\|+\varepsilon\|\nabla\zeta_{j}\|\right]\lesssim\int_{0}^{T}\left[\sum_{j=1}^{k}\|\eth s^{i}_{j}\|\|\zeta_{j}\|+\varepsilon+\varepsilon\left(\|\zeta_{j}\|^{2}+\|\nabla\zeta_{j}\|^{2}\right)\right].\end{split}\]
Finally, we insert the test function \(\zeta_{j}=\eth s^{i}_{j}\in L^{2}(0,T;H^{1}_{0}(\Omega))\) and sum up the resulting estimates from \(j=1\) to \(k\). Then for \(\varepsilon>0\) small enough, one obtains using Gronwall's lemma (2.5a)
\[\sum_{j=1}^{k}\left[\|\eth s^{i}_{j}\|^{2}+\int_{0}^{T}\|\nabla\eth s^{i}_{j}\|^{2}\right]\lesssim\varepsilon.\]
Hence, the right-hand side can be made arbitrarily small by choosing \(\varepsilon>0\) small enough, which simply requires \(i\geq\max\{i_{\varepsilon,1},i_{\varepsilon,2}\}\). Since \(\varepsilon>0\) is arbitrary, we conclude that \(\|\eth s_{j}^{i}\|_{L^{2}(0,T;H^{1}(\Omega))}+\|\eth s_{j}^{i}\|_{\mathcal{Z}}\to 0\) as \(i\to\infty\) for all \(j=1,\ldots,k\). This shows that the operator \(\mathfrak{B}\) is continuous, thus concluding the proof.
Proof of Theorem 3.1.: If \(\nu_{j}>0\) for all \(j\in\{1,\ldots,k\}\), then using Lemma 3.5 and Schauder's fixed point theorem, see [11, Chapter 9], we conclude that a fixed point \(\vec{s}=\vec{S}\in\mathcal{Z}\subset(L^{2}(Q))^{k}\) exists of the mapping \(\mathfrak{B}\), i.e. \(\mathfrak{B}(\vec{S})=\vec{S}\). It is easy to verify that this fixed point \((M_{S},\vec{S})\) is a weak solution of (1.1).
If \(\nu_{\bar{j}}=0\) for all \(\bar{j}\in\mathcal{I}\subset\{1,\ldots,k\}\), then the theorem is proved by first applying the contraction mapping in Lemma 3.4 for (1.1a) and (1.1b) with \(\bar{j}\in\mathcal{I}\), followed by applying the fixed point argument for \(j\in\{1,\ldots,k\}\setminus\mathcal{I}\). The details are left to the reader.
The approach developed in this section can be extended to systems with degenerate diffusion coefficients \(D_{j}\) under some additional assumptions. Below we discuss an example of such a case.
**Corollary 3.2.1** (Existence of weak solutions for degenerate \(D_{j}\)).: _Assume that (P1), (P3)-(P6) and (P4*) hold. For some \(\ell\in\{1,\ldots,k\}\), instead of (P2), assume that the diffusion coefficient \(D_{\ell}\) satisfies \(D_{\ell}(m,\vec{s})=D_{\ell}(s_{\ell})\) where_
\[D_{\ell}:[0,\infty)\to\mathbb{R}\text{ is continuous and strictly increasing, with }D_{\ell}(0)=0.\]
_Moreover, let \(\operatorname{ess}\inf\{S_{0,\ell}\}\geq 0\), \(\operatorname{ess}\inf\{h_{\ell}\}\geq 0\), \(\mathbf{v}_{\ell}=\mathbf{0}\) in \(Q\), \(f_{\ell}(\cdot,\vec{s})\leq f_{\max}^{\ell}(s_{\ell})\) for some function \(f_{\max}^{\ell}\in\operatorname{Lip}(\mathbb{R}^{+})\), and \(f_{\ell}(m,\vec{s})\geq 0\) if \(s_{\ell}=0\)._
_Then a weak solution \((M,\vec{S})\) of (1.1) exists in the sense of Definition 1, but with \(\nu_{\ell}\,S_{\ell}\in L^{2}(0,T;H^{1}(\Omega))\) replaced by \(\nu_{\ell}\int_{0}^{S_{\ell}}D_{\ell}\in L^{2}(0,T;H^{1}(\Omega))\). The solution is unique if for all \(j\in\{1,\ldots,k\}\) either \(\nu_{j}=0\) or \(D_{j}\) depends only on \(s_{j}\). Furthermore, \(S_{\ell}\) is non-negative and bounded almost everywhere in \(Q\)._
Proof.: If \(D_{\ell}=D_{\ell}(s_{\ell})\) is degenerate for some \(\ell\in\{1,\ldots,k\}\), without loss of generality we assume \(\nu_{\ell}>0\). We define \(\Phi_{\ell}\) as in (3.20), fix all components \(s_{j}\in C([0,T];L^{2}(\Omega))\) with \(j\neq\ell\), and consider a given \(s_{\ell}\in C([0,T];L^{2}(\Omega))\). Let \(M_{s}\in\mathcal{W}\) be the solution of (3.9) from Lemma 3.3. Then, \(\tilde{s}_{\ell}\in H^{1}(0,T;H^{-1}(\Omega))\cap C([0,T];L^{2}(\Omega))\) with \(\Phi_{\ell}(\tilde{s}_{\ell})\in L^{2}(0,T;H^{1}(\Omega))\) is defined as the solution of the following problem, for all \(\zeta_{\ell}\in L^{2}(0,T;H^{1}_{0}(\Omega))\),
\[\int_{0}^{T}[\langle\zeta_{\ell},\partial_{t}\tilde{s}_{\ell} \rangle_{H^{1}_{0},H^{-1}}+\nu_{\ell}(\nabla\Phi_{\ell}(\tilde{s}_{\ell}), \nabla\zeta_{\ell})]=\int_{0}^{T}(f_{\ell}(M_{s},(s_{1},\ldots,\tilde{s}_{\ell },\ldots,s_{k})),\zeta_{\ell}),\] \[\text{with }\tilde{s}_{\ell}(0)=S_{0,\ell}\ \text{ and }\ \Phi_{\ell}(\tilde{s}_{\ell})=\Phi_{\ell}(h_{\ell})\text{ on }\partial\Omega\text{ in the trace sense.}\]
The existence and uniqueness of \(\tilde{s}_{\ell}\) then follow from Lemmas 3.1 and 3.3. Defining \(\hat{S}_{\ell}(t):=\bar{S}+\int_{0}^{t}f_{\max}^{\ell}\), we have, similarly to Lemma 3.2, that \(0\leq\tilde{s}_{\ell}(t)\leq\hat{S}_{\ell}(t)\) a.e. in \(\Omega\) for all \(t>0\). Following Lemma 3.4, we further conclude that \(\tilde{s}_{\ell}\) satisfies an \(L^{1}\)-contraction result with respect to \(s_{\ell}\), since all other \(s_{j}\), \(j\neq\ell\), are fixed. Finally, the arguments in the proof of Theorem 3.2 conclude the proof.
Homogeneous Neumann boundary conditions
In this section, we show the existence of solutions for homogeneous Neumann boundary conditions and present the proof of Lemma 3.1. The global existence of solutions cannot be guaranteed for homogeneous Neumann boundary conditions, since the density \(M\) might reach \(1\) in finite time. The local existence and uniqueness of solutions is analyzed in Section 4.1 and the finite time blow-up in Section 4.2.
### Existence of weak solutions
**Theorem 4.1** (Local well-posedness for homogeneous Neumann conditions).: _Let \(\Gamma_{1}=\emptyset\). We assume that (P1)-(P6) and (P4*) hold. Then, there exists a positive time \(T^{*}\geq\sup\{t:\hat{M}(t)<1\}>0\), where \(\hat{M}\in C^{1}(\mathbb{R}^{+})\) is defined in Lemma 3.2, such that for \(T\in(0,T^{*})\), a weak solution \((M,\vec{S})\) of (1.1) exists in the sense of Definition 1. Moreover, the solution is unique if either \(\nu_{j}=0\), or \(D_{j}\) depends only on \(S_{j}\) for all \(j\in\{1,\ldots,k\}\)._
This result essentially follows from the proof of Theorems 3.1 and 3.2. Indeed, note that Lemmas 3.2 to 3.5 were proven for the general case, i.e., they also hold for homogeneous Neumann boundary conditions. Hence, it remains to show Lemma 3.1 for the case that \(\Gamma_{1}=\emptyset\). In this subsection, we present the proof under more general assumptions that cover mixed as well as homogeneous Neumann boundary conditions. The proof follows the Rothe method [16] that is based on time-discrete approximations of the solutions. To simplify the notation for different boundary conditions (see (2.2c))
\[\text{without loss of generality we \bf assume that }h_{0}\equiv 0,\text{ implying }h_{0}^{e}\equiv 0.\]
#### 4.1.1 Well-posedness of backward Euler time-discretizations
We consider an equivalent formulation of (3.3) and discretize it using the backward Euler scheme. Following (3.1), we introduce
\[\beta_{\varepsilon}:=\Phi_{\varepsilon}{}^{-1}\quad\text{ such that }\quad \varepsilon\leq\beta_{\varepsilon}{}^{\prime}\leq\varepsilon^{-1}. \tag{4.1}\]
Then replacing \(\Phi_{\varepsilon}(M_{s,\varepsilon})\) by \(u\) and \(M_{s,\varepsilon}\) by \(\beta_{\varepsilon}(u)\), we demand that \(u\in L^{2}(0,T;\mathcal{H}^{1})\) with \(\beta_{\varepsilon}(u)\in H^{1}(0,T;\mathcal{H}^{-1})\) and \(\beta_{\varepsilon}(u(0))=M_{0}\) satisfies
\[\int_{0}^{T}\langle\varphi,\partial_{t}\beta_{\varepsilon}(u)\rangle+\int_{0 }^{T}(\nabla u,\nabla\varphi)=\int_{0}^{T}(f_{0}(\beta_{\varepsilon}(u),\vec{ s}),\varphi)\quad\text{for all }\varphi\in L^{2}(0,T;\mathcal{H}^{1}) \tag{4.2}\]
and a given \(\vec{s}\in\mathcal{Z}\). For \(N\in\mathbb{N}\), we denote by \(\tau:=T/N\) the time-step size and set \(t_{n}:=n\tau\) for \(n\in\{0,1,\ldots,N\}\). Then we define the time-discrete sequence \(\{u_{n}\}_{n=1}^{N}\subset\mathcal{H}^{1}\) recursively as follows: setting \(u_{0}:=\Phi_{\varepsilon}(M_{0})\) (i.e., \(\beta_{\varepsilon}(u_{0})=M_{0}\)), let \(u_{n}\in\mathcal{H}^{1}\) be the solution of
\[\tfrac{1}{\tau}(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}), \zeta)+(\nabla u_{n},\nabla\zeta)=(f_{0}(\beta_{\varepsilon}(u_{n}),\vec{s}(t _{n})),\zeta)\qquad\text{for all }\zeta\in\mathcal{H}^{1}. \tag{4.3}\]
The following lemma implies the well-posedness of the time-discrete formulation.
**Lemma 4.1** (Well-posedness of a semilinear elliptic problem).: _For a given \(F\in L^{2}(\Omega)\), there exists a unique solution \(w\in\mathcal{H}^{1}\) of the elliptic problem_
\[(\beta_{\varepsilon}(w),\zeta)+(\nabla w,\nabla\zeta)=(F,\zeta)\qquad\text{for all $\zeta\in\mathcal{H}^{1}$.} \tag{4.4}\]
Proof.: The proof is based on monotonicity arguments. Let the operator \(\mathfrak{F}:\mathcal{H}^{1}\to\mathcal{H}^{-1}\) be defined by the inner product
\[\langle\zeta,\mathfrak{F}(w)\rangle:=(\beta_{\varepsilon}(w),\zeta)+(\nabla w,\nabla\zeta). \tag{4.5}\]
Then \(\mathfrak{F}\) is strongly monotone since
\[\langle w-v,\mathfrak{F}(w)-\mathfrak{F}(v)\rangle\geq\varepsilon\|w-v\|^{2}+ \|\nabla(w-v)\|^{2}\geq\varepsilon\|w-v\|_{\mathcal{H}^{1}}^{2}.\]
Furthermore, \(\mathfrak{F}\) is Lipschitz continuous since, using the Cauchy-Schwarz inequality, we obtain
\[\|\mathfrak{F}(w)-\mathfrak{F}(v)\|_{\mathcal{H}^{-1}}\leq\sup_{\zeta\in \mathcal{H}^{1}}\left(\frac{\varepsilon^{-1}\|w-v\|\|\zeta\|+\|\nabla(w-v)\| \|\nabla\zeta\|}{\|\zeta\|_{\mathcal{H}^{1}}}\right)\leq\varepsilon^{-1}\|w -v\|_{\mathcal{H}^{1}}.\]
Hence, invoking the nonlinear Lax-Milgram Lemma [26, Theorem 2.G] completes the proof.
Observe that the operator \(\widetilde{\mathfrak{F}}:\mathcal{H}^{1}\to\mathcal{H}^{-1}\) defined by the inner product
\[\langle\zeta,\widetilde{\mathfrak{F}}(w)\rangle:=(\beta_{\varepsilon}(w)- \tau\,f_{0}(\beta_{\varepsilon}(w),\vec{s}(t_{n})),\zeta)+(\nabla w,\nabla \zeta) \tag{4.6}\]
is strictly monotone with respect to \(w\) for \(\tau<C_{L}^{-1}\) by (P4), and Lipschitz continuous. Hence, adapting the arguments in the proof of Lemma 4.1 to (4.3), we obtain the existence and uniqueness of the time-discrete solutions.
**Lemma 4.2** (Well-posedness of the time-discrete solutions).: _Let (P1)-(P6) hold. Then the sequence \(\{u_{n}\}_{n=1}^{N}\subset\mathcal{H}^{1}\) introduced in (4.3) is well-defined for \(\tau<C_{L}^{-1}\)._
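Before passing to the time-continuous limit, it may help to see the scheme (4.3) in action. The following minimal sketch implements one backward Euler step for a 1D finite-difference Laplacian with homogeneous Neumann boundary conditions. All concrete choices are illustrative assumptions: `beta_eps` is *a* regularization with \(\varepsilon\leq\beta_{\varepsilon}^{\prime}\leq\varepsilon^{-1}\) as in (4.1), not the function fixed by (3.1)-(3.2), and `f0` is a placeholder Lipschitz reaction term, not (1.2). The damped Richardson loop is only one possible nonlinear solver and converges only for sufficiently small \(\tau\).

```python
import numpy as np

def beta_eps(u, eps=1e-2):
    # An illustrative Lipschitz, strictly increasing regularization with
    # eps <= beta_eps' <= 1/eps, cf. (4.1).
    return np.where(u < 0.0, eps * u,
                    np.where(u > 1.0, 1.0 + eps * (u - 1.0), u))

def f0(m, s):
    # Placeholder reaction term, Lipschitz in m (cf. (P4)).
    return (s / (1.0 + s) - 0.4) * m

def backward_euler_step(u_prev, s_n, tau, h, eps=1e-2, damp=0.25,
                        max_iter=2000, tol=1e-10):
    # One step of (4.3) in 1D: find u_n with
    #   (beta_eps(u_n) - beta_eps(u_{n-1}))/tau - (u_n)_xx = f0(beta_eps(u_n), s_n),
    # homogeneous Neumann conditions, via damped Richardson iteration on the
    # residual.
    u = u_prev.copy()
    b_prev = beta_eps(u_prev, eps)
    for _ in range(max_iter):
        b = beta_eps(u, eps)
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        lap[0] = 2.0 * (u[1] - u[0]) / h**2      # mirrored ghost node (Neumann)
        lap[-1] = 2.0 * (u[-2] - u[-1]) / h**2
        res = (b - b_prev) / tau - lap - f0(b, s_n)
        if np.max(np.abs(res)) < tol:
            break
        u -= damp * tau * res
    return u

# Usage: relax an initial bump over a few time steps.
x = np.linspace(0.0, 1.0, 101)
u = 0.5 * np.exp(-50.0 * (x - 0.5) ** 2)
for _ in range(10):
    u = backward_euler_step(u, s_n=1.0, tau=1e-5, h=x[1] - x[0])
```

In a production code one would replace the Richardson loop by a Newton-type iteration; the sketch merely mirrors the structure of (4.3) and the role of the monotonicity of \(\beta_{\varepsilon}\) exploited in Lemma 4.1.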
#### 4.1.2 Interpolations in time
For a fixed \(N\in\mathbb{N}\) with \(\tau=T/N\), we define the time interpolates \(\hat{u}_{\tau}\in L^{\infty}(0,T;\mathcal{H}^{1})\) and \(\bar{u}_{\tau}\in C([0,T];\mathcal{H}^{1})\) from the time-discrete solutions \(\{u_{n}\}_{n=1}^{N}\subset\mathcal{H}^{1}\) such that for \(t\in(t_{n-1},t_{n}]\), \(n\in\{1,\ldots,N\}\),
\[\hat{u}_{\tau}:=u_{n},\quad\text{ and }\quad\bar{u}_{\tau}:=\beta_{\varepsilon}^ {-1}\left(\beta_{\varepsilon}(u_{n-1})+\tfrac{t-t_{n-1}}{\tau}(\beta_{ \varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}))\right). \tag{4.7}\]
Observe that \(\bar{u}_{\tau}\) satisfies for all \(n\in\{1,\ldots,N\}\),
\[\bar{u}_{\tau}(t_{n})=u_{n},\quad\text{and}\quad\partial_{t}\beta_{\varepsilon}(\bar{u}_{ \tau})=\frac{\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1})}{\tau} \quad\text{ for }t\in(t_{n-1},t_{n}]. \tag{4.8}\]
**Lemma 4.3** (Uniform boundedness of the time interpolates with respect to \(\tau\)).: _Let (P1)-(P6) hold. Then there exist constants \(\tau^{*},C>0\), independent of \(\tau\), such that for \(\tau<\tau^{*}\),_
\[\|\beta_{\varepsilon}(\bar{u}_{\tau})\|^{2}+\int_{0}^{T}\|\nabla \hat{u}_{\tau}\|^{2} \leq C+C\int_{0}^{T}\left(\|\vec{s}\|^{2}+\|\hat{u}_{\tau}\|^{2} \right), \tag{4.9a}\] \[\|\beta_{\varepsilon}(\bar{u}_{\tau})\|^{2}+\int_{0}^{T}[\| \nabla\bar{u}_{\tau}\|^{2}+\|\partial_{t}\beta_{\varepsilon}(\bar{u}_{\tau})\| ^{2}_{\mathcal{H}^{-1}}] \leq C+C\int_{0}^{T}\left(\|\vec{s}\|^{2}+\|\hat{u}_{\tau}\|^{2} \right). \tag{4.9b}\]
_The above inequalities imply the uniform boundedness of \(\beta_{\varepsilon}(\hat{u}_{\tau}),\,\beta_{\varepsilon}(\bar{u}_{\tau})\in L ^{\infty}(0,T;L^{2}(\Omega))\), \(\hat{u}_{\tau},\,\bar{u}_{\tau}\in L^{2}(0,T;\mathcal{H}^{1})\) and \(\beta_{\varepsilon}(\bar{u}_{\tau})\in H^{1}(0,T;\mathcal{H}^{-1})\) with respect to \(\tau<\tau^{*}\)._
Proof.: **(Step 1) Uniform boundedness of \(\beta_{\varepsilon}(\hat{u}_{\tau})\) and \(\beta_{\varepsilon}(\bar{u}_{\tau})\) in \(L^{\infty}(0,T;L^{2}(\Omega))\):** We choose the test function \(\zeta=\beta_{\varepsilon}(u_{n})\in\mathcal{H}^{1}\) in (4.3), yielding
\[(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}),\beta_{ \varepsilon}(u_{n}))+\tau(\nabla u_{n},\nabla\beta_{\varepsilon}(u_{n}))=\tau (f_{0}(\beta_{\varepsilon}(u_{n}),\vec{s}(t_{n})),\beta_{\varepsilon}(u_{n})). \tag{4.10}\]
Observe from the identity \(2a(a-b)=a^{2}-b^{2}+(a-b)^{2}\) that
\[(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}),\beta_{ \varepsilon}(u_{n}))=\tfrac{1}{2}[\|\beta_{\varepsilon}(u_{n})\|^{2}-\|\beta_ {\varepsilon}(u_{n-1})\|^{2}+\|\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}( u_{n-1})\|^{2}].\]
Moreover, one has for some constant \(C^{\prime}>0\) independent of \(\varepsilon\) and \(\tau\) that
\[(f_{0}(\beta_{\varepsilon}(u_{n}),\vec{s}(t_{n})),\beta_{ \varepsilon}(u_{n})) \stackrel{{\eqref{eq:C-1}}}{{\leq}}C^{\prime}(1+\| \vec{s}(t_{n})\|^{2}+\|\beta_{\varepsilon}(u_{n})\|^{2}),\] \[(\nabla u_{n},\nabla\beta_{\varepsilon}(u_{n})) \geq\varepsilon\|\nabla u_{n}\|^{2}.\]
Then, combining these inequalities we obtain
\[\|\beta_{\varepsilon}(u_{n})\|^{2}+\|\beta_{\varepsilon}(u_{n})- \beta_{\varepsilon}(u_{n-1})\|^{2}+\varepsilon\tau\|\nabla u_{n}\|^{2}\leq\| \beta_{\varepsilon}(u_{n-1})\|^{2}+\tau C^{\prime}\left(1+\|\vec{s}(t_{n})\|^{2}+\| \beta_{\varepsilon}(u_{n})\|^{2}\right).\]
Applying the discrete Gronwall Lemma (2.5b) for small enough \(\tau>0\), we have for a constant \(C>0\) independent of \(N\) or \(\varepsilon\) that
\[\|\beta_{\varepsilon}(u_{N})\|^{2}+\sum_{n=0}^{N}[\varepsilon\| \nabla u_{n}\|^{2}\tau+\|\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1 })\|^{2}]\leq\|\beta_{\varepsilon}(u_{0})\|^{2}+C+C\sum_{n=0}^{N}\|\vec{s}(t_{ n})\|^{2}\tau. \tag{4.11}\]
For \(\tau>0\) small enough, one can estimate
\[\sum_{n=0}^{N}\|\vec{s}(t_{n})\|^{2}\tau\leq\left(1+\int_{0}^{T}\|\vec{s}\|^{ 2}\right). \tag{4.12}\]
Combining (4.11)-(4.12) we conclude that \(\beta_{\varepsilon}(u_{N})\) and, by repeating the argument up to any \(n\leq N\), all \(\beta_{\varepsilon}(u_{n})\) are uniformly bounded in \(L^{2}(\Omega)\) with respect to \(N\) and \(\varepsilon\). Then, the definition (4.7) implies that \(\beta_{\varepsilon}(\hat{u}_{\tau})\) (\(=\beta_{\varepsilon}(u_{n})\) for \(t\in(t_{n-1},t_{n}]\)) and \(\beta_{\varepsilon}(\bar{u}_{\tau})\) (a convex combination of \(\beta_{\varepsilon}(u_{n-1})\) and \(\beta_{\varepsilon}(u_{n})\) for \(t\in(t_{n-1},t_{n}]\)) are uniformly bounded.
**(Step 2) Uniform boundedness of \(\hat{u}_{\tau}\) and \(\bar{u}_{\tau}\) in \(L^{2}(0,T;\mathcal{H}^{1})\):** Let us now test (4.3) with \(\zeta=u_{n}\in\mathcal{H}^{1}\). This yields
\[(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}),u_{n})+\tau\|\nabla u_ {n}\|^{2}=\tau(f_{0}(\beta_{\varepsilon}(u_{n}),\vec{s}(t_{n})),u_{n}). \tag{4.13}\]
Now, from the convexity of the function \(\int_{0}^{m}\Phi_{\varepsilon}\) (see (3.1)), one has
\[\int_{\beta_{\varepsilon}(u_{n-1})}^{\beta_{\varepsilon}(u_{n})}\Phi_{ \varepsilon}\leq\Phi_{\varepsilon}(\beta_{\varepsilon}(u_{n}))(\beta_{ \varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}))=u_{n}(\beta_{\varepsilon}(u _{n})-\beta_{\varepsilon}(u_{n-1})).\]
For the last term, we observe that
\[(f_{0}(\beta_{\varepsilon}(u_{n}),\vec{s}(t_{n})),u_{n})\overset{(P4)}{\leq}C\left(1+\|\vec{s}(t_{n})\|^{2}+\|\beta_{\varepsilon}(u_{n})\|^{2}+\|u_{n}\|^{2}\right). \tag{4.14}\]

Combining these estimates with (4.13), summing over \(n\), and using Step 1 together with (4.12), the discrete Gronwall Lemma (2.5b) yields the bound on \(\int_{0}^{T}\|\nabla\hat{u}_{\tau}\|^{2}\) in (4.9a) for \(\tau\) small enough. Finally, the bound on \(\int_{0}^{T}\|\partial_{t}\beta_{\varepsilon}(\bar{u}_{\tau})\|_{\mathcal{H}^{-1}}^{2}\) in (4.9b) follows directly by estimating the right-hand side of (4.3) with the previous bounds, which completes the proof.

If, in addition, \(\Phi_{\varepsilon}(M_{0})\in\mathcal{H}^{1}\), then the time interpolates satisfy improved estimates.

**Lemma 4.4** (Improved estimates for \(M_{0}\in H^{1}(\Omega)\)).: _Let (P1)-(P6) hold and let, in addition, \(\Phi_{\varepsilon}(M_{0})\in\mathcal{H}^{1}\). Then there exists a constant \(C>0\), independent of \(\tau<\tau^{*}\), such that_

\[\sup_{t\in(0,T]}\|\nabla\hat{u}_{\tau}(t)\|^{2}+\varepsilon\int_{0}^{T}\|\partial_{t}\beta_{\varepsilon}(\bar{u}_{\tau})\|^{2}\leq C+C\int_{0}^{T}\|\vec{s}\|^{2}.\]
Proof.: We insert the test function \(\zeta=u_{n}-u_{n-1}\) in (4.3). This gives term-wise
\[\begin{split}(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}),u_{n}-u_{n-1})&\overset{(4.1)}{\geq}\varepsilon\tau^{2}\int_{\Omega}\left|\frac{\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1})}{\tau}\right|^{2}\overset{(4.8)}{=}\tau^{2}\varepsilon\|\partial_{t}\beta_{\varepsilon}(\bar{u}_{\tau})\|^{2},\\ \tau(\nabla u_{n},\nabla(u_{n}-u_{n-1}))&=\tfrac{\tau}{2}[\|\nabla u_{n}\|^{2}-\|\nabla u_{n-1}\|^{2}+\|\nabla(u_{n}-u_{n-1})\|^{2}],\\ \tau(f_{0},u_{n}-u_{n-1})&\leq\tfrac{\tau^{2}}{2\varepsilon^{3}}\|f_{0}\|^{2}+\tfrac{\varepsilon^{3}}{2}\|u_{n}-u_{n-1}\|^{2}\overset{(4.1),(4.8)}{\leq}\tfrac{\tau^{2}}{2\varepsilon^{3}}\|f_{0}\|^{2}+\tfrac{\varepsilon\tau^{2}}{2}\|\partial_{t}\beta_{\varepsilon}(\bar{u}_{\tau})\|^{2}.\end{split}\]
Similarly to (4.14), we obtain
\[\|f_{0}\|^{2}\leq C(1+\|\vec{s}(t_{n})\|^{2}+\|\beta_{\varepsilon}(u_{n})\|^{2})\leq C\left[1+\|\vec{s}(t_{n})\|^{2}+\sum_{n=1}^{N}\|\vec{s}(t_{n})\|^{2} \tau\right],\]
where we used Lemma 4.3. Finally, summing the resulting inequalities from \(n=1\) to \(n=N\) and dividing by \(\tau\), one has
\[\|\nabla u_{N}\|^{2}+\tfrac{\varepsilon}{2}\sum_{n=1}^{N}\|\partial_{t}\beta_ {\varepsilon}(\bar{u}_{\tau})\|^{2}\tau\leq\|\nabla\Phi_{\varepsilon}(M_{0}) \|^{2}+C\sum_{n=1}^{N}(1+\|\vec{s}(t_{n})\|^{2})\tau, \tag{4.17}\]
which proves the lemma.
**Remark 4.1** (Covering homogeneous Neumann condition).: _The above lemmas cover both homogeneous mixed boundary conditions and homogeneous Neumann conditions. In the latter case, \(\mathcal{H}^{1}=H^{1}(\Omega)\). To cover the case of inhomogeneous mixed boundary conditions, we have to test with \(\zeta=\beta_{\varepsilon}(u_{n})-h_{0}^{e}\) in Step 1 of Lemma 4.3 and with \(\zeta=u_{n}-\Phi_{\varepsilon}(h_{0}^{e})\) in Step 2. The details are straightforward, and hence, omitted._
#### 4.1.3 Proof of Lemma 3.1
**(Step 1) Existence:** Note that \(\beta_{\varepsilon}\) is Lipschitz and strictly increasing by (4.1). Using this fact and applying Gronwall's Lemma to (4.9a) implies that \(\hat{u}_{\tau}\), \(\beta_{\varepsilon}(\hat{u}_{\tau})\) are uniformly bounded in \(L^{\infty}(0,T;L^{2}(\Omega))\) with respect to \(\tau\). Consequently \(\bar{u}_{\tau}\), \(\beta_{\varepsilon}(\bar{u}_{\tau})\) are uniformly bounded as well by (4.9b). Thus, using (4.9b) we obtain the uniform boundedness of \(\beta_{\varepsilon}(\bar{u}_{\tau})\in\mathcal{X}\). Due to the compact embedding of \(\mathcal{X}\) in \(L^{2}(Q)\) [23], there exists \(u\) with \(\beta_{\varepsilon}(u)\in\mathcal{X}\) such that along a subsequence \(\tau\to 0\),
\[\beta_{\varepsilon}(\bar{u}_{\tau})\rightharpoonup\beta_{ \varepsilon}(u)\text{ weakly in }\mathcal{X}=L^{2}(0,T;\mathcal{H}^{1})\cap H^{1}(0,T;\mathcal{H}^{-1}), \tag{4.18a}\] \[\beta_{\varepsilon}(\bar{u}_{\tau})\to\beta_{\varepsilon}(u)\text{ strongly in }L^{2}(Q). \tag{4.18b}\]
Using (4.11), one has
\[\int_{0}^{T}\|\beta_{\varepsilon}(\hat{u}_{\tau})-\beta_{ \varepsilon}(\bar{u}_{\tau})\|^{2}=\sum_{n=1}^{N}\int_{t_{n-1}}^{t_{n}}\left\| \tfrac{t_{n}-t}{\tau}(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u_{n-1}) )\right\|^{2}\mathrm{d}t\] \[=\sum_{n=1}^{N}\|(\beta_{\varepsilon}(u_{n})-\beta_{\varepsilon}(u _{n-1}))\|^{2}\int_{t_{n-1}}^{t_{n}}\left(\tfrac{t_{n}-t}{\tau}\right)^{2} \mathrm{d}t=\tfrac{\tau}{3}\sum_{n=1}^{N}\|\beta_{\varepsilon}(u_{n})-\beta_{ \varepsilon}(u_{n-1})\|^{2}\to 0.\]
This, along with the uniform bound with respect to \(\tau\) of \(\hat{u}_{\tau}\), \(\bar{u}_{\tau}\) in \(L^{2}(0,T;\mathcal{H}^{1})\) in (4.9) and the strict monotonicity of \(\beta_{\varepsilon}\) in (4.1) implies that
\[\hat{u}_{\tau},\,\bar{u}_{\tau}\rightharpoonup u\text{ weakly in }L^{2}(0,T;\mathcal{H}^{1}), \tag{4.18c}\] \[\hat{u}_{\tau},\,\bar{u}_{\tau}\to u\text{ strongly in }L^{2}(Q). \tag{4.18d}\]
Observe from (4.3) and (4.7) that \(\hat{u}_{\tau}\) and \(\bar{u}_{\tau}\) satisfy
\[\int_{0}^{T}[\langle\zeta,\partial_{t}\beta_{\varepsilon}(\bar{u}_{\tau}) \rangle+(\nabla\hat{u}_{\tau},\nabla\zeta)]=\int_{0}^{T}(f_{0}(\beta_{ \varepsilon}(\hat{u}_{\tau}),\vec{s}),\zeta), \tag{4.18e}\]
for all \(\zeta\in C([0,T];\mathcal{H}^{1})\) which is dense in \(L^{2}(0,T;\mathcal{H}^{1})\). Passing to the limit \(\tau\to 0\) we conclude from (4.18) that \(u\) solves the system
\[\int_{0}^{T}[\langle\zeta,\partial_{t}\beta_{\varepsilon}(u)\rangle+(\nabla u,\nabla\zeta)]=\int_{0}^{T}(f_{0}(\beta_{\varepsilon}(u),\vec{s}),\zeta).\]
Defining \(M_{s,\varepsilon}=\beta_{\varepsilon}(u)\) we obtain the desired solution. We conclude that \(M_{s,\varepsilon}\in\mathcal{X}\hookrightarrow C([0,T];L^{2}(\Omega))\), see [11, Section 5.9] for the continuous embedding result.
**(Step 2) A-priori bounds:** The a-priori estimate (3.4) follows by inserting \(\varphi=M_{s,\varepsilon}\) and \(\varphi=\Phi_{\varepsilon}(M_{s,\varepsilon})\) in (3.3) and proceeding similarly to the steps of the time-discrete case in Lemma 4.3. Lemma 4.4 shows that if in addition \(M_{0}\in H^{1}(\Omega)\) then \(\partial_{t}M_{s,\varepsilon}\in L^{2}(Q)\), which implies by the definition of weak derivatives that
\[\Delta\Phi_{\varepsilon}(M_{s,\varepsilon})=(\partial_{t}M_{s,\varepsilon}- f_{0})\in L^{2}(Q).\]
Multiplying the above equation with \(\partial_{t}\Phi_{\varepsilon}(M_{s,\varepsilon})=\partial_{t}u\in L^{2}(Q)\), integrating in \(Q\), and using integration by parts we conclude that
\[\int_{Q}\partial_{t}u\,\Delta u=-\int_{0}^{T}\partial_{t}(\tfrac{1}{2}\| \nabla u\|^{2})=\tfrac{1}{2}\|\nabla\Phi_{\varepsilon}(M_{0})\|^{2}-\tfrac{1 }{2}\|\nabla u(T)\|^{2},\]
which proves (3.5). The detailed steps mimic its discrete counterpart in Lemma 4.3.
### Finite time blow-up
The model (1.1) breaks down when \(M\) reaches \(1\). Henceforth, we will refer to this as blow-up. Unlike in the case of Dirichlet or mixed boundary conditions, this situation cannot in general be excluded for homogeneous Neumann conditions. Whether a solution blows up in finite time or not depends on the initial values \(M_{0}\), \(\vec{S}_{0}\). One can construct cases in which the solution is guaranteed to blow up in finite time; we give a simple example below.
**Example 4.1** (Constant initial states).: Let us focus on the cellulolytic biofilm model with a single substrate [6], i.e., we look at the system
\[\partial_{t}M =\Delta\Phi(M)+f_{0}(M,S_{1})\text{ in }Q, M(0) =M_{0}\text{ in }\Omega, [\nabla M\cdot\hat{\boldsymbol{n}}]|_{\partial\Omega}=0, \tag{4.19a}\] \[\partial_{t}S_{1} =f_{1}(M,S_{1})\text{ in }Q, S_{1}(0) =S_{0,1}\text{ in }\Omega. \tag{4.19b}\]
Moreover, the reaction terms are given by a non-dimensionalized version of (1.2),
\[f_{0}(m,s)=\left(\frac{s}{1+s}-\lambda\right)m,\quad f_{1}(m,s)=-\frac{s\,m}{1+s}. \tag{4.20}\]
For the initial and boundary values we assume that
\[M_{0}\equiv\bar{M}\in(0,1),\quad S_{0,1}\equiv\bar{S}, \tag{4.21}\]
where \(\bar{M}\), \(\bar{S}>0\) are given constants. Then the solution \((M,S_{1})\) of (4.19) remains spatially constant for all times, and hence, the system evolves according to the system of ODEs
\[\partial_{t}M=\frac{MS_{1}}{1+S_{1}}-\lambda\,M,\quad\partial_{t}S_{1}=-\frac{ MS_{1}}{1+S_{1}}\ \ \text{for}\ t>0\ \ \text{with}\ (M(0),S_{1}(0))=(\bar{M},\bar{S}). \tag{4.22}\]
Clearly, for \(\bar{M}\) close to \(1\), \(\bar{S}\) large, and \(\lambda\) small, the biomass density \(M\) reaches \(1\) in finite time.
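This can be made quantitative by an elementary comparison argument; the threshold \(S^{*}\) below is introduced solely for this computation, and we assume \(\lambda<1\). Pick \(S^{*}>0\) with \(c:=\frac{S^{*}}{1+S^{*}}-\lambda>0\). As long as \(S_{1}(t)\geq S^{*}\), (4.22) gives \(\partial_{t}M\geq cM\), so \(M(t)\geq\bar{M}e^{ct}\) and \(M\) reaches \(1\) no later than \(t^{*}=c^{-1}\log(1/\bar{M})\). On the other hand, since \(\frac{S_{1}}{1+S_{1}}\leq 1\), we have \(\partial_{t}S_{1}\geq-M\) and \(\partial_{t}M\leq(1-\lambda)M\), whence

\[S_{1}(t)\geq\bar{S}-\int_{0}^{t}M\geq\bar{S}-\frac{\bar{M}}{1-\lambda}\left(e^{(1-\lambda)t}-1\right).\]

Hence, for \(\bar{S}\) large enough, the condition \(S_{1}\geq S^{*}\) indeed persists up to \(t^{*}\), and blow-up occurs at the latest at \(t^{*}\).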
It is possible to generalize Lemma 3.2 to provide a necessary condition for blow-up in finite time, or a sufficient condition for \(M\) to stay bounded away from \(1\). This is stated in the following proposition for the single substrate (\(k=1\)) case.
**Proposition 4.1** (Upper and lower bounds of \((M,S_{1})\)).: _Let (P1)-(P6) and (P4*) be satisfied, \(k=1\) (single substrate) and \(\Gamma_{1}=\emptyset\) (homogeneous Neumann condition) in (1.1). Recall (P5), and let \(f_{0}(m,s)\) be increasing with respect to \(s\geq 0\) for fixed \(m\), and let \(f_{1}(m,s)\) be decreasing with respect to \(m\geq 0\) for fixed \(s\). Let \((\check{M},\check{S},\hat{M},\hat{S})\in C^{1}(\mathbb{R}^{+})^{4}\) be the solution of the ODE system_
\[\left\{\begin{aligned} \partial_{t}\check{M}&=f_{0}( \check{M},\check{S}),&\partial_{t}\check{S}&=f_{1}(\hat{M},\check {S}),\\ \partial_{t}\hat{M}&=f_{0}(\hat{M},\hat{S}),& \partial_{t}\hat{S}&=f_{1}(\check{M},\hat{S}),\end{aligned}\right. \tag{4.23}\]
_with \((\check{M},\check{S},\hat{M},\hat{S})=(\underline{M},\underline{S},\overline {M},\overline{S})\) at \(t=0\). Further, assume that if \(\nu_{1}>0\), then for \(h_{1}\) defined in (P6), \(\check{S}(t)\leq h_{1}\leq\hat{S}(t)\) a.e. in \(\partial\Omega\) for all \(t\in[0,T]\). Let \((M,S_{1})\) be the weak solution of (1.1) in the sense of Definition 1. Then for all \(t\in[0,T]\),_
\[\check{M}(t)\leq M(t)\leq\hat{M}(t)\ \ \text{and}\ \ \check{S}(t)\leq S_{1}(t)\leq\hat{S}(t) \ \ \text{a.e. in}\ \Omega. \tag{4.24}\]
**Remark 4.2** (Assumptions in Proposition 4.1).: _Observe that the assumptions of Proposition 4.1 are satisfied by the reaction terms in (1.2) and (4.20) which were considered, e.g., in [6, 7]. Moreover, the assumption \(h_{1}\in[\check{S}(t),\hat{S}(t)]\) a.e. in \(\partial\Omega\) is a consistency condition that can be omitted in the case of immobile substrates (\(\nu_{1}=0\)), which occurs in the models for cellulolytic biofilms [6], or when homogeneous Neumann conditions are assumed for \(S_{1}\)._
Proof.: The proof generalizes the arguments in Lemma 3.2 and follows the proof of Proposition 1 of [18]. The existence and uniqueness of the solution \((\check{M},\check{S},\hat{M},\hat{S})\) is evident from the Picard-Lindelof Theorem. Moreover, since \(f_{0}(m,s)\) is increasing in \(s\) and \(f_{1}(m,s)\) is decreasing in \(m\), and since \(\hat{M}(0)=\overline{M}\geq\underline{M}=\check{M}(0)\) and \(\hat{S}(0)=\overline{S}\geq\underline{S}=\check{S}(0)\), we have for all \(t>0\),
\[\hat{M}(t)\geq\check{M}(t),\ \ \ \text{ and }\ \ \ \hat{S}(t)\geq\check{S}(t). \tag{4.25}\]
This follows by noting from (4.23) that
\[\frac{1}{2}[\check{M}(t)-\hat{M}(t)]_{+}^{2}=\int_{0}^{t}[\check{M}- \hat{M}]_{+}(f_{0}(\check{M},\check{S})-f_{0}(\hat{M},\hat{S})),\] \[\frac{1}{2}[\check{S}(t)-\hat{S}(t)]_{+}^{2}=\int_{0}^{t}[\check{S} -\hat{S}]_{+}(f_{1}(\hat{M},\check{S})-f_{1}(\check{M},\hat{S})),\]
for \(t>0\). Then, following the manipulations in Lemma 3.2 (also repeated below), Gronwall's Lemma yields \([\check{M}(t)-\hat{M}(t)]_{+}=[\check{S}(t)-\hat{S}(t)]_{+}=0\). We omit the detailed proof for brevity.
Next, we insert the test functions \(\varphi=[M-\hat{M}]_{+}\) and \(\zeta_{1}=[S_{1}-\hat{S}]_{+}\) in (2.9). Observe that \(\zeta_{1}\in L^{2}(0,T;H_{0}^{1}(\Omega))\) is a valid test function for \(\nu_{1}>0\) since \(S_{1}-\hat{S}=h_{1}-\hat{S}\leq 0\) on \(\partial\Omega\). Then following the manipulations in Lemma 3.2, one obtains from the first equation that
\[\begin{split}\int_{0}^{T}\partial_{t}\left(\frac{1}{2}\|[M-\hat{M}]_{+}\|^{2}\right)&\leq\int_{0}^{T}(f_{0}(M,S_{1})-f_{0}(\hat{M},\hat{S}),[M-\hat{M}]_{+})\\ &=\int_{0}^{T}(f_{0}(M,S_{1})-f_{0}(\hat{M},S_{1}),[M-\hat{M}]_{+})+\int_{0}^{T}(f_{0}(\hat{M},S_{1})-f_{0}(\hat{M},\hat{S}),[M-\hat{M}]_{+})\\ &\overset{(P4)}{\leq}C_{L}\int_{0}^{T}\|[M-\hat{M}]_{+}\|^{2}+C_{L}\int_{0}^{T}([S_{1}-\hat{S}]_{+},[M-\hat{M}]_{+})\\ &\leq C\int_{0}^{T}[\|[M-\hat{M}]_{+}\|^{2}+\|[S_{1}-\hat{S}]_{+}\|^{2}].\end{split} \tag{4.26a}\]

Here, we used that \(f_{0}(\hat{M},\cdot)\) is increasing to conclude that \((f_{0}(\hat{M},S_{1})-f_{0}(\hat{M},\hat{S}))[M-\hat{M}]_{+}\leq C_{L}[S_{1}-\hat{S}]_{+}[M-\hat{M}]_{+}\). Similarly, from the second equation, noting that \(f_{1}(\cdot,s)\) is decreasing for a given \(s\), one obtains

\[\begin{split}\int_{0}^{T}\partial_{t}\left(\frac{1}{2}\|[S_{1}-\hat{S}]_{+}\|^{2}\right)&\leq\int_{0}^{T}(f_{1}(M,S_{1})-f_{1}(\check{M},\hat{S}),[S_{1}-\hat{S}]_{+})\\ &=\int_{0}^{T}(f_{1}(M,S_{1})-f_{1}(\check{M},S_{1}),[S_{1}-\hat{S}]_{+})+\int_{0}^{T}(f_{1}(\check{M},S_{1})-f_{1}(\check{M},\hat{S}),[S_{1}-\hat{S}]_{+})\\ &\leq C_{L}\int_{0}^{T}([M-\check{M}]_{-},[S_{1}-\hat{S}]_{+})+C_{L}\int_{0}^{T}\|[S_{1}-\hat{S}]_{+}\|^{2}\\ &\leq C\int_{0}^{T}[\|[M-\check{M}]_{-}\|^{2}+\|[S_{1}-\hat{S}]_{+}\|^{2}].\end{split} \tag{4.26b}\]
Finally, inserting the test functions \(\varphi=[M-\check{M}]_{-}\) and \(\zeta_{1}=[S_{1}-\check{S}]_{-}\) in (2.9) we get analogous estimates to (4.26). Adding these inequalities and using Gronwall's Lemma completes the proof.
**Remark 4.3** (Guaranteed finite time blow-up/boundedness).: _If the solution \(\check{M}\) in Proposition 4.1 reaches \(1\) in finite time, then the solution of the original system \((M,S_{1})\) blows up in finite time. On the other hand, if the solution \(\hat{M}\) remains bounded by a constant strictly less than \(1\), then \(M\) does not blow up and hence, the solution \((M,S_{1})\) is global-in-time. The bounds are sharp if \(|\overline{M}-\underline{M}|\) and \(|\overline{S}-\underline{S}|\) are small._
Spatial regularity of the biomass density
In this section, we analyze the spatial regularity of solutions of the degenerate diffusion equation (1.1a), i.e., we focus on the scalar equation
\[\partial_{t}M=\nabla\cdot[D(M)\nabla M]+f(M,\cdot)\qquad\text{in }Q, \tag{5.1}\]
where \(D:[0,1)\to[0,\infty)\) and \(f:[0,\infty)\times Q\to\mathbb{R}\). The regularity results we derive apply to a broad class of degenerate diffusion problems, see Remark 5.1, including the biofilm growth models [6, 7].
It is well known that the degeneracy of the diffusion coefficient \(D(0)=0\) causes a finite speed of propagation and sharp fronts at the interface between the regions \(\{M>0\}\) and \(\{M=0\}\), corresponding to steep gradients of \(M\). Despite this fact, the solution \(M\) is locally Holder continuous. This was shown for porous medium type equations in [5] and for equations with degenerate and singular diffusion in [14]. The global space-time regularity of solutions of the porous medium equation in \(\mathbb{R}^{d}\) has also been studied extensively using optimal regularity theory, see [12] and the references therein. Assuming homogeneous Neumann boundary conditions, in this section we show that \(M\) further inherits global spatial regularity in the more general case (5.1), i.e. \(M\in L^{2}(0,T;H^{r}(\Omega))\), where \(r=1\) for \(a<2\), and any \(r<2/a\) otherwise. This fact is not only mathematically intriguing but also has important consequences for designing numerical tools and test functions for such problems. We now specify the assumptions on the functions \(D\) and \(f\).
**Assumption 5.1** (Assumptions on \(D\) and \(f\)).: The diffusion coefficient satisfies (P1). In addition, there exists \(a\in\mathbb{R}^{+}\) and a constant \(C>0\) such that
\[D(m)\geq Cm^{a}\]
for all \(m\in[0,1)\). The function \(f:[0,\infty)\times Q\to\mathbb{R}\) is Lipschitz continuous with respect to the first variable, and there exists a non-negative function \(f_{\max}\in\operatorname{Lip}(\mathbb{R})\) such that \(f(\cdot,(\boldsymbol{x},t))\leq f_{\max}(\cdot)\). Moreover, we assume that \(f(0,(\boldsymbol{x},t))\geq 0\) for all \((\boldsymbol{x},t)\in Q\).
**Theorem 5.1** (Global spatial regularity of \(M\)).: _Let \(\Gamma_{1}=\emptyset\) (homogeneous Neumann condition). Let \(M\in\mathcal{W}\) with \(\Phi(M)=\int_{0}^{M}D\in L^{2}(0,T;H^{1}(\Omega))\), and \(M(0)=M_{0}\) (see (P5)) be the weak solution of (5.1), i.e.,_
\[\int_{0}^{T}\langle\varphi,\partial_{t}M\rangle+\int_{0}^{T}(\nabla\Phi(M), \nabla\varphi)=\int_{0}^{T}(f(M,\cdot),\varphi), \tag{5.2}\]
_for all \(\varphi\in L^{2}(0,T;H^{1}(\Omega))\). Then under the Assumption 5.1, \(M\in L^{2}(0,T;H^{r}(\Omega))\) for_
* \(r=1\) _if either_ \(a<2\) _or_ \(\underline{M}=\operatorname{ess}\inf\{M_{0}\}>0\)_._
* _all_ \(r<2/a\)_, if_ \(a\geq 2\) _and_ \(\underline{M}=0\)_._
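As a concrete illustration (the exponent value here is hypothetical and serves only to instantiate the theorem): for a diffusion coefficient degenerating like \(D(m)\gtrsim m^{4}\) near \(m=0\), i.e., \(a=4\), and an initial datum with \(\underline{M}=0\), Theorem 5.1 gives

\[M\in L^{2}(0,T;H^{r}(\Omega))\qquad\text{for all }r<\frac{2}{a}=\frac{1}{2}.\]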
**Remark 5.1** (Generality of Theorem 5.1).: _Theorem 5.1 applies to the solution \(M\) of the coupled system (1.1) under the conditions (P1)-(P6) and (P4*). Since the spatial irregularity of \(M\) stems from the degeneracy at \(M=0\), our regularity results also cover diffusion coefficients \(D\) that are degenerate but non-singular, for instance, porous medium type equations. In this case, the additional assumption that \(f\) is bounded by \(f_{\max}\) can be omitted as solutions are not required to take values in \([0,1)\)._
**Remark 5.2** (Assumptions on the boundary conditions in Theorem 5.1).: _To simplify notations Theorem 5.1 is stated for homogeneous Neumann boundary conditions. However, the result remains valid for Dirichlet or mixed boundary conditions provided that \(\Phi(M)=\Phi(h_{0}^{e})\) at \(\Gamma_{1}\) and the functions \(\Psi_{\varepsilon}(h_{0}^{e})\in H^{1}(\Omega)\) are uniformly bounded with respect to \(\varepsilon\in(0,1)\), where \(\Psi_{\varepsilon}\) is introduced in (5.3) and \(h_{0}^{e}\) in (P6)._
The rest of this section is dedicated to the proof of Theorem 5.1. The main idea behind the proof is to use a test function \(\varphi\) of the form \(M^{-\alpha}\) (\(\alpha>0\)) in (5.2). However, \(\varphi\) might not be a valid test function due to \(M\) not being sufficiently regular, and \(M^{-\alpha}\) having a singularity at \(0\). To resolve this, we will construct a modified function that is admissible.
### Some auxiliary functions
As in the proof of Theorem 5.1 we consider the regularized problem introduced in Lemma 3.1. The function \(\Phi_{\varepsilon}\) is taken as in (3.2). For a given constant \(\alpha>0\), we further introduce the \(C^{1}(\mathbb{R})\) function
\[\Psi_{\varepsilon}(m):=\int_{1}^{m}\frac{\mathrm{d}\varrho}{\min\{\max\{ \varepsilon,\varrho^{\alpha}\},1\}}. \tag{5.3}\]
Note that
\[\Psi_{\varepsilon}^{\prime}\geq 0\quad\text{ and }\quad\Psi_{ \varepsilon}(m)<0\quad\text{ for }m<1. \tag{5.4}\]
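For orientation, the integral in (5.3) can be evaluated in closed form. Assuming \(\alpha\neq 1\) (for \(\alpha=1\) the middle branch becomes \(\log m\)), one obtains the piecewise expression

\[\Psi_{\varepsilon}(m)=\begin{cases}m-1,&m\geq 1,\\[4pt] \dfrac{m^{1-\alpha}-1}{1-\alpha},&\varepsilon^{\frac{1}{\alpha}}\leq m\leq 1,\\[4pt] \Psi_{\varepsilon}\big(\varepsilon^{\frac{1}{\alpha}}\big)+\dfrac{m-\varepsilon^{\frac{1}{\alpha}}}{\varepsilon},&0\leq m<\varepsilon^{\frac{1}{\alpha}},\end{cases}\]

which is used implicitly in the case distinctions of the proofs below.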
**Lemma 5.1** (Growth of \(\Psi_{\varepsilon}\)).: _For a given \(\alpha>0\) and \(\varepsilon\in(0,1)\), let \(\Psi_{\varepsilon}\) be defined as in (5.3). Then, the following estimate holds,_
\[|m\Psi_{\varepsilon}(m)|\lesssim 1+\int_{1}^{m}\Psi_{\varepsilon}\qquad \text{ for all }m\geq 0. \tag{5.5}\]
Proof.: **Case 1 (\(1<m\)):** For \(m>1\), \(\Psi_{\varepsilon}(m)=m-1\), and the inequality can be verified directly.
**Case 2 (\(\varepsilon^{\frac{1}{\alpha}}\leq m\leq 1\)):** If \(\varepsilon^{\frac{1}{\alpha}}\leq m\leq 1\) and \(\alpha\neq 1\), we have
\[|m\,\Psi_{\varepsilon}(m)|\leq\left|m\int_{1}^{m}\frac{\mathrm{d}\varrho}{\varrho^{\alpha}}\right|=\left|m\left(\frac{m^{1-\alpha}-1}{1-\alpha}\right)\right|\lesssim|m^{2-\alpha}-m|.\tag{5.6a}\]

Observe that if \(\alpha\leq 2\), then the right-hand side is bounded since \(m\leq 1\), and (5.5) holds immediately. The case \(\alpha=1\) can also be handled easily since it yields \(\Psi_{\varepsilon}(m)=\log(m)\). The interesting case is \(\alpha>2\). Then, we can estimate the right-hand side of (5.5) as follows,

\[1+\int_{1}^{m}\Psi_{\varepsilon}=1+\int_{1}^{m}\int_{1}^{s}\frac{\mathrm{d}\varrho}{\varrho^{\alpha}}\,\mathrm{d}s=1+\int_{1}^{m}\frac{s^{1-\alpha}-1}{\alpha-1}\,\mathrm{d}s\gtrsim\int_{1}^{m}s^{1-\alpha}\,\mathrm{d}s\gtrsim m^{2-\alpha}-1.\tag{5.6b}\]
Combining (5.6) and noting that \(m<1\), we have (5.5) for this case.
**Case 3 (\(0\leq m<\varepsilon^{\frac{1}{\alpha}}\)):** We only focus on \(\alpha>2\) since the case \(\alpha\leq 2\) can be shown exactly as in Case 2. We observe that
\[\begin{split}|m\,\Psi_{\varepsilon}(m)|&=\left|m\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})+m\int\limits_{\varepsilon^{\frac{1}{\alpha}}}^{m}\tfrac{\mathrm{d}\varrho}{\max(\varepsilon,\varrho^{\alpha})}\right|=\left|m\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})+m\int\limits_{\varepsilon^{\frac{1}{\alpha}}}^{m}\tfrac{\mathrm{d}\varrho}{\varepsilon}\right|\\ &\leq|m\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})|+\frac{m(\varepsilon^{\frac{1}{\alpha}}-m)}{\varepsilon}\leq|\varepsilon^{\frac{1}{\alpha}}\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})|+\varepsilon^{\frac{1}{\alpha}-1}(\varepsilon^{\frac{1}{\alpha}}-m).\end{split}\tag{5.7a}\]

From Case 2 we conclude that \(|\varepsilon^{\frac{1}{\alpha}}\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})|\lesssim 1+\int_{1}^{\varepsilon^{\frac{1}{\alpha}}}\Psi_{\varepsilon}\), and we obtain

\[\begin{split}1+\int_{1}^{m}\Psi_{\varepsilon}&=1+\int_{1}^{\varepsilon^{\frac{1}{\alpha}}}\Psi_{\varepsilon}+\int_{\varepsilon^{\frac{1}{\alpha}}}^{m}\int_{1}^{s}\tfrac{\mathrm{d}\varrho}{\max(\varepsilon,\varrho^{\alpha})}\,\mathrm{d}s=1+\int_{1}^{\varepsilon^{\frac{1}{\alpha}}}\Psi_{\varepsilon}+\int_{m}^{\varepsilon^{\frac{1}{\alpha}}}\int_{s}^{1}\tfrac{\mathrm{d}\varrho}{\max(\varepsilon,\varrho^{\alpha})}\,\mathrm{d}s\\ &\geq 1+\int_{1}^{\varepsilon^{\frac{1}{\alpha}}}\Psi_{\varepsilon}+\int_{m}^{\varepsilon^{\frac{1}{\alpha}}}\int_{\varepsilon^{\frac{1}{\alpha}}}^{1}\tfrac{\mathrm{d}\varrho}{\max(\varepsilon,\varrho^{\alpha})}\,\mathrm{d}s\gtrsim|\varepsilon^{\frac{1}{\alpha}}\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})|+(\varepsilon^{\frac{1}{\alpha}}-m)\int_{\varepsilon^{\frac{1}{\alpha}}}^{1}\tfrac{\mathrm{d}\varrho}{\varrho^{\alpha}}\\ &\gtrsim|\varepsilon^{\frac{1}{\alpha}}\Psi_{\varepsilon}(\varepsilon^{\frac{1}{\alpha}})|+\varepsilon^{\frac{1}{\alpha}-1}(\varepsilon^{\frac{1}{\alpha}}-m)-(\varepsilon^{\frac{1}{\alpha}}-m).\end{split}\tag{5.7b}\]

Hence, combining again (5.7) we have (5.5).
### Boundedness of \(M\) in \(L^{2}(0,T;H^{r}(\Omega))\)
To prove Theorem 5.1 we first show the following lemma.
**Lemma 5.2** (An estimate for the regularized solutions).: _Let Assumption 5.1 hold and \(\Gamma_{1}=\emptyset\). For \(\varepsilon\in(0,1)\) and \(\alpha>0\), let \(\Phi_{\varepsilon}\) and \(\Psi_{\varepsilon}\) be defined by (3.2) and (5.3) respectively. Let \(M_{\varepsilon}\in\mathcal{X}\) satisfy \(M_{\varepsilon}(0)=M_{0}\) and_
\[\int_{0}^{T}\langle\varphi,\partial_{t}M_{\varepsilon}\rangle+\int_{0}^{T}( \nabla\Phi_{\varepsilon}(M_{\varepsilon}),\nabla\varphi)=\int_{0}^{T}(f(M_{ \varepsilon},\cdot),\varphi),\]
_for all \(\varphi\in L^{2}(0,T;H^{1}(\Omega))\). Then, we have_
\[\int_{0}^{T}\int_{\Omega}\min\{M_{\varepsilon}^{a-\alpha},1\}|\nabla M_{ \varepsilon}|^{2}\lesssim 1+\int_{\Omega}\int_{1}^{M_{0}}\Psi_{\varepsilon}. \tag{5.8}\]
Proof.: First, we show that the following estimate holds,
\[\Phi_{\varepsilon}^{\prime}(m)\,\Psi_{\varepsilon}^{\prime}(m)\gtrsim\min\{1,m ^{a-\alpha}\}\qquad\text{ for all }m\geq 0. \tag{5.9}\]
We distinguish several cases. **Case 1: \({\Phi_{\varepsilon}}^{\prime}(m)=D(m)\).** This implies that \(\varepsilon\leq{\Phi_{\varepsilon}}^{\prime}(m)=D(m)\leq\frac{1}{\varepsilon}\) and hence, \(m<1\). If \({\Psi_{\varepsilon}}^{\prime}(m)=m^{-\alpha}\), then the result follows from Assumption 5.1. If \({\Psi_{\varepsilon}}^{\prime}(m)=\frac{1}{\varepsilon}\), then \(\Phi_{\varepsilon}^{\prime}(m)\,\Psi_{\varepsilon}^{\prime}(m)=D(m)/\varepsilon>1\). Finally, if \({\Psi_{\varepsilon}}^{\prime}(m)=1\), then \(m\geq 1\) which is excluded.
**Case 2: \({\Phi_{\varepsilon}}^{\prime}(m)=\varepsilon\).** Consequently, \(m<1\). The definition of \(\Phi_{\varepsilon}\) in (3.2) implies that \(\varepsilon\geq D(m)\gtrsim m^{a}\), where the last inequality holds by Assumption 5.1. Hence, for \(\Psi_{\varepsilon}^{\prime}=\varepsilon^{-1}\) the product \(\Phi_{\varepsilon}^{\prime}\,\Psi_{\varepsilon}^{\prime}=1\) and for \(\Psi_{\varepsilon}^{\prime}=m^{-\alpha}\) we have \(\Phi_{\varepsilon}^{\prime}\,\Psi_{\varepsilon}^{\prime}\gtrsim m^{a-\alpha}\).
**Case 3: \({\Phi_{\varepsilon}}^{\prime}=\frac{1}{\varepsilon}\).** This case follows similarly.
Inserting the test function \(\varphi=\Psi_{\varepsilon}(M_{\varepsilon})\) in (3.3), the first term becomes
\[\int_{0}^{T}\langle\partial_{t}M_{\varepsilon},\Psi_{\varepsilon}(M_{\varepsilon})\rangle=\int_{\Omega}\int_{1}^{M_{\varepsilon}(T)}\Psi_{\varepsilon}-\int_{\Omega}\int_{1}^{M_{0}}\Psi_{\varepsilon}.\tag{5.10a}\]

The second term of (3.3) gives

\[\int_{0}^{T}(\nabla\Phi_{\varepsilon}(M_{\varepsilon}),\nabla\Psi_{\varepsilon}(M_{\varepsilon}))=\int_{0}^{T}\int_{\Omega}\Phi_{\varepsilon}^{\prime}(M_{\varepsilon})\,\Psi_{\varepsilon}^{\prime}(M_{\varepsilon})|\nabla M_{\varepsilon}|^{2}\stackrel{(5.9)}{\gtrsim}\int_{0}^{T}\int_{\Omega}\min\{M_{\varepsilon}^{a-\alpha},1\}|\nabla M_{\varepsilon}|^{2}.\tag{5.10b}\]
Finally, using \(f(0,\cdot)\geq 0\) and (5.4), the third term of (3.3) yields

\[\begin{split}\int_{0}^{T}(f(M_{\varepsilon},\cdot),\Psi_{\varepsilon}(M_{\varepsilon}))&=\int_{0}^{T}(f(M_{\varepsilon},\cdot)-f(0,\cdot),\Psi_{\varepsilon}(M_{\varepsilon}))+\int_{0}^{T}(f(0,\cdot),\Psi_{\varepsilon}(M_{\varepsilon}))\\ &\leq C_{L}\int_{0}^{T}\int_{\Omega}|M_{\varepsilon}\Psi_{\varepsilon}(M_{\varepsilon})|+\int_{0}^{T}(f(0,\cdot),[\Psi_{\varepsilon}(M_{\varepsilon})]_{+})\\ &\lesssim\int_{0}^{T}\int_{\Omega}|M_{\varepsilon}\Psi_{\varepsilon}(M_{\varepsilon})|\stackrel{(5.5)}{\lesssim}\int_{0}^{T}\int_{\Omega}\Big[1+\int_{1}^{M_{\varepsilon}}\Psi_{\varepsilon}\Big].\end{split}\tag{5.10c}\]
In the above, noting that \(\Psi_{\varepsilon}(m)>0\) only when \(m>1\), we estimated \(f(0,\cdot)[\Psi_{\varepsilon}(M_{\varepsilon})]_{+}\leq f_{\max}(0)|M_{ \varepsilon}\Psi_{\varepsilon}(M_{\varepsilon})|\). Combining the inequalities (5.10) we have
\[\int_{\Omega}\int_{1}^{M_{\varepsilon}(T)}\Psi_{\varepsilon}+\int_{0}^{T}\int _{\Omega}\min\{M_{\varepsilon}^{a-\alpha},1\}|\nabla M_{\varepsilon}|^{2} \lesssim 1+\int_{\Omega}\int_{1}^{M_{0}}\Psi_{\varepsilon}+\int_{0}^{T}\int _{\Omega}\int_{1}^{M_{\varepsilon}}\Psi_{\varepsilon}. \tag{5.11}\]
Using Gronwall's Lemma (2.5a) the estimate (5.8) follows.
To conclude the proof of Theorem 5.1 from (5.8), we need the following lemma. For its proof we refer to Lemma 1.3 and Lemma B.1 of [24].
**Lemma 5.3** (Property of \(H^{r}(\Omega)\)).: _If \(u^{\gamma}\in H^{1}(\Omega)\) for some \(\gamma>1\) then \(u\in H^{r}(\Omega)\) for all \(r\in(0,\gamma^{-1}]\)._
Proof of Theorem 5.1.: **Case 1 (\(\underline{M}=\operatorname{ess}\inf\{M_{0}\}>0\)):** In this case, taking \(\varepsilon_{1}^{\frac{1}{\alpha}}<\underline{M}\) we conclude by (5.3) that
\[\int_{\Omega}\int_{1}^{M_{0}}\Psi_{\varepsilon}\quad\text{ is uniformly bounded for all }\varepsilon\leq\varepsilon_{1}.\]
Hence, taking \(\alpha=a\) in Lemma 5.2 provides a uniform bound on \(\int_{0}^{T}\|\nabla M_{\varepsilon}\|^{2}\). Moreover, \(\|M_{\varepsilon}\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}\) is bounded by Lemma 3.2. Hence, \(M_{\varepsilon}\) is uniformly bounded in \(L^{2}(0,T;H^{1}(\Omega))\). Passing to the limit \(\varepsilon\to 0\), the convergence of \(M_{\varepsilon}\) to a unique \(M\in\mathcal{W}\) follows from Lemma 3.3 (see (3.12)). Consequently, the uniform bound implies that \(M\in L^{2}(0,T;H^{1}(\Omega))\).
**Case 2 (\(a<2\)):** In this case, put \(\alpha=a\). Then, passing the limit \(\varepsilon\to 0\) on the right hand side of (5.8) one has
\[\lim_{\varepsilon\searrow 0}\int_{\Omega}\int_{1}^{M_{0}}\Psi_{\varepsilon} \lesssim\int_{\Omega}\int_{M_{0}}^{1}(1-\varrho^{1-a})\,\mathrm{d}\varrho \lesssim 1.\]
Here we used that \(M_{0}^{2-a}\leq 1\) a.e. in \(\Omega\) by assumption (P5). Hence, we again obtain a uniform bound on \(\int_{0}^{T}\|\nabla M_{\varepsilon}\|^{2}\) and consequently, \(M\in L^{2}(0,T;H^{1}(\Omega))\).
**Case 3 (\(a\geq 2\)):** We set \(\alpha=2-\delta\) for sufficiently small \(\delta>0\). Then the previous case and (5.8) gives that \((M_{\varepsilon})^{1+\frac{a-\alpha}{2}}\in L^{2}(0,T;H^{1}(\Omega))\) and it is uniformly bounded. From Lemma 5.3 it follows that \(M_{\varepsilon}\in L^{2}(0,T;H^{r}(\Omega))\) for
\[r\leq\frac{2}{2+a-\alpha}=\frac{2}{a+\delta}\qquad\text{and sufficiently small $\delta>0$.}\]
This concludes the proof.
### Acknowledgements
K. Mitra and S. Sonner would like to thank the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) for their support through the Grant OCENW.KLEIN.358. K. Mitra was additionally supported by Fonds voor Wetenschappelijk Onderzoek (FWO) through the Junior Postdoctoral Fellowship during the completion of this work.
|
2309.11084 | Hydrodynamics is Needed to Explain Propulsion in Chemophoretic Colloidal
Rafts | Active particles driven by a chemical reaction are the subject of intense
research to date due to their rich physics, being intrinsically far from
equilibrium, and their multiple technological applications. Recent attention in
the field is now shifting towards exploring the fascinating dynamics of mixture
of active and passive systems. Here we realize active colloidal rafts, composed
of a single catalytic particle encircled by several shells of passive
microspheres assembled via light activated, chemophoretic flow. We show that
considering only diffusiophoresis can explain the cluster kinetics but not the
cluster propulsion behavior. Thus, using the Lorentz reciprocal theorem, we show
that propulsion emerges by considering hydrodynamics via the diffusioosmotic
response of the substrate to the generated chemophoretic flow. While
diffusioosmotic flows are often relegated to a secondary role, our work
demonstrates their importance to understand the rich physics of active
catalytic systems. | Dolachai Boniface, Sergi G. Leyva, Ignacio Pagonabarraga, Pietro Tierno | 2023-09-20T06:24:22Z | http://arxiv.org/abs/2309.11084v2 | # Hydrodynamics is Needed to Explain Propulsion in Chemophoretic Colloidal Rafts
###### Abstract
Active particles driven by a chemical reaction are the subject of intense research to date due to their rich physics, being intrinsically far from equilibrium, and their multiple technological applications. Recent attention in the field is now shifting towards exploring the fascinating dynamics of mixtures of active and passive systems. Here we realize active colloidal rafts, composed of a single catalytic particle encircled by several shells of passive microspheres assembled via light-activated, chemophoretic flow. We show that considering only diffusiophoresis can explain the cluster kinetics but not the cluster propulsion behavior. Thus, using the Lorentz reciprocal theorem, we show that propulsion emerges by considering hydrodynamics via the diffusioosmotic response of the substrate to the generated chemophoretic flow. While diffusioosmotic flows are often relegated to a secondary role, our work demonstrates their importance for understanding the rich physics of active catalytic systems.
+
Footnote †: Both authors equally contributed to this work
_Introduction.-_ In the past few years, active colloidal particles have led to several exciting developments in the field of non-equilibrium statistical mechanics [1; 2; 3; 4] while also being used as simplified models to reproduce emerging phenomena in biological self-propelling systems [5; 6; 7; 8]. Since the pioneering works of Ismagilov _et al._[9] and Paxton _et al._[10], chemical reactions have been routinely used to induce propulsion in asymmetric systems [11] including Janus particles [12; 13; 14; 15], nanorods [16; 17], dimers [18; 19], mixtures [20; 21] and many others [22; 23; 24]. Besides the interest in the reaction mechanism that leads to net motion, these particles have shown the capability to pick up, transport, and release microscopic cargoes [25; 26; 27; 28]. Thus, they may find direct applications in different technological fields, including biomedicine [29], targeted drug delivery [30] and microfluidics [31].
In most of these catalytic systems, self-propulsion is usually explained in terms of diffusiophoresis or chemophoresis, namely particle motion in a concentration gradient [32]. However, in the presence of a gradient, an osmotic flow also arises along any nearby fixed surface, such as the substrate beneath a particle [33]. For active systems near a substrate, this osmotic flow may affect the system dynamics through viscous interactions [34]. Indeed, the osmotic flows on the substrate may compete with particle diffusiophoresis. Because both phenomena have a similar osmotic origin, the diffusiophoretic and substrate diffusioosmotic contributions are difficult to disentangle [35]. Thus, most theoretical and simulation models in the field do not consider the impact of hydrodynamic interactions associated with diffusioosmosis. In contrast, a recent theoretical work showed that the diffusioosmotic contribution in active Janus particles can even be used to guide their interaction with a chemically patterned substrate [36].
Here, we combine experiments and theory to demonstrate that the diffusioosmotic flow induced by the catalytic particle on the nearby surface is necessary to describe the motion of active particles driven by chemical reactions. We realize active colloidal rafts composed of several shells of passive spheres around a single catalytic apolar particle. These clusters grow up to an area of 80 times that of the active inclusion, corresponding to 7 compact shells of passive spheres, and we investigate the raft kinetics and dynamics during the illumination process. We find that the clusters display self-propulsion despite being made of symmetric shells of passive spheres. Numerical simulations of a purely diffusiophoretic system, without osmotic flow on the substrate, reproduce the raft kinetics but not the cluster's direction of motion and persistence length. We show that hydrodynamics and the nearby boundary are essential features that must be taken into account to explain the mechanism of motion of the composite clusters.
_Experiments.-_ Our colloidal rafts are realized by illuminating with blue light (wavelength \(\lambda=450-490\)nm) synthesized hematite ellipsoids with short and long axes equal to \(1.3\,\mu\)m and \(1.8\,\mu\)m, respectively (inset of Fig. 1(a)). These particles are dispersed with passive silica spheres (\(1\,\mu\)m diameter) in an aqueous solution of hydrogen peroxide (\(3.6\,\%\) w/v). The pH of the solution is raised to \(\sim 9.2\) by adding Trimethylphenylammonium to make the hematite hydrophilic due to hydroxylation of its surface [38]. The colloidal dispersion is sedimented onto the glass substrate of a sealed rectangular capillary tube. The relative density is below 1 active particle per 2000 passive ones, with a total surface fraction of \(\sim 6\%\). Once the light is applied, the hematite particles start decomposing hydrogen peroxide into water and oxygen, following the reaction: \(2\text{H}_{2}\text{O}_{2(l)}\rightarrow\text{O}_{2(g)}+2\text{H}_{2}\text{O}_{(l)}\), Fig. 1(b). It was previously shown that such a chemical reaction induces propulsion in Janus
colloids with anisotropic coating [39; 40]. For a single hematite particle, we find that diffusiophoresis induces enhanced diffusive dynamics, as shown in the Supplementary Material (SM) [37]. The presence of a nearby passive sphere induces a strong phoretic attraction which generates a stable and large passive-active cluster displaying self-propulsion [41]. We find that the rafts follow a sub-linear growth with a power-law behavior up to \(t=2000\) s (\(\simeq 0.6\) hours), inset in Fig. 1(c). The exponent \(1/3\) is consistent with the Ostwald coarsening process, as described by the Lifshitz-Slyozov-Wagner theory [42]. Such an exponent was predicted by scalar field theories of active systems [43] and recently observed experimentally in the clustering of passive particles by active agents [44]. During growth the raft translates and rotates, and the combination of both can result in looping trajectories, Fig. 1(b). The system accumulates up to \(6-7\) layers of passive particles over a one-hour experiment. The mean cluster velocity \(\bar{v}_{c}\) decreases linearly with the cluster area \(A\), reducing almost to zero for the largest size of \(A=175\,\mu\)m\({}^{2}\), Fig. 1(c).
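For reference, the growth exponent can be extracted with a standard nonlinear fit, as in the following minimal sketch (synthetic data stand in for the measured areas of Fig. 1(c)):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the measured cluster areas A(t) of Fig. 1(c).
rng = np.random.default_rng(0)
t = np.linspace(50.0, 2000.0, 80)                                # s
A_obs = 12.0 * t ** (1.0 / 3.0) * rng.normal(1.0, 0.05, t.size)  # μm²

def power_law(t, c, beta):
    return c * t**beta

(c_fit, beta_fit), pcov = curve_fit(power_law, t, A_obs, p0=(1.0, 0.5))
beta_err = np.sqrt(np.diag(pcov))[1]
print(f"growth exponent: {beta_fit:.3f} +/- {beta_err:.3f}")   # ~1/3
```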
_Simulations.-_ To understand the kinetics and self-propulsion behavior, we first perform Brownian dynamics simulations using input parameters obtained from the experimental data. Here we assume a purely diffusiophoretic system. We consider a bath of \(i=1..N\) passive particles at positions \(\mathbf{R}_{i}\) (diameter \(\sigma_{p}\), surface mobility \(\mu_{p}\) and diffusion coefficient \(D_{p}\)) with a unique active particle. To model the aspect ratio of the experimental ellipsoids, the hematite is considered as a dumbbell of two active beads, \(\alpha=1,2\), at positions \(\mathbf{r}_{\mathbf{\alpha}}\) (diameter \(\sigma_{a}=1.3\mu\)m, surface mobility \(\mu_{a}\), and diffusion coefficient \(D_{a}\)) joined by a spring with rest length \(0.5\)\(\mu\)m and a force of magnitude \(F^{h}\) along the vector \(\hat{\mathbf{n}}_{i}=(\mathbf{r}_{i}-\mathbf{r}_{j})/r_{ij}\) joining the two beads. Thus, we integrate the overdamped
Figure 1: (a) Scheme showing the assembly of the colloidal raft. Top inset shows an electron microscopy image of one hematite, scale bar is \(500\) nm. (b) Sequence of two images of a growing raft with superimposed (red) the trajectory of the central active particle. Time \(t=0\)s corresponds to light application. Scale bar is \(5\,\mu\)m. Last image displays the final cluster size, see VideoS1 in [37]. (c) Average raft velocity \(\bar{v}_{c}\) versus cluster area \(A\) showing the experimental data (black disk) and a linear regression with \(\gamma_{0}=0.26\pm 0.02\,\mu\)ms\({}^{-1}\) and a negative slope \(\gamma=(1.48\pm 0.02)\cdot 10^{-3}\mu\)m\({}^{-1}\)s\({}^{-1}\). Inset shows a log-log plot of the area versus time for several rafts, error bars are indicated by the shaded red region.
Figure 2: (a) Sequence of images showing the attraction of a silica particle towards the hematite once blue light is applied (\(t=0\)). (b) Relative speed \(\Delta v_{r}\) versus relative distance \(\Delta r\): the solid line is fitted to the data (blue circles) following Eq. 3. Inset displays a heat map of the velocity and direction of a passive particle near the hematite. (c,d) Mean cluster speed \(\bar{v}_{c}\) (c) and mean square displacement (MSD) (d) versus time from experiments (blue line) and simulation (orange disks). In both graphs the shaded red regions denote experimental uncertainties.
Langevin equations:
\[\dot{\mathbf{r}}_{\alpha} = \mathbf{v}_{\alpha}+(F^{h}\hat{\mathbf{n}}_{\alpha}+\mathbf{F}_{\alpha}^{c})/ \gamma_{a}+\sqrt{2D_{a}}\mathbf{\xi}_{\alpha}\ \, \tag{1}\] \[\dot{\mathbf{R}}_{i} = \mathbf{V}_{i}+\mathbf{F}_{i}^{c}/\gamma_{p}+\sqrt{2D_{p}}\mathbf{\xi}_{i}\ . \tag{2}\]
where \(\gamma_{a}\) and \(\gamma_{p}\) correspond to the active and passive friction coefficients, respectively. Here \(\mathbf{F}_{i}^{c}\) and \(\mathbf{F}_{\alpha}^{c}\) account for steric forces given by a Weeks-Chandler-Andersen potential, which prevent passive and active particles from overlapping. The term \(\mathbf{\xi}_{i}\) is a random Gaussian noise that accounts for the thermal bath. Each bead constituting the dumbbell in the hematite acts as a source [41; 45; 22] of a chemical field, \(\phi\). A second particle with mobility \(\mu_{p}\) (\(\mu_{a}\)) will experience a slip velocity on its surface, \(\mathbf{u}_{s}=\mu_{p}(\mu_{a})\nabla_{\parallel}\phi\), that leads to a net diffusiophoretic velocity \(\mathbf{V}_{i}\) (\(\mathbf{v}_{\alpha}\)), see [37] for the derivation. Accordingly, the relative speed of approach \(\Delta v_{r}\) between an active and a passive particle at a relative distance \(\Delta r\) reads,
\[\Delta v_{r}=v_{\alpha}+V=v_{0}\left[\bar{\mu}\left(\frac{\sigma_{a}}{\Delta r }\right)^{2}+\frac{1}{4}\left(\frac{\sigma_{p}}{\sigma_{a}}\right)^{3}\left( \frac{\sigma_{a}}{\Delta r}\right)^{5}\right]\ \, \tag{3}\]
where \(\bar{\mu}=\mu_{p}/\mu_{a}\) is the ratio of the two mobilities. The detailed derivation of this functional form is provided in [37]. We use Eq. 3 to fit the experimental data as shown in Fig. 2(b), and extract a characteristic diffusiophoretic velocity given by \(v_{0}=11.6\pm 0.4\)\(\mu\)m \(s^{-1}\). Note that the heat map of the velocity field shown in the inset in Fig. 2(b) becomes slightly anisotropic (less than 5%) if the orientation of the hematite is kept fixed with a constant field, as shown in [37]. We also note that the attraction between the passive and active particle is only possible if \(\mu_{p}\) is negative. More details on the other terms used in Eq. 2 and on the simulations are given in [37].
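To illustrate how the overdamped dynamics, Eqs. (1)-(2), combined with the phoretic velocity of Eq. (3), can be integrated numerically, the following minimal Euler-Maruyama sketch evolves a single passive particle drifting toward a fixed active source. All parameter values and the sign convention of the drift are illustrative placeholders, not the fitted experimental ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (placeholders, not the experimental fit)
sigma_a, sigma_p = 1.3, 1.0      # particle diameters, μm
v0 = 11.6                        # characteristic phoretic velocity, μm/s
mu_bar = 0.2                     # |mobility ratio|, taken positive for simplicity
D_p = 0.4                        # passive diffusion coefficient, μm²/s
dt, n_steps = 1e-3, 60000        # time step (s) and number of steps

def approach_speed(dr):
    """Magnitude of the relative speed of approach, Eq. (3)."""
    x = sigma_a / dr
    return v0 * (mu_bar * x**2 + 0.25 * (sigma_p / sigma_a) ** 3 * x**5)

# One passive particle, starting 6 μm from an active source at the origin
R = np.array([6.0, 0.0])
for _ in range(n_steps):
    dr = np.linalg.norm(R)
    if dr <= 0.5 * (sigma_a + sigma_p):           # stop at contact
        break
    drift = -approach_speed(dr) * R / dr          # radial drift toward the source
    noise = np.sqrt(2.0 * D_p * dt) * rng.standard_normal(2)
    R = R + drift * dt + noise                    # Euler–Maruyama step of Eq. (2)

print(f"final separation: {np.linalg.norm(R):.2f} μm")
```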
The simulations explain some of the experimental features: the growth of the raft area as \(t^{1/3}\), the emergence of the self-propulsion behavior and, in particular, the decrease of the raft velocity with the cluster area, as shown in Fig. 2(c). However, when comparing the raft dynamics via other observables, we already find some discrepancies. For example, in Fig. 2(d) we show the average translational mean square displacement \(\text{MSD}(\tau)\equiv\langle(\mathbf{r}(t)-\mathbf{r}(t+\tau))^{2}\rangle\sim\tau^{\delta}\), with \(\tau\) the lag time and \(\langle\dots\rangle\) a time average. Via the exponent \(\delta\), the MSD can be used to distinguish diffusive (\(\delta=1\)) dynamics from sub-[super] diffusive (\(\delta<1\) [\(\delta>1\)]) and ballistic (\(\delta=2\)) ones. We define the persistence length of the trajectory, \(l_{p}\), as the characteristic length over which the velocity orientation decorrelates. We calculate this quantity from the cluster trajectory as \(\langle\cos(\theta_{v}(d+\Delta l)-\theta_{v}(d))\rangle_{d}\propto\exp(-\Delta l/l_{p})\), being \(d\) the distance travelled by the cluster and \(\theta_{v}\) the orientation of the velocity vector. From the experiment, we measure a persistence length \(l_{p}\simeq 20\,\mu\)m, which is significantly larger than the one predicted by the simulations, \(l_{p}\simeq 2.5\,\mu\)m. As we show below, this discrepancy can be resolved by considering the asymmetric location of the hematite within the cluster.
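Both observables can be extracted from a trajectory as in the following minimal sketch; the persistent random walk at the end is synthetic test data with a known persistence length, not the experimental trajectory.

```python
import numpy as np

def msd(traj, max_lag):
    """Time-averaged mean square displacement for a (T, 2) trajectory."""
    return np.array([np.mean(np.sum((traj[k:] - traj[:-k]) ** 2, axis=1))
                     for k in range(1, max_lag)])

def persistence_length(traj, max_lag=200):
    """l_p from <cos(theta(d+dl) - theta(d))>_d ~ exp(-dl/l_p).

    Assumes roughly uniform step lengths, so lag * mean step ~ distance dl."""
    v = np.diff(traj, axis=0)
    theta = np.arctan2(v[:, 1], v[:, 0])
    step = np.mean(np.linalg.norm(v, axis=1))
    lags = np.arange(1, max_lag)
    corr = np.array([np.mean(np.cos(theta[k:] - theta[:-k])) for k in lags])
    mask = corr > 0.05                        # keep the well-resolved decay only
    slope = np.polyfit(lags[mask] * step, np.log(corr[mask]), 1)[0]
    return -1.0 / slope

# Synthetic persistent random walk: expected l_p = 2 * step / sigma^2 = 8
rng = np.random.default_rng(2)
angles = np.cumsum(0.05 * rng.standard_normal(20000))
traj = np.cumsum(0.01 * np.stack([np.cos(angles), np.sin(angles)], axis=1), axis=0)
print(f"estimated l_p ~ {persistence_length(traj):.2f} (length units)")
```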
_Cluster asymmetry.-_ To better understand the origin of the raft propulsion, we have analyzed in detail the position of the hematite source within the cluster. During the growing process and in the steady state, we find that the hematite is not located exactly at the cluster's geometric center, but is displaced by a small distance \(b\). As shown in Fig. 3, the asymmetry parameter \(\chi=b/a\), being \(a\) the radius of the cluster, decreases with the raft area \(A\). Moreover, the analysis of the distribution of the angle \(\beta\) between the cluster velocity \(v_{c}\) and the asymmetry vector \(\mathbf{b}\) gives further insight into the propulsion direction. As shown in the inset of Fig. 3, such a wrapped distribution is Gaussian (red line) and centered around \(\beta=180^{\circ}\), meaning that the raft propels with the active particle at the rear. Numerical simulations show that the clusters instead tend to propel with the active particle at the front, as shown in the SM, VideoS2.
Qualitatively, we can understand how the asymmetric location of the hematite in the cluster impacts the persistence length. When a colloidal raft moves in a crowded environment of passive particles, the latter tend to accumulate at its front. Thus, a cluster moving with the hematite shifted toward the front has to change its direction of motion regularly to maintain this configuration, as reported in the simulations. In contrast, for a cluster moving with the hematite shifted towards the rear, the frontal accumulation of colloids preserves the asymmetry and the direction of motion, as observed in the experiments. The two situations lead, respectively, to a system with a relatively low
Figure 3: Experimentally measured asymmetry parameter \(\chi=b/a\) versus cluster area \(A\), being \(a\) the cluster radius. Top left inset shows an image of a cluster, while right inset displays the distribution of angles \(\beta\) between the cluster velocity \(v_{c}\) and the vector \(\mathbf{b}\) pointing from the cluster center to the hematite particle. These quantities are defined in the schematic in the bottom inset.
and high persistence length. To confirm this hypothesis, we have implemented a specific simulation by imposing that the cluster moves with the hematite at the rear. As shown in VideoS3 in [37], we observe a much longer persistence length, closer to the experimental results.
The discrepancy between the numerical and experimental results arises from the assumption that the system is purely diffusiophoretic. The simulation neglects hydrodynamics and does not consider the presence of the nearby wall. Indeed, in a separate set of experiments, we replaced the glass substrate with a polystyrene one and observed a decrease of the cluster area, as shown in [37]. This effect highlights the importance of the bottom surface.
_Theory.-_ To include the effect of hydrodynamics and the proximity of the wall, we approximate the colloidal raft by a disk of diameter \(2a\) and the shifted hematite by a "semi-punctual" source, where the concentration field \(\phi\) is similar to that of a punctual source except along the source surface, where \(\phi\) is constant. We orient the system such that the unit vector \(\mathbf{e}_{z}\) is diametrically opposed to the vector \(\mathbf{b}\) linking the cluster center to the source. The negative or positive sign of the cluster velocity \(v_{c}\) indicates a disk moving with the source at the front or the rear, respectively. We assume that the catalyzed product is released at the rate \(J\) and diffuses in bulk with a diffusion coefficient \(D_{c}\). We consider two parallel surfaces, the disk (\(p\)) and the substrate (\(S\)), separated by \(h\), such that \(h/a\ll 1\). To describe the disk dynamics we introduce two dimensionless numbers: the Péclet number \(\text{Pe}_{c}=\frac{v_{c}a}{D_{c}}\) and the Damköhler number \(\text{Da}=\frac{\mu_{p}J}{4\pi aD_{c}^{2}}\), which relates the reaction rate to the diffusive mass transport rate. Experimentally, \(\text{Pe}_{c}\simeq 10^{-4}\ll 1\); thus the transport of the solute is dominated by diffusion, and the source motion can be disregarded. Therefore, at a distance \(r\) from the source the chemical gradient is \(\nabla\phi=-J/(4\pi D_{c}r^{2})\mathbf{e}_{r}\). The concentration gradient generates a slip osmotic flow \(\mathbf{u}_{S}=\mu\nabla_{S}\phi\) along the relevant surfaces, namely the disk surface \(p\) and the substrate \(S\), such that \(\left.\mathbf{u}\right|_{p}=v_{c}\mathbf{e}_{z}+\mu_{p}\mathbf{\nabla}\phi\), and \(\left.\mathbf{u}\right|_{S}=\mu_{S}\nabla\phi\). The disk motion is force-free, hence \(\mathbf{F}_{v}+\mathbf{F}_{p}+\mathbf{F}_{S}=0\), where \(\mathbf{F}_{v}\) is the damping force due to the motion of the disk, \(\mathbf{F}_{p}\) is the phoretic force associated with the slip velocity on the disk's surface, and \(\mathbf{F}_{S}\) the osmotic contribution coming from the slip velocity on the wall. See [37] for details of all terms employed and the extended model.
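As a quick order-of-magnitude check of this regime (the solute diffusion coefficient below is an assumed literature-style value, not one reported here):

```python
# Order-of-magnitude estimate of the Péclet number Pe_c = v_c * a / D_c.
v_c = 0.26e-6   # m/s, velocity scale (gamma_0 of Fig. 1(c))
a = 5e-6        # m, typical cluster radius
D_c = 2e-9      # m²/s, assumed solute diffusivity (e.g., O2 in water)
print(f"Pe_c ~ {v_c * a / D_c:.1e}")   # ~1e-4 to 1e-3, i.e., Pe_c << 1
```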
Using the Lorentz reciprocal theorem, we arrive at
\[\text{Pe}_{c}\simeq 2\text{Da}(1-\mu_{S}/\mu_{p})\chi+O(\chi^{2})\;\;, \tag{4}\]
and, accordingly, the velocity of the disk at the first order in \(\chi\) is given by
\[v_{c}\propto(\mu_{p}-\mu_{S})\frac{\chi}{A}. \tag{5}\]
Note that if we remove the osmotic flow along the substrate, the term \(\mu_{S}\) disappears from Eq. 5, and \(v_{c}\propto\mu_{p}\frac{\chi}{A}\). Neglecting or taking into account this flow thus leads to almost the same dependence of the disk velocity on \(\frac{\chi}{A}\), which is consistent with the experimental observation, Fig. 4. The difference between the osmotic mobilities \(\mu_{p}-\mu_{S}\) in Eq. 5 reflects the competition between diffusiophoresis and substrate diffusioosmosis. It controls the sign of \(v_{c}\), i.e., the direction of motion of the raft. Since the passive colloids and the substrate are made of silica, it is reasonable to assume that \(\mu_{S}\) is comparable to \(\mu_{p}\). We also deduce from the clustering phenomenon that \(\mu_{p}<0\). If we assume that \(\mu_{S}/\mu_{p}>1\), the osmotic model in Eq. 5 predicts a cluster moving with the hematite at the rear, as we observe experimentally.
_Conclusion.-_ We have investigated the dynamics of active colloidal rafts composed of a central hematite particle and several shells of passive colloids. We have shown that this system displays a clustering phenomenon due to diffusiophoresis, and collective self-propulsion resulting from diffusioosmosis on the nearby substrate. Indeed, simulations based only on diffusiophoresis describe the clustering kinetics well, but cannot explain the cluster's direction of motion and persistence length. Our model resolves the discrepancy by considering the cluster asymmetry and, in particular, the substrate diffusioosmotic flow. Thus, we have shown that there is a competition between diffusiophoresis and osmosis in the cluster motion, and that the substrate diffusioosmotic flow plays a crucial role in the dynamics. In line with these results, previous works in the field have also shown the importance of considering the osmotic flow generated by an active particle close to a wall [13; 46]. The theoretical
Figure 4: Experimental data of the mean cluster velocity \(\bar{v}_{c}\) versus the ratio \(\chi/A\), with \(\chi=b/a\). Scattered circles are experimental data while the continuous line is a fit from the model, see Eq. 5 in the text. Inset illustrates a schematic of the model: the cluster is considered as a thin disk of radius \(a\) with an active source of size \(\sigma_{a}\) at a distance \(b\) from the center. \(J\) and \(D\) denote, respectively, the release rate of the source and the solute diffusion coefficient.
approach based on the Lorentz reciprocal theorem could be extended to many other catalytic active systems close to a substrate, taking into account the proper boundary conditions. In our experiments, we approximate the raft as a disk, which allows us to reach an analytical expression that captures the underlying physics of this complex, yet rich, hybrid active-passive system.
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (grant agreement no. 811234). S.G.L. and I.P. acknowledge support from Ministerio de Ciencia, Innovacion y Universidades (grant no. PID2021-126570NB-100 AEI/FEDER-EU) and from Generalitat de Catalunya under project 2021SGR-673. P.T. and I.P. acknowledge support from the Generalitat de Catalunya (ICREA Academia).
|
2309.03480 | An Anonymous yet Accountable Contract Wallet System using Account
Abstraction | Account abstraction allows a contract wallet to initiate transaction
execution. Thus, account abstraction is useful for preserving the privacy of
externally owned accounts (EOAs) because it can remove a transaction issued
from an EOA to the contract wallet and hides who issued the transaction by
additionally employing anonymous authentication procedures such as ring
signatures. However, unconditional anonymity is undesirable in practice because
it prevents revealing who is accountable when a problem arises. Thus,
maintaining a balance between anonymity and accountability is important.
In this paper, we propose an anonymous yet accountable contract wallet
system. In addition to account abstraction, the proposed system also utilizes
accountable ring signatures (Bootle et al., ESORICS 2015). The proposed system
provides (1) anonymity of a transaction issuer that hides who agreed with
running the contract wallet, and (2) accountability of the issuer, which allows
the issuer to prove they agreed with running the contract wallet. Moreover, due
to a security requirement of accountable ring signatures, the transaction
issuer cannot claim that someone else issued the transaction. This
functionality allows us to clarify the accountability involved in issuing a
transaction. In addition, the proposed system allows an issuer to employ a
typical signature scheme, e.g., ECDSA, together with the ring signature scheme.
This functionality can be considered an extension of the common
multi-signatures that require a certain number of ECDSA signatures to run a
contract wallet. The proposed system was implemented using zkSync (Solidity).
We discuss several potential applications of the proposed system, i.e., medical
information sharing and asset management. | Kota Chin, Keita Emura, Kazumasa Omote | 2023-09-07T04:54:19Z | http://arxiv.org/abs/2309.03480v1 | # An Anonymous yet Accountable Contract Wallet System using Account Abstraction
Kota Chin, _University of Tsukuba_ and _National Institute of Information and Communications Technology, Japan_

Keita Emura, _Kanazawa University_ and _National Institute of Information and Communications Technology, Japan_

Kazumasa Omote, _University of Tsukuba_ and _National Institute of Information and Communications Technology, Japan_
###### Abstract
Account abstraction allows a contract wallet to initiate transaction execution. Thus, account abstraction is useful for preserving the privacy of externally owned accounts (EOAs) because it can remove a transaction issued from an EOA to the contract wallet and hides who issued the transaction by additionally employing anonymous authentication procedures such as ring signatures. However, unconditional anonymity is undesirable in practice because it prevents revealing who is accountable when a problem arises. Thus, maintaining a balance between anonymity and accountability is important. In this paper, we propose an anonymous yet accountable contract wallet system. In addition to account abstraction, the proposed system also utilizes accountable ring signatures (Bootle et al., ESORICS 2015). The proposed system provides (1) anonymity of a transaction issuer that hides who agreed with running the contract wallet, and (2) accountability of the issuer, which allows the issuer to prove they agreed with running the contract wallet. Moreover, due to a security requirement of accountable ring signatures, the transaction issuer cannot claim that someone else issued the transaction. This functionality allows us to clarify the accountability involved in issuing a transaction. In addition, the proposed system allows an issuer to employ a typical signature scheme, e.g., ECDSA, together with the ring signature scheme. This functionality can be considered an extension of the common multi-signatures that require a certain number of ECDSA signatures to run a contract wallet. The proposed system was implemented using zkSync (Solidity). We discuss several potential applications of the proposed system, i.e., medical information sharing and asset management.
Blockchain, Account abstraction, Contract wallet, Accountable ring signatures.
## I Introduction
### _Introduction of Account Abstraction_
Ethereum involves two kinds of accounts, i.e., externally owned accounts (EOAs), which are controlled by a user-managed secret key, and contract accounts (contract wallets), which are controlled by smart contracts. In the current implementation of Ethereum, to run a contract wallet, an EOA must send a transaction to the contract wallet. Then the contract wallet runs a transaction according to the rule specified in the contract. Account abstraction [11] allows a contract wallet to initiate transaction execution, i.e., it can remove a transaction sent from an issuer to the contract. Account abstraction provides two primary benefits. The first benefit is the reduction of gas costs because account abstraction can remove a transaction sent from an issuer to the contract wallet. The second benefit is flexible verification. In the current Ethereum implementation, transactions issued by EOAs are verified according to the validity of signatures generated by the secret keys of EOAs, and the underlying signature scheme is restricted to the elliptic curve digital signature algorithm (ECDSA). Thus, ECDSA signatures are generally employed when transaction validity is verified in the contract, although, theoretically, any signature scheme can be employed because signatures are verified by programs. In contrast, due to account abstraction, no EOA issues transactions, and thus any signature scheme can be employed more easily to verify transaction validity in the contract. For example, CRYSTALS-Dilithium [15], FALCON [18], and SPHINCS+ [4], which have been selected by the NIST Post-Quantum Cryptography Standardization, can be employed under the assumption that they can be implemented in a programming language that can be run by the contract wallet, e.g., Solidity.1 In addition, account abstraction allows us to employ signatures with rich functionalities, such as accountable ring signatures [10]. Representative examples of systems that support account abstraction include StarkNet [2] and zkSync [3], which are Ethereum Layer 2 (L2) network technologies. Note that L2 is described in further detail in Section II-B.
Footnote 1: Although we consider only signatures in this paper, any verification method can be employed or a contract wallet with no verification step can be constructed.
Transactions between the EOAs and contract wallets are removed; thus account abstraction is useful for preserving the privacy of EOAs. In fact, EIP-2938 [11] states that "_Privacy-preserving systems like tornado.cash_" are a motivation to introduce account abstraction (tornado.cash is a mixing service). Concretely, it is expected that account abstraction can hide the issuer of a transaction. Note that a contract wallet typically verifies a signature using a verification key that has been registered in the contract program. Since the verification key is public (in a public blockchain), anyone can identify who issued the transaction (precisely, which verification key
is used and the key holder who issued the transaction). In other words, account abstraction does not hide transaction issuer information. We emphasize that unconditional anonymity could promote crime. For example, tornado.cash is a tool that could be misused to facilitate money laundering, and the Fiscal Information and Investigation Service, an agency of the government of the Netherlands, arrested a 29-year-old man in Amsterdam as a suspected developer [1]. This indicates that maintaining a balance between anonymity and accountability is important.
### _Privacy on Ethereum_
Each EOA manages an ECDSA verification key vk, and the corresponding address is the last 20 bytes of the Keccak-256 hash of vk. Here, as each address appears random,2 anyone can easily determine whether multiple transactions were issued by the same EOA, although it is difficult to identify the EOA's owner in the real world. This means that issuer privacy is preserved only in the sense of pseudonymity in crypto asset trading.
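For illustration, the address derivation can be sketched as follows (a minimal example assuming the `pycryptodome` package for Keccak-256; the 64-byte value below is random placeholder data standing in for a real secp256k1 public key):

```python
from Crypto.Hash import keccak   # pycryptodome
import secrets

# An uncompressed secp256k1 public key is 64 bytes (x || y) once the
# leading 0x04 byte is stripped; random bytes stand in for a real key here.
vk = secrets.token_bytes(64)

digest = keccak.new(digest_bits=256, data=vk).digest()
address = "0x" + digest[-20:].hex()   # last 20 bytes of Keccak-256(vk)
print(address)
```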
Footnote 2: Vanity addresses are often used to reduce the storage cost of addresses. For example, using an address 0x0000..., a part of address “\(0\cdots 0\)” does not need to be stored. Even in this case, pseudonymity level privacy protection is guaranteed.
**Does Pseudonymity Provide Sufficient Privacy Protection?** When crypto assets are traded between individuals, the pseudonymity level privacy protection might be sufficient. However, some information leakage occurs if a system is implemented using smart contracts.
For example, assume that a request to view patient data is made by a transaction in an electronic medical record system using Ethereum smart contracts [5, 16, 17, 20, 22, 25, 26, 30, 33]. In this case, we assume that the data of multiple patients are accessed from the same address. Then, we expect that the transaction-issuing address is managed by a doctor or researcher. In addition, we expect that the patients suffer from similar diseases. Here, if the doctor or researcher associated with the address is identifiable, information identifying a patient's disease may be leaked (although this was not authorized by the patient), depending on the specialty of the issuer of the transaction. Thus, no information about hospitals and medical offices should be leaked from transactions. However, it is necessary to internally verify who issued a transaction in order to prevent unnecessary access to medical data.
In the case of group asset management, pseudonymity level privacy protection would make the investment performance of each address public. Thus, it is desirable to be able to prove the investment performance internally while keeping information about the issuer of the transaction secret externally.
**A Naive Solution 1: Key Sharing**. Let a pair of signing and verification keys be generated and let all members of a group share the key pair. When a member issues a transaction, the same signing key is used to generate a signature. Then, anonymity level privacy protection is provided because a contract wallet uses the same verification key to verify the signature. However, there is no way to identify who issued the transaction, even among group members. Unconditional anonymity is undesirable in practice because it prevents to reveal who is accountable for a problem when it arises. In addition, determining how to revoke the signing key when a member leaves the group is a nontrivial problem.
**A Naive Solution 2: External Services**. The Amazon Web Services (AWS) Key Management Service or a system on a permissioned blockchain that supports a trusted execution environment (TEE), e.g., Intel SGX, can be used to solve the above problems. By using such services, group members can issue a transaction without sharing the signing key. Here, anonymity still holds because the contract wallet uses the same verification key. In addition, group members can internally identify who issued a transaction via an access log (AWS Key Management Service) or the transaction itself (permissioned blockchain). Member revocation is also possible via access control to the services. Thus, technically, we can provide both anonymity (outside the group) and accountability (within the group) simultaneously. However, both methods assume trust in AWS or a TEE, which is undesirable because it increases the number of trust points.
### _Our Contribution_
In this paper, we focus on account abstraction, which is attractive in terms of constructing a privacy-preserving system, and we propose an anonymous yet accountable contract wallet system (Figure 1).3 In addition to account abstraction, the proposed system employs accountable ring signatures [10].4 We implemented the proposed system using zkSync (Solidity). Precisely, we implemented the verification algorithm of the underlying accountable ring signature scheme using Solidity. The proposed system is briefly introduced as follows.
Footnote 3: We used Free Clip Art ([http://www.cilker.com/](http://www.cilker.com/)).
* To issue a transaction, a ring signature is sent to a contract wallet, and the contract wallet verifies the signature. Then, the contract wallet uses a ring (i.e., a set of verification keys) to verify the signature. Anonymity holds in the sense that the contract wallet cannot identify the issuer among the verification key holders.
* The opening functionality of the accountable ring signatures allows the actual issuer to prove that they issued the transaction, and other users can recognize this fact. In contrast, due to a security requirement of accountable ring signatures (i.e., opening soundness), no user can prove that they issued a transaction if they did not actually issue the transaction, and moreover the transaction issuer cannot claim that someone else issued the transaction.
In addition, the proof can be generated by other ring members who share the secret opening key of the underlying accountable ring signatures. This functionality allows us to clarify the accountability involved in issuing a transaction. We remark that we mainly focus on how to identify/prove who issued a transaction in this paper. That is, we assume that the organization to which the transaction issuer belongs is accountable, and we do not address what penalties that organization would impose on the transaction issuer if the transaction became problematic, or how that organization would be held accountable.
* The proposed system allows an issuer to employ a typical signature scheme, e.g., ECDSA, together with the ring signature scheme. This functionality can be considered an extension of typical multi-signatures that require a certain number of ECDSA signatures to run a contract wallet. For example, the contract wallet checks that one of the issuers belongs to a group (in an anonymous manner) and that a specific user (in the usual manner) agrees to run the contract wallet, by verifying a ring signature and an ECDSA signature; a schematic validation predicate is sketched after this list. This is illustrated in Figure 2. In the usual multi-signature setting, anyone can recognize who agreed with running the contract wallet by observing the ECDSA verification keys used for signature verification. Note that if two signatures are sent to the contract wallet separately, one of them must be preserved in the contract wallet, which incurs an additional gas cost. To reduce this cost, one sender can collect all of the required signatures and send them to the contract wallet together.
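To make the combined rule concrete, the following minimal sketch mirrors the wallet-side validation predicate in Python (the actual implementation is a Solidity contract). It assumes the Python `ecdsa` package for the plain signature; the ring-signature verifier is an abstract placeholder, since the scheme of [10] is not reproduced here.

```python
from typing import Callable, Sequence
from ecdsa import SigningKey, VerifyingKey, SECP256k1, BadSignatureError

# Placeholder type for RVerify of an accountable ring signature scheme [10];
# a real wallet would run the actual verification algorithm on-chain.
RingVerifier = Callable[[bytes, Sequence[bytes], bytes], bool]

def wallet_accepts(tx: bytes, ring: Sequence[bytes], ring_sig: bytes,
                   rverify: RingVerifier,
                   specific_vk: VerifyingKey, ecdsa_sig: bytes) -> bool:
    """Accept tx iff (i) some ring member signed it anonymously and
    (ii) the designated specific user signed it with plain ECDSA."""
    if not rverify(tx, ring, ring_sig):
        return False
    try:
        return specific_vk.verify(ecdsa_sig, tx)
    except BadSignatureError:
        return False

# Toy usage with a dummy ring verifier that accepts everything.
sk = SigningKey.generate(curve=SECP256k1)
tx = b"transfer 1 ETH to 0xabc..."
ok = wallet_accepts(tx, [b"pk1", b"pk2"], b"ring-sig",
                    lambda *_: True, sk.get_verifying_key(), sk.sign(tx))
print(ok)  # True
```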
### _Applications_
Potential applications of the proposed system include, but are not limited to, the following.
**Medical Information Sharing**. As mentioned previously, when a request to view patient data is made by a transaction in an electronic medical record system using Ethereum smart contracts, it is highly desirable that no information about hospitals and medical offices is leaked from the transactions. However, it is also essential to be able to internally verify who issued the transaction in order to prevent unnecessary access to medical data. With the proposed system, it is possible to hide from the outside who is accessing patient data among the members of an organization. For example, when a hospital is specified as the organization, it is possible to keep secret which doctors' offices have access to medical data. If a clinical department, e.g., internal medicine, psychosomatic medicine, or plastic surgery, is specified as the organizational unit, which department is related to which disease may be leaked, but no other information is revealed because the identity of the doctor who accessed the patient data is kept secret. In addition, dividing the organization into departments independent of the actual medical offices may further prevent unnecessary leakage of information. Thus, by setting organizational units appropriately, we can control what information is leaked. Moreover, more flexible settings can be realized by using our extended multi-signatures. For example, we can consider a case where the hospital director or a department head must agree to issue a transaction, in addition to one person in a medical office.
**Asset Management**. The range of blockchain-based asset management has expanded due to the advent of smart contracts. Thus, the number of investment companies that manage their clients' funds has been increasing. In some cases, the addresses held by lenders are known publicly. Then, the status of the fund management can be checked.5 It is assumed that there are cases where multiple users share an EOA as the address, and cases where multiple users employ contract wallets where transactions can be issued with the consent of a certain number of users. In the former case, it is impossible to know who issued the transaction internally. In the latter case, due to the pseudonymity level privacy protection, anyone can check the investment performance of each address. Except for cases where the investment status is disclosed intentionally,
Fig. 1: Proposed System
such information can be leaked unexpectedly. By employing the proposed system, the investment status of each address is not disclosed externally, and the investment status of each individual can be known internally.
## II Preliminaries
### _Accountable Ring Signatures_
As a generalization of ring signatures [29] and group signatures [13], Xu and Yung proposed accountable ring signatures [34]. Briefly, as in ring signatures, each user generates their own public key \(\mathsf{pk}\) and secret key \(\mathsf{sk}\). This decentralized structure matches blockchain systems. An opener has a public key \(\mathsf{opk}\) and a secret key \(\mathsf{osk}\). When a user who has \((\mathsf{pk},\mathsf{sk})\) generates a ring signature \(\Sigma\) on a message \(M\), the user selects a set of public keys, which we refer to as ring \(R\), and we assume that \(\mathsf{pk}\in R\), and selects the opener by indicating \(\mathsf{opk}\). We say that \((\Sigma,M)\) is valid if the signer is a member of \(R\), i.e., there exists \(\mathsf{pk}\in R\) for which the corresponding \(\mathsf{sk}\) has been used to generate \(\Sigma\). As in group signatures, the designated opener can trace the signer using \(\mathsf{osk}\). Moreover, the opening algorithm produces a proof \(\pi\) proving that \(\Sigma\) is generated by \(\mathsf{sk}\) corresponding to \(\mathsf{pk}\in R\). We introduce the syntax defined by Bootle et al. [10] as follows.
**Definition 1** (Syntax of Accountable Ring Signatures [10]):
* \(\mathsf{Setup}(1^{\lambda})\)_: The setup algorithm takes the security parameter_ \(\lambda\in\mathbb{N}\) _as input and outputs the common parameter_ \(\mathsf{pp}\)_._
* \(\mathsf{OKGen}(\mathsf{pp})\)_: The opener key generation algorithm takes_ \(\mathsf{pp}\) _as input and outputs the opener public and secret keys_ \(\mathsf{opk}\) _and_ \(\mathsf{osk}\)_, respectively._
* \(\mathsf{UKGen}(\mathsf{pp})\)_: The user key generation algorithm takes_ \(\mathsf{pp}\) _as input and outputs the user public verification key_ \(\mathsf{pk}\) _and user secret signing key_ \(\mathsf{sk}\)_._
* \(\mathsf{RSign}(\mathsf{opk},M,R,\mathsf{sk})\)_: The signing algorithm takes_ \(\mathsf{opk}\)_, a message_ \(M\) _to be signed, a ring_ \(R\)_, and_ \(\mathsf{sk}\) _as inputs and outputs a ring signature_ \(\Sigma\)_. Here,_ \(R\) _is a set of user public keys and the_ \(\mathsf{pk}\) _corresponding to_ \(\mathsf{sk}\) _is assumed to be_ \(\mathsf{pk}\in R\)_._
* \(\mathsf{RVerify}(\mathsf{opk},M,R,\Sigma)\)_: The verification algorithm takes_ \(\mathsf{opk}\)_,_ \(M\)_,_ \(R\)_, and_ \(\Sigma\) _as inputs and outputs 1 (accept) or 0 (reject)._
* \(\mathsf{Open}(M,R,\Sigma,\mathsf{osk})\)_: The open algorithm takes as_ \(M\)_,_ \(R\)_,_ \(\Sigma\)_, and_ \(\mathsf{osk}\) _as inputs and outputs_ \(\mathsf{pk}\in R\) _of the signer and its proof_ \(\pi\) _or_ \(\bot\) _otherwise._
* \(\mathsf{Judge}(\mathsf{opk},M,R,\Sigma,\mathsf{pk},\pi)\)_: The judge algorithm takes_ \(\mathsf{opk}\)_,_ \(M\)_,_ \(R\)_,_ \(\Sigma\)_,_ \(\mathsf{pk}\)_, and_ \(\pi\) _as inputs and outputs 0 if_ \(\mathsf{RVerify}(\mathsf{opk},M,R,\Sigma)=0\)_; otherwise, it outputs 1 if_ \(\pi\) _proves that_ \(\Sigma\) _was generated by the_ \(\mathsf{sk}\) _corresponding to_ \(\mathsf{pk}\)_, and 0 otherwise._
We require correctness to hold: an honestly generated signature is always valid (the \(\mathsf{RVerify}\) algorithm outputs 1), and a proof generated by the \(\mathsf{Open}\) algorithm for that signature and the corresponding verification key is always accepted by the \(\mathsf{Judge}\) algorithm (a toy illustration of this flow is given after the list below). Bootle et al. defined full unforgeability, anonymity, traceability, and tracing soundness, which are briefly explained as follows. Refer to the literature [10] for details on these security definitions.
* Full Unforgeability: It ensures that no adversary \(\mathcal{A}\) (who may control the opener) can falsely accuse an honest user of creating a ring signature, nor \(\mathcal{A}\) can forge a ring signature on behalf of an honest ring.
* Anonymity: It ensures that no adversary \(\mathcal{A}\) (who does not have \(\mathsf{osk}\)) can identify the signer, i.e., a signature does not reveal the identity of the ring member who generated it.
* Traceability: It ensures that no adversary \(\mathcal{A}\) can produce a signature that is valid but untraceable.
* Tracing Soundness: It ensures that no adversary \(\mathcal{A}\) can produce proofs for a signature that are accepted by the Judge algorithm for different verification keys.
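To fix ideas, the following non-cryptographic mock traces the algorithm flow of Definition 1 (sign, verify, open, judge). It provides no anonymity or unforgeability whatsoever (the signer's key is stored in the clear inside \(\Sigma\)) and merely illustrates the interface and the correctness requirement.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Sig:
    signer_pk: bytes          # a real scheme hides this inside the signature
    msg: bytes

def ukgen():
    sk = secrets.token_bytes(16)
    return b"pk:" + sk, sk    # mock: pk trivially derived from sk

def okgen():
    osk = secrets.token_bytes(16)
    return b"opk:" + osk, osk

def rsign(opk, msg, ring, sk):
    pk = b"pk:" + sk
    assert pk in ring         # the signer's key must belong to the ring
    return Sig(pk, msg)

def rverify(opk, msg, ring, sig):
    return sig.msg == msg and sig.signer_pk in ring

def open_sig(msg, ring, sig, osk):
    return (sig.signer_pk, b"proof") if rverify(None, msg, ring, sig) else None

def judge(opk, msg, ring, sig, pk, proof):
    return rverify(opk, msg, ring, sig) and sig.signer_pk == pk

# Correctness flow: sign -> verify -> open -> judge
opk, osk = okgen()
pk, sk = ukgen()
ring = [pk, ukgen()[0]]
sigma = rsign(opk, b"tx-data", ring, sk)
assert rverify(opk, b"tx-data", ring, sigma)
traced_pk, proof = open_sig(b"tx-data", ring, sigma, osk)
assert judge(opk, b"tx-data", ring, sigma, traced_pk, proof)
print("correctness flow OK")
```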
Some studies have employed accountable ring signatures in blockchain systems [19, 31] due to their decentralized structure. In addition, Qiao et al. [28] removed the opening functionality from group signatures to reduce their centralized structure; however, the resulting scheme still has a centralized structure because a group manager issues signing keys to the group members, unlike (accountable) ring signatures. Linkable ring signatures [8, 23, 24] have been employed to provide pseudonymity-level privacy protection. As mentioned previously, this is insufficient; thus, anonymity-level privacy protection is desirable. Delegatable anonymous credentials (DAC) [12] have been employed for eKYC systems [9, 32], where KYC stands for Know-Your-Customer. However, DACs are not applicable in the proposed system because they do not provide tracing functionality. Connolly et al. [14] proposed a revocable and auditable anonymous credential scheme called Protego and considered its application to Hyperledger Fabric. Although it might be employed in the proposed system, Connolly et al. only considered permissioned blockchains. Thus, accountable ring signatures are employed in the proposed system.

Fig. 2: Extended Multi-Signatures
### _Layer 2_
Ethereum is widely recognized as providing high security as a smart contract-enabled blockchain. However, in terms of ensuring security and providing a distributed structure, there is room for improvement regarding scalability. Concretely, the number of transactions that can be processed within a certain time period is less than that of other blockchain systems. Thus, transaction fees (gas costs) will increase when many EOAs want to issue transactions. To solve this problem, we can issue a transaction off-chain, and the proof that the transaction is generated correctly, is only stored on the Ethereum blockchain. In this case, the blockchain and off-chain are referred to as Layer 1 (L1) and Layer 2 (L2), respectively. For example, when a Merkle tree is used to generate a hash chain, the proof that the hash chain satisfies the Merkle tree is stored in L1. To reduce the proof size stored in L1, the zk-STARK (zero-knowledge Scalable Transparent ARgument of Knowledge) [6] or zk-SNARK (zero-knowledge Succinct Non-interactive ARguments of Knowledge) [21, 27] are employed.
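As a toy illustration of the Merkle-tree example above (zk-SNARK verification itself is far more involved and is not sketched here), an L1 contract can check that an off-chain item belongs to a committed batch with only a few hash operations. The sorted-pair hashing convention and the names below are assumptions of this sketch.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Toy sketch: check that `leaf` belongs to the Merkle tree whose
// root is the commitment stored on L1.
contract MerkleCheck {
    bytes32 public immutable root;

    constructor(bytes32 root_) {
        root = root_;
    }

    // `proof` lists the sibling hashes from the leaf up to the root.
    function verify(bytes32 leaf, bytes32[] calldata proof)
        external
        view
        returns (bool)
    {
        bytes32 h = leaf;
        for (uint256 i = 0; i < proof.length; i++) {
            // Hash each pair in sorted order so the verifier does not
            // need to know the left/right position of each node.
            h = h < proof[i]
                ? keccak256(abi.encodePacked(h, proof[i]))
                : keccak256(abi.encodePacked(proof[i], h));
        }
        return h == root;
    }
}
```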
zkSync [3] is an Ethereum L2 network technology that supports zk-SNARKs. The goal of L2 technologies is to provide the same execution environment as Ethereum (i.e., the EVM: Ethereum Virtual Machine), and zkSync supports nearly the same execution environment, i.e., it is EVM-compatible and can run smart contracts programmed in Solidity. Thus, we can utilize Ethereum ecosystem tools, and smart contracts on zkSync are written in Solidity. This means that the proposed system can be used directly once Ethereum itself supports account abstraction.
### _Account Abstraction_
In the following, we describe account abstraction. A user sends the code of a contract wallet to nodes that support account abstraction and requests its deployment. Note that the user does not need to be the issuer of a transaction. After the contract wallet is deployed, an issuer sends transaction data to the blockchain via nodes. Finally, the contract wallet runs the transaction according to the rules described in the code. We illustrate the flow of transaction issuing without and with account abstraction in Figure 3 and Figure 4, respectively.
Here, the issuer (User (EOA)) communicates with nodes; hence, the issuer's IP address is known to the nodes, which trivially breaks anonymity. Thus, we exclude the nodes from our anonymity requirement.6
Footnote 6: This restriction can be removed if a user (who issued a transaction) prepares a node. However, currently only a test net supports account abstraction, and users who can prepare nodes are restricted.
## III Proposed Anonymous yet Accountable Contract Wallet System
In this section, we describe the proposed anonymous yet accountable contract wallet system. First, we classify users who issue transactions.
* Group User: a user \(i\) who manages a key pair \((\mathsf{pk}_{i},\mathsf{sk}_{i})\) of the underlying accountable ring signature scheme and the common opening key \(\mathsf{osk}\). We write \(i\in R\) when user \(i\) is a group member, where \(R\) is the ring used for signature verification (i.e., \(\mathsf{pk}_{i}\in R\)).
* Individual User: a user \(j\) who manages their own ECDSA key pair \((\mathsf{vk}_{j},\mathsf{sigk}_{j})\), where \(\mathsf{vk}_{j}\) is the verification key and \(\mathsf{sigk}_{j}\) the signing key.
For simplicity, we assume that an issuer is either a group user or an individual user. We then define a policy \(\mathsf{Policy}\) that specifies when the contract wallet runs a transaction. For example, let \(\mathsf{Policy}:=\{R,j\}\). Then, a transaction is run when one group user who belongs to \(R\) and the individual user \(j\) both agree to the execution of the transaction. In this case, the ring \(R=\{\mathsf{pk}_{i}\}\) and the ECDSA verification key \(\mathsf{vk}_{j}\) are registered in the contract wallet.
### _Security Requirements_
We require that the following security notions hold. In particular, provability and proving soundness make the protocol accountable, because they allow us to clarify who was responsible for issuing a transaction.
Fig. 3: Usual Transaction Issuing

Fig. 4: Account Abstraction
* Anonymity: No entity that can observe transactions (including contract wallets, but excluding group users and the nodes that communicate with the transaction issuer) can identify the actual issuer among the holders of the verification keys contained in the ring \(R\).
* Unforgeability: No entity can issue a transaction that is valid under the Policy unless all signatures (the ring signatures and/or the ECDSA signatures) specified by the Policy are sent to a contract wallet.
* Provability: When a group user \(i\) sends a ring signature to issue a transaction, the user can generate a proof of this fact.
* Proving Soundness: When a group user \(i\) does not send a ring signature to issue a transaction, the user cannot generate such a proof.
### _High-level Description_
In the following, we explain the case of \(\mathsf{Policy}:=\{R,j\}\). First, we assume that all group users belonging to \(R\) share \(\mathsf{osk}\). Let a group user \(i\in R\) issue a transaction. Then, user \(i\) generates a ring signature \(\Sigma\), and the individual user \(j\) generates an ECDSA signature \(\sigma\). After these signatures are sent to a contract wallet, the contract wallet checks whether \(\Sigma\) is valid under \(R\) and whether \(\sigma\) is valid under \(\mathsf{vk}_{j}\). If both signatures are valid, then the contract wallet runs the transaction. We note that only the signature verification is executed on-chain. Next, user \(i\) runs the \(\mathsf{Open}\) algorithm using \(\mathsf{osk}\), generates \(\pi\), a proof of transaction issuing, and sends \(\pi\) to the other group users (via an off-chain channel). For example, any information-sharing tool used in the organization (e.g., an internal bulletin board or e-mail) can be used for sending \(\pi\). The other group users can recognize that user \(i\) issued the transaction by checking \(\pi\) using the \(\mathsf{Judge}\) algorithm.
We can easily consider cases where the consent of two or more individual users is required, e.g., \(\mathsf{Policy}:=\{R,j,k\}\). Similarly, we can consider two or more rings, e.g., \(R_{1}\) and \(R_{2}\). Here, we assume that \(R_{1}\cap R_{2}=\emptyset\); we do not consider the case \(R_{1}\cap R_{2}\neq\emptyset\) because, due to anonymity, the contract wallet cannot distinguish whether a single user belonging to \(R_{1}\cap R_{2}\) sent both ring signatures or not. Similarly, we do not consider cases where the consent of two or more group users belonging to the same ring is required because, due to anonymity, the contract wallet cannot distinguish whether a single user belonging to the ring sent two or more ring signatures or not. We remark that the ring members can internally check whether one user belonging to the ring sent two or more ring signatures. Thus, in the case that the proof \(\pi\) is opened after the transaction is issued, these restrictions could be removed.
### _Proposed System_
Here, we describe the proposed anonymous yet accountable contract wallet system. The proposed system consists of two procedures, \(\mathsf{DeployContractWallet}\) and \(\mathsf{SendTransaction}\). The \(\mathsf{DeployContractWallet}\) protocol is used to deploy a contract wallet, and the \(\mathsf{SendTransaction}\) protocol issues a transaction in response to a transaction issuing request.
* \(\mathsf{DeployContractWallet}(\mathsf{Code},\mathsf{BlockChain})\): Let \(\mathsf{Code}\) be the code of a contract wallet. In the protocol, \(\mathsf{Code}\) is sent to nodes that support account abstraction; here, we describe the set of such nodes by \(\mathsf{BlockChain}\). \(\mathsf{Code}\) contains a policy \(\mathsf{Policy}\), a set of verification keys (a ring \(R\) and ECDSA verification keys), and the rule that specifies the procedure to run after the verification of the signatures has passed. Finally, a contract wallet \(\mathsf{CW}\) is deployed.
* \(\mathsf{SendTransaction}(\mathsf{TransactionData},\mathsf{BlockChain},\mathsf{CW})\): Transaction data \(\mathsf{TransactionData}\), containing accountable ring signatures and ECDSA signatures, is sent to \(\mathsf{CW}\) via the nodes described by \(\mathsf{BlockChain}\). Then, \(\mathsf{CW}\) checks the validity of the signatures according to \(\mathsf{Policy}\) and issues a transaction.
In our system, the \(\mathsf{RVerify}\) algorithm is run on-chain by the contract wallet, as in the case of ECDSA signatures, because anyone could issue a transaction if no verification were involved. One might think that the \(\mathsf{RVerify}\) algorithm could be run off-chain, e.g., that the operator of the underlying L2 system could run the \(\mathsf{RVerify}\) algorithm and send the validity result to the wallet. However, this would require placing additional trust in the operator.
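To illustrate the two protocols, here is a minimal Solidity sketch of a contract wallet enforcing \(\mathsf{Policy}:=\{R,j\}\), assuming the `Point` struct and `IAccountableRingSignature` interface sketched after Definition 1 are in scope. All identifiers are illustrative rather than our deployed code, and a production wallet would additionally bind `txHash` to the call data and a nonce to prevent replay.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of a contract wallet for Policy := {R, j}: a transaction is
// run iff one ring signature valid under R and one ECDSA signature
// valid under vk_j are both supplied. Only these checks are on-chain.
contract PolicyWallet {
    IAccountableRingSignature public immutable ringVerifier;
    Point public opk;           // opener public key
    Point[] public ring;        // the ring R = {pk_i}
    address public individual;  // address derived from vk_j

    constructor(
        IAccountableRingSignature verifier,
        Point memory opk_,
        Point[] memory ring_,
        address individual_
    ) {
        ringVerifier = verifier;
        opk = opk_;
        for (uint256 i = 0; i < ring_.length; i++) {
            ring.push(ring_[i]);
        }
        individual = individual_;
    }

    function execute(
        address target,
        bytes calldata data,
        bytes32 txHash,
        bytes calldata ringSig,        // Sigma, checked under R
        uint8 v, bytes32 r, bytes32 s  // ECDSA signature of user j
    ) external {
        require(
            ringVerifier.rVerify(opk, txHash, ring, ringSig),
            "ring signature invalid under R"
        );
        require(
            ecrecover(txHash, v, r, s) == individual,
            "ECDSA signature invalid under vk_j"
        );
        (bool ok, ) = target.call(data);
        require(ok, "transaction failed");
    }
}
```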
### _Security Discussion_
* Anonymity: Due to account abstraction and the anonymity of the underlying accountable ring signature scheme, the fact that a group user \(i\) belonging to \(R\) generated the signature to issue a transaction is not leaked.
* Unforgeability: Due to the unforgeability of the underlying accountable ring signature scheme and of ECDSA, no entity that does not belong to \(R\) or does not have a signing key corresponding to a \(\mathsf{vk}\) specified by \(\mathsf{Policy}\) can issue a transaction. Due to the tamper resistance of the blockchain, the code of a contract wallet is not modified after its deployment. Thus, no entity can issue a transaction that is valid under the \(\mathsf{Policy}\) unless all signatures (ring signatures and/or ECDSA signatures) specified by \(\mathsf{Policy}\) are sent to a contract wallet.
* Provability: Let \(\Sigma\) be a ring signature generated by \(\mathsf{sk}_{i}\) and verified by a contract wallet. Due to the correctness of the underlying accountable ring signature scheme, \(\mathsf{Judge}(\mathsf{opk},M,R,\Sigma,\mathsf{pk}_{i},\pi)=1\) holds.
* Proving Soundness: Let a ring signature be generated by \(\mathsf{sk}_{i}\). Due to the tracing soundness, no user (including group users) can produce \(\pi\) that is accepted by the \(\mathsf{Judge}\) algorithm with \(\mathsf{pk}\neq\mathsf{pk}_{i}\).
## IV Implementation
In terms of cryptographic operations, the dominant cost of the proposed system is the on-chain verification of accountable ring signatures. Thus, we mainly focus on the accountable ring signature scheme in our implementation.
We implemented the Bootle et al. accountable ring signature scheme using Node.js. Because the scheme is secure under the Decisional Diffie-Hellman (DDH) assumption, we employed secp256k1, which is known to be a DDH-hard curve, as the underlying elliptic curve. We also implemented the \(\mathsf{RVerify}\) algorithm in Solidity because the verification procedure is run on-chain (by the contract wallet). For ECDSA signature verification, we employed the OpenZeppelin library7.
Footnote 7: [https://docs.openzeppelin.com/](https://docs.openzeppelin.com/)
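For instance, with OpenZeppelin's ECDSA library, the on-chain ECDSA check reduces to a single recover-and-compare. This is a sketch assuming OpenZeppelin v4's library layout; `EcdsaCheck` and `expectedSigner` are illustrative names, and the digest is assumed to be agreed upon by the parties beforehand.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";

library EcdsaCheck {
    // Returns true iff `signature` on `digest` verifies under the
    // verification key whose Ethereum address is `expectedSigner`.
    function check(
        bytes32 digest,
        bytes memory signature,
        address expectedSigner
    ) internal pure returns (bool) {
        return ECDSA.recover(digest, signature) == expectedSigner;
    }
}
```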
### _Contract Wallet Deployment_
First, we report the ergs (the fee unit used in zkSync) required for contract wallet deployment, averaged over 100 deployments, for \(|R|=4\) and \(|R|=10\) (i.e., four/ten verification keys contained in the ring). We also calculate the corresponding ETH and USD values (using the exchange rate on December 1, 2022). We can see that the costs are almost independent of the number of verification keys \(|R|\) because each verification key is represented by only two unsigned integers.
### _Implementation by Node.js_
Here, we describe our Node.js implementation environment in Table II.
We show our implementation results for the accountable ring signature scheme (running times averaged over 100 executions) in Table III, for the cases \(|R|=4\) and \(|R|=10\).
Although the running times depend on the ring size, they are reasonable because, in the proposed system, all algorithms except the \(\mathsf{RVerify}\) algorithm are run off-chain. The running time of the \(\mathsf{RVerify}\) algorithm is also relevant here because the \(\mathsf{Open}\) and \(\mathsf{Judge}\) algorithms internally run the \(\mathsf{RVerify}\) algorithm, since the ring signatures to be opened are required to be valid.
### _Implementation by Solidity_
Here, we describe our Solidity implementation environment in Table IV.
We show our implementation results for the accountable ring signature scheme (ergs averaged over 100 executions, for \(|R|=4\) and \(|R|=10\)) in Table V. We also calculate the corresponding ETH and USD values (using the exchange rate on December 1, 2022). Currently, the transaction fee for running zkSync is not strictly established; thus, these are reference values. Nevertheless, they are quite expensive as fees for issuing a transaction. We note that the ECDSA verification algorithm runs in 574,254 ergs. The main reason for these high costs is the inefficiency of elliptic curve operations in Solidity. Moreover, it seems that transactions would need to be divided in a real environment due to the ergs costs, because the mainnet limits the size of what can be executed in a single transaction.
## V Conclusion
In this paper, we proposed an anonymous yet accountable contract wallet system based on account abstraction and accountable ring signatures. The proposed system is implemented in Solidity for zkSync. Moreover, we discussed the potential of the proposed system, e.g., for medical information sharing and asset management. Since the current Solidity implementation results show that the required costs are expensive, our result might be regarded as somewhat conceptual. However, to the best of our knowledge, no previous implementation result confirms the cost of running an accountable ring signature scheme in Solidity, and we believe that our result can be seen as an important stepping stone toward providing anonymity and accountability simultaneously in blockchain systems.
Investigating other applications of the proposed system is left to future work. The underlying accountable ring signature scheme does not provide post-quantum security due to its discrete logarithm-based construction. Thus, given the progress of quantum computing, it is difficult to accept the current construction as a platform for managing large amounts of assets. Because a post-quantum accountable ring signature scheme has been proposed in [7], it would be interesting to employ that scheme; precisely how to implement it using Solidity is left to future work.
**Acknowledgment**: The authors would like to thank Dr. Miyako Ohkubo (NICT) for her invaluable comments and suggestions. This work was supported by JSPS KAKENHI Grant Numbers JP21K11897 and JP22H03588.
|
2309.05797 | Triviality of the scaling limits of critical Ising and $\varphi^4$
models with effective dimension at least four | We prove that any scaling limit of a critical reflection positive Ising or
$\varphi^4$ model of effective dimension $d_{\text{eff}}$ at least four is
Gaussian. This extends the recent breakthrough work of Aizenman and
Duminil-Copin -- which demonstrates the corresponding result in the setup of
nearest-neighbour interactions in dimension four -- to the case of long-range
reflection positive interactions satisfying $d_{\text{eff}}=4$. The proof
relies on the random current representation which provides a geometric
interpretation of the deviation of the models' correlation functions from
Wick's law. When $d=4$, long-range interactions are handled with the derivation
of a criterion that relates the speed of decay of the interaction to two
different mechanisms that entail Gaussianity: interactions with a sufficiently
slow decay induce a faster decay at the level of the model's two-point
function, while sufficiently fast decaying interactions force a simpler
geometry on the currents which allows to extend nearest-neighbour arguments.
When $1\leq d\leq 3$ and $d_{\text{eff}}=4$, the phenomenology is different as
long-range effects play a prominent role. | Romain Panis | 2023-09-11T20:04:28Z | http://arxiv.org/abs/2309.05797v1 | Triviality of the scaling limits of critical Ising and \(\varphi^{4}\) models with effective dimension at least four
###### Abstract.
We prove that any scaling limit of a critical reflection positive Ising or \(\varphi^{4}\) model of effective dimension \(d_{\text{eff}}\) at least four is Gaussian. This extends the recent breakthrough work of Aizenman and Duminil-Copin [1]-- which demonstrates the corresponding result in the setup of nearest-neighbour interactions in dimension four--to the case of long-range reflection positive interactions satisfying \(d_{\text{eff}}=4\). The proof relies on the random current representation which provides a geometric interpretation of the deviation of the models' correlation functions from Wick's law. When \(d=4\), long-range interactions are handled with the derivation of a criterion that relates the speed of decay of the interaction to two different mechanisms that entail Gaussianity: interactions with a sufficiently slow decay induce a faster decay at the level of the model's two-point function, while sufficiently fast decaying interactions force a simpler geometry on the currents which allows one to extend nearest-neighbour arguments. When \(1\leq d\leq 3\) and \(d_{\text{eff}}=4\), the phenomenology is different as long-range effects play a prominent role.
2020 Mathematics Subject Classification: 60G60, 60K35, 82B20, 82B27

Research supported by the Swiss National Science Foundation and the NCCR SwissMAP
###### Contents
* 1 Introduction
* 2 The Griffiths-Simon class of measures
* 3 Reflection positivity
* 4 Random current representation
* 5 Reflection positive Ising models satisfying \(d_{\text{eff}}>4\)
* 6 Reflection positive Ising models in dimension \(d=4\)
* 7 Reflection positive Ising models in dimension \(1\leq d\leq 3\) satisfying \(d_{\text{eff}}=4\)
* 8 Extension of the results to models in the Griffiths-Simon class
* A Spectral representation of reflection positive Ising models
* B The backbone representation of the Ising model
* C Properties of currents for models of the Ising-type in the GS class
* D Triviality and finiteness of the Bubble diagram
## 1. Introduction
### Motivation
We are interested in ferromagnetic real-valued spin models on \(\mathbb{Z}^{d}\) that arise in statistical mechanics. Mathematically, these models can be seen as probability measures on spin configurations \(\tau:\mathbb{Z}^{d}\to\mathbb{R}\) formally given by
\[\langle F(\tau)\rangle_{\beta}=\frac{1}{Z}\int F(\tau)\exp\left(\beta\sum_{x,y \in\mathbb{Z}^{d}}J_{x,y}\tau_{x}\tau_{y}\right)\prod_{x\in\mathbb{Z}^{d}} \mathrm{d}\rho(\tau_{x}), \tag{1.1}\]
where \(\beta>0\) is the inverse temperature, \(J_{x,y}\geq 0\) are (possibly long-range) interactions, \(Z\) is a normalisation constant, and \(\mathrm{d}\rho\) is a single-site probability measure. Of particular interest to us are the Ising model which corresponds to choosing \(\mathrm{d}\rho\) to be the uniform measure on \(\{-1,+1\}\), and the \(\varphi^{4}\) model which corresponds to confining the spins in a quartic potential given by
\[\mathrm{d}\rho(\varphi)=\frac{1}{z_{g,a}}e^{-g\varphi^{4}-a\varphi^{2}}\mathrm{ d}\varphi, \tag{1.2}\]
with \(g>0\), \(a\in\mathbb{R}\) and \(z_{g,a}\) a normalisation constant, and where \(\mathrm{d}\varphi\) is the Lebesgue measure on \(\mathbb{R}\).
The study of these models plays a key role in two distinct, yet interacting, research areas: _constructive Euclidean field theory_ and _statistical mechanics_.
Constructive Euclidean field theory aims at constructing random distributions on \(\mathbb{R}^{d}\), with a particular focus on interacting, or non-_trivial_, field theories. This contrasts with the study of Gaussian fields, which have a trivial correlation function structure in the sense that it is entirely determined by their two-point function via Wick's law. A natural attempt to build non-Gaussian field theories is to try to define a measure on the set of functions \(\mathbb{R}^{d}\to\mathbb{R}\) whose averages are given by
\[\langle F(\Phi)\rangle=\frac{1}{Z}\int F(\Phi)\exp\left(-H(\Phi)\right)\prod_{ x\in\mathbb{R}^{d}}\mathrm{d}\Phi_{x},\]
with
\[H(\Phi):=\int_{\mathbb{R}^{d}}\left[A|\nabla\Phi(x)|^{2}+B|\Phi(x)|^{2}+P( \Phi(x))\right]\mathrm{d}x,\]
where \(A,B>0\) and \(P\) is an even polynomial of degree \(4\) with a strictly positive leading coefficient. This choice corresponds to what would be the definition of the \(\varphi^{4}\) field theory on \(\mathbb{R}^{d}\). Due to the lack of a natural Lebesgue measure on infinite dimensional spaces, the above quantity is ill-defined. However, it is still possible to make sense of it using a pair of _ultraviolet_ (short distance) and _infrared_ (long distance) cutoffs. Highlights of this approach include the rigorous construction of the \(\varphi^{4}\) measure, with infrared cutoff, in dimension two by Nelson [26], and in dimension three by Glimm and Jaffe [10]. These works were later extended to the infinite volume limit [11, 12]. A few years after these first results, Aizenman [1] and Frohlich [14] showed that \(\varphi^{4}\) is not a good candidate to construct interacting field theories when \(d\geq 5\). In their works, they proved that any field obtained as a scaling limit of critical Ising or \(\varphi^{4}\) models in dimension \(d\geq 5\) is Gaussian. These papers, and other subsequent works [15, 1, 16, 17, 18, 19], provided strong heuristics that the same result should hold in dimension \(d=4\). It was not until very recently that these heuristics were confirmed by the breakthrough work of Aizenman and Duminil-Copin [1].
Constructive Euclidean field theory is also closely related to _constructive quantum field theory_ (CQFT). Indeed, the Osterwalder-Schrader Theorem [20, 21] provides a way to build quantum field theories in the sense proposed by Wightman [22] from Euclidean field theories. We refer to [1, 1, 1] for a more complete description of the CQFT point of view.
From the perspective of statistical mechanics, the Ising model and the \(\varphi^{4}\) model are among the simplest examples which exhibit a phase transition1 at a critical parameter \(\beta_{c}\in(0,\infty)\). As proved in [1, 1], this phase transition is _continuous_ for reflection positive interactions2, and one of the main challenges of the field is to understand the nature of their scaling limits at criticality.
The connection between the Ising model and the \(\varphi^{4}\) model is predicted to be very rich: they are believed to belong to the same universality class. Renormalisation group heuristics (see [10, 11] or the recent book [1]) predict that at their respective critical points many of their properties (e.g. critical exponents) coincide exactly. Hints of these deep links where established by Griffiths and Simon in [12], where they show that the \(\varphi^{4}\) model emerges as a certain near-critical scaling limit of a collection of mean-field Ising models. This permits to transfer rigorously many useful properties of the Ising model, such as correlation inequalities, to the \(\varphi^{4}\) model. In the other direction, the Ising model can be obtained as a limit of \(\varphi^{4}\) using the following limit
\[\frac{\delta_{-1}+\delta_{1}}{2}=\lim_{g\to\infty}\frac{1}{z_{g,-2g}}e^{-g( \varphi^{2}-1)^{2}}\mathrm{d}\varphi.\]
The high dimension triviality results mentioned above are related to the simplicity of the critical exponents of these models, suggesting that for \(d\geq 4\), they must take their _mean-field_ values. Rigorous results in that direction have been obtained in [1, 1, 2, 13, 14, 15]. What appeared to be a negative result from the perspective of constructive Euclidean field theory is positive in the framework of statistical mechanics as it provides information at criticality for a wide class of non-integrable models.
The main step in the proofs of Aizenman and Frohlich in dimension \(d\geq 5\) is the derivation of the so-called _tree diagram bound_ through geometric representations, and the use of _reflection positivity_ [10, 11] to argue that the four-point Ursell function's scaling limit always vanishes at criticality. Although initially presented in the case of nearest-neighbour interactions \(J_{x,y}=\mathds{1}_{|x-y|_{1}=1}\), these methods are robust and extend to more general (in particular long-range) reflection positive interactions. However, this is no longer true in dimension four, where only the case of nearest-neighbour interactions is treated [1].
The interest in the study of long-range interactions comes from the fact that the rate of decay of the interactions may change the effective dimension of the model by increasing it, meaning that one can recover high-dimensional features in some well-chosen one, two or three dimensional systems. An observation of this phenomenon was made for algebraically decaying long-range interactions of the form \(1/r^{d+\alpha}\) by Fisher, Ma and Nickel [12] using renormalisation group heuristics. They noted that the parameter \(\alpha\) had the effect of changing the value of the upper critical dimension3 into \(d_{c}(\alpha)=\min(2\alpha,4)\), suggesting that the effective dimension of the model should be given by \(d_{\text{eff}}(\alpha)=d/(1\wedge(\alpha/2))\) (see Figure 1). This was later studied by Aizenman and Fernandez [1], and led to the observation that some Ising models in dimension \(1\leq d\leq 3\) present trivial scaling limits at criticality, which is not expected in the case of nearest-neighbour interactions4. Other rigorous results were obtained through lace expansion methods [13, 14, 15]. Conversely, if the interaction decays fast enough, the upper critical dimension of the model is unchanged (this corresponds to \(\alpha\geq 2\) in the example above). The prediction is that only two situations may occur: either the interaction decays very fast and we expect to fall into the universality class of the nearest-neighbour models, or the decay is _exactly_ fast enough for additional logarithmic corrections to appear. The latter scenario was shown to occur [14] in dimension \(d\geq 4\), for (sufficiently spread out) interactions decaying like \(1/r^{d+2}\).
Finally, let us briefly mention that long-range interactions of the above type have been used to conduct rigorously the so-called "\((d_{c}-\varepsilon)\)-expansions"-- motivated by Wilson and Fisher through renormalisation group heuristics [12]-- which give a precise understanding of the critical exponents of these models below the upper critical dimension, see [13, 14, 15, 16].
The goal of this paper is threefold. First, we prove that reflection positive Ising or \(\varphi^{4}\) models with effective dimension strictly above four are trivial. This revisits some of the results of [1, 2] together with the notion of effective dimension, and provides explicit examples of trivial models in dimensions one, two, and three. Second, we extend the results of [1] to near-critical and critical reflection positive Ising and \(\varphi^{4}\) models in dimension \(d=4\) beyond the nearest-neighbour case. In particular, this case contains algebraically decaying interactions (as above) with \(\alpha>0\). The result was already known [2] for \(\alpha\in(0,2)\) but is new in the case \(\alpha\geq 2\). Third, we prove triviality of the scaling limits of one, two, and three dimensional reflection positive Ising models with effective dimension four. This is the main novelty of the paper. Such examples of models can be obtained by choosing \(\alpha=d/2\) above for \(1\leq d\leq 3\). Our results apply to a wide class of single-site measures called the _Griffiths-Simon class_ of measures (see Section 2), which in particular contains the examples mentioned below (1.1), and which can be recovered as weak limits of Ising-type single-site measures.
As in [1, 1], we use the random current representation of the Ising model which enables, by means of the _switching lemma_, to express the correlation functions' deviation from Wick's law in terms of intersection probabilities of two independent random currents with distanced sources.
When \(d=4\), two situations may occur. First, the interaction's decay may be "slow", in which case we observe a decay of the model's two-point function which is slightly better than the one obtained for nearest-neighbour interactions (see Corollary 3.10). We can then conclude using the _tree diagram bound_ obtained in [11]. In particular, this first case contains reflection positive interactions of algebraic decay with \(\alpha\in(0,2]\). Second, the decay of the interaction may be "too fast", in which case we observe no improvement at the level of the decay of the two-point function. As explained above, this case corresponds to the situation where we expect the model to behave like a nearest-neighbour one; this corresponds to choosing \(\alpha>2\) above. We follow the strategy of [1] and improve the tree diagram bound. The proof goes by arguing that in dimension four, just like random walks, if two independent random currents intersect at least once, they must re-intersect a large number of times. By means of a multi-scale analysis, the authors of [1] showed that intersections occur with large probability in a density of (well-chosen) scales. This essentially required three tools: regularity properties for the model's two-point function, a proof of the fact that intersections happen with (uniform) positive probability on all scales, and a mixing statement which allows one to argue that intersections at different scales are roughly independent events. However, in the case of long-range interactions, these steps fail. Indeed, the extension of the proof to the general setup requires an adaptation of the reflection positivity arguments to the case of arbitrary interactions, which builds on a different viewpoint on the spectral analysis of these models (see Section 3 and Appendix A). This viewpoint was already introduced in [1, 1]. Then, long-range interactions may have the effect of making intersections less likely, as it becomes possible to "jump" scales. Finally, long-range interactions may create more dependency between pieces of the current at different scales. We solve these problems by arguing that the currents do not jump above a \((4-\varepsilon)\)-dimensional annulus with very high probability (see Section 6.2). As it turns out, this is enough to (essentially) recover the same geometric properties of currents as in the nearest-neighbour case.

Figure 1. Left: The graph of \(\alpha\mapsto d_{\mathrm{eff}}(\alpha)\) for the interaction \(J\) given by \(J_{x,y}=C|x-y|_{1}^{-d-\alpha}\) (for \(d\geq 2\)). The transition between the regime \(d_{\mathrm{eff}}(\alpha)>d\) and \(d_{\mathrm{eff}}=d\) occurs at \(\alpha=2\). At this point, we expect logarithmic corrections at the level of the decay of the critical two-point function. Right: A summary of the expected behaviour of the critical scaling limits for the same interaction \(J\). The red regions (including segments and points) correspond to interactions which are expected to yield trivial scaling limits. The results of this paper concern the study of the “marginal” cases that separate the two phases.
When \(1\leq d\leq 3\) and \(d_{\mathrm{eff}}=4\), the above improvements are not sufficient. The main reason is that in this precise regime, the decay of the interaction is too slow to exclude jumps above \((d-\varepsilon)\)-dimensional annuli (see Remark 6.10). This additional difficulty is treated by going one step further in the analysis of the currents (see Section 7.2). As a byproduct of our methods, we obtain a (quantitative) mixing statement that is valid for all models of effective dimension at least four (see Section 7.3).
To study the near-critical regime, it is important to introduce a typical length below which the model essentially behaves like a critical one. It is tempting to use the correlation length \(\xi(\beta)\) defined for \(\beta<\beta_{c}\), by
\[\xi(\beta):=-\left(\lim_{n\to\infty}\frac{\log\langle\tau_{0}\tau_{n\mathbf{e }_{1}}\rangle_{\beta}}{n}\right)^{-1}.\]
However, this quantity is not relevant in the case of long-range interactions since one may have \(\xi(\beta)=\infty\) for all \(\beta<\beta_{c}\) (see [21, 1, 1]). Another contribution of this paper is the introduction of the _sharp length_\(L(\beta)\) (defined in Section 3.6), whose definition is inspired by [1].
We first prove an improved tree diagram bound for the Ising model, and then extend it to the \(\varphi^{4}\) model (and more generally every model in the Griffiths-Simon class) using its viewpoint as a generalised Ising model.
Let us mention that we also expect a _direct_ analysis, meaning at the level of the \(\varphi^{4}\) model and without any mention of the Ising model, to be possible with the use of the _random tangled current representation_ of \(\varphi^{4}\) recently introduced in [10].
### Definitions and statement of the results
We start by stating the results for the case of the Ising model.
#### 1.2.1. Results for the Ising model
In what follows, \(\Lambda\) is a finite subset of \(\mathbb{Z}^{d}\). Let \(J=(J_{x,y})_{\{x,y\}\subset\mathbb{Z}^{d}}\) be an interaction (or a collection of coupling constants) and \(h\in\mathbb{R}\). For
\(\sigma=(\sigma_{x})_{x\in\Lambda}\in\{\pm 1\}^{\Lambda}\), introduce the _Hamiltonian_
\[H_{\Lambda,J,h}(\sigma):=-\sum_{\{x,y\}\subset\Lambda}J_{x,y}\sigma_{x}\sigma_{y} -h\sum_{x\in\Lambda}\sigma_{x},\]
and define the associated finite volume Gibbs equilibrium measure \(\langle\cdot\rangle_{\Lambda,J,h,\beta}\) at inverse temperature \(\beta\geq 0\) to be the probability measure under which, for each \(F:\{\pm 1\}^{\Lambda}\to\mathbb{R}\),
\[\langle F\rangle_{\Lambda,J,h,\beta}:=\frac{1}{Z(\Lambda,J,h,\beta)}\sum_{ \sigma\in\{\pm 1\}^{\Lambda}}F(\sigma)\exp\left(-\beta H_{\Lambda,J,h}(\sigma) \right),\]
where
\[Z(\Lambda,J,h,\beta):=\sum_{\sigma\in\{\pm 1\}^{\Lambda}}\exp\left(-\beta H _{\Lambda,J,h}(\sigma)\right),\]
is the _partition function_ of the model. We make the following assumptions on the interaction \(J\):
1. Ferromagnetic: For all \(x,y\in\mathbb{Z}^{d}\), \(J_{x,y}\geq 0\),
2. Locally finite: For any \(x\in\mathbb{Z}^{d}\), \[|J|:=\sup_{x\in\mathbb{Z}^{d}}\sum_{y\in\mathbb{Z}^{d}}J_{x,y}<\infty,\]
3. Translation invariant: For all \(x,y\in\mathbb{Z}^{d}\), \(J_{x,y}=J_{0,y-x}\),
4. Irreducible: For all \(x,y\in\mathbb{Z}^{d}\), there exist \(x_{1},\ldots,x_{k}\in\mathbb{Z}^{d}\) such that \[J_{x,x_{1}}J_{x_{1},x_{2}}\ldots J_{x_{k-1},x_{k}}J_{x_{k},y}>0,\]
5. Reflection positive: see Section 3.
We postpone the definition of reflection positivity to Section 3.1, but to fix the ideas the reader might keep in mind the following examples5 of interactions which satisfy (**A1**)-(**A5**):
Footnote 5: Here, \(|.|_{1}\) refers to the \(\ell^{1}\) norm on \(\mathbb{R}^{d}\).
1. (nearest-neighbour interactions) \(J_{x,y}=C\mathds{1}_{|x-y|_{1}=1}\) for \(C>0\),
2. (exponential decay / Yukawa potentials) \(J_{x,y}=C\exp(-\mu|x-y|_{1})\) for \(\mu,C>0\),
3. (algebraic decay) \(J_{x,y}=C|x-y|_{1}^{-d-\alpha}\) for \(\alpha,C>0\).
Using Griffiths' inequalities [10], one can obtain the associated infinite volume Gibbs measure by taking weak limits of \(\langle\cdot\rangle_{\Lambda,J,h,\beta}\) as \(\Lambda\nearrow\mathbb{Z}^{d}\). We denote the limit by \(\langle\cdot\rangle_{J,h,\beta}\). For convenience, in what follows, we omit the mention of the interaction in the notation of the Gibbs measures.
In dimensions \(d>1\), the model exhibits a phase transition for the vanishing of the _spontaneous magnetisation_. That is, if
\[m^{*}(\beta):=\lim_{h\to 0^{+}}\langle\sigma_{0}\rangle_{\beta,h},\]
then, \(\beta_{c}:=\inf\{\beta>0,\;m^{*}(\beta)>0\}\in(0,\infty)\). The above assumptions guarantee [12] that \(\beta_{c}>0\) (in fact \(\beta_{c}\geq|J|^{-1}\)), while Peierls' celebrated argument [10] yields the bound \(\beta_{c}<\infty\). In dimension \(d=1\), the phase transition occurs [11] under the additional assumption that \(J_{x,y}\asymp|x-y|^{-1-\alpha}\) with \(\alpha\in(0,1]\). We now assume that \(h=0\). Our results concern the nature of the scaling limits at6, or near, the critical parameter \(\beta_{c}\).
To determine the nature of the scaling limit, we look at the joint distribution of the _smeared observables_7, given for \(\beta>0\) and \(L\geq 1\) by

Footnote 7: Note that for \(f=\mathbb{1}_{[-1,1]^{d}}\), one has \(\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{\beta}=1\), and more generally for \(f\neq 0\), one has \(0<c_{f}\leq\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{\beta}\leq C_{f}<\infty\), i.e., this quantity is bounded away from \(0\) and \(\infty\) by constants that only depend on \(f\). This indicates that this is the scaling that is the most likely to yield interesting limits.
\[T_{f,L,\beta}(\sigma):=\frac{1}{\sqrt{\Sigma_{L}(\beta)}}\sum_{x\in\mathbb{Z} ^{d}}f\left(\frac{x}{L}\right)\sigma_{x},\]
where \(f\) ranges over the set \(\mathcal{C}_{0}(\mathbb{R}^{d})\) of continuous, real valued, and compactly supported functions, and where
\[\Sigma_{L}(\beta):=\big{\langle}\big{(}\sum_{x\in\Lambda_{L}}\sigma_{x}\big{)} ^{2}\big{\rangle}_{\beta}=\sum_{x,y\in\Lambda_{L}}\langle\sigma_{x}\sigma_{y} \rangle_{\beta},\text{ with }\Lambda_{L}:=[-L,L]^{d}\cap\mathbb{Z}^{d}.\]
**Definition 1.1**.: _A discrete system as above is said to converge in distribution to a scaling limit if the collection of random variables \((T_{f,L,\beta}(\sigma))_{f\in\mathcal{C}_{0}(\mathbb{R}^{d})}\) converges in distribution (in the sense of finite dimensional distributions) as \(L\) goes to infinity. Using Kolmogorov's extension theorem and the separability of \(\mathcal{C}_{0}(\mathbb{R}^{d})\), we can represent any scaling limit as a random field._
Our first result concerns the study of models of effective dimension \(d_{\text{eff}}>4\). We postpone the precise definition of effective dimension to Section 5 and illustrate this concept using the example of algebraically decaying reflection positive interactions mentioned above, i.e. \(J_{x,y}=C|x-y|_{1}^{-d-\alpha}\) for \(\alpha,C>0\). In that case, we will see that \(d_{\text{eff}}\geq\frac{d}{1\wedge(\alpha/2)}\) so that the hypothesis \(d_{\text{eff}}>4\) corresponds to \(d-2(\alpha\wedge 2)>0\).
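To illustrate the formula (a heuristic computation): for \(d=3\) and \(\alpha=1\), one has \(1\wedge(\alpha/2)=1/2\), so that

\[d_{\text{eff}}\geq\frac{d}{1\wedge(\alpha/2)}=\frac{2d}{\alpha}=6>4,\]

consistently with the condition \(d-2(\alpha\wedge 2)=1>0\) of the theorem below. The marginal choice \(\alpha=d/2\) (for \(1\leq d\leq 3\)) yields instead \(2d/\alpha=4\), which is the case \(d_{\text{eff}}=4\) treated in Theorem 1.9 and Section 7.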
**Theorem 1.2**.: _Let \(d\geq 1\). Let \(J\) be the interaction defined for \(x\neq y\in\mathbb{Z}^{d}\) by \(J_{x,y}=C_{0}|x-y|_{1}^{-d-\alpha}\) where \(C_{0},\alpha>0\). We also assume that \(d-2(\alpha\wedge 2)>0\). There exist \(C=C(C_{0},d),\gamma=\gamma(d)>0\) such that for all \(\beta\leq\beta_{c}\), \(L\geq 1\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\) and \(z\in\mathbb{R}\),_
\[\left|\langle\exp\left(zT_{f,L,\beta}(\sigma)\right)\rangle_{ \beta}-\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{ \beta}\right)\right|\\ \leq\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}(\sigma)^{2} \rangle_{\beta}\right)\frac{C(\beta^{-4}\vee\beta^{-2})\|f\|_{\infty}^{4}r_{f }^{7}z^{4}}{L^{d-2(\alpha\wedge 2)}},\]
_where \(\|f\|_{\infty}=\sup_{x\in\mathbb{R}^{d}}|f(x)|\) and \(r_{f}=\left(\max\{r\geq 0,\,\exists x\in\mathbb{R}^{d},\,|x|=r,\,f(x)\neq 0 \}\lor 1\right).\)_
_As a consequence, for \(\beta\leq\beta_{c}\), every sub-sequential scaling limit (in the sense of Definition 1.1) of the model is Gaussian._
We now move the focus to the case \(d=4\). As explained in the introduction, and discussed in Section 4, the Gaussian behaviour of the model can be seen at the level of the four-point Ursell function [23, 1] defined for all \(x,y,z,t\in\mathbb{Z}^{d}\) by
\[U_{4}^{\beta}(x,y,z,t):=\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{t}\rangle _{\beta}-\langle\sigma_{x}\sigma_{y}\rangle_{\beta}\langle\sigma_{z}\sigma_{t} \rangle_{\beta}-\langle\sigma_{x}\sigma_{z}\rangle_{\beta}\langle\sigma_{y} \sigma_{t}\rangle_{\beta}-\langle\sigma_{x}\sigma_{t}\rangle_{\beta}\langle \sigma_{y}\sigma_{z}\rangle_{\beta}.\]
In dimension \(d>4\), Aizenman [1] was able to conclude the triviality of the scaling limits using the tree diagram bound mentioned above,
\[|U_{4}^{\beta}(x,y,z,t)|\leq 2\sum_{u\in\mathbb{Z}^{d}}\langle\sigma_{x}\sigma_{u} \rangle_{\beta}\langle\sigma_{y}\sigma_{u}\rangle_{\beta}\langle\sigma_{z} \sigma_{u}\rangle_{\beta}\langle\sigma_{t}\sigma_{u}\rangle_{\beta}, \tag{1.3}\]
together with the crucial input of reflection positivity, which implies (with the Messager-Miracle-Sole inequalities [14]) the _infrared bound_[13, 15],
\[\langle\sigma_{x}\sigma_{y}\rangle_{\beta_{c}}\leq\frac{C}{|x-y|^{d-2}}, \tag{1.4}\]
where \(|.|\) denotes the infinity norm on \(\mathbb{R}^{d}\). As noticed in [1], the relevant question is whether \(|U_{4}^{\beta}(x,y,z,t)|/\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{t}\rangle_{\beta}\) vanishes or not as the _mutual distance_ \(L(x,y,z,t):=\min_{u\neq v\in\{x,y,z,t\}}|u-v|\) between \(x,y,z\), and \(t\) goes to infinity while the distances between the pairs all remain of the same order. The proof can be summed up by the following (incomplete) argument: assume that \(\beta=\beta_{c}\) and that the two-point function is of comparable order for pairs of points at comparable distance; for a set of points \(x,y,z,t\) at mutual distance of order \(L\), the sum on the right-hand side of (1.3) is of order \(O(L^{8-3d})\), and we expect the four-point function \(\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{t}\rangle_{\beta_{c}}\) to be of order the product of two two-point functions, hence of order at least \(L^{4-2d}\). As a result, we have
\[\frac{|U_{4}^{\beta_{c}}(x,y,z,t)|}{\langle\sigma_{x}\sigma_{y} \sigma_{z}\sigma_{t}\rangle_{\beta_{c}}}=O(L^{4-d}). \tag{1.5}\]
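To spell out the power counting behind (1.5) (a heuristic computation, assuming \(\langle\sigma_{u}\sigma_{v}\rangle_{\beta_{c}}\approx L^{2-d}\) for pairs of points at distance of order \(L\), in line with (1.4)): the dominant contribution to the tree diagram comes from the \(O(L^{d})\) vertices \(u\) at distance of order \(L\) from \(x,y,z,t\), so that

\[\sum_{u\in\mathbb{Z}^{d}}\langle\sigma_{x}\sigma_{u}\rangle_{\beta_{c}}\langle\sigma_{y}\sigma_{u}\rangle_{\beta_{c}}\langle\sigma_{z}\sigma_{u}\rangle_{\beta_{c}}\langle\sigma_{t}\sigma_{u}\rangle_{\beta_{c}}\approx L^{d}\cdot\left(L^{2-d}\right)^{4}=L^{8-3d},\]

while

\[\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{t}\rangle_{\beta_{c}}\gtrsim\langle\sigma_{x}\sigma_{y}\rangle_{\beta_{c}}\langle\sigma_{z}\sigma_{t}\rangle_{\beta_{c}}\approx\left(L^{2-d}\right)^{2}=L^{4-2d}.\]

Taking the ratio gives \(L^{(8-3d)-(4-2d)}=L^{4-d}\), as claimed.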
The above bound is clearly inconclusive in the case \(d=4\). However, in the case of nearest-neighbour interactions, (1.3) was improved by a logarithmic factor to obtain Gaussianity.
The case of long-range interactions is more subtle since we do not necessarily expect any improvement in the tree diagram bound in dimension \(4\). As it turns out, we do not need any such improvement when the decay of the interaction is sufficiently slow so that the decay of the model's two-point function is faster than (1.4).
To determine whether this decay is fast enough or not, it is (almost) enough to look at whether the following quantity is finite or not:
\[\mathfrak{m}_{2}(J):=\sum_{x\in\mathbb{Z}^{d}}|x|^{2}J_{0,x}.\]
When \(\mathfrak{m}_{2}(J)=\infty\), the decay of the interaction is slow enough to conclude using (1.3).
**Theorem 1.3**.: _Let \(d=4\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\), and that \(\mathfrak{m}_{2}(J)=\infty\). Then, for all \(\beta\leq\beta_{c}\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\) and \(z\in\mathbb{R}\),_
\[\lim_{L\to\infty}\left|\left\langle\exp\left(zT_{f,L,\beta}(\sigma)\right) \right\rangle_{\beta}-\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\sigma)^ {2}\rangle_{\beta}\right)\right|=0.\]
_As a consequence, for \(\beta\leq\beta_{c}\), every sub-sequential scaling limit of the model is Gaussian._
**Remark 1.4**.: _As we will see in Section 5, the rate of convergence to \(0\) can be expressed in terms of_
\[\sum_{|x|\leq k}|x|^{2}J_{0,x}.\]
_For instance, in the case of \(J\) defined by \(J_{x,y}=C|x-y|_{1}^{-d-2}\), one can check that \(\mathfrak{m}_{2}(J)=\infty\). The rate of convergence to \(0\) is then given by \(C/\log L\)._
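To make the first claim of the remark explicit, a heuristic computation (counting \(\asymp r^{d-1}\) points at distance \(r\) from the origin, up to norm-equivalence constants) gives

\[\sum_{|x|\leq k}|x|^{2}J_{0,x}\asymp\sum_{r=1}^{k}r^{d-1}\cdot r^{2}\cdot r^{-d-2}=\sum_{r=1}^{k}\frac{1}{r}\asymp\log k,\]

so that \(\mathfrak{m}_{2}(J)=\infty\), while the truncated sums grow only logarithmically in \(k\); this logarithmic divergence is the source of the \(C/\log L\) rate mentioned in the remark.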
We now discuss the case \(\mathfrak{m}_{2}(J)<\infty\). In fact, we will have to restrict to interactions \(J\) satisfying the following additional condition, which is slightly stronger:
* There exist \(\mathbf{C},\varepsilon>0\) such that for all \(k\geq 1\), \[\sum_{|x|=k}|x|^{2}J_{0,x}\leq\frac{\mathbf{C}}{k^{1+\varepsilon}}.\] (1.6)
As explained above, in this case we expect that the mechanism which leads to Gaussianity is the same as for the nearest-neighbour case. Hence, we first prove an improved tree diagram bound. The quantity \(L(\beta)\) was briefly mentioned above and will be introduced in Section 3.6.
**Theorem 1.5** (Improved tree diagram bound for \(d=4\)).: _Let \(d=4\). Assume that \(J\) satisfies_ (**A1**)_-_(**A6**)_. There exist \(c,C>0\) such that, for all \(\beta\leq\beta_{c}\), for all \(x,y,z,t\in\mathbb{Z}^{4}\) at mutual distance at least \(L\) of each other with \(1\leq L\leq L(\beta)\),_
\[|U_{4}^{\beta}(x,y,z,t)|\leq\frac{C}{B_{L}(\beta)^{c}}\sum_{u\in\mathbb{Z}^{4} }\langle\sigma_{x}\sigma_{u}\rangle_{\beta}\langle\sigma_{y}\sigma_{u}\rangle _{\beta}\langle\sigma_{z}\sigma_{u}\rangle_{\beta}\langle\sigma_{t}\sigma_{u} \rangle_{\beta},\]
_where \(B_{L}(\beta)\) is the bubble diagram truncated at distance \(L\) defined by_
\[B_{L}(\beta):=\sum_{x\in\Lambda_{L}}\langle\sigma_{0}\sigma_{x}\rangle_{\beta }^{2}.\]
It is predicted, for an interaction \(J\) satisfying (**A1**)-(**A6**), that the bubble diagram diverges at criticality. This improves the \(O(1)\) of (1.5) to a \(O(B_{L}(\beta)^{-c})\).
**Remark 1.6**.: _As noticed in [1, 1], the bubble condition_
\[B(\beta_{c})<\infty, \tag{1.7}\]
_implies that some of the model's critical exponents take their mean-field value. It is also possible to show that the bubble condition (together with some monotonicity properties of the two-point function), implies triviality of the scaling limits. We provide a proof of this fact in Appendix D._
**Remark 1.7**.: _The reason why we restrict to interactions satisfying_ (**A6**) _is technical and will become more transparent in Section 6.2. All the examples of reflection positive interactions we will encounter either satisfy_ (**A6**) _or_ \(\mathfrak{m}_{2}(J)=\infty\) _(when_ \(d=4\)_). We strongly believe that the methods below are sufficient to conclude provided we know a sufficiently good estimate on quantities like (_1.6_)._
_One can check that the result still holds (see Remark 6.7) if one replaces_ (**A6**) _by the following condition_ (**A6**\({}^{\prime}\))_: there exist_ \(\mathbf{C},\varepsilon>0\) _such that for all_ \(k\geq 1\)_,_
\[\sum_{|x|=k}|x|^{2}J_{0,x}\leq\frac{\mathbf{C}}{k(\log k)^{1+ \varepsilon}}. \tag{1.8}\]
_A natural question is to ask for the optimal condition that \(J\) should satisfy when \(\mathfrak{m}_{2}(J)<\infty\). For instance, is there any reflection positive interaction that does not satisfy_ (**A6**\({}^{\prime}\)) _but for which_ \(\mathfrak{m}_{2}(J)<\infty\)_? The methods below should be optimal whenever the interaction_ \(J\) _is such that the two-point function satisfies_ \(\langle\sigma_{0}\sigma_{x}\rangle_{\beta_{c}}\asymp|x|^{-2}\) _as_ \(|x|\to\infty\)_. As pointed out in_ _[_1_, Section 7]__, the Green function associated to_ \(J\) _has this decay provided_
\[\sum_{|x|\geq k}J_{0,x}=O\left(\frac{1}{k^{2}\log k}\right).\]
_If we expect the two-point function to behave, at criticality, like the Green function of the random walk associated to \(J\) (as suggested by the infrared bound in Proposition 3.4), then the above appears to be the optimal condition. This leaves open the possibility of finding a reflection positive interaction which satisfies_ (**A6**\({}^{\prime}\)) _with_ \(\varepsilon=0\)_, but for which we still expect the triviality result to hold._
With the improved tree diagram bound, we can obtain a formulation of triviality similar to the one obtained in Theorem 1.3.
**Corollary 1.8**.: _Let \(d=4\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A6)}\). There exist \(C,c,\gamma>0\) such that, for all \(\beta\leq\beta_{c}\), \(1\leq L\leq L(\beta)\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\), and \(z\in\mathbb{R}\),_
\[\left|\langle\exp\left(zT_{f,L,\beta}(\sigma)\right)\rangle_{\beta}-\exp \left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{\beta}\right) \right|\leq\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}(\sigma)^{2} \rangle_{\beta}\right)\frac{C\|f\|_{\infty}^{4}r_{f}^{\gamma}z^{4}}{(\log L)^{ c}}.\]
_As a consequence, for \(\beta=\beta_{c}\), every sub-sequential scaling limit of the model is Gaussian._
We now turn to the case of low dimensional models of effective dimension equal to four. As above, we illustrate the result by focusing on the case of algebraically decaying reflection positive interactions. The situation of interest corresponds to choosing \(\alpha=d/2\) for \(1\leq d\leq 3\). More general versions of the following statements can be found in Section 7 (see Theorem 7.1 and Corollary 7.3).
For technical reasons (see Remark 7.2), the following statement is not as quantitative as the above ones, and for simplicity we restrict it to \(\beta=\beta_{c}\).
**Theorem 1.9** (Improved tree diagram bound for \(1\leq d\leq 3\)).: _Let \(1\leq d\leq 3\). Let \(J\) be the interaction defined for \(x\neq y\in\mathbb{Z}^{d}\) by \(J_{x,y}=C_{0}|x-y|_{1}^{-3d/2}\) (i.e. \(\alpha=d/2\)) where \(C_{0}>0\). There exist \(C>0\) and a function \(\psi:\mathbb{R}\to\mathbb{R}_{>0}\) which satisfies \(\psi(t)\to\infty\) as \(t\to\infty\), such that, for all \(x,y,z,t\in\mathbb{Z}^{d}\) at mutual distance at least \(L\) of each other,_
\[|U_{4}^{\beta_{c}}(x,y,z,t)|\leq\frac{C}{\psi(B_{L}(\beta_{c}))}\sum_{u\in \mathbb{Z}^{d}}\langle\sigma_{x}\sigma_{u}\rangle_{\beta_{c}}\langle\sigma_{y} \sigma_{u}\rangle_{\beta_{c}}\langle\sigma_{z}\sigma_{u}\rangle_{\beta_{c}} \langle\sigma_{t}\sigma_{u}\rangle_{\beta_{c}}.\]
**Remark 1.10**.: _In fact, for \(d=1\), the result is much stronger and we recover the improvement of order \(O(B_{L}(\beta)^{-c})\) obtained when \(d=4\). The precise statements will be given in Section 7._
We can still deduce a triviality statement from this improved tree diagram bound. It involves the so-called _renormalised coupling constant_. We begin with a definition. For \(\sigma>0\), we define the _correlation length of order \(\sigma\)_ by: for \(\beta<\beta_{c}\),
\[\xi_{\sigma}(\beta):=\left(\frac{\sum_{x\in\mathbb{Z}^{d}}|x|^{\sigma}\langle \sigma_{0}\sigma_{x}\rangle_{\beta}}{\chi(\beta)}\right)^{1/\sigma},\]
where \(\chi(\beta):=\sum_{x\in\mathbb{Z}^{d}}\langle\sigma_{0}\sigma_{x}\rangle_{\beta}\). As it turns out, the above quantity is well-defined when \(J_{x,y}=C|x-y|^{-d-\alpha}\) as soon as \(\sigma<\alpha\) (see for instance [11, 12, 13]). Also, by the results of [1], one has \(\xi_{\sigma}(\beta)\to\infty\) as \(\beta\to\beta_{c}\). For such a \(\sigma>0\), we introduce another convenient measure of the interaction called the renormalised coupling constant of order \(\sigma\) and defined for \(\beta<\beta_{c}\) by:
\[g_{\sigma}(\beta):=-\frac{1}{\chi(\beta)^{2}\xi_{\sigma}(\beta)^{d}}\sum_{x,y,z\in\mathbb{Z}^{d}}U_{4}^{\beta}(0,x,y,z).\]
The vanishing of the above quantity is known to imply triviality of the scaling limits of the model (see [11, Theorem 11] or [1, 10, 12]).
**Corollary 1.11**.: _We keep the assumptions of Theorem 1.9. Then, for \(\sigma\in(0,d/2)\),_
\[\lim_{\beta\nearrow\beta_{c}}g_{\sigma}(\beta)=0.\]
_As a consequence, for \(\beta=\beta_{c}\), every sub-sequential scaling limit of the model is Gaussian._
#### 1.2.2. Results for the \(\varphi^{4}\) model
We now extend the above results to the \(\varphi^{4}\) model. These results also extend to models in the Griffiths-Simon class of measures, whose definition is postponed to the next section. We refer to Section 8 for the general statement of triviality for these models.
We start with a proper definition of the \(\varphi^{4}\) model. Let \(\rho\) be given by (1.2). As for the Ising model, the ferromagnetic \(\varphi^{4}\) model on \(\Lambda\) is defined by the finite volume Gibbs equilibrium state: for \(F:\mathbb{R}^{\Lambda}\to\mathbb{R}\),
\[\langle F(\varphi)\rangle_{\Lambda,\rho,\beta}=\frac{1}{Z(\Lambda,\rho,\beta) }\int F(\varphi)\exp\left(-\beta H_{\Lambda,J}(\varphi)\right)\prod_{x\in \Lambda}\mathrm{d}\rho(\varphi_{x}),\]
where \(Z(\Lambda,\rho,\beta)\) is the partition function and
\[H_{\Lambda,J}(\varphi):=-\sum_{\{x,y\}\subset\Lambda}J_{x,y}\varphi_{x} \varphi_{y}.\]
We call \(\langle\cdot\rangle_{\rho,\beta}\) the model's infinite volume Gibbs measure. It is also possible to introduce a critical parameter \(\beta_{c}(\rho)\), together with a sharp length \(L(\rho,\beta)\), a four-point Ursell function \(U_{4}^{\rho,\beta}\), and a renormalised coupling constant \(g_{\sigma}(\rho,\beta)\). The extension of the results concerning models of effective dimension \(d_{\mathrm{eff}}>4\) is quite straightforward and will be discussed in Section 5 (see Remark 5.7). We focus on the results for \(1\leq d\leq 4\) with \(d_{\mathrm{eff}}=4\).
**Theorem 1.12**.: _Let \(d=4\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A5)}\), and that \(\mathfrak{m}_{2}(J)=\infty\). Then, for all \(\beta\leq\beta_{c}(\rho)\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\) and \(z\in\mathbb{R}\),_
\[\lim_{L\to\infty}\left|\left\langle\exp\left(zT_{f,L,\beta}(\varphi)\right) \right\rangle_{\rho,\beta}-\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}( \varphi)^{2}\rangle_{\rho,\beta}\right)\right|=0.\]
_As a consequence, for \(\beta=\beta_{c}\), every sub-sequential scaling limit of the model is Gaussian._
For the \(\varphi^{4}\) model, the tree diagram bound takes a slightly different form.
**Theorem 1.13**.: _Let \(d=4\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A6)}\). There exist \(c,C>0\) such that, for all \(\beta\leq\beta_{c}(\rho)\), for all \(x,y,z,t\in\mathbb{Z}^{4}\) at mutual distance at least \(L\) of each other with \(1\leq L\leq L(\rho,\beta)\),_
\[|U_{4}^{\rho,\beta}(x,y,z,t)|\\ \leq C\left(\frac{B_{0}(\rho,\beta)}{B_{L}(\rho,\beta)}\right)^{c} \sum_{u\in\mathbb{Z}^{4}}\sum_{u^{\prime},u^{\prime\prime}\in\mathbb{Z}^{4}} \langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}\beta J_{u,u^{\prime}}\langle\tau_{ u^{\prime}}\tau_{y}\rangle_{\rho,\beta}\langle\tau_{z}\tau_{u}\rangle_{\rho, \beta}\beta J_{u,u^{\prime\prime}}\langle\tau_{u^{\prime\prime}}\tau_{t} \rangle_{\rho,\beta}.\]
**Corollary 1.14**.: _Let \(d=4\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A6)}\). Consider a \(\varphi^{4}\) model on \(\mathbb{Z}^{4}\) with coupling constants \(J\). There exist \(C,c,\gamma>0\) such that, for all \(\beta\leq\beta_{c}(\rho)\), \(1\leq L\leq L(\rho,\beta)\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\), and \(z\in\mathbb{R}\),_
\[\left|\left\langle\exp\left(zT_{f,L,\beta}(\varphi)\right)\right\rangle_{ \rho,\beta}-\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\varphi)^{2}\rangle _{\rho,\beta}\right)\right|\\ \leq\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}(\varphi)^{2 }\rangle_{\rho,\beta}\right)\frac{C\|f\|_{\infty}^{4}r_{f}^{\gamma}z^{4}}{( \log L)^{c}}.\]
_As a consequence, for \(\beta=\beta_{c}\), every sub-sequential scaling limit of the model is Gaussian._
We may also extend the results obtained for the Ising model for \(1\leq d\leq 3\) and \(d_{\mathrm{eff}}=4\). The corresponding modifications of Theorem 1.9 and Corollary 1.11 will be stated in Section 8. Their main consequence for the \(\varphi^{4}\) model is stated below.
**Theorem 1.15**.: _Let \(1\leq d\leq 3\). Let \(J\) be the interaction defined for \(x\neq y\in\mathbb{Z}^{d}\) by \(J_{x,y}=C|x-y|_{1}^{-3d/2}\) (i.e. \(\alpha=d/2\)) where \(C>0\). Then, for \(\sigma\in(0,d/2)\),_
\[\lim_{\beta\nearrow\beta_{c}(\rho)}g_{\sigma}(\rho,\beta)=0.\]
_As a consequence, for \(\beta=\beta_{c}(\rho)\), every sub-sequential scaling limit of the model is Gaussian._
### Organisation of the paper
In Section 2, we define the Griffiths-Simon class of single-site measures to which our results apply. In Section 3, we recall the definition of reflection positivity and present the main properties it implies for the models under consideration (monotonicity of the two-point function, infrared bound, etc). The main result of this section is the derivation of the existence of regular scales in Propositions 3.28 and 3.29. In Section 4, we provide the basic knowledge on the random current representation of the Ising model and explain the heuristics it provides on Gaussianity of the scaling limits. We introduce the notion of effective dimension and prove a generalisation of Theorem 1.2 (see Theorem 5.5) to models with effective dimension greater than four in Section 5. Then, in Section 6, we prove Theorems 1.3 and 1.5, together with Corollary 1.8. The handling of long-range interactions is performed in Section 6.2. In Section 7, we prove more general versions of Theorem 1.9 and Corollary 1.11 (see Theorem 7.1 and Corollary 7.3). The main modifications in comparison to the case \(d=4\) are treated in Section 7.2. In Section 8, we extend the results to all the models introduced in Section 2.
In Appendix A, we provide the proofs of the main spectral tools we will use for reflection positive models in the Griffiths-Simon class (see Theorem 3.11 and Proposition 3.16). In Appendix B, we recall the definition of the backbone representation of the Ising model which will be useful in our proofs. In Appendix C, we recall some useful bounds for the probability of connectivity events for the random current representation of Ising-type models in the Griffiths-Simon class. Finally, in Appendix D, we provide an alternative proof of some of our results in the case where the bubble condition (1.7) is satisfied.
## Acknowledgements
We warmly thank Hugo Duminil-Copin for suggesting the problem, for stimulating discussions, and for constant support. We thank Sebastien Ott for pointing out to us references concerning the spectral representation of the Ising model. We thank Lucas D'Alimonte, Piet Lammers, Trishen Gunaratnam, and Christoforos Panagiotis for numerous valuable comments and suggestions on a first version of the paper.
## Notations
We write a point \(x\in\mathbb{R}^{d}\) as \(x=(x_{1},\ldots,x_{d})\) and denote by \(\mathbf{e}_{j}\) the unit vector with \(x_{j}=1\). We will use the following notations for the standard norms on \(\mathbb{R}^{d}\): for \(x\in\mathbb{R}^{d}\), \(|x|_{1}:=|x_{1}|+\ldots+|x_{d}|\), \(\|x\|_{2}^{2}:=x_{1}^{2}+\ldots+x_{d}^{2}\), and \(|x|:=\max_{1\leq i\leq d}|x_{i}|\). Finally, for \(k\geq 1\), denote \(\Lambda_{k}:=[-k,k]^{d}\cap\mathbb{Z}^{d}\).
If \((a_{n})_{n\geq 0},(b_{n})_{n\geq 0}\in(\mathbb{R}_{+}^{*})^{\mathbb{N}}\), we will write \(a_{n}\gtrsim b_{n}\) (resp. \(a_{n}\asymp b_{n}\)) if there exists \(C_{1}=C_{1}(d)>0\) (resp. \(C_{1}=C_{1}(d),C_{2}=C_{2}(d)>0\)) such that for all \(n\geq 1\), \(b_{n}\leq C_{1}a_{n}\) (resp. \(C_{1}a_{n}\leq b_{n}\leq C_{2}a_{n}\)). We will also use Landau's formalism and write \(a_{n}=O(b_{n})\) (resp. \(a_{n}=o(b_{n})\)) if there exists \(C=C(d)>0\) such that for all \(n\geq 1\), \(a_{n}\leq Cb_{n}\) (resp. \(\lim_{n\to\infty}a_{n}/b_{n}=0\)).
## 2. The Griffiths-Simon class of measures
In this section, we define the proper class of single-site measures to which our results apply.
**Definition 2.1** (The GS class of measures).: _A Borel measure \(\rho\) on \(\mathbb{R}\) is said to belong to the Griffiths-Simon (GS) class of measures if it satisfies one of the following conditions:_
1. _there exist an integer_ \(N\geq 1\)_, a renormalisation constant_ \(Z>0\)_, and sequences_ \((K_{i,j})_{1\leq i,j\leq N}\in(\mathbb{R}^{+})^{N^{2}}\) _and_ \((Q_{n})_{1\leq n\leq N}\in\mathbb{R}^{N}\) _such that for every bounded and measurable_ \(F:\mathbb{R}\to\mathbb{R}\)_,_
\[\int_{\mathbb{R}}F(\tau)\mathrm{d}\rho(\tau)=\frac{1}{Z}\sum_{\sigma\in\{\pm 1\}^{N} }F\left(\sum_{n=1}^{N}Q_{n}\sigma_{n}\right)\exp\left(\sum_{i,j=1}^{N}K_{i,j} \sigma_{i}\sigma_{j}\right),\]
2. _the measure_ \(\rho\) _can be presented as a weak limit of probability measures of the above type, and it is of sub-Gaussian growth: for some_ \(\alpha>2\)_,_ \[\int_{\mathbb{R}}e^{|\tau|^{\alpha}}\mathrm{d}\rho(\tau)<\infty.\]
_Measures that satisfy \((i)\) are said to be of the "Ising type"._
The following result was proved in [1, 1] to extend the Lee-Yang theorem, together with Griffiths' correlation inequalities, to the \(\varphi^{4}\) model on \(\mathbb{Z}^{d}\). We sketch its proof for the sake of completeness.
**Proposition 2.2** (The \(\varphi^{4}\) measure belongs to the GS class, [1]).: _Let \(g>0\) and \(a\in\mathbb{R}\). The probability measure \(\rho_{g,a}\) on \(\mathbb{R}\) given by_
\[\mathrm{d}\rho_{g,a}(\varphi)=\frac{1}{z_{g,a}}e^{-g\varphi^{4}-a\varphi^{2}} \mathrm{d}\varphi,\]
_where \(z_{g,a}\) is a renormalisation constant, belongs to the GS class._
Proof.: Let \(N\geq 1\). Let \(\widetilde{g}=(12g)^{-1/4}\) and \(\widetilde{a}=2a\widetilde{g}^{2}\). Define the coupling constants
\[c_{N}:=\widetilde{g}N^{-3/4},\qquad d_{N}:=\frac{1}{N}\left(1-\frac{\widetilde {a}}{\sqrt{N}}\right).\]
Define the Ising Gibbs measure \(\mu_{N}\) on the complete graph \(K_{N}\), with Hamiltonian given by
\[\mathbf{H}_{N}(\sigma):=-d_{N}\sum_{\{i,j\}\subset K_{N}}\sigma_{i}\sigma_{j}.\]
Let \(\rho_{N}\) be the law of the random variable \(\Phi_{N}:=c_{N}\sum_{i=1}^{N}\sigma_{i}\). Then, \(\rho_{N}\) converges weakly to \(\rho_{g,a}\).
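The convergence can be made plausible by a back-of-the-envelope computation, which we include as an illustration only (it is not a substitute for the proof referenced above, and the higher-order terms are discarded). Write \(m:=\frac{1}{N}\sum_{i=1}^{N}\sigma_{i}\), so that \(\Phi_{N}=\widetilde{g}N^{1/4}m\). On the complete graph, \(-\mathbf{H}_{N}(\sigma)=\frac{d_{N}}{2}\big((Nm)^{2}-N\big)\), and the number of configurations with magnetisation \(m\) is, up to an \(m\)-independent factor, \(e^{-NI(m)}\) with \(I(m)=\frac{m^{2}}{2}+\frac{m^{4}}{12}+O(m^{6})\). The density of \(m\) under \(\mu_{N}\) is therefore roughly proportional to
\[\exp\left(\frac{N}{2}\Big(1-\frac{\widetilde{a}}{\sqrt{N}}\Big)m^{2}-N\Big(\frac{m^{2}}{2}+\frac{m^{4}}{12}\Big)\right)=\exp\left(-\frac{\widetilde{a}\sqrt{N}}{2}m^{2}-\frac{N}{12}m^{4}\right).\]
Substituting \(m=\varphi/(\widetilde{g}N^{1/4})\), and using \(\widetilde{a}=2a\widetilde{g}^{2}\) and \(\widetilde{g}^{4}=(12g)^{-1}\), the exponent becomes \(-a\varphi^{2}-g\varphi^{4}\), which is the density defining \(\rho_{g,a}\).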
## 3. Reflection positivity
In this section, we define reflection positivity and gather all the properties it implies in our setup (monotonicity, infrared bound, gradient estimates, etc). We refer to the review [10] or the original papers [10, 11] for more information on this notion.
The end goal is to derive the existence of regular scales for general reflection positive interactions (see Propositions 3.28 and 3.29). Most of the results are classical, and their proofs in the case of nearest-neighbour ferromagnetic (n.n.f.) models in the GS class were already derived in [1] using the spectral representation of the Ising model through the lens of transfer matrices. This approach is not optimal in the most general setup. Our viewpoint will be that of self-adjoint operators on infinite-dimensional spaces, which allows us to import general results from [12].
The following statements apply to both Ising and \(\varphi^{4}\) systems. To unify the notations, we refer to the spin or field variables by the symbol \(\tau\), with an a priori spin distribution \(\mathrm{d}\rho(\tau)\) in the GS class which is supported on a set \(\mathcal{S}\subset\mathbb{R}\). The expectation value functional with respect to the Gibbs measure, or functional integral, for a system in a domain \(\Lambda\), is denoted \(\langle\cdot\rangle_{\Lambda,\rho,\beta}\). We denote by \(\langle\cdot\rangle_{\rho,\beta}\) the natural infinite-volume limit of these states. We also denote by \(\beta_{c}(\rho)\) the critical inverse temperature, and by \(\xi(\rho,\beta)\) the correlation length. We sometimes omit \(\rho\) from the notations when it is clear from context.
We will use the following notation for the model's two-point function,
\[S_{\rho,\beta}(x):=\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}.\]
Also, introduce _the finite-volume susceptibility_, for \(L\geq 1\),
\[\chi_{L}(\rho,\beta):=\sum_{x\in\Lambda_{L}}S_{\rho,\beta}(x),\]
and define the _susceptibility_ to be \(\chi(\rho,\beta):=\lim_{L\to\infty}\chi_{L}(\rho,\beta)\). Finally, recall that \(|J|=\sum_{x\in\mathbb{Z}^{d}}J_{0,x}\).
### Definition of reflection positivity
Let \(d\geq 1\). Consider the torus \(\mathbb{T}_{L}:=(\mathbb{Z}/L\mathbb{Z})^{d}\) with \(L\geq 2\) an even integer. The torus is endowed with a natural reflection symmetry along hyperplanes \(\mathcal{H}\) which are orthogonal to one of the lattice's directions. The hyperplane \(\mathcal{H}\) either passes through sites of \(\mathbb{T}_{L}\) or through mid-edges, and \(\mathcal{H}\) divides the torus into two pieces \(\mathbb{T}_{L}^{+}\) and \(\mathbb{T}_{L}^{-}\). The two pieces are disjoint for mid-edge reflections and satisfy \(\mathbb{T}_{L}^{+}\cap\mathbb{T}_{L}^{-}=\mathcal{H}\) for site reflections. Denote by \(\mathcal{A}^{\pm}\) the algebra of all real-valued functions \(f\) that depend only on the spins in \(\mathbb{T}_{L}^{\pm}\). Denote by \(\Theta\) the reflection map associated with \(\mathcal{H}\); it naturally acts on \(\mathcal{A}^{\pm}\): for all \(f\in\mathcal{A}^{\pm}\),
\[\Theta(f)(\tau):=f(\Theta(\tau)),\qquad\forall\tau\in\mathcal{S}^{\mathbb{T}_{ L}}.\]
If \(J=(J_{x,y})_{x,y\in\mathbb{Z}^{d}}\in(\mathbb{R}^{+})^{\mathbb{Z}^{d}\times \mathbb{Z}^{d}}\), we can view it as an interaction \(J^{(L)}\) on \(\mathbb{T}_{L}\) by setting
\[J^{(L)}_{x,y}:=\sum_{z\in\mathbb{Z}^{d}}J_{x,y+Lz}.\]
**Definition 3.1** (Reflection positivity).: _Let \(J=(J_{x,y})_{x,y\in\mathbb{Z}^{d}}\in(\mathbb{R}^{+})^{\mathbb{Z}^{d}\times \mathbb{Z}^{d}}\) be an interaction. The measure \(\langle\cdot\rangle_{\mathbb{T}_{L},\rho,\beta}=\langle\cdot\rangle_{\mathbb{ T}_{L},\rho,J^{(L)},\beta}\) is called reflection positive (RP) with respect to \(\Theta\), if for all \(f,g\in\mathcal{A}^{+}\),_
\[\langle f\cdot\Theta(g)\rangle_{\mathbb{T}_{L},\rho,\beta}=\langle\Theta(f) \cdot g\rangle_{\mathbb{T}_{L},\rho,\beta},\]
_and,_
\[\langle f\cdot\Theta(f)\rangle_{\mathbb{T}_{L},\rho,\beta}\geq 0.\]
_We say that \(J\) is reflection positive if for all \(L\geq 2\) even, the associated measure \(\langle\cdot\rangle_{\mathbb{T}_{L},\rho,\beta}\) is reflection positive with respect to \(\Theta\) for all such reflections \(\Theta\)._
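Let us record a standard consequence of this definition: the two displayed conditions say exactly that \((f,g)\mapsto\langle f\cdot\Theta(g)\rangle_{\mathbb{T}_{L},\rho,\beta}\) is a symmetric positive semi-definite bilinear form on \(\mathcal{A}^{+}\), so that the Cauchy-Schwarz inequality
\[\langle f\cdot\Theta(g)\rangle_{\mathbb{T}_{L},\rho,\beta}^{2}\leq\langle f\cdot\Theta(f)\rangle_{\mathbb{T}_{L},\rho,\beta}\,\langle g\cdot\Theta(g)\rangle_{\mathbb{T}_{L},\rho,\beta}\]
holds for all \(f,g\in\mathcal{A}^{+}\). This is the form in which reflection positivity is most often exploited.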
Before discussing the interest of studying such interactions, let us briefly mention some examples (for more details see [11, 12, 13, 14]):
* (nearest-neighbour interactions) \(J_{x,y}=1_{|x-y|_{1}=1}\),
* (exponential decay / Yukawa potentials) \(J_{x,y}=C\exp(-\mu|x-y|_{1})\) for \(\mu,C>0\),
* (power law decay) \(J_{x,y}=C|x-y|_{1}^{-d-\alpha}\) for \(\alpha,C>0\),
* \(J_{x,y}=1/\prod_{i=1}^{d}(|x_{i}-y_{i}|+a_{i}^{2})^{\tau_{i}}\), for \(\tau_{i}\geq 0\) and \(a_{i}\in\mathbb{R}\).
The last example above can be found in [11, 14] and is an example of a \(d\)-dimensional RP interaction constructed as a product of \(1\)-dimensional RP interactions. Furthermore, models of the GS class whose couplings are linear combinations with positive coefficients of the couplings mentioned above are also reflection positive. Note that all finite-range reflection positive interactions satisfy \(J_{0,x}=0\) whenever \(|x|>1\) (see p.12 of [11] for more information).
We now turn to the main consequences of reflection positivity. In what follows, we consider reflection positive models with \(\rho\) in the GS class. We fix \(J\) satisfying (**A1**)-(**A5**).
### Messager-Miracle-Sole inequalities
The Messager-Miracle-Sole inequality provides monotonicity properties for \(S_{\rho,\beta}\) in the case of reflection positive interactions (see Figure 2 for an illustration of this result).
**Proposition 3.2** (MMS inequalities, [10, 11, 12]).: _Let \(\Lambda\) be a region endowed with reflection symmetry with respect to a plane \(\mathcal{P}\). Let \(A,B\) be two sets of points on the same side of the reflection plane. If \(\Theta\) is the reflection with respect to \(\mathcal{P}\),_
\[\left\langle\prod_{x\in A}\tau_{x}\prod_{x\in B}\tau_{x}\right\rangle_{\Lambda,\rho,\beta}\geq\left\langle\prod_{x\in A}\tau_{x}\prod_{x\in\Theta(B)}\tau_{ x}\right\rangle_{\Lambda,\rho,\beta}.\]
_Moreover, in the infinite volume limit \(\Lambda\nearrow\mathbb{Z}^{d}\) this result can be extended to reflections with respect to hyperplanes passing through sites or mid-edges; and to reflections with respect to diagonal hyperplanes or more precisely, reflections changing only two coordinates \(x_{i}\) and \(x_{j}\) which are sent to \(x_{i}\pm L\) and \(x_{j}\mp L\) respectively, for some \(L\in\mathbb{Z}\)._
As a result, we get the following monotonicity property for reflection positive models in the GS class.
**Corollary 3.3** (Monotonicity of the two-point function).: _Let \(d\geq 1\) and \(\beta>0\). Then,_
1. _for all_ \(1\leq j\leq d\)_, the sequence_ \((S_{\rho,\beta}(k\mathbf{e}_{j}))_{k\geq 0}\) _is decreasing,_
2. _for_ \(x\in\mathbb{Z}^{d}\)_,_ \[S_{\rho,\beta}((|x|,0_{\perp}))\geq S_{\rho,\beta}(x)\geq S_{\rho,\beta}((|x| _{1},0_{\perp})),\] ( **MMS1** ) _where_ \(0_{\perp}\in\mathbb{Z}^{d-1}\) _is the null vector. In particular, for all_ \(x,y\in\mathbb{Z}^{d}\) _with_ \(d|x|\leq|y|\)_,_ \[S_{\rho,\beta}(x)\geq S_{\rho,\beta}(y).\] ( **MMS2** )
### The infrared bound
The second interesting property we can get on \(S_{\rho,\beta}\) using reflection positivity is a quantitative estimate of its decay up to the critical point. This relies on the infrared bound which is recalled below.
We still work on the \(d\)-dimensional torus \(\mathbb{T}_{L}\) (with \(L\) even). In view of the model's translation invariance, it is natural to introduce the Fourier transform of \(S_{\rho,\beta}\),
\[\widehat{S}^{(L)}_{\rho,\beta}(p):=\sum_{x\in\mathbb{T}_{L}}e^{ip\cdot x}\langle\tau_{0}\tau_{x}\rangle_{\mathbb{T}_{L},\rho,\beta}\]
where \(p\) ranges over \(\mathbb{T}_{L}^{*}:=\left(\frac{2\pi}{L}\mathbb{Z}\right)^{d}\cap(-\pi,\pi]^{d}.\) As it turns out, \(\widehat{S}^{(L)}_{\rho,\beta}(p)\) can be expressed in terms of the Fourier _spin-wave modes_, defined as
\[\widehat{\tau}_{\beta}(p):=\frac{1}{\sqrt{(2L)^{d}}}\sum_{x\in\mathbb{T}_{L}}e ^{ip\cdot x}\tau_{x}.\]
Indeed, one has for \(p\in\mathbb{T}_{L}^{*}\),
\[\widehat{S}^{(L)}_{\rho,\beta}(p)=\langle|\widehat{\tau}_{\beta}(p)|^{2} \rangle_{\mathbb{T}_{L},\rho,\beta}, \tag{3.1}\]
so that in particular \(\widehat{S}^{(L)}_{\rho,\beta}(p)\geq 0\).
The following result was first proved and used in [10, 11].
**Proposition 3.4** (Infrared bound).: _For any \(p\in\mathbb{T}_{L}^{*}\setminus\{0\}\),_
\[\widehat{S}^{(L)}_{\rho,\beta}(p)\leq\frac{1}{2\beta|J|(1-\widehat{J}(p))},\]
_where \(\widehat{J}(p):=\sum_{x\in\mathbb{Z}^{d}}e^{ip\cdot x}\frac{J_{0,x}}{|J|}\)._
**Remark 3.5**.: _Note that_
\[|J|(1-\widehat{J}(p))=2\sum_{x\in\mathbb{Z}^{d}}\sin^{2}\left(\frac{p\cdot x} {2}\right)J_{0,x}.\]
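For completeness, the identity follows from a short computation: by the symmetry \(J_{0,x}=J_{0,-x}\), the sine terms cancel in the Fourier sum, so that
\[|J|(1-\widehat{J}(p))=\sum_{x\in\mathbb{Z}^{d}}(1-\cos(p\cdot x))J_{0,x}=2\sum_{x\in\mathbb{Z}^{d}}\sin^{2}\left(\frac{p\cdot x}{2}\right)J_{0,x},\]
where the last equality uses \(1-\cos\theta=2\sin^{2}(\theta/2)\).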
Introduce for \(\beta<\beta_{c}(\rho)\), and \(p\in(-\pi,\pi]^{d}\),
\[\widehat{S}_{\rho,\beta}(p):=\sum_{x\in\mathbb{Z}^{d}}e^{ip\cdot x}S_{\rho, \beta}(x).\]
Note that this quantity is well defined since \(\sum_{x\in\mathbb{Z}^{d}}S_{\rho,\beta}(x)<\infty\) for \(\beta<\beta_{c}(\rho)\) as proved in [1]. The next result will use the celebrated Simon-Lieb inequality which we now recall.
**Lemma 3.6** (Simon-Lieb inequality, [12, 13]).: _Let \(d\geq 1\). For every ferromagnetic model in the GS class on \(\mathbb{Z}^{d}\) with translation invariant coupling constants, every \(\beta>0\), every finite subset \(\Lambda\) of \(\mathbb{Z}^{d}\) containing \(0\), and every \(x\notin\Lambda\),_
\[S_{\rho,\beta}(x)\leq\sum_{\begin{subarray}{c}u\in\Lambda\\ v\notin\Lambda\end{subarray}}S_{\rho,\beta}(u)\beta J_{u,v}S_{\rho,\beta}(x-v).\]
The following result extends the infrared bound to \(\widehat{S}_{\rho,\beta}\).
**Proposition 3.7**.: _Let \(\beta<\beta_{c}(\rho)\). Let \(p\in(-\pi,\pi]^{d}\). Then,_
\[\widehat{S}_{\rho,\beta}(p)\leq\frac{1}{2\beta|J|(1-\widehat{J}(p))}.\]
Proof.: If \(\beta<\beta_{c}(\rho)\), it is classical that there is only one infinite volume equilibrium state that we denote \(\langle\cdot\rangle_{\rho,\beta}\). Moreover, for all \(x\in\mathbb{Z}^{d}\),
\[\langle\tau_{0}\tau_{x}\rangle_{\mathbb{T}_{L},\rho,\beta}\underset{L\to \infty}{\longrightarrow}\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}. \tag{3.2}\]
Fix \(L\) even and take limits along sequences of the form \((L^{k})_{k\geq 1}\) so that \(\mathbb{T}^{*}_{L}\subset\mathbb{T}^{*}_{L^{k}}\) for all \(k\geq 1\). For \(p\in\mathbb{T}^{*}_{L}\), notice that by Fatou's lemma and (3.2),
\[\widehat{S}_{\rho,\beta}(p)+\chi(\rho,\beta) = \sum_{x\in\mathbb{Z}^{d}}(1+\cos(p\cdot x))S_{\rho,\beta}(x)\] \[\leq \liminf\left(\widehat{S}^{(L^{k})}_{\rho,\beta}(p)+\chi^{(L^{k})} (\rho,\beta)\right)\] \[\leq \frac{1}{2\beta|J|(1-\widehat{J}(p))}+\liminf\chi^{(L^{k})}(\rho, \beta),\]
where \(\chi^{(L^{k})}(\rho,\beta):=\widehat{S}^{(L^{k})}_{\rho,\beta}(0)\) and \(\chi(\rho,\beta):=\widehat{S}_{\rho,\beta}(0)\). It then suffices to show that \(\chi^{(L^{k})}(\rho,\beta)\) goes to \(\chi(\rho,\beta)\) as \(k\) goes to infinity. Using the Simon-Lieb inequality (as in [1]), we get that for any \(K\geq 0\),
\[\chi^{(L^{k})}(\rho,\beta)\leq\widetilde{\varphi}_{\rho,\beta}(\Lambda_{K}) \chi^{(L^{k})}(\rho,\beta)+\chi_{K}(\rho,\beta), \tag{3.3}\]
where \(\widetilde{\varphi}_{\rho,\beta}(\Lambda_{K}):=\beta\sum_{\begin{subarray}{c }x\in\Lambda_{K}\\ y\notin\Lambda_{K}\end{subarray}}J_{x,y}\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\).
Then, since \(\chi(\rho,\beta)<\infty\), one has \(\limsup_{K}\widetilde{\varphi}_{\rho,\beta}(\Lambda_{K})=0\). In particular, for every \(\varepsilon>0\), choosing \(K\) large enough that \(\widetilde{\varphi}_{\rho,\beta}(\Lambda_{K})\leq\varepsilon\), (3.3) yields
\[\limsup\chi^{(L^{k})}(\rho,\beta)\leq\frac{1}{1-\varepsilon}\chi_{K}(\rho,\beta)\leq\frac{1}{1-\varepsilon}\chi(\rho,\beta).\]
Letting \(\varepsilon\to 0\) gives the result.
As a first consequence of the above result, using that \((1-\widehat{J}(p))\gtrsim|p|^{2}\) near \(0\), we see that if \(d\geq 3\), there exists \(C=C(d)>0\) such that for \(\beta<\beta_{c}(\rho)\),
\[\langle\tau_{0}^{2}\rangle_{\rho,\beta}=\int_{(-\pi,\pi]^{d}}\widehat{S}_{ \rho,\beta}(p)\mathrm{d}p\leq\frac{C}{\beta|J|}.\]
Note that this bound extends to \(\beta_{c}(\rho)\) by continuity. Since \(\beta\mapsto\langle\tau_{0}^{2}\rangle_{\rho,\beta}\) is non-decreasing8, we also get that for all \(\beta\leq\beta_{c}(\rho)\),
Footnote 8: This is a classical consequence of Griffiths’ inequalities.
\[\langle\tau_{0}^{2}\rangle_{\rho,\beta}\leq\frac{C}{\beta_{c}(\rho)|J|}. \tag{3.4}\]
Proposition 3.4 together with the MMS inequalities also yield the following result.
**Proposition 3.8** (Infrared bound).: _Let \(d\geq 3\). There exists \(C=C(d)>0\) such that for every \(\beta\leq\beta_{c}(\rho)\), and every \(x\in\mathbb{Z}^{d}\setminus\{0\}\),_
\[S_{\rho,\beta}(x)\leq\frac{C}{\beta|J||x|^{d}}\int_{\left(-\pi|x|,\pi|x|\right]^{d}}\frac{e^{-\|p\|_{2}^{2}}}{1-\widehat{J}(p/|x|)}\mathrm{d}p.\]
_In particular, for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\),_
\[S_{\rho,\beta}(x)\leq S_{\rho,\beta_{c}(\rho)}(x)\leq\frac{C}{\beta_{c}(\rho) |J||x|^{d-2}}.\] ( **IRB** )
Proof.: We prove the result for \(\beta<\beta_{c}(\rho)\) and extend it to \(\beta_{c}(\rho)\) with a continuity argument.
Using (**MMS2**), we get that for some \(C_{1}=C_{1}(d)>0\),
\[S_{\rho,\beta}(x)\leq\frac{C_{1}}{|x|^{d}}\sum_{y\in\mathrm{Ann}(|x|(2d)^{-1},|x|d^{-1})}S_{\rho,\beta}(y)\leq\frac{C_{1}}{|x|^{d}}\chi_{|x|}(\rho,\beta). \tag{3.5}\]
We now observe that Proposition 3.7 provides a control on the finite volume susceptibility \(\chi_{L}(\rho,\beta)\). Let
\[\widetilde{\chi}_{L}(\rho,\beta):=\sum_{x\in\mathbb{Z}^{d}}e^{-(\|x\|_{2}/L)^ {2}}S_{\rho,\beta}(x).\]
There exists \(C_{2}=C_{2}(d)>0\) such that \(\chi_{L}(\rho,\beta)\leq C_{2}\widetilde{\chi}_{L}(\rho,\beta)\). Using classical Fourier identities, we get \(C_{3}=C_{3}(d)>0\) such that,
\[\chi_{L}(\rho,\beta)\leq C_{2}\widetilde{\chi}_{L}(\rho,\beta)\leq C_{3}L^{d} \int_{(-\pi,\pi]^{d}}e^{-L^{2}\|p\|_{2}^{2}}\widehat{S}_{\rho,\beta}(p)\mathrm{ d}p.\]
With the change of variable \(u=pL\), and Proposition 3.4,
\[\int_{(-\pi,\pi]^{d}}e^{-L^{2}\|p\|_{2}^{2}}\widehat{S}_{\rho, \beta}(p)\mathrm{d}p \leq \frac{1}{L^{d}}\int_{(-L\pi,L\pi]^{d}}e^{-\|u\|_{2}^{2}}\widehat{S }_{\rho,\beta}(u/L)\mathrm{d}u\] \[\leq \frac{C_{3}}{\beta|J|L^{d}}\int_{(-L\pi,L\pi]^{d}}\frac{e^{-\|u\|_ {2}^{2}}}{1-\widehat{J}(u/L)}\mathrm{d}u.\]
The second part of the statement then follows from the monotonicity in \(\beta\) of \(S_{\rho,\beta}(x)\), together with the observation that \(1-\widehat{J}(k)\gtrsim\|k\|_{2}^{2}\) as \(k\to 0\).
The preceding result essentially gives that the decay of the two-point function is governed by the behaviour of \(1-\widehat{J}(p)\) as \(p\) goes to \(0\). We have the following estimates for the examples of RP interactions given above,
* Nearest-neighbour interactions or Yukawa potentials: as \(p\to 0\), \[1-\widehat{J}(p)\asymp|p|^{2}.\]
* Power law decay interactions: as \(p\to 0\), \[1-\widehat{J}(p)\asymp\left\{\begin{array}{ll}|p|^{2}&\mbox{if }\alpha>2,\\ |p|^{2}\log\frac{1}{|p|}&\mbox{if }\alpha=2,\\ |p|^{\alpha}&\mbox{if }\alpha\in(0,2).\end{array}\right.\]
Together with Proposition 3.8, we get that for algebraically decaying interactions,
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\leq\frac{C}{\beta|J|}\left\{ \begin{array}{ll}|x|^{-(d-2)}&\mbox{if }\alpha>2,\\ |x|^{-(d-2)}(\log|x|)^{-1}&\mbox{if }\alpha=2,\\ |x|^{-(d-\alpha)}&\mbox{if }\alpha\in(0,2).\end{array}\right.\]
**Remark 3.9**.: _The above bound is also valid for \(d=2\) in the case \(\alpha\in(0,2)\) and for \(d=1\) with \(\alpha\in(0,1)\), since in both cases \(|p|^{-\alpha}\) is locally integrable._
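The asymptotics above are easy to probe numerically. The following short script is an illustration only and plays no role in the arguments; the truncation range \(R\), the momenta, and the exponents \(\alpha\) are arbitrary choices, and \(\alpha=2\) is avoided because of the logarithmic correction. For one-dimensional power-law couplings \(J_{0,x}=|x|^{-1-\alpha}\), it estimates the effective small-\(p\) exponent of \(|J|(1-\widehat{J}(p))\), which should come out close to \(\alpha\wedge 2\).

```python
import numpy as np

# Illustration only: for the 1D power-law interaction J_{0,x} = |x|^{-1-alpha},
# estimate the small-p exponent of |J|(1 - Jhat(p)).  By symmetry in x -> -x,
# this equals 2 * sum_{x>=1} (1 - cos(p x)) x^{-1-alpha}.  The truncation R
# and the momentum grid are arbitrary choices.
R = 10**6
x = np.arange(1.0, R + 1)

def energy(p, alpha):
    """Truncated lattice sum 2 * sum_{x=1}^{R} (1 - cos(p x)) x^{-1-alpha}."""
    return 2.0 * np.sum((1.0 - np.cos(p * x)) * x ** (-1.0 - alpha))

for alpha in (0.5, 1.5, 3.0):
    ps = np.array([1e-1, 1e-2, 1e-3])
    vals = np.array([energy(p, alpha) for p in ps])
    # Effective exponent between successive momenta; expected: min(alpha, 2).
    slopes = np.log(vals[:-1] / vals[1:]) / np.log(ps[:-1] / ps[1:])
    print(f"alpha = {alpha}: effective exponents {slopes}")
```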
Proposition 3.8 also yields the following improvement on the bound of the model's two-point function when the interaction \(J\) has a slow decay.
**Corollary 3.10**.: _Let \(d=4\). Assume that \(\mathfrak{m}_{2}(J)=\infty\). Then, as \(|x|\to\infty\),_
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta_{c}(\rho)}=o\left(\frac{1}{|x|^{2}} \right).\]
### Spectral representation of reflection positive models and applications
In this subsection, we generalise the spectral representation used in [1]. The following lines are inspired by [1, 12]. We fix a reflection positive model on \(\mathbb{Z}^{d}\) (\(d\geq 1\)) with an interaction \(J\) satisfying (**A1**)-(**A5**). The following result is classical for n.n.f models in the GS class (see [1, 2] or [1, Proposition A.6]). We present its proof in Appendix A.
**Theorem 3.11** (Spectral representation).: _Let \(d\geq 1\). For every \(\beta\leq\beta_{c}(\rho)\) and every function \(v:\mathbb{Z}^{d-1}\to\mathbb{C}\) in \(\ell^{2}(\mathbb{Z}^{d-1})\), there exists a positive measure \(\mu_{v,\beta}\) of finite mass_
\[\int_{0}^{\infty}\mathrm{d}\mu_{v,\beta}(a)=\sum_{x_{\perp},y_{\perp}\in\mathbb{Z}^{d-1}}v_{x_{\perp}}\overline{v_{y_{\perp}}}S_{\rho,\beta}((0,x_{\perp}-y_{\perp}))\leq\|v\|_{2}^{2}\langle\tau_{0}^{2}\rangle_{\rho,\beta},\]
_such that for every \(n\in\mathbb{Z}\),_
\[\sum_{x_{\perp},y_{\perp}\in\mathbb{Z}^{d-1}}v_{x_{\perp}}\overline{v_{y_{\perp}}} S_{\rho,\beta}((n,x_{\perp}-y_{\perp}))=\int_{0}^{\infty}e^{-a|n|}\mathrm{d}\mu_{v,\beta}(a). \tag{3.6}\]
The following result provides a very useful representation of the Fourier transform of \(S_{\rho,\beta}\).
**Corollary 3.12**.: _Let \(\beta<\beta_{c}(\rho)\). Let \(p=(p_{1},p_{\perp})\in(-\pi,\pi]^{d}\). There exists a measure \(\mu_{p_{\perp},\beta}\) such that_
\[\widehat{S}_{\rho,\beta}(p):=\sum_{x\in\mathbb{Z}^{d}}e^{ip\cdot x}S_{\rho, \beta}(x)=\int_{0}^{\infty}\frac{e^{a}-e^{-a}}{\mathcal{E}_{1}(p_{1})+\left(e ^{a/2}-e^{-a/2}\right)^{2}}\mathrm{d}\mu_{p_{\perp},\beta}(a),\]
_where \(\mathcal{E}_{1}(k):=2(1-\cos k)=4\sin^{2}(k/2)\)._
**Remark 3.13**.: _The preceding result is still true under any permutation of the indices._
Proof.: Consider for \(L\geq 1\), \(x_{\perp}\in\mathbb{Z}^{d-1}\) and \(p_{\perp}\in(-\pi,\pi]^{d-1}\),
\[v_{L}(x_{\perp}):=\frac{e^{ip_{\perp}\cdot x_{\perp}}}{\sqrt{|\Lambda_{L}^{(d- 1)}|}}\mathds{1}_{x_{\perp}\in\Lambda_{L}^{(d-1)}},\]
where \(\Lambda_{L}^{(d-1)}:=[-L,L]^{d-1}\cap\mathbb{Z}^{d-1}\). We apply Theorem 3.11 to the sequence \((v_{L})_{L\geq 1}\) which yields that for \(L\geq 1\), and \(n\in\mathbb{Z}\)
\[\frac{1}{|\Lambda_{L}^{(d-1)}|}\sum_{x_{\perp},y_{\perp}\in\Lambda_{L}^{(d-1) }}e^{ip_{\perp}\cdot(x_{\perp}-y_{\perp})}S_{\rho,\beta}((n,x_{\perp}-y_{\perp }))=\int_{0}^{\infty}e^{-a|n|}\mathrm{d}\mu_{v_{L},\beta}(a).\]
Observe that one can rewrite the left-hand side above as
\[\sum_{z_{\perp}\in\mathbb{Z}^{d-1}}e^{ip_{\perp}\cdot z_{\perp}}S_{\rho,\beta }((n,z_{\perp}))\frac{|\Lambda_{L}^{(d-1)}\cap(\Lambda_{L}^{(d-1)}-z_{\perp}) |}{|\Lambda_{L}^{(d-1)}|}.\]
Thus, (use [1] and the fact that \(\beta<\beta_{c}(\rho)\) to justify that every sum converges absolutely)
\[\lim_{L\to\infty}\frac{1}{|\Lambda_{L}^{(d-1)}|}\sum_{x_{\perp},y_{\perp}\in \Lambda_{L}^{(d-1)}}e^{ip_{\perp}\cdot(x_{\perp}-y_{\perp})}S_{\rho,\beta}((n, x_{\perp}-y_{\perp}))=\sum_{z_{\perp}\in\mathbb{Z}^{d-1}}e^{ip_{\perp}\cdot z_{ \perp}}S_{\rho,\beta}((n,z_{\perp})).\]
The moments of \(a\mapsto e^{-a}\) under \(\mu_{v_{L},\beta}\) converge as \(L\to\infty\). Hence, the moment criterion for the convergence of positive measures over bounded intervals (here \([0,1]\)) allows us to conclude the existence of the (weak) limit \(\lim_{L\to\infty}\mu_{v_{L},\beta}=:\mu_{p_{\perp},\beta}\). We obtain that for any \(n\in\mathbb{Z}\),
\[\sum_{z_{\perp}\in\mathbb{Z}^{d-1}}e^{ip_{\perp}\cdot z_{\perp}}S_{\rho,\beta }((n,z_{\perp}))=\int_{0}^{\infty}e^{-a|n|}\mathrm{d}\mu_{p_{\perp},\beta}(a).\]
Taking the Fourier transform on each side of the above formula, and evaluating it at \(p_{1}\) yields the desired formula.
**Remark 3.14**.: _We used the fact that for \(p_{1},a\in\mathbb{R}\),_
\[\sum_{n\in\mathbb{Z}}e^{ip_{1}n-a|n|}=\frac{e^{a}-e^{-a}}{\mathcal{E}_{1}(p_{1 })+\left(e^{a/2}-e^{-a/2}\right)^{2}}.\]
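For \(a>0\), this is the summation of two geometric series: writing \(w:=e^{ip_{1}-a}\),
\[\sum_{n\in\mathbb{Z}}e^{ip_{1}n-a|n|}=1+2\,\mathrm{Re}\sum_{n\geq 1}w^{n}=\mathrm{Re}\,\frac{1+w}{1-w}=\frac{1-e^{-2a}}{1-2e^{-a}\cos p_{1}+e^{-2a}},\]
and multiplying the numerator and the denominator by \(e^{a}\) yields \((e^{a}-e^{-a})/(e^{a}+e^{-a}-2\cos p_{1})\), whose denominator is exactly \(\mathcal{E}_{1}(p_{1})+(e^{a/2}-e^{-a/2})^{2}\).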
**Corollary 3.15**.: _Let \(\beta<\beta_{c}(\rho)\), \(d\geq 1\). Then,_
1. \(\widehat{S}_{\rho,\beta}(p_{1},\ldots,p_{d})\) _is monotone decreasing in each_ \(|p_{j}|\) _over_ \([-\pi,\pi]\)_._
2. \(\mathcal{E}_{1}(p_{1})\widehat{S}_{\rho,\beta}(p)\) _and_ \(|p_{1}|^{2}\widehat{S}_{\rho,\beta}(p)\) _are monotone increasing in_ \(|p_{1}|\)_._
Proof.: The result is a direct consequence of Corollary 3.12 and of the monotonicity of the following functions:
\[u\in[0,\pi]\mapsto\mathcal{E}_{1}(u),\qquad u>0\mapsto\frac{u^{2}}{\mathcal{E}_{1 }(u)},\]
and for all \(a\geq 0\),
\[u>0\mapsto\frac{\mathcal{E}_{1}(u)}{\mathcal{E}_{1}(u)+(e^{a/2}-e^{-a/2})^{2}}.\]
The next result, obtained in [1] in the case of nearest-neighbour interactions, will be useful to derive Theorem 3.18. We derive it using the same methods used to obtain Theorem 3.11, and postpone the proof to Appendix A.
**Proposition 3.16**.: _Let \(d\geq 1\) and \(\beta<\beta_{c}(\rho)\). Introduce for \(p\in\mathbb{R}^{d}\),_
\[\widehat{S}_{\rho,\beta}^{\rm(mod)}(p):=\widehat{S}_{\rho,\beta}(p)+\widehat{ S}_{\rho,\beta}(p+\pi(1,1,0,\ldots,0)).\]
_Then \(\widehat{S}_{\rho,\beta}^{\rm(mod)}\) is monotone decreasing in \(|p_{1}-p_{2}|\) with \(p_{1}+p_{2}\) and \((p_{3},\ldots,p_{d})\) constant._
**Corollary 3.17**.: _Let \(d\geq 2\), \(\beta<\beta_{c}(\rho)\). There exists \(C=C(d)>0\) such that for all \(p\in[-\pi/2,\pi/2]^{d}\),_
\[\widehat{S}_{\rho,\beta}((|p|,0_{\perp}))\geq\widehat{S}_{\rho,\beta}(p)\geq \widehat{S}_{\rho,\beta}((|p|_{1},0_{\perp}))-\frac{C}{\beta}.\]
Proof.: The first inequality is a direct consequence of the first item of Corollary 3.15. For the second inequality, notice that an iteration of Proposition 3.16 yields
\[\widehat{S}_{\rho,\beta}^{\rm(mod)}(p)\geq\widehat{S}_{\rho,\beta}^{\rm(mod)} ((|p|_{1},0_{\perp})).\]
Now recall that by Corollary 3.12, \(\widehat{S}_{\rho,\beta}\geq 0\), and that by the infrared bound, there exists \(C>0\) such that for \(p\in[-\pi/2,\pi/2]^{d}\),
\[|\widehat{S}_{\rho,\beta}(p+\pi(1,1,0,\ldots,0))|\leq\frac{C}{\beta}.\]
The following result was first derived in [1]. The above work is sufficient to extend it to general reflection positive interactions.
**Theorem 3.18** (Sliding-scale infrared bound, [1, Theorem 5.6]).: _Let \(d\geq 2\). There exists \(C=C(d)>0\) such that for all \(\beta\leq\beta_{c}(\rho)\) and \(1\leq\ell\leq L\),_
\[\frac{\chi_{L}(\rho,\beta)}{L^{2}}\leq\frac{C}{\beta}\frac{\chi_{\ell}(\rho, \beta)}{\ell^{2}}.\]
**Remark 3.19**.: _In fact, using the results of Section 3.6, provided that the sharp length (defined below) satisfies \(L(\rho,\beta)\geq 1\), we can replace the \(C/\beta\) by a \(C\) in the theorem above. This minor improvement will be used later to derive bounds with constants that are independent of \(\beta\)._
### Gradient estimates
The following is a consequence of Theorem 3.11. It will play a crucial role in the proof of existence of regular scales that follows.
**Proposition 3.20** (Gradient estimate, [1, Proposition 5.9]).: _Let \(d\geq 1\). There exists \(C=C(d)>0\) such that for every \(\beta\leq\beta_{c}(\rho)\), every \(x\in\mathbb{Z}^{d}\) and every \(1\leq i\leq d\),_
\[|S_{\rho,\beta}(x\pm\mathbf{e}_{i})-S_{\rho,\beta}(x)|\leq\frac{F(|x|)}{|x|}S_ {\rho,\beta}(x)\]
_where \(F(n):=C\frac{S_{\rho,\beta}(\frac{n}{2}\mathbf{e}_{1})}{S_{\rho,\beta}(2dn\mathbf{e}_{1})}\log\left(\frac{2S_{\rho,\beta}(\frac{n}{2}\mathbf{e}_{1})}{S_{\rho,\beta}(n\mathbf{e}_{1})}\right)\)._
The above estimate becomes particularly interesting whenever there exists \(c_{0}>0\) such that,
\[S_{\rho,\beta}(2dn\mathbf{e}_{1})\geq c_{0}S_{\rho,\beta}\left(\frac{n}{2}\mathbf{e}_{1}\right).\]
Indeed, in that case, one can find \(C_{0}=C_{0}(c_{0},d)>0\) such that for all \(x\in\partial\Lambda_{n}\) and \(1\leq i\leq d\),
\[|S_{\rho,\beta}(x\pm\mathbf{e}_{i})-S_{\rho,\beta}(x)|\leq\frac{C_{0}}{|x|}S_ {\rho,\beta}(x).\]
### The sharp length and a lower bound on the two-point function
Since we work with infinite range interactions, it is possible that \(\xi(\rho,\beta)=\infty\) throughout the subcritical phase \(\beta<\beta_{c}(\rho)\) (this is for instance the case for algebraically decaying RP interactions [13, 1]). This forces us to revisit the notion of "typical length" in these setups. As suggested by the work [1], the quantity defined below is a good candidate for the typical size of a box in which the model has a critical behaviour.
**Definition 3.21** (Sharp length).: _Let \(\beta>0\). Let \(S\) be a finite subset of \(\mathbb{Z}^{d}\) containing \(0\). Let_
\[\varphi_{\rho,\beta}(S):=\beta\sum_{\begin{subarray}{c}x\in S\\ y\notin S\end{subarray}}J_{x,y}\langle\tau_{0}\tau_{x}\rangle_{S,\rho,\beta}.\]
_Define9 the sharp length of parameter \(\alpha\in(0,1)\) by_
Footnote 9: The \((2d)^{-1}\) in the definition of \(L^{(\alpha)}(\rho,\beta)\) is purely technical and will be useful in the proof of Proposition 3.23.
\[L^{(\alpha)}(\rho,\beta):=\frac{1}{2d}\cdot\inf\left\{k\geq 1,\;\exists S \subset\mathbb{Z}^{d}\;\text{with}\;0\in S,\;\text{rad}(S)\leq 2k,\;\varphi_{ \rho,\beta}(S)<\alpha\right\},\]
_where \(\text{rad}(S):=\max\left\{|x-y|,\;x,y\in S\right\}\), and with the convention that \(\inf\emptyset=\infty\). We will set \(L(\rho,\beta):=L^{(1/2)}(\rho,\beta)\)._
**Remark 3.22**.: _Using the work of [1], together with the strategy implemented in [1], we see that for any \(\alpha\in(0,1)\),_
\[L^{(\alpha)}(\rho,\beta_{c}(\rho))=\infty.\]
_Indeed, using Simon-Lieb inequality as in (3.3), one can show that if \(S\) is a finite subset of \(\mathbb{Z}^{d}\) containing \(0\) and satisfying \(\varphi_{\rho,\beta_{c}(\rho)}(S)<1\), then_
\[\chi(\rho,\beta_{c}(\rho))\leq\frac{|S|\langle\tau_{0}^{2}\rangle_{\rho,\beta _{c}(\rho)}}{1-\varphi_{\rho,\beta_{c}(\rho)}(S)}.\]
_This is in contradiction with the infiniteness of the susceptibility at criticality. A similar argument gives that \(L^{(\alpha)}(\rho,\beta)\) increases to infinity as \(\beta\) tends to \(\beta_{c}(\rho)\)._
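To spell out the step behind the displayed inequality: summing the Simon-Lieb inequality over all \(x\notin S\) (in the finite-volume form in which the middle factor is \(\langle\tau_{0}\tau_{u}\rangle_{S,\rho,\beta_{c}(\rho)}\), the version behind the definition of \(\varphi_{\rho,\beta}\)) gives, formally,
\[\chi(\rho,\beta_{c}(\rho))\leq\sum_{x\in S}\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta_{c}(\rho)}+\varphi_{\rho,\beta_{c}(\rho)}(S)\,\chi(\rho,\beta_{c}(\rho)),\]
and since \(\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta_{c}(\rho)}\leq\langle\tau_{0}^{2}\rangle_{\rho,\beta_{c}(\rho)}\) by the Cauchy-Schwarz inequality and translation invariance, rearranging yields the bound of the remark, which is finite as soon as \(\varphi_{\rho,\beta_{c}(\rho)}(S)<1\). A rigorous version proceeds through finite-volume approximations of \(\chi\).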
Below \(L(\rho,\beta)\), the two-point function can be lower bounded by an algebraically decaying function. We start by stating this result in the special case where \(\mathfrak{m}_{2}(J)<\infty\).
**Proposition 3.23** (Lower bound on the two-point function).: _Let \(d\geq 2\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and that \(\mathfrak{m}_{2}(J)<\infty\). There exists \(c=c(d,J)>0\) such that for all \(\beta\leq\beta_{c}(\rho)\), and for all \(x\in\mathbb{Z}^{d}\) satisfying \(1\leq|x|\leq L(\rho,\beta)\),_
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\geq\frac{c}{\beta|x|^{d-1}}.\]
Proof.: Let \(n\leq(2d)L(\rho,\beta)\). By definition of \(L(\rho,\beta)\), one has \(\varphi_{\rho,\beta}(\Lambda_{n})\geq 1/2\). Using \((\mathbf{IRB})\) and the assumption \(\mathfrak{m}_{2}(J)<\infty\), one has for some \(C_{1}>0\),
\[\beta\sum_{\begin{subarray}{c}x\in\Lambda_{n/2}\\ y\notin\Lambda_{n}\end{subarray}}J_{x,y}\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\leq C_{1}n^{2}\sum_{|x|\geq n/2}J_{0,x}\leq 4C_{1}\sum_{|x|\geq n/2}|x|^{2}J_ {0,x}\leq\frac{1}{4},\]
provided that \(n\geq N_{0}\) (where \(N_{0}\) only depends on \(J\)). Hence, we now additionally assume that \((2d)L(\rho,\beta)\geq N_{0}\). We obtain that,
\[\beta\sum_{\begin{subarray}{c}x\in\Lambda_{n}\setminus\Lambda_{n/2}\\ y\notin\Lambda_{n}\end{subarray}}J_{x,y}\langle\tau_{0}\tau_{x}\rangle_{ \Lambda_{n},\rho,\beta}\geq\frac{1}{4}.\]
Then, using \((\mathbf{MMS1})\), for some \(C_{2},C_{3}>0\),
\[\frac{1}{4}\leq C_{2}\beta n^{d-1}\langle\tau_{0}\tau_{(n/2)\mathbf{e}_{1}} \rangle_{\rho,\beta}\sum_{k=0}^{n}\sum_{|y|\geq k}J_{0,y}\leq C_{3}\beta \mathfrak{m}_{2}(J)n^{d-1}\langle\tau_{0}\tau_{(n/2)\mathbf{e}_{1}}\rangle_{ \rho,\beta},\]
so that for some \(c_{1}>0\),
\[\langle\tau_{0}\tau_{(n/2)\mathbf{e}_{1}}\rangle_{\rho,\beta}\geq\frac{c_{1}} {\beta n^{d-1}}.\]
Hence, using this time \((\mathbf{MMS2})\), there exists \(c_{2}>0\) such that, if \((2d)^{-1}N_{0}\leq k\leq L(\rho,\beta)\) and \(x\in\partial\Lambda_{k}\),
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\geq\langle\tau_{0}\tau_{d|x|\mathbf{e}_{1}}\rangle_{\rho,\beta}\geq\frac{c_{2}}{\beta|x|^{d-1}}.\]
We now handle the smaller values of \(k=|x|\) by noticing that for \(1\leq k\leq(2d)^{-1}N_{0}\wedge L(\rho,\beta)\), the hypothesis that \(\varphi_{\rho,\beta}(\Lambda_{k})\geq\frac{1}{2}\), together with \((\mathbf{MMS1})\), yield
\[\langle\tau_{0}\tau_{k\mathbf{e}_{1}}\rangle_{\rho,\beta}\geq\frac{c_{3}}{ \beta},\]
for some \(c_{3}=c_{3}(d,J)\). This concludes the proof.
As it turns out, it is also possible to extend this result under the following assumption: there exist \(c_{0},C_{0},\alpha>0\) such that,
\[\frac{c_{0}}{k^{1+\alpha}}\leq\sum_{|x|=k}J_{0,x}\leq\frac{C_{0}}{k^{1+\alpha }},\qquad\forall k\geq 1. \tag{3.7}\]
Using Proposition 3.8 we get that reflection positive interactions satisfying the above assumption also satisfy: there exists \(C=C(d)>0\) such that for all \(\beta\leq\beta_{c}(\rho)\), for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\),
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\leq\frac{C}{\beta_{c}(\rho)|x|^{d- \alpha\wedge 2}(\log|x|)^{\delta_{\alpha,2}}}. \tag{3.8}\]
The prototypical example of interactions satisfying \((\mathbf{A1})\)-\((\mathbf{A5})\) and (3.7) is given by algebraically decaying RP interactions.
**Remark 3.24**.: _Note that we may have models for which \(\mathfrak{m}_{2}(J)<\infty\) and yet Assumption 3.7 is not satisfied. The next proposition will be useful in the study of models with \(d\in\{1,2,3\}\) and \(d_{\mathrm{eff}}=4\), which do not satisfy \(\mathfrak{m}_{2}(J)<\infty\)._
**Proposition 3.25**.: _Let \(d\geq 2\). Assume that \(J\) satisfies (**A1**)-(**A5**) and (3.7). There exists \(c>0\), depending only on \(J\) and \(d\), such that, for all \(\beta\leq\beta_{c}(\rho)\), for all \(1\leq|x|\leq cL(\rho,\beta)\),_
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\geq\frac{c}{\beta|x|^{d-1}}\times \left\{\begin{array}{ll}1&\mbox{if }\alpha>1\\ (\log|x|)^{-1}&\mbox{if }\alpha=1\\ |x|^{\alpha-1}&\mbox{if }\alpha\in(0,1).\end{array}\right.\]
_Moreover, the above result still holds for \(d=1\) and \(\alpha\in(0,1)\)._
**Remark 3.26**.: _The lower bound matches (3.8) for \(\alpha\in(0,1)\)._
Proof.: We proceed as in the proof of Proposition 3.23. Notice that if \(\varepsilon>0\) is sufficiently small and \(n\geq 1\),
\[\beta\sum_{\begin{subarray}{c}x\in\Lambda_{\varepsilon n}\\ y\notin\Lambda_{n}\end{subarray}}J_{x,y}\langle\tau_{0}\tau_{x}\rangle_{\rho, \beta}\leq C_{1}\left(\sum_{x\in\Lambda_{\varepsilon n}}\langle\tau_{0}\tau_ {x}\rangle_{\rho,\beta}\right)\sum_{|u|\geq n/2}J_{0,u}\leq C_{2}\varepsilon^{ \alpha\wedge 2}<\frac{1}{4},\]
where \(C_{1},C_{2}>0\), and where we used (3.7) together with (3.8) on the second inequality. For such a choice of \(\varepsilon\), we have that for \(1\leq n\leq L(\rho,\beta)\),
\[\beta\sum_{\begin{subarray}{c}x\in\Lambda_{n}\setminus\Lambda_{\varepsilon n} \\ y\notin\Lambda_{n}\end{subarray}}J_{x,y}\langle\tau_{0}\tau_{x}\rangle_{\rho, \beta}\geq\frac{1}{4}.\]
Using (**MMS1**), we find that
\[\frac{1}{4\beta}\leq C_{3}\langle\tau_{0}\tau_{\varepsilon n\mathbf{e}_{1}}\rangle_{\rho,\beta}n^{d-1}\sum_{k=1}^{n}\sum_{|u|\geq k}J_{0,u}\leq C_{4}\langle\tau_{0}\tau_{\varepsilon n\mathbf{e}_{1}}\rangle_{\rho,\beta}n^{d-1}\sum_{k=1}^{n}\frac{1}{k^{\alpha}},\]
from which we obtain the desired result.
### Existence of regular scales
In this subsection, we introduce the notion of _regular_ scales. These scales will be defined in such a way that, on them, the two-point function behaves "nicely" i.e. as if we knew that \(\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\) decayed algebraically fast (for \(1\leq|x|\leq L(\rho,\beta)\)).
**Definition 3.27** (Regular scales).: _Fix \(c,C>0\). An annular region \({\rm Ann}(n/2,8n)\) is said to be \((c,C)\)-regular if the following properties hold:_
* (**P1**) _for every_ \(x,y\in{\rm Ann}(n/2,8n)\)_,_ \(S_{\rho,\beta}(y)\leq CS_{\rho,\beta}(x)\)_,_
* (**P2**) _for every_ \(x,y\in{\rm Ann}(n/2,8n)\)_,_ \(|S_{\rho,\beta}(x)-S_{\rho,\beta}(y)|\leq\frac{C|x-y|}{|x|}S_{\rho,\beta}(x)\)_,_
* (**P3**) \(\chi_{2n}(\rho,\beta)\geq(1+c)\chi_{n}(\rho,\beta)\)_,_
* (**P4**) _for every_ \(x\in\Lambda_{n}\) _and_ \(y\notin\Lambda_{Cn}\)_,_ \(S_{\rho,\beta}(y)\leq\frac{1}{2}S_{\rho,\beta}(x)\)_._
_A scale \(k\) is said to be regular if \(n=2^{k}\) is such that \({\rm Ann}(n/2,8n)\) is \((c,C)\)-regular. A vertex \(x\in\mathbb{Z}^{d}\) is said to be in a regular scale if it belongs to an annulus \({\rm Ann}(n,2n)\) with \(n=2^{k}\) and \(k\) a regular scale._
We can now state the main result of this subsection.
**Proposition 3.28** (Existence of regular scales).: _Let \(d\geq 3\). Let \(J\) satisfy (**A1**)-(**A5**) and \(\mathfrak{m}_{2}(J)<\infty\). Let \(\gamma>2\). There exist \(c_{0},c_{1},C_{0}>0\) such that for every \(\rho\) in the GS class, every \(\beta\leq\beta_{c}(\rho)\), and every \(1\leq n^{\gamma}\leq N\leq L(\rho,\beta)\), there are at least \(c_{1}\log_{2}\left(\frac{N}{n}\right)\)\((c_{0},C_{0})\)-regular scales between \(n\) and \(N\)._
The proof of this result can be found in [1]. However, since it is a crucial tool for what follows, we include it.
Proof.: Using the lower bound of Proposition 3.23, together with (**IRB**), we get the existence of \(c_{1},c_{2}>0\) such that
\[\chi_{N}(\rho,\beta)\geq\frac{c_{1}}{\beta_{c}(\rho)}N\geq\frac{c_{1}}{\beta_{c} (\rho)}\left(\frac{N}{n}\right)^{\frac{\gamma-2}{\gamma-1}}n^{2}\geq c_{2} \left(\frac{N}{n}\right)^{\frac{\gamma-2}{\gamma-1}}\chi_{n}(\rho,\beta).\]
Using Theorem 3.18, we find \(r,c_{3}>0\) and independent of \(n,N\), such that there are at least \(c_{3}\log_{2}(N/n)\) scales \(m=2^{k}\) between \(n\) and \(N\) such that
\[\chi_{rm}(\rho,\beta)\geq\chi_{16dm}(\rho,\beta)+\chi_{m}(\rho,\beta). \tag{3.9}\]
We prove that such an \(m\) is a \((c_{0},C_{0})\)-regular scale for a good choice of \(c_{0},C_{0}\). Indeed, to show it satisfies (**P1**) it is enough10
Footnote 10: This comes from the fact that any \(x\in\operatorname{Ann}(m/2,8m)\) satisfies
\[S_{\rho,\beta}(16dm\mathbf{e}_{1})\leq S_{\rho,\beta}(x)\leq S_{\rho,\beta} \left(\frac{m\mathbf{e}_{1}}{2}\right).\]
to show that \(S_{\rho,\beta}(\frac{1}{2}m\mathbf{e}_{1})\leq C_{4}S_{\rho,\beta}(16dm \mathbf{e}_{1})\) for some constant \(C_{4}=C_{4}(d)>0\). However, one has
\[|\operatorname{Ann}(16dm,rm)|S_{\rho,\beta}(16dm\mathbf{e}_{1}) \geq\chi_{rm}(\rho,\beta)-\chi_{16dm}(\rho,\beta)\\ \geq\chi_{m}(\rho,\beta)\geq|\Lambda_{m/(2d)}|S_{\rho,\beta}(m \mathbf{e}_{1}/2) \tag{3.10}\]
where in the first inequality we used (**MMS1**) to get that for all \(x\in\operatorname{Ann}(16dm,rm)\) one has \(S_{\rho,\beta}(x)\leq S_{\rho,\beta}(16dm\mathbf{e}_{1})\), in the second inequality we used (3.9), and in the third one we used (**MMS1**) again to argue that for all \(x\in\Lambda_{m/(2d)}\) one has \(S_{\rho,\beta}(x)\geq S_{\rho,\beta}(\frac{1}{2}m\mathbf{e}_{1})\). This gives (**P1**). Note that (**P2**) follows from the remark below Proposition 3.20. Now, using again (3.10) and (**MMS2**), we get that for every \(x\in\operatorname{Ann}(m,2m)\) one has
\[S_{\rho,\beta}(x)\geq S_{\rho,\beta}(16dm\mathbf{e}_{1})\geq\frac{c_{5}}{m^{ d}}\chi_{m}(\rho,\beta), \tag{3.11}\]
which implies (**P3**). Finally, we obtain11 (**P4**) by observing that for every \(R\), if \(y\notin\Lambda_{dRm}\) and \(x\in\Lambda_{m}\),
Footnote 11: This is the only place where the hypothesis \(d\geq 3\) plays a role.
\[|\Lambda_{Rm}|S_{\rho,\beta}(y)\leq\chi_{Rm}(\rho,\beta)\leq C_{6}R^{2}\chi_{ m}(\rho,\beta)\leq C_{7}R^{2}m^{d}S_{\rho,\beta}(x), \tag{3.12}\]
where we used (**MMS1**) in the first inequality, Theorem 3.18 in the second (more precisely the improvement discussed in Remark 3.19), and (3.11) in the last one. We obtain the result by choosing \(C_{0}\) sufficiently large, and \(c_{0}\) sufficiently small.
Using Proposition 3.25, we can extend the above result to interactions \(J\) satisfying (3.7).
**Proposition 3.29**.: _Let \(d\geq 1\). Let \(J\) satisfy_ (**A1**)_-_(**A5**) _and_ (3.7) _with \(\alpha>0\) if \(d\geq 3\) and \(\alpha\in(0,1)\) if \(d\in\{1,2\}\). Let \(\gamma>2\). There exist \(c,c_{0},c_{1},C_{0}>0\) such that for every \(\rho\) in the GS class, every \(\beta\leq\beta_{c}(\rho)\), and every \(1\leq n^{\gamma}\leq N\leq cL(\rho,\beta)\), there are at least \(c_{1}\log_{2}\left(\frac{N}{n}\right)\,(c_{0},C_{0})\)-regular scales between \(n\) and \(N\)._
Proof.: We only need to take care of the case \(d\in\{1,2\}\) and \(\alpha\in(0,1)\). As noticed above, in this case \(S_{\rho,\beta}(x)\asymp|x|^{-(d-\alpha)}\) below \(L(\rho,\beta)\). The existence of regular scales in that case is then a direct consequence of the remark below Proposition 3.20.
## 4. Random current representation
Let \(d\geq 1\). Let \(J\) be an interaction on \(\mathbb{Z}^{d}\) satisfying (**A1**)-(**A5**) and let \(\Lambda\) be a finite subset of \(\mathbb{Z}^{d}\).
### Definitions and the switching lemma
**Definition 4.1**.: _A current \(\mathbf{n}\) on \(\Lambda\) is a function defined on the set \(\mathcal{P}_{2}(\Lambda):=\{\{x,y\},\,x,y\in\Lambda\}\) and taking its values in \(\mathbb{N}=\{0,1,\ldots\}\). We denote by \(\Omega_{\Lambda}\) the set of currents on \(\Lambda\). The set of sources of \(\mathbf{n}\), denoted by \(\partial\mathbf{n}\), is defined as_
\[\partial\mathbf{n}:=\left\{x\in\Lambda\,,\,\sum_{y\in\Lambda}\mathbf{n}_{x,y }\text{ is odd}\right\}.\]
_We also set \(w_{\beta}(\mathbf{n}):=\prod_{\{x,y\}\subset\Lambda}\dfrac{(\beta J_{x,y})^{ \mathbf{n}_{x,y}}}{\mathbf{n}_{x,y}!}\)._
There is a way to expand the correlation functions of the Ising model in order to relate them to currents. Indeed, if we use, for \(\sigma\in\{\pm 1\}^{\Lambda}\), the expansion
\[\exp(\beta J_{x,y}\sigma_{x}\sigma_{y})=\sum_{\mathbf{n}_{x,y}\geq 0}\dfrac{( \beta J_{x,y}\sigma_{x}\sigma_{y})^{\mathbf{n}_{x,y}}}{\mathbf{n}_{x,y}!},\]
we obtain that
\[Z(\Lambda,\beta)=2^{|\Lambda|}\sum_{\partial\mathbf{n}=\emptyset}w_{\beta}( \mathbf{n}).\]
More generally, the correlation functions are given by: for \(A\subset\Lambda\),
\[\left\langle\sigma_{A}\right\rangle_{\Lambda,\beta}=\dfrac{\sum_{\partial \mathbf{n}=A}w_{\beta}(\mathbf{n})}{\sum_{\partial\mathbf{n}=\emptyset}w_{ \beta}(\mathbf{n})}, \tag{4.1}\]
where \(\sigma_{A}:=\prod_{x\in A}\sigma_{x}\).
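To get a feel for (4.1), consider the toy case \(\Lambda=\{x,y\}\) with a single coupling \(J=J_{x,y}\). A current is then just the integer \(k=\mathbf{n}_{x,y}\), the source constraint \(\partial\mathbf{n}=\{x,y\}\) amounts to \(k\) being odd, and (4.1) reads
\[\langle\sigma_{x}\sigma_{y}\rangle_{\Lambda,\beta}=\frac{\sum_{k\text{ odd}}(\beta J)^{k}/k!}{\sum_{k\text{ even}}(\beta J)^{k}/k!}=\frac{\sinh(\beta J)}{\cosh(\beta J)}=\tanh(\beta J),\]
in agreement with the direct computation for two Ising spins.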
A current configuration \(\mathbf{n}\) with \(\partial\mathbf{n}=\emptyset\) can be seen as the edge count of a multigraph obtained as a union of loops. Adding sources to a current configuration comes down to adding a collection of paths connecting the sources pairwise. For instance, a current configuration with sources \(\partial\mathbf{n}=\{x,y\}\) can be seen as the edge count of a multigraph consisting of a family of loops together with a path from \(x\) to \(y\).
**Remark 4.2**.: _It is possible to make the above remark rigorous by looking at the so-called backbone representation of the Ising model, which is a way to expand the correlation functions in terms of a weighted sum over a collection of source-pairing paths. This expansion will be useful later and we refer to Appendix B for definitions and properties._
As we are about to see, connectivity properties of the multigraph induced by a current will play a crucial role in the analysis of the underlying Ising model; this motivates the following definition.
**Definition 4.3**.: _Let \(\mathbf{n}\in\Omega_{\Lambda}\) and \(x,y\in\Lambda\)._
* _We say that_ \(x\) _is connected to_ \(y\) _in_ \(\mathbf{n}\) _and write_ \(x\xleftrightarrow{\mathbf{n}}y\)_, if there exists a sequence of points_ \(x_{0}=x,x_{1},\ldots,x_{m}=y\) _such that_ \(\mathbf{n}_{x_{i},x_{i+1}}>0\) _for_ \(0\leq i\leq m-1\)_._
* _The cluster of_ \(x\)_, denoted by_ \(\mathbf{C_{n}}(x)\)_, is the set of points connected to_ \(x\) _in_ \(\mathbf{n}\)_._
The main interest of the above expansion lies in the following result, which allows one to switch the sources of two currents. This combinatorial result first appeared in [10] to prove the concavity of the magnetisation of an Ising model with positive external field, but the probabilistic picture attached to it was popularised in [1].
**Lemma 4.4** (Switching lemma).: _For any \(A,B\subset\Lambda\) and any function \(F\) from the set of currents into \(\mathbb{R}\),_
\[\begin{split}(\mathbf{SL})&\sum_{\begin{subarray}{c} \mathbf{n}_{1}\in\Omega_{\Lambda}:\,\partial\mathbf{n}_{1}=A\\ \mathbf{n}_{2}\in\Omega_{\Lambda}:\,\partial\mathbf{n}_{2}=B\end{subarray}}F( \mathbf{n}_{1}+\mathbf{n}_{2})w_{\beta}(\mathbf{n}_{1})w_{\beta}(\mathbf{n}_{2} )\\ &=\sum_{\begin{subarray}{c}\mathbf{n}_{1}\in\Omega_{\Lambda}:\, \partial\mathbf{n}_{1}=A\Delta B\\ \mathbf{n}_{2}\in\Omega_{\Lambda}:\,\partial\mathbf{n}_{2}=\emptyset\end{subarray} }F(\mathbf{n}_{1}+\mathbf{n}_{2})w_{\beta}(\mathbf{n}_{1})w_{\beta}(\mathbf{n}_ {2})\mathbbm{1}_{(\mathbf{n}_{1}+\mathbf{n}_{2})\in\mathcal{F}_{B}},\end{split}\]
_where \(A\Delta B=(A\cup B)\setminus(A\cap B)\) is the symmetric difference of sets and \(\mathcal{F}_{B}\) is given by_
\[\mathcal{F}_{B}=\{\mathbf{n}\in\Omega_{\Lambda}\,,\,\exists\mathbf{m}\leq \mathbf{n}\,,\,\partial\mathbf{m}=B\}.\]
In fact, we will also need a slightly different version of the switching lemma, called the _switching principle_, whose proof can be found in [1, Lemma 2.1]. We use the representation of a current \(\mathbf{n}\in\Omega_{\Lambda}\) as a multigraph \(\mathcal{N}\) whose vertex set is \(\Lambda\) and in which there are exactly \(\mathbf{n}_{x,y}\) edges between \(x\) and \(y\). We will also use the notation \(\partial\mathcal{N}=\partial\mathbf{n}\).
**Lemma 4.5** (Switching principle).: _For any multigraph \(\mathcal{M}\) with vertex set \(\Lambda\), any \(A\subset\Lambda\), and any function \(f\) of a current,_
\[\sum_{\begin{subarray}{c}\mathcal{N}\subset\mathcal{M}\\ \partial\mathcal{N}=A\end{subarray}}f(\mathcal{N})=\mathbbm{1}_{\exists \mathcal{K}\subset\mathcal{M},\,\partial\mathcal{K}=A}\sum_{\begin{subarray}{ c}\mathcal{N}\subset\mathcal{M}\\ \partial\mathcal{N}=\emptyset\end{subarray}}f(\mathcal{N}\Delta\mathcal{K}).\] ( \[\mathbf{SP}\] )
The switching lemma provides probabilistic interpretations of several quantities of interest like differences or ratios of correlation functions. The natural probability measures are defined as follows. If \(A\subset\Lambda\), define a probability measure \(\mathbf{P}^{A}_{\Lambda,\beta}\) on \(\Omega_{\Lambda}\) by: for \(\mathbf{n}\in\Omega_{\Lambda}\),
\[\mathbf{P}^{A}_{\Lambda,\beta}[\mathbf{n}]:=\mathbbm{1}_{\partial\mathbf{n}=A }\frac{w_{\beta}(\mathbf{n})}{\sum_{\partial\mathbf{m}=A}w_{\beta}(\mathbf{m} )},\]
and for \(A_{1},\ldots,A_{k}\subset\Lambda\), define
\[\mathbf{P}^{A_{1},\ldots,A_{k}}_{\Lambda,\beta}:=\mathbf{P}^{A_{1}}_{\Lambda, \beta}\otimes\ldots\otimes\mathbf{P}^{A_{k}}_{\Lambda,\beta}.\]
When \(A=\{x,y\}\), we will write \(xy\) instead of \(\{x,y\}\) in the above notation.
The first consequence of the switching lemma is the following expression of a ratio of correlation functions in terms of the probability of the occurrence of a certain connectivity event in a system of random currents. One has,
\[\frac{\langle\sigma_{A}\rangle_{\Lambda,\beta}\langle\sigma_{B}\rangle_{ \Lambda,\beta}}{\langle\sigma_{A}\sigma_{B}\rangle_{\Lambda,\beta}}=\mathbf{ P}^{A\Delta B,\emptyset}_{\Lambda,\beta}\left[\mathbf{n}_{1}+\mathbf{n}_{2} \in\mathcal{F}_{B}\right], \tag{4.2}\]
where \(\mathcal{F}_{B}\) was defined in Lemma 4.4. In particular, for \(0,x,u\in\Lambda\),
\[\frac{\langle\sigma_{0}\sigma_{u}\rangle_{\Lambda,\beta}\langle\sigma_{u} \sigma_{x}\rangle_{\Lambda,\beta}}{\langle\sigma_{0}\sigma_{x}\rangle_{\Lambda,\beta}}=\mathbf{P}^{0x,\emptyset}_{\Lambda,\beta}[0\stackrel{{ \mathbf{n}_{1}+\mathbf{n}_{2}}}{{\longleftrightarrow}}u]. \tag{4.3}\]
Later, it will also be interesting to control two-point connectivity probabilities. One can prove (see [1, Proposition A.3]) the following result: for every \(x,u,v\in\Lambda\),
\[\mathbf{P}^{0x,\emptyset}_{\Lambda,\beta}[u,v\stackrel{{ \mathbf{n}_{1}+\mathbf{n}_{2}}}{{\longleftrightarrow}}0]\leq\frac{ \langle\sigma_{0}\sigma_{u}\rangle_{\Lambda,\beta}\langle\sigma_{u}\sigma_{v} \rangle_{\Lambda,\beta}\langle\sigma_{v}\sigma_{x}\rangle_{\Lambda,\beta}}{ \langle\sigma_{0}\sigma_{x}\rangle_{\Lambda,\beta}}\\ +\frac{\langle\sigma_{0}\sigma_{v}\rangle_{\Lambda,\beta}\langle \sigma_{v}\sigma_{u}\rangle_{\Lambda,\beta}\langle\sigma_{u}\sigma_{x}\rangle_{ \Lambda,\beta}}{\langle\sigma_{0}\sigma_{x}\rangle_{\Lambda,\beta}}. \tag{4.4}\]
As proved in [1], the probability measure \(\mathbf{P}^{A}_{\Lambda,\beta}\), for \(A\) a finite (even) subset of \(\mathbb{Z}^{d}\), admits a weak limit as \(\Lambda\nearrow\mathbb{Z}^{d}\) that we denote by \(\mathbf{P}^{A}_{\beta}\). This yields infinite volume versions of the above results.
### The four-point Ursell function
Newman [23] proved that the triviality of the scaling limits of the Ising model is equivalent to the vanishing of the scaling limit of the four-point Ursell function. This result was later quantified by Aizenman [1, Proposition 12.1] who obtained the following bound.
**Proposition 4.6** (Deviation from Wick's law).: _Let \(d\geq 2\) and \(n\geq 2\). For all \(x_{1},\ldots,x_{2n}\in\mathbb{Z}^{d}\),_
\[\left|\langle\sigma_{x_{1}}\ldots\sigma_{x_{2n}}\rangle_{\beta}-\sum_{\pi\text{ pairing of }\{1,\ldots,2n\}}\prod_{j=1}^{n}\langle\sigma_{x_{\pi(2j-1)}}\sigma_{x_{\pi(2j)}}\rangle_{\beta}\right|\\ \leq\frac{3}{2}\sum_{1\leq i<j<k<\ell\leq 2n}|U_{4}^{\beta}(x_{i},x_{j},x_{k},x_{\ell})|\sum_{\begin{subarray}{c}\pi\text{ pairing of}\\ \{1,\ldots,2n\}\setminus\{i,j,k,\ell\}\end{subarray}}\prod_{j=1}^{n-2}\langle\sigma_{x_{\pi(2j-1)}}\sigma_{x_{\pi(2j)}}\rangle_{\beta}.\]
The switching lemma provides a probabilistic interpretation of the four-point Ursell function. This was a key step of the proof of triviality in [1, 1]. Although it was first stated in the case of nearest-neighbour interactions, the proof is valid on any graph and thus remains valid in the case of general interactions.
**Proposition 4.7** (Representation of the four-point Ursell function).: _For \(x,y,z,t\in\mathbb{Z}^{d}\),_
\[U_{4}^{\beta}(x,y,z,t)=-2\langle\sigma_{x}\sigma_{y}\rangle_{\beta}\langle \sigma_{z}\sigma_{t}\rangle_{\beta}\mathbf{P}_{\beta}^{xy,zt}[\mathbf{C}_{ \mathbf{n}_{1}+\mathbf{n}_{2}}(x)\cap\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{ 2}}(z)\neq\emptyset]. \tag{4.5}\]
This identity might seem tricky to analyse due to the lack of independence between \(\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{2}}(x)\) and \(\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{2}}(z)\) but it is possible to show [1] that
\[\mathbf{P}_{\beta}^{xy,zt}[\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{2}}(x)\cap \mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{2}}(z)\neq\emptyset]\leq\mathbf{P}_{ \beta}^{xy,zt,\emptyset,\emptyset}[\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}( x)\cap\mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4}}(z)\neq\emptyset].\]
In particular, if \(\mathcal{I}:=\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(x)\cap\mathbf{C}_{ \mathbf{n}_{2}+\mathbf{n}_{4}}(z)\), this leads to the following bound,
\[|U_{4}^{\beta}(x,y,z,t)|\leq 2\langle\sigma_{x}\sigma_{y}\rangle_{\beta} \langle\sigma_{z}\sigma_{t}\rangle_{\beta}\mathbf{P}_{\beta}^{xy,zt,\emptyset,\emptyset}[|\mathcal{I}|>0]. \tag{4.6}\]
The random current representation allows us to obtain an expression of \(U_{4}^{\beta}\) in terms of the probability of intersection of two independent random currents of prescribed sources. As explained in [1], the relevant question is then to see whether, in the limit \(L(x,y,z,t)\to\infty\), the ratio \(|U_{4}^{\beta}(x,y,z,t)|/\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{t} \rangle_{\beta}\) vanishes or not.
### Heuristic for triviality
We work at \(\beta\leq\beta_{c}\) and assume some regularity on the two-point function in the sense that it takes comparable values for pairs of points at comparable distances smaller than \(L(\beta)\).
Let us first consider the case of the nearest-neighbour Ising model. If we expect the intersection properties of two independent random current clusters to behave essentially like those of two independent random walks in \(\mathbb{Z}^{d}\) conditioned to start and end at \(x,y\) and \(z,t\) respectively, we expect the probability on the right-hand side of (4.6) to be very small in dimension \(d>4\). Following this analogy, Aizenman [1] handled the case \(d>4\) by using a first moment method on \(|\mathcal{I}|\), which yields the so-called _tree diagram bound_,
\[|U_{4}^{\beta}(x,y,z,t)|\leq 2\sum_{u\in\mathbb{Z}^{d}}\langle\sigma_{x}\sigma_{ u}\rangle_{\beta}\langle\sigma_{y}\sigma_{u}\rangle_{\beta}\langle\sigma_{z} \sigma_{u}\rangle_{\beta}\langle\sigma_{t}\sigma_{u}\rangle_{\beta}. \tag{4.7}\]
As discussed in the introduction, (4.7) together with (**IRB**), imply
\[\frac{|U_{4}^{\beta}(x,y,z,t)|}{\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{ t}\rangle_{\beta}}=O(L^{4-d}),\]
where \(L\leq L(\beta)\) is the mutual distance between \(x,y,z\) and \(t\).
In the case of the "marginal" dimension \(d=4\) the above bound yields no interesting result and we need to go one step further in the analysis of (4.6). Going back to the analogy with random walks, it is a well-known result that four is the critical dimension in terms of intersection for the simple random walk, meaning that two independent (simple) random walks with starting and ending points at mutual distance \(L\), will intersect with probability \(O(1/\log L)\) while the expected number of points in the intersection will typically be of order \(\Omega(1)\) (see [10, Chapter 10]). This shows that when two independent (simple) random walks in dimension four intersect, they do so a logarithmic number of times. Transposing this idea in the realm of random currents suggests that the probability that two independent random currents with sources at mutual distance at least \(L\) intersect, but not so many times, should decay as \(O(1/(\log L)^{c})\) for some \(c>0\). This is indeed the result that Aizenman and Duminil-Copin obtained to improve by a logarithmic factor the tree diagram bound.
For general long-range interactions it is possible to extend these ideas. It is well known that long-range step distributions can virtually "increase" the effective dimension of a random walk to the point that some low-dimensional random walks start manifesting the above properties, only observed in dimension \(d\geq 4\) for the simple random walk. This observation is made more explicit by the following computation (see Section 5).
\[\frac{|U_{4}^{\beta}(x,y,z,t)|}{\langle\sigma_{x}\sigma_{y}\sigma_{z}\sigma_{ t}\rangle_{\beta}}=O(L^{4(d/d_{\mathrm{eff}})-d}). \tag{4.8}\]
As a result, models with effective dimension strictly above four are easily shown to be trivial; this is the content of Section 5. The main contribution of this paper is to treat the case \(d_{\mathrm{eff}}=4\), and to show that we can still improve the tree diagram bound there.
## 5. Reflection positive Ising models satisfying \(d_{\mathrm{eff}}>4\)
In this section, we study models of effective dimension \(d_{\mathrm{eff}}>4\) and prove a more general version of Theorem 1.2. As discussed in the introduction, choosing sufficiently slowly decaying interactions might have the effect of increasing the dimension of the model. As a result, we expect to find models in low dimensions which admit trivial scaling limits. These results had already been obtained in [1, 1], although not under this slightly stronger form. We begin with a definition.
**Definition 5.1** (Effective dimension).: _Let \(d\geq 1\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A4)}\). The effective dimension \(d_{\mathrm{eff}}\) of the model is related to the critical exponent of the two-point function. Assume that there exists \(\eta\geq 0\) such that, as \(|x|\to\infty\),_
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta_{c}}\asymp\frac{1}{|x|^{d-2+\eta+o( 1)}}.\]
_The effective dimension \(d_{\mathrm{eff}}\) is then given by_
\[d_{\mathrm{eff}}:=\frac{d}{1-(\eta\wedge 2)/2}.\]
**Remark 5.2**.: _The above formula can be justified using Fourier transform considerations, see [1]. We saw in (3.1) that the squared averages of the spin-wave modes, \(\langle|\widehat{\tau}_{\beta}(p)|^{2}\rangle_{\mathbb{T}_{L},\rho,\beta}\), coincide with the Fourier transform of the two-point function. As it turns out, the relevant quantity to look at is often the density of these spin-wave modes, i.e. \(\mathrm{d}p\), expressed as a function of the excitation level, which is measured by \(\widehat{S}_{\beta}\). For the Gaussian case, one has \(\widehat{S}(p)\asymp p^{-2}\), which leads to a density of levels \(\mathrm{d}\widehat{S}^{-d/2}\). For the case of the Ising model, if we assume that \(\widehat{S}_{\beta_{c}}(p)\asymp p^{-(2-\eta)}\) for some critical exponent \(\eta\), we end up with a density \(\mathrm{d}\widehat{S}_{\beta_{c}}^{-d_{\mathrm{eff}}/2}\) where \(d_{\mathrm{eff}}=d/(1-\eta/2)\)._
Note that for the above definition to make sense, we need the existence of the critical exponent \(\eta\), which is expected (but only known in particular cases). However, we can still bound the effective dimension. Glimm and Jaffe [12] proved that for reflection positive interactions \(\eta<2\), which justifies that \(d_{\mathrm{eff}}<\infty\) in our setup. For reflection positive models, (**IRB**) yields that \(\eta\geq 0\), so that
\[d_{\mathrm{eff}}\geq d.\]
**Remark 5.3** (Algebraically decaying RP interactions).: _In the case of reflection positive models with coupling constants of algebraic decay, i.e. \(J_{x,y}=C|x-y|_{1}^{-d-\alpha}\) for \(\alpha,C>0\), Proposition 3.8 implies that \(\eta\geq|2-\alpha|_{+}\), which yields,_
\[d_{\mathrm{eff}}\geq\frac{d}{1\wedge(\alpha/2)}.\]
_As a consequence, if \(d-2(\alpha\wedge 2)>0\), one has \(d_{\mathrm{eff}}>4\)._
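For instance (a concrete case covered by the remark), in dimension \(d=3\) with \(\alpha=1\),

\[d_{\mathrm{eff}}\geq\frac{3}{1\wedge(1/2)}=6>4,\qquad d-2(\alpha\wedge 2)=3-2=1>0,\]

so such three-dimensional models fall within the scope of the results of this section.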
More generally, with the definition given above, we see that \(d_{\mathrm{eff}}>4\) if
\[d>4(1-\eta/2).\]
In what follows, we assume that \(d_{\mathrm{eff}}>4\), that is, there exists \(\mathbf{C}>0\) and \(\eta\in[0,2)\) such that for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\),
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta_{c}}\leq\frac{\mathbf{C}}{|x|^{d-2+ \eta}}, \tag{5.1}\]
where \(\eta\geq 0\) is such that \(d+2\eta>4\).
**Remark 5.4**.: _Note that the above assumption is automatically satisfied when the interaction \(J\) satisfies_ (**A1**)_-_(**A5**) _and_ (3.7) _with \(d-2(\alpha\wedge 2)>0\)._
**Theorem 5.5**.: _Let \(d\geq 1\). Assume that \(J\) satisfies_ (**A1**)_-_(**A5**) _and_ (5.1)_. There exist \(C=C(\mathbf{C},d),\gamma=\gamma(d)>0\) such that for all \(\beta\leq\beta_{c}\), \(L\geq 1\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\) and \(z\in\mathbb{R}\),_
\[\left|\left\langle\exp\left(zT_{f,L,\beta}(\sigma)\right)\right \rangle_{\beta}-\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\sigma)^{2} \rangle_{\beta}\right)\right|\\ \leq\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}(\sigma)^{2} \rangle_{\beta}\right)\frac{C(\beta^{-4}\vee\beta^{-2})\|f\|_{\infty}^{4}r_{f} ^{\gamma}z^{4}}{L^{d+2\eta-4}}.\]
**Remark 5.6**.: _If we assume that the critical exponent \(\eta\) exists, we see that \(d+2\eta-4=d[1-(4/d_{\mathrm{eff}})]\), so the decay rate above is consistent with the decay of (4.8)._
Proof.: Let \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\), \(\beta\leq\beta_{c}\) and \(z\in\mathbb{C}\). Note that \(r_{f}\geq 1\). Using Proposition 4.6 one gets for \(n\geq 2\) (the inequality being trivial for \(n=0,1\)),
\[\left|\left\langle T_{f,L,\beta}(\sigma)^{2n}\right\rangle_{\beta }-\frac{(2n)!}{2^{n}n!}\left\langle T_{f,L,\beta}(\sigma)^{2}\right\rangle_{ \beta}^{n}\right|\\ \leq\frac{3}{2}(2n)^{4}\left\langle T_{|f|,L,\beta}(\sigma)^{2n -4}\right\rangle_{\beta}\|f\|_{\infty}^{4}S(\beta,L,f), \tag{5.2}\]
where
\[S(\beta,L,f):=\sum_{x_{1},x_{2},x_{3},x_{4}\in\Lambda_{r_{f}L}}\frac{\left|U_{4}^{\beta}(x_{1},x_{2},x_{3},x_{4})\right|}{\Sigma_{L}(\beta)^{2}}.\]
Multiplying by \(\frac{z^{2n}}{(2n)!}\) and summing (5.2) over \(n\), one gets,
\[\left|\langle\exp\left(zT_{f,L,\beta}(\sigma)\right)\rangle_{\beta} -\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{\beta} \right)\right|\\ \leq C_{1}z^{4}\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}( \sigma)^{2}\rangle_{\beta}\right)\|f\|_{\infty}^{4}S(\beta,L,f).\]
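In more detail, here is a sketch of the resummation (using that odd moments of \(T_{f,L,\beta}(\sigma)\) vanish by the \(\sigma\to-\sigma\) symmetry, and Lebowitz' inequality in the form \(\langle T_{|f|,L,\beta}(\sigma)^{2j}\rangle_{\beta}\leq\frac{(2j)!}{2^{j}j!}\langle T_{|f|,L,\beta}(\sigma)^{2}\rangle_{\beta}^{j}\)): the main terms recombine into

\[\sum_{n\geq 0}\frac{z^{2n}}{(2n)!}\cdot\frac{(2n)!}{2^{n}n!}\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{\beta}^{n}=\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\sigma)^{2}\rangle_{\beta}\right),\]

while, since \((2n)^{4}/(2n)!\leq C/(2n-4)!\) for \(n\geq 2\), the error terms sum to

\[\sum_{n\geq 2}\frac{3}{2}\frac{(2n)^{4}z^{2n}}{(2n)!}\langle T_{|f|,L,\beta}(\sigma)^{2n-4}\rangle_{\beta}\leq C_{1}z^{4}\sum_{j\geq 0}\frac{z^{2j}}{(2j)!}\langle T_{|f|,L,\beta}(\sigma)^{2j}\rangle_{\beta}\leq C_{1}z^{4}\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}(\sigma)^{2}\rangle_{\beta}\right).\]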
Applying the tree diagram bound (4.7), we obtain
\[S(\beta,L,f)\leq 2\sum_{\begin{subarray}{c}x\in\mathbb{Z}^{d}\\ x_{1},x_{2},x_{3},x_{4}\in\Lambda_{r_{f}L}\end{subarray}}\frac{\langle\sigma_ {x}\sigma_{x_{1}}\rangle_{\beta}\langle\sigma_{x}\sigma_{x_{2}}\rangle_{\beta }\langle\sigma_{x}\sigma_{x_{3}}\rangle_{\beta}\langle\sigma_{x}\sigma_{x_{4} }\rangle_{\beta}}{\Sigma_{L}(\beta)^{2}}.\]
Splitting the sum above,
\[S(\beta,L,f)/2\leq\underbrace{\sum_{\begin{subarray}{c}x\in\Lambda_{dr_{f}L}\\ x_{1},x_{2},x_{3},x_{4}\in\Lambda_{r_{f}L}\end{subarray}}(\ldots)}_{(1)}+\underbrace{\sum_{\begin{subarray}{c}x\not\in\Lambda_{dr_{f}L}\\ x_{1},x_{2},x_{3},x_{4}\in\Lambda_{r_{f}L}\end{subarray}}(\ldots)}_{(2)}.\]
\(\bullet\)**Bound on (1)**. The first term can be written
\[\sum_{\begin{subarray}{c}x\in\Lambda_{dr_{f}L}\\ x_{1},x_{2},x_{3},x_{4}\in\Lambda_{r_{f}L}\end{subarray}}\frac{\langle\sigma_ {x}\sigma_{x_{1}}\rangle_{\beta}\langle\sigma_{x}\sigma_{x_{2}}\rangle_{\beta }\langle\sigma_{x}\sigma_{x_{3}}\rangle_{\beta}\langle\sigma_{x}\sigma_{x_{4} }\rangle_{\beta}}{\Sigma_{L}(\beta)^{2}}=\sum_{x\in\Lambda_{dr_{f}L}}\frac{ \left(\sum_{y\in\Lambda_{r_{f}L}}\langle\sigma_{x}\sigma_{y}\rangle_{\beta} \right)^{4}}{\Sigma_{L}(\beta)^{2}}.\]
Noticing that for \(x\in\Lambda_{dr_{f}L}\), \(\sum_{y\in\Lambda_{r_{f}L}}\langle\sigma_{x}\sigma_{y}\rangle_{\beta}\leq \chi_{2dr_{f}L}(\beta)\), using Theorem 3.18 to bound \(\chi_{2dr_{f}L}(\beta)\) in terms of \(\chi_{L}(\beta)\), and using that \(\chi_{L}(\beta)\leq C_{2}L^{-d}\Sigma_{L}(\beta)\), we get
\[(1)\leq C_{3}\beta^{-4}r_{f}^{8+d}\frac{\chi_{L}(\beta)^{2}}{L^{d}}.\]
We can then use (5.1) to get the bound \(\chi_{L}(\beta)\leq C_{4}L^{2-\eta}\), so that
\[(1)\leq C_{4}\beta^{-4}r_{f}^{8+d}L^{-(d+2\eta-4)}.\]
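For completeness, the susceptibility bound used above follows by monotonicity in \(\beta\) and summation of (5.1) over the box (recall \(\eta<2\)), writing \(\chi_{L}(\beta)=\sum_{x\in\Lambda_{L}}\langle\sigma_{0}\sigma_{x}\rangle_{\beta}\):

\[\chi_{L}(\beta)\leq\sum_{x\in\Lambda_{L}}\langle\sigma_{0}\sigma_{x}\rangle_{\beta_{c}}\leq 1+\mathbf{C}\sum_{k=1}^{CL}\frac{k^{d-1}}{k^{d-2+\eta}}\leq C_{4}L^{2-\eta}.\]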
\(\bullet\)**Bound on (2)**. Combining (**MMS2**) and the sliding-scale infrared bound of Theorem 3.18, we get that for \(i=1,\ldots,4\),
\[\langle\sigma_{x}\sigma_{x_{i}}\rangle_{\beta}\leq\frac{C_{5}}{|x|^{d}}\chi_ {|x|/d}(\beta)\leq\frac{C_{6}}{\beta L^{2}|x|^{d-2}}\chi_{L}(\beta). \tag{5.3}\]
Bounding the terms indexed by \(x_{1}\) and \(x_{2}\) in the sum using (5.3) and the other two using (5.1), we get
\[(2) \leq C_{7}\beta^{-2}\sum_{x\notin\Lambda_{dr_{f}L}}\sum_{x_{1},\ldots,x_{4}\in\Lambda_{r_{f}L}}\frac{\chi_{L}(\beta)^{2}}{\Sigma_{L}(\beta)^{2}|x|^{ 2d-4}L^{4}}\frac{1}{|x-x_{3}|^{d-2+\eta}}\frac{1}{|x-x_{4}|^{d-2+\eta}}\] \[\leq C_{8}\beta^{-2}r_{f}^{2d+4}L^{2d}\frac{\chi_{L}(\beta)^{2}}{ \Sigma_{L}(\beta)^{2}}\sum_{x\notin\Lambda_{dr_{f}L}}\frac{1}{|x|^{2d-4+2\eta}}\] \[\leq C_{9}\beta^{-2}r_{f}^{8+d-2\eta}L^{-(d+2\eta-4)},\]
where we used the inequality \(|x-x_{i}|\geq(d-1)r_{f}L\) in the second line, and again \(\chi_{L}(\beta)\leq C_{2}L^{-d}\Sigma_{L}(\beta)\) in the third line.
**Remark 5.7**.: _It is possible (see [1] or Section 8) to obtain a version of the tree diagram bound for models in the GS class. In the general setup, it reads: for all \(\beta>0\), for all \(x,y,z,t\in\mathbb{Z}^{d}\),_
\[|U_{4}^{\rho,\beta}(x,y,z,t)|\leq 2\sum_{u,u^{\prime},u^{\prime\prime}\in \mathbb{Z}^{d}}\langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}\beta J_{u,u^{\prime }}\langle\tau_{u^{\prime}}\tau_{y}\rangle_{\rho,\beta}\langle\tau_{z}\tau_{u} \rangle_{\rho,\beta}\beta J_{u,u^{\prime\prime}}\langle\tau_{u^{\prime\prime}} \tau_{t}\rangle_{\rho,\beta}. \tag{5.4}\]
_Using the results of Section 3, together with (5.4), we can easily extend the result of this section to the case of models in the GS class for which the two-point function \(\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta_{c}(\rho)}\) satisfies condition (5.1)._
## 6. Reflection positive Ising models in dimension \(d=4\)
The goal of this section is to prove Theorems 1.3 and 1.5, and Corollary 1.8. The proof of the first result is a direct consequence of the methods developed in the preceding section.
Proof of Theorem 1.3.: Let \(d=4\). Assume that the interaction \(J\) satisfies (**A1**)-(**A5**) and \(\mathfrak{m}_{2}(J)=\infty\). Let \(\beta\leq\beta_{c}.\) Using Corollary 3.10 and adapting the proof of Section 5 to the case \(\langle\sigma_{0}\sigma_{x}\rangle_{\beta_{c}}=o(|x|^{2-d})\), we get (using the above notations) that \(S(\beta,L,f)=o(1)\) as \(L\to\infty\). The explicit rate of convergence to \(0\) is obtained by estimating (see Proposition 3.8)
\[\int_{(-\pi L,\pi L]^{d}}\frac{e^{-\|p\|_{2}^{2}}}{1-\widehat{J}(|p|/L)} \mathrm{d}p,\]
as \(L\to\infty\). For instance, in the case of algebraically decaying RP interactions of the form \(J_{x,y}=C|x-y|_{1}^{-d-2}\) (\(\alpha=2\)), we get a decay of order \(O(1/\log L)\).
We now turn to the proof of Theorem 1.5. In the rest of the section, we fix \(d=4\) and an interaction \(J\) satisfying (**A1**)-(**A6**). Hence, there exist \(\mathbf{C},\varepsilon>0\) such that for all \(k\geq 1\),
\[\sum_{|x|=k}|x|^{2}J_{0,x}\leq\frac{\mathbf{C}}{k^{1+\varepsilon}}. \tag{6.1}\]
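As an illustration (not needed for the proof), algebraically decaying couplings satisfy (6.1) precisely when \(\alpha>2\): with \(J_{0,x}=C|x|_{1}^{-d-\alpha}\),

\[\sum_{|x|=k}|x|^{2}J_{0,x}\asymp k^{d-1}\cdot k^{2}\cdot k^{-d-\alpha}=k^{-(1+(\alpha-2))},\]

so one may take \(\varepsilon=\alpha-2\); finite-range interactions satisfy (6.1) trivially.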
We expect this case to be more subtle than the \(\mathfrak{m}_{2}(J)=\infty\) case since we do not get any improvement on the effective dimension or on the decay of the two-point function. We will use the argument of [1] and adapt it to the long-range setup. The proof follows essentially the same steps: we make up for the lack of precise knowledge of the behaviour of the two-point function at criticality by proving the existence of _regular scales_ on which the two-point function behaves nicely (see Proposition 3.28); then, we introduce a local intersection event which occurs with positive probability; and finally, we obtain a _mixing_ statement (which is of independent interest) which allows us to argue that intersections at different scales are roughly independent events. As a result, we are able to prove that as soon as two independent currents intersect, they intersect a large number of times (see Proposition 6.2). One difficulty occurs in the process: since the model is long-range, the clusters of the sources might make big jumps and skip scales, which could drastically reduce the probability of the intersection event. Furthermore, the infinite-range interactions may be problematic when proving the mixing statement, as they create more correlation between pieces of the current at different scales. The way around this difficulty is to prove that these scale jumps occur with sufficiently small probability: this is the main technical point of this section, and the proof is enabled by the assumption (6.1). As a consequence, we will be able to argue that the sources' clusters have a geometry similar to the one obtained in the nearest-neighbour case. This will be enough to adapt the multi-scale analysis of [1].
### Proof of Theorem 1.5 conditionally on the clustering bound
We will need the following deterministic lemma, which relates the number of points in a set \(\mathcal{S}\subset\mathbb{Z}^{d}\) to the number of concentric annuli of the form \(u+\mathrm{Ann}(u_{k},u_{k+1})\), centred at points \(u\in\mathcal{S}\), that \(\mathcal{S}\) intersects. For any (possibly finite) increasing sequence \(\mathcal{U}=(u_{k})_{k\geq 0}\), any \(u\in\mathbb{Z}^{d}\), and any \(K\geq 0\), define,
\[\mathbf{M}_{u}(\mathcal{S};\mathcal{U},K):=\left|\{0\leq k\leq K,\,\mathcal{S }\cap[u+\mathrm{Ann}(u_{k},u_{k+1})]\neq\emptyset\}\right|.\]
**Lemma 6.1** (Covering Lemma, [1, Lemma 4.2]).: _With the above notations, for any sequence \(\mathcal{U}=(u_{k})_{k\geq 1}\) with \(u_{1}\geq 1\) and \(u_{k+1}\geq 2u_{k}\),_
\[|\mathcal{S}|\geq 2^{\min_{u\in\mathcal{S}}\mathbf{M}_{u}(\mathcal{S};\mathcal{U},K)/5}.\]
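To see that the exponent is of the right order, consider a toy example (not used later, and ignoring boundary conventions for the annuli): in \(d=1\), take \(\mathcal{S}=\{0,1,\ldots,2^{K}\}\) and the dyadic sequence \(u_{k}=2^{k-1}\). Every annulus \(u+\mathrm{Ann}(2^{k-1},2^{k})\) with \(2^{k-1}\leq 2^{K-1}\) contains the point \(u+2^{k-1}\) or \(u-2^{k-1}\) of \(\mathcal{S}\), so that

\[\mathbf{M}_{u}(\mathcal{S};\mathcal{U},K)\geq K-1\quad\text{for every }u\in\mathcal{S},\qquad\text{whence}\qquad|\mathcal{S}|\geq 2^{(K-1)/5},\]

consistent with \(|\mathcal{S}|=2^{K}+1\).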
Fix \(D\) large enough. Define recursively a (possibly finite) sequence \(\mathcal{L}\) of integers \(\ell_{k}=\ell_{k}(\beta,D)\) by the formula: \(\ell_{0}=0\) and
\[\ell_{k+1}=\inf\{\ell\geq\ell_{k},\,B_{\ell}(\beta)\geq DB_{\ell_{k}}(\beta)\}.\]
Note that by (**IRB**), \(B_{L}(\beta)-B_{\ell}(\beta)\leq C_{0}\log(L/\ell)\). From this remark and the definition above one can deduce that

\[D^{k}\leq B_{\ell_{k}}(\beta)\leq CD^{k}, \tag{6.2}\]

for every \(k\) and some large constant \(C\) independent of \(\beta,k\), and \(D\).

Footnote 12: Indeed, the lower bound is immediate and for the upper bound, write for \(k\geq 1\),

\[B_{\ell_{k}-1}(\beta)\leq DB_{\ell_{k-1}}(\beta)\leq DB_{\ell_{k-1}-1}(\beta)-CD\log\left(1-\frac{1}{\ell_{k-1}}\right)\leq D^{k-1}B_{\ell_{1}-1}(\beta)+C\sum_{i=1}^{k-1}\frac{D^{i}}{\ell_{k-i}}\leq C_{1}D^{k}\]

for \(C_{1}\) large enough (independent of \(D\) and \(k\)). Use (**IRB**) once again to write,

\[B_{\ell_{k}}(\beta)\leq B_{\ell_{k}-1}(\beta)+C\log 2\leq C_{1}D^{k}+C\log 2\leq C_{2}D^{k}.\]
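For completeness, the "immediate" lower bound in (6.2) is the chain (assuming, as is standard, the normalisation \(B_{0}(\beta)=\langle\sigma_{0}^{2}\rangle_{\beta}^{2}=1\)):

\[B_{\ell_{k}}(\beta)\geq DB_{\ell_{k-1}}(\beta)\geq\cdots\geq D^{k}B_{\ell_{0}}(\beta)=D^{k}B_{0}(\beta)=D^{k},\]

where each step uses the definition of \(\ell_{j}\) and the fact that \(\ell\mapsto B_{\ell}(\beta)\) is non-decreasing.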
Theorem 1.5 will be a consequence of the following result. Recall that
\[\mathcal{I}=\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(x)\cap\mathbf{C}_{ \mathbf{n}_{2}+\mathbf{n}_{4}}(z).\]
**Proposition 6.2** (Clustering bound).: _For \(D\) large enough, there exists \(\delta=\delta(D)>0\) such that for all \(\beta\leq\beta_{c}\), for all \(K>3\) with \(\ell_{K+1}\leq L(\beta)\), and for all \(u,x,y,z,t\in\mathbb{Z}^{4}\) with mutual distance between \(x,y,z,t\) larger than \(2\ell_{K}\),_
\[\mathbf{P}_{\beta}^{ux,uz,uy,ut}[\mathbf{M}_{u}(\mathcal{I};\mathcal{L},K)< \delta K]\leq 2^{-\delta K}.\]
Let us see why this bound implies the improved tree diagram bound.
Proof of Theorem 1.5.: Choose \(D\) large enough so that Proposition 6.2 holds. Fix \(x,y,z,t\) at mutual distance larger than \(2\ell_{K}\). Using Lemma 6.1 together with the switching lemma (**SL**) we get,
\[\mathbf{P}_{\beta}^{xy,zt,\emptyset,\emptyset}[0<|\mathcal{I}|<2^{\delta K/5} ]\leq\sum_{u\in\mathbb{Z}^{4}}\mathbf{P}_{\beta}^{xy,zt,\emptyset,\emptyset}[ u\in\mathcal{I},\,\mathbf{M}_{u}(\mathcal{I};\mathcal{L},K)<\delta K]\]
\[=\sum_{u\in\mathbb{Z}^{4}}\frac{\langle\sigma_{x}\sigma_{u}\rangle_{\beta} \langle\sigma_{y}\sigma_{u}\rangle_{\beta}\langle\sigma_{z}\sigma_{u}\rangle_{ \beta}\langle\sigma_{t}\sigma_{u}\rangle_{\beta}}{\langle\sigma_{x}\sigma_{y} \rangle_{\beta}\langle\sigma_{z}\sigma_{t}\rangle_{\beta}}\mathbf{P}_{\beta}^{ux,uz,uy,ut}[\mathbf{M}_{u}(\mathcal{I};\mathcal{L},K)<\delta K],\]
so that, with Proposition 6.2,
\[\mathbf{P}_{\beta}^{xy,zt,\emptyset,\emptyset}[0<|\mathcal{I}|<2^{\delta K/5} ]\leq 2^{-\delta K}\sum_{u\in\mathbb{Z}^{4}}\frac{\langle\sigma_{x}\sigma_{u} \rangle_{\beta}\langle\sigma_{y}\sigma_{u}\rangle_{\beta}\langle\sigma_{z}\sigma _{u}\rangle_{\beta}\langle\sigma_{t}\sigma_{u}\rangle_{\beta}}{\langle\sigma_{x} \sigma_{y}\rangle_{\beta}\langle\sigma_{z}\sigma_{t}\rangle_{\beta}}.\]
Moreover,
\[\mathbf{P}^{xy,zt,\emptyset,\emptyset}_{\beta}[|\mathcal{I}|\geq 2^{\delta K/5}] \leq 2^{-\delta K/5}\sum_{u\in\mathbb{Z}^{4}}\frac{\langle\sigma_{x} \sigma_{u}\rangle_{\beta}\langle\sigma_{y}\sigma_{u}\rangle_{\beta}\langle \sigma_{z}\sigma_{u}\rangle_{\beta}\langle\sigma_{t}\sigma_{u}\rangle_{\beta}}{ \langle\sigma_{x}\sigma_{y}\rangle_{\beta}\langle\sigma_{z}\sigma_{t}\rangle_{ \beta}},\]
which implies that
\[|U_{4}^{\beta}(x,y,z,t)|\leq\frac{2}{2^{\delta K/5}}\sum_{u\in \mathbb{Z}^{4}}\langle\sigma_{x}\sigma_{u}\rangle_{\beta}\langle\sigma_{y} \sigma_{u}\rangle_{\beta}\langle\sigma_{z}\sigma_{u}\rangle_{\beta}\langle \sigma_{t}\sigma_{u}\rangle_{\beta}.\]
Now, if \(L:=2\ell_{K}\), observe that \(\ell_{K+1}\geq L\) so that by (6.2), \(B_{L}(\beta)\leq B_{\ell_{K+1}}(\beta)\leq CD^{K+1}\). Hence, we may find \(c>0\) sufficiently small (independent of \(L\) and \(\beta\)), such that \(K\geq c\log B_{L}(\beta)\). This gives the result.
We now turn to the proof of Proposition 6.2. We start by showing that thanks to the hypothesis (**A6**), the interaction decays sufficiently fast so that the "jumps" made by the current are not so problematic. As a byproduct of these estimates, we are able to show that the clusters do not perform "back and forth" between different scales. This property was already a key step in the proof of [1].
### Properties of the current
We will use a first-moment method and argue that the expected number of long edges with nonzero weight under the current measure decays quickly in a certain sense. The following results rely on the existence of regular scales and thus require reflection positivity.
In what follows, we use the notion of backbone. The main properties of this object are gathered in Appendix B.
First, we prove a bound on the probability that an edge is open in the percolation configuration deduced from the current measure. Note that this bound is in fact valid on any graph with any interactions.
**Lemma 6.3** (Bound on open edge probability).: _Let \(d\geq 1\). Let \(\beta>0\). For \(x,y,u,v\in\mathbb{Z}^{d}\), one has_
\[\mathbf{P}^{xy}_{\beta}[\mathbf{n}_{u,v}\geq 1]\leq \mathbf{P}^{xy,\emptyset}_{\beta}[\mathbf{n}_{u,v}\geq 1]\\ \leq\beta J_{u,v}\left(2\langle\sigma_{u}\sigma_{v}\rangle_{\beta }+\frac{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}\langle\sigma_{v}\sigma_{y} \rangle_{\beta}}{\langle\sigma_{x}\sigma_{y}\rangle_{\beta}}+\frac{\langle \sigma_{x}\sigma_{v}\rangle_{\beta}\langle\sigma_{u}\sigma_{y}\rangle_{\beta}}{ \langle\sigma_{x}\sigma_{y}\rangle_{\beta}}\right).\]
Proof.: The first inequality follows from a monotonicity argument. For the second inequality, we write,
\[\mathbf{P}^{xy,\emptyset}_{\beta}[\mathbf{n}_{u,v}\geq 1]\leq \mathbf{P}^{xy}_{\beta}[\mathbf{n}_{u,v}\geq 1]+\mathbf{P}^{\emptyset}_{ \beta}[\mathbf{n}_{u,v}\geq 1].\]
Then, we observe that \(\mathbf{P}^{\emptyset}_{\beta}[\mathbf{n}_{u,v}\geq 1]\leq\beta J_{u,v} \langle\sigma_{u}\sigma_{v}\rangle_{\beta}\), which leads to
\[\mathbf{P}^{xy}_{\beta}[\mathbf{n}_{u,v}\geq 1]\leq\beta J_{u,v} \frac{\langle\sigma_{x}\sigma_{y}\sigma_{u}\sigma_{v}\rangle_{\beta}}{ \langle\sigma_{x}\sigma_{y}\rangle_{\beta}}\\ \leq\beta J_{u,v}\left(\langle\sigma_{u}\sigma_{v}\rangle_{\beta }+\frac{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}\langle\sigma_{v}\sigma_{y} \rangle_{\beta}}{\langle\sigma_{x}\sigma_{y}\rangle_{\beta}}+\frac{\langle \sigma_{x}\sigma_{v}\rangle_{\beta}\langle\sigma_{u}\sigma_{y}\rangle_{\beta}}{ \langle\sigma_{x}\sigma_{y}\rangle_{\beta}}\right),\]
where we used Lebowitz' inequality [1] to get \(U_{4}^{\beta}(x,y,u,v)\leq 0\) (see also Proposition 4.7).
We now introduce the event we will be interested in. It is illustrated in Figure 3 below.
**Definition 6.4** (Jump event).: _Let \(1\leq k\leq m\). We define \(\mathsf{Jump}(k,m)\) to be the event that there exist \(u\in\Lambda_{k}\) and \(v\notin\Lambda_{m}\) such that \(\mathbf{n}_{u,v}\geq 1\)._
We now prove that if we consider a current with two sources, and an annulus located between them, but "far away" from each of them, then with high probability the current does not "jump over it". For convenience, we fix one of these sources to be the origin. Recall that \(\varepsilon>0\) is given by (6.1).
**Lemma 6.5** (Jumping a scale is unlikely).: _Let \(d=4\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A6})\). Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(y\in\mathbb{Z}^{d}\) in a regular scale with \(1\leq|y|\leq L(\beta)\), and for all \(k\geq 1\) such that \(k^{1+4/\varepsilon}\leq|y|\),_
\[\mathbf{P}_{\beta}^{0y,\emptyset}\left[\mathsf{Jump}(k,k+k^{\nu})\right]\leq \frac{C}{k^{\eta}}.\]
**Remark 6.6**.: _If one takes \(|y|\geq\ell_{K+1}\), this lemma ensures that the current visits the annuli \(\operatorname{Ann}(\ell_{k},\ell_{k+1})\) for \(1\leq k\leq K\) with high probability. This property will be crucial in the proof of Proposition 6.2._
Proof.: In what follows \(C=C(d)>0\) may change from line to line. It is sufficient to bound,
\[\sum_{u\in\Lambda_{k},\,v\notin\Lambda_{k+k^{\nu}}}\mathbf{P}_{\beta}^{0y, \emptyset}[\mathbf{n}_{u,v}\geq 1].\]
Figure 3. A realisation of the event \(\mathsf{Jump}(k,m)\) for a current \(\mathbf{n}\) with source set \(\partial\mathbf{n}=\{0,y\}\). The bold black path represents the backbone \(\Gamma(\mathbf{n})\). The dashed curves represent long open edges that jump over the annulus \(\operatorname{Ann}(k,m)\).
Using Lemma 6.3, we have (see Figure 4),
\[\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}}\mathbf{P}_{\beta}^ {0y,\emptyset}[\mathbf{n}_{u,v} \geq 1]\leq 2\beta\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}}J _{u,v}\Bigg{(}\langle\sigma_{u}\sigma_{v}\rangle_{\beta}\] \[+\frac{\langle\sigma_{0}\sigma_{u}\rangle_{\beta}\langle\sigma_{v }\sigma_{y}\rangle_{\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}+\frac{ \langle\sigma_{0}\sigma_{v}\rangle_{\beta}\langle\sigma_{u}\sigma_{y}\rangle_{ \beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\Bigg{)}=:A_{1}+A_{2}+A_{3}. \tag{6.3}\]
\(\bullet\)**Bound on \(A_{1}.\)** Write
\[\beta\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}}J_{u,v} \langle\sigma_{u}\sigma_{v}\rangle_{\beta} \leq C\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}}\frac{|v-u |^{2}J_{0,v-u}}{|v-u|^{d}}\] \[\leq Ck^{d}\sum_{v\notin\Lambda_{k^{\nu}}}\frac{|v|^{2}J_{0,v}}{|v|^{ d}}\] \[\leq Ck^{d(1-\nu)}\sum_{v\notin\Lambda_{k^{\nu}}}|v|^{2}J_{0,v}\leq Ck ^{d(1-\nu)-\nu\varepsilon},\]
where we used (**IRB**) on the first line, translation invariance on the second line, and (6.1) on the last line.
\(\bullet\)**Bound on \(A_{2}.\)** For the second term, by the lower bound on the two-point function of Proposition 3.23 (together with the assumption that \(1\leq|y|\leq L(\beta)\)) one has \(\langle\sigma_{0}\sigma_{y}\rangle_{\beta}^{-1}\leq C\beta|y|^{d-1}\). Using (**IRB**) and the translation invariance of \(J\),
\[\beta\sum_{u\in\Lambda_{k},\;v\in\Lambda_{|y|/2}(y)}J_{u,v}\frac{ \langle\sigma_{0}\sigma_{u}\rangle_{\beta}\langle\sigma_{v}\sigma_{y}\rangle_{ \beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}} \leq C\beta^{2}|y|^{d-1}\sum_{\begin{subarray}{c}u\in\Lambda_{k}\\ v\in\Lambda_{|y|/2}(y)\end{subarray}}J_{0,v-u}\langle\sigma_{0}\sigma_{u} \rangle_{\beta}\langle\sigma_{v}\sigma_{y}\rangle_{\beta}\] \[\leq Ck^{2}|y|^{d-1}\sum_{v\in\Lambda_{2|y|/3}(y)}\frac{J_{0,v}}{(|v- y|+1)^{d-2}}\] \[\leq Ck^{2}|y|^{d-1}\sum_{p=1}^{2|y|/3}\frac{1}{p^{d-2}}\sum_{v\in \Lambda_{2|y|/3}(y)}J_{0,v}\mathbbm{1}_{|y-v|=p}.\]
Now, using (6.1) (see Footnote 13),

\[\sum_{v\in\Lambda_{2|y|/3}(y)}J_{0,v}\mathbbm{1}_{|y-v|=p}\leq Cp|y|^{-(3+\varepsilon)}.\]

Footnote 13: Note that this bound is very sub-optimal in some cases but is enough for our purpose. For instance, assuming more regularity on the interaction, we could replace (6.1) by a bound of the form \(J_{0,x}\leq C|x|^{-d-2-\varepsilon}\) for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\). In that case, we would have \(J_{0,v}\lesssim|y|^{-d-2-\varepsilon}\) for \(v\in\Lambda_{2|y|/3}(y)\), which leads to a bound on the first contribution to \(A_{2}\) of order \(k^{2}/|y|^{1+\varepsilon}\) for any \(d\geq 4\).
Using the assumption that \(|y|\geq k^{1+4/\varepsilon}\),
\[\sum_{u\in\Lambda_{k},\;v\in\Lambda_{|y|/2}(y)}J_{u,v}\frac{\langle\sigma_{0} \sigma_{u}\rangle_{\beta}\langle\sigma_{v}\sigma_{y}\rangle_{\beta}}{\langle \sigma_{0}\sigma_{y}\rangle_{\beta}}\leq\frac{Ck^{2}\log|y|}{|y|^{\varepsilon} }\leq\frac{Ck^{2}}{|y|^{\varepsilon/2}}\leq\frac{C}{k^{\varepsilon/2}}.\]
Finally, in the case \(v\notin\Lambda_{|y|/2}(y)\cup\Lambda_{k+k^{\nu}}\), we may use (**P1**) (for \(v\in\Lambda_{4|y|}(y)\setminus(\Lambda_{|y|/2}(y)\cup\Lambda_{k+k^{\nu}})\)), as well as (**MMS2**) (for \(v\notin\Lambda_{4|y|}(y)\)), to show that \(\langle\sigma_{v}\sigma_{y}\rangle_{\beta}\leq C_{0}\langle\sigma_{0}\sigma_{ y}\rangle_{\beta}\), so
that,
\[\beta\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{|y|/2}(y)\cup \Lambda_{k+k^{\nu}}}J_{u,v}\frac{\langle\sigma_{0}\sigma_{u}\rangle_{\beta} \langle\sigma_{v}\sigma_{y}\rangle_{\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_ {\beta}} \leq C_{0}\beta\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{|y|/2}(y)\cup \Lambda_{k+k^{\nu}}}J_{u,v}\langle\sigma_{0}\sigma_{u}\rangle_{\beta}\] \[\leq Ck^{2(1-\nu)}\sum_{v\notin\Lambda_{k^{\nu}}}|v|^{2}J_{0,v}\leq Ck ^{2(1-\nu)-\nu\varepsilon}.\]
\(\bullet\)**Bound on \(A_{3}.\)** Since \(|y|\geq k^{1+4/\varepsilon}\), one has for \(u\in\Lambda_{k}\), \(\langle\sigma_{u}\sigma_{y}\rangle_{\beta}\leq C_{0}\langle\sigma_{0}\sigma_{ y}\rangle_{\beta}\) by the property (**P1**) of regular scales. Using this remark, together with (**IRB**) and (6.1), yields,
\[A_{3} \leq 2\beta C_{0}\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}} J_{u,v}\langle\sigma_{0}\sigma_{v}\rangle_{\beta}\] \[\leq C\sum_{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}}\frac{|v-u| ^{2}J_{0,v-u}}{|v-u|^{2}}\frac{1}{|v|^{d-2}}\] \[\leq \frac{C}{k^{d}}\sup_{\begin{subarray}{c}u\in\Lambda_{k}\\ v\notin\Lambda_{k+k^{\nu}}\end{subarray}}\left(\frac{|v|}{|u-v|}\right)^{2}\sum _{u\in\Lambda_{k},\;v\notin\Lambda_{k+k^{\nu}}}|u-v|^{2}J_{0,v-u}.\] \[\leq Ck^{2(1-\nu)}\sum_{v\notin\Lambda_{k^{\nu}}}|v|^{2}J_{0,v}\leq Ck ^{2(1-\nu)-\nu\varepsilon}.\]
We then obtain the result taking \(\eta\) sufficiently small, and \(C\) large enough.
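Indeed, the constraint on \(\nu\) makes every exponent appearing above negative:

\[\nu>\frac{d}{d+\varepsilon}\quad\Longleftrightarrow\quad d(1-\nu)-\nu\varepsilon<0,\]

and since \(d=4\), the bounds on \(A_{1},A_{2},A_{3}\) are all \(O(k^{-\eta})\) with, say, \(\eta=\min\{\nu\varepsilon-d(1-\nu),\varepsilon/2\}>0\).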
**Remark 6.7**.: _We could prove a similar result with the assumption_ (**A6**) _replaced by_ (**A6\({}^{\prime}\)**) _as mentioned in the introduction. The only difference is that we now need to consider the event \(\mathsf{Jump}(k,k+k/(\log k)^{1-\nu})\) for \(\nu\in(0,1)\) sufficiently close to \(1\). Using the exact same method as above we obtain \(\eta,C>0\) such that, when \(|y|\geq\exp(k^{1+4/\varepsilon})\),_
\[\mathbf{P}_{\beta}^{0y,\emptyset}[\mathsf{Jump}(k,k+k/(\log k)^{1-\nu})]\leq \frac{C}{(\log k)^{\eta}}.\]
_Similar modifications can be applied below._
Figure 4. A graphical representation of the bound (6.3). We represented each potential contribution to \(A_{1},A_{2},A_{3}\) (from left to right). The backbone is the bold path joining \(0\) and \(y\). Long open edges are the dashed curves. The largest contribution should come from \(A_{2}\) in which long edges are induced by the backbone.
As a corollary of the above result, we can show that the probability that a backbone does a _zigzag_ between two distant scales is very small. This will be very useful later to argue that intersection events are (essentially) local.
**Definition 6.8** (Zigzag event).: _For \(1\leq k\leq\ell\leq M\) and \(u,v\in\mathbb{Z}^{d}\), let \(\mathsf{ZZ}(u,v;k,\ell,M)\) be the event that the backbone of \(\mathbf{n}\) (with \(\partial\mathbf{n}=\{u,v\}\)) goes from \(u\) to a point in \(\operatorname{Ann}(\ell,M)\), then to a point in \(\Lambda_{k}\), before finally hitting \(v\). We let \(\mathsf{ZZ}(u,v;k,\ell,\infty)\) be the union of all \(\mathsf{ZZ}(u,v;k,\ell,M)\) for \(M\geq\ell\)._
**Corollary 6.9** (No zigzag for the backbone).: _Let \(d=4\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A6})\). Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(k,\ell\geq 1\) and \(y\in\mathbb{Z}^{d}\) in a regular scale with \(k^{8/(1-\nu)}\leq\ell\) and \(\ell^{1+4/\varepsilon}\leq|y|\),_
\[\mathbf{P}_{\beta}^{0y}[\mathsf{ZZ}(0,y;k,\ell,\infty)]\leq\frac{C}{\ell^{ \eta}}.\]
Proof.: Notice that,
\[\mathsf{ZZ}(0,y;k,\ell,\infty)\subset\mathsf{ZZ}(0,y;k,\ell,\ell+\ell^{\nu}) \cup\mathsf{Jump}(\ell,\ell+\ell^{\nu}).\]
The chain rule for backbones (see Appendix B), the assumption of regularity made on \(y\), as well as (**IRB**), yield
\[\mathbf{P}_{\beta}^{0y}[\mathsf{ZZ}(0,y;k,\ell,\ell+\ell^{\nu})] \leq \sum_{\begin{subarray}{c}v\in\operatorname{Ann}(\ell,\ell+\ell^ {\nu})\\ w\in\Lambda_{k}\end{subarray}}\frac{\langle\sigma_{0}\sigma_{v}\rangle_{\beta} \langle\sigma_{v}\sigma_{w}\rangle_{\beta}\langle\sigma_{w}\sigma_{y}\rangle_{ \beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\] \[\leq C\frac{k^{4}\ell^{3+\nu}}{\ell^{4}}\leq\frac{C}{\ell^{(1-\nu)/ 2}}.\]
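The last inequality is just the assumption \(k^{8/(1-\nu)}\leq\ell\) rewritten:

\[k^{4}\leq\ell^{(1-\nu)/2}\quad\Longrightarrow\quad\frac{k^{4}\ell^{3+\nu}}{\ell^{4}}=k^{4}\ell^{\nu-1}\leq\ell^{\frac{1-\nu}{2}+\nu-1}=\ell^{-(1-\nu)/2}.\]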
We conclude using Lemma 6.5.
**Remark 6.10**.: _Note that in the above result we heavily relied on the fact that the current cannot jump over an annulus of dimension strictly smaller than four. We will see that Lemma 6.5 does not hold anymore in the case \(d_{\mathrm{eff}}=4\) and \(1\leq d\leq 3\), which makes the study of the zigzag event more complicated._
This second technical result is a small modification of Lemma 6.5 but will be crucial in the proof of the mixing. It is a little easier since the sources are now located close to the origin: there is no "long backbone" which might help to jump scales. Note that this lemma does not rely on the existence of regular scales.
**Lemma 6.11**.: _Let \(d=4\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A6})\). Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(n<m\leq M\leq k\) with \(1\leq M^{3/2}\leq k\leq L(\beta)\), for all \(x\in\Lambda_{n}\) and all \(u\in\operatorname{Ann}(m,M)\),_
\[\mathbf{P}_{\beta}^{xu,\emptyset}[\mathsf{Jump}(k,k+k^{\nu})]\leq\frac{C}{k^{ \eta}}.\]
Proof.: We follow the same steps as in the proof of Lemma 6.5. Using Lemma 6.3,
\[\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k+k^{\nu}}}\mathbf{P}_{ \beta}^{xu,\emptyset}[\mathbf{n}_{w,v}\geq 1]\\ \leq 2\beta\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k+k^{\nu}}}J_ {w,v}\left(\langle\sigma_{w}\sigma_{v}\rangle_{\beta}+\frac{\langle\sigma_{x} \sigma_{w}\rangle_{\beta}\langle\sigma_{v}\sigma_{u}\rangle_{\beta}}{\langle \sigma_{x}\sigma_{u}\rangle_{\beta}}+\frac{\langle\sigma_{x}\sigma_{v}\rangle _{\beta}\langle\sigma_{w}\sigma_{u}\rangle_{\beta}}{\langle\sigma_{x}\sigma_{u }\rangle_{\beta}}\right).\]
As for the bound of \(A_{1}\) above,
\[\beta\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k+k^{\nu}}}J_{w,v}\langle\sigma_{ w}\sigma_{v}\rangle_{\beta}\leq C_{1}k^{d(1-\nu)-\nu\varepsilon}.\]
Using the lower bound of Proposition 3.23 (which is licit because \(1\leq|x-u|\leq L(\beta)\)) together with (**IRB**) and (6.1), we get
\[\beta\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k+k^{\nu}}}J_{w,v}\frac {\langle\sigma_{x}\sigma_{v}\rangle_{\beta}\langle\sigma_{w}\sigma_{u}\rangle_ {\beta}}{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}} \leq \beta^{2}C_{2}M^{d-1}\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k+k^ {\nu}}}J_{w,v}\langle\sigma_{x}\sigma_{v}\rangle_{\beta}\langle\sigma_{w} \sigma_{u}\rangle_{\beta}\] \[\leq C_{3}M^{d-1}k^{2}k^{-(d-2)}\sum_{v\notin\Lambda_{k^{\nu}}}J_{0,v}\] \[\leq C_{4}M^{d-1}k^{-\nu(2+\varepsilon)}.\]
Finally, with the same reasoning, we also get
\[\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k+k^{\nu}}}J_{w,v}\frac{\langle\sigma_ {x}\sigma_{w}\rangle_{\beta}\langle\sigma_{v}\sigma_{u}\rangle_{\beta}}{ \langle\sigma_{x}\sigma_{u}\rangle_{\beta}}\leq C_{5}M^{d-1}k^{-\nu(2+ \varepsilon)}.\]
The assumption made on \(\nu\) and the inequality \(M^{3}\leq k^{2}\) yield the result.
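Explicitly: since \(M^{3/2}\leq k\), one has \(M^{d-1}=M^{3}\leq k^{2}\), and

\[M^{d-1}k^{-\nu(2+\varepsilon)}\leq k^{2-\nu(2+\varepsilon)},\qquad 2-\nu(2+\varepsilon)<0,\]

because \(\nu>\frac{4}{4+\varepsilon}>\frac{2}{2+\varepsilon}\) (cross-multiplying, \(8+4\varepsilon>8+2\varepsilon\)).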
Similarly, we can rule out the zigzag of the backbone in this setup.
**Corollary 6.12**.: _Let \(d=4\). Assume that \(J\) satisfies_ (**A1**)_-_(**A6**)_. Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(n<m\leq M\leq k\) with \(1\leq M^{6/(1-\nu)}\leq k\leq L(\beta)\), for all \(x\in\Lambda_{n}\) and all \(u\in\operatorname{Ann}(m,M)\),_
\[\mathbf{P}_{\beta}^{xu}[\mathsf{ZZ}(x,u;M,k,\infty)]\leq\frac{C}{k^{\eta}}.\]
Proof.: We repeat the argument used to prove Corollary 6.9, except that the proof is now easier since \(u\) already lies in \(\Lambda_{M}\). Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). One has,
\[\mathsf{ZZ}(x,u;M,k,\infty)\subset\mathsf{ZZ}(x,u;M,k,k+k^{\nu})\cup\mathsf{Jump}(k,k+k^{\nu}).\]
The chain rule for backbones, the lower bound of Proposition 3.23, and (**IRB**), yield
\[\mathbf{P}_{\beta}^{xu}[\mathsf{ZZ}(x,u;M,k,k+k^{\nu})] \leq \sum_{v\in\operatorname{Ann}(k,k+k^{\nu})}\frac{\langle\sigma_{x}\sigma_{v}\rangle_{\beta}\langle\sigma_{v}\sigma_{u}\rangle_{\beta}}{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}}\leq C\frac{M^{d-1}k^{3+\nu}}{k^{4}}\leq\frac{C}{k^{(1-\nu)/2}}.\]
We conclude using Lemma 6.11.
This final technical lemma will be useful to argue that, for a current \(\mathbf{n}\) with \(\partial\mathbf{n}=\{x,y\}\), the restriction of \(\mathbf{n}\) to \((\overline{\Gamma(\mathbf{n})})^{c}\) (denoted by \(\mathbf{n}\setminus\overline{\Gamma(\mathbf{n})}\) below, where \(\Gamma(\mathbf{n})\) is the backbone of \(\mathbf{n}\) and \(\overline{\Gamma(\mathbf{n})}\) is the set of edges revealed during the exploration of \(\Gamma(\mathbf{n})\)) essentially behaves like a sourceless current on a smaller graph. In that case, jump events should become even more unlikely since there is no backbone to create long connections anymore.
If \(\mathbf{n}\) is a current, and \(E\) is a set of edges, we let \(\mathbf{n}_{E}\) be the restriction of \(\mathbf{n}\) to the edges in \(E\).
**Lemma 6.13**.: _Let \(d=4\). Assume that \(J\) satisfies_ (**A1**)_-_(**A6**)_. Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(k\geq 1\), for all \(x,y\in\mathbb{Z}^{d}\),_
\[\mathbf{P}_{\beta}^{xy}[\mathbf{n}\setminus\overline{\Gamma(\mathbf{n})}\in \mathsf{Jump}(k,k+k^{\nu})]\leq\mathbf{P}_{\beta}^{xy,\emptyset}[(\mathbf{n}_{ 1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Jump}(k, k+k^{\nu})]\leq\frac{C}{k^{\eta}},\]
_where by \((\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\) we mean the restriction of \((\mathbf{n}_{1}+\mathbf{n}_{2})\) to the graph depleted of the edges belonging to \(\overline{\Gamma(\mathbf{n}_{1})}\)._
Proof.: The first inequality follows by monotonicity. Write \(\mathcal{A}:=\{(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Jump}(k,k+k^{\nu})\}\) and, for a consistent path \(\gamma:x\to y\) (see Appendix B for a definition), \(\mathcal{A}_{\gamma}:=\{(\mathbf{n}_{1}+\mathbf{n}_{2})_{\overline{\gamma}^{c}}\in\mathsf{Jump}(k,k+k^{\nu})\}\). The idea is to condition on the backbone of \(\mathbf{n}_{1}\). Going to partition functions (see Footnote 14), one has \(\mathbf{P}_{\beta}^{xy,\emptyset}[\mathcal{A}]=Z_{\beta}^{xy,\emptyset}[\mathcal{A}]/Z_{\beta}^{xy,\emptyset}\) where,
Footnote 14: One would need to restrict to a finite subset \(\Lambda\) of \(\mathbb{Z}^{d}\) first, and then take the limit \(\Lambda\to\mathbb{Z}^{d}\). We omit this detail here.
\[Z_{\beta}^{xy,\emptyset}[\mathcal{A}] := \sum_{\gamma:x\to y\text{ consistent}}\sum_{\begin{subarray}{c}\partial\mathbf{n}_{1}=\{x,y\}\\ \partial\mathbf{n}_{2}=\emptyset\end{subarray}}w_{\beta}(\mathbf{n}_{1})w_{\beta}(\mathbf{n}_{2})\mathbbm{1}_{\Gamma(\mathbf{n}_{1})=\gamma}\mathbbm{1}_{\mathcal{A}}\\ = \sum_{\gamma}\sum_{\begin{subarray}{c}\partial(\mathbf{n}_{1})_{\overline{\gamma}}=\{x,y\}\\ \partial(\mathbf{n}_{1})_{\overline{\gamma}^{c}}=\emptyset\end{subarray}}w_{\beta}((\mathbf{n}_{1})_{\overline{\gamma}})w_{\beta}((\mathbf{n}_{1})_{\overline{\gamma}^{c}})w_{\beta}(\mathbf{n}_{2})\mathbbm{1}_{\Gamma((\mathbf{n}_{1})_{\overline{\gamma}})=\gamma}\mathbbm{1}_{\mathcal{A}_{\gamma}}\\ = \sum_{\gamma}Z_{\overline{\gamma},\beta}^{xy}[\Gamma(\mathbf{n}_{\overline{\gamma}})=\gamma]\,Z_{\overline{\gamma}^{c},\mathbb{Z}^{d},\beta}^{\emptyset,\emptyset}[\mathcal{A}_{\gamma}]\\ = \sum_{\gamma}Z_{\beta}^{xy,\emptyset}[\Gamma(\mathbf{n}_{1})=\gamma]\,\mathbf{P}_{\overline{\gamma}^{c},\mathbb{Z}^{d},\beta}^{\emptyset,\emptyset}[\mathcal{A}_{\gamma}].\]
Using Lemma 6.3 as well as Griffiths' inequality, for any \(\gamma\) as above,
\[\mathbf{P}_{\overline{\gamma}^{c},\mathbb{Z}^{d},\beta}^{\emptyset,\emptyset}[\mathcal{A}_{\gamma}] \leq \sum_{\begin{subarray}{c}u\in\Lambda_{k}\\ v\notin\Lambda_{k+k^{\nu}}\end{subarray}}2\beta J_{u,v}\langle\sigma_{u}\sigma_{v}\rangle_{\beta}\leq Ck^{d(1-\nu)-\nu\varepsilon},\]
where the last inequality was obtained in the proof of Lemma 6.5. Hence,
\[Z_{\beta}^{xy,\emptyset}[\mathcal{A}]\leq Ck^{d(1-\nu)-\nu\varepsilon}Z_{\beta }^{xy,\emptyset},\]
which yields the result since \(\nu>\frac{d}{d+\varepsilon}\).
As before, having a control of the jump probability over an annulus of dimension \(<4\) (namely \(3+\nu\) for \(\nu\in(0,1)\)) has consequences on the geometry of the current. In the context of Lemma 6.13, it prevents \((\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\) from connecting two (sufficiently) distant scales. We begin with a definition.
**Definition 6.14** (Crossing event).: _For \(1\leq k\leq\ell\) and \(\mathbf{n}\) a current, we say that \(\mathbf{n}\) realises the event \(\mathsf{Cross}(k,\ell)\) if \(\mathbf{n}\) "crosses" \(\mathrm{Ann}(k,\ell)\), in the sense that there exists a cluster of \(\mathbf{n}\) containing both a point in \(\Lambda_{k}\) and a point in \(\Lambda_{\ell}^{c}\)._
**Corollary 6.15**.: _Let \(d=4\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A6})\). Let \(\nu\in(0,1)\) be such that \(\nu>\frac{d}{d+\varepsilon}\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(k,\ell\geq 1\) with \(k^{8/(1-\nu)}\leq\ell\), for all \(x,u\in\mathbb{Z}^{d}\),_
\[\mathbf{P}_{\beta}^{xu}[\mathbf{n}\setminus\overline{\Gamma(\mathbf{n})}\in \mathsf{Cross}(k,\ell)]\leq\mathbf{P}_{\beta}^{xu,\emptyset}[(\mathbf{n}_{1}+ \mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Cross}(k, \ell)]\leq\frac{C}{\ell^{\eta}}.\]
Proof.: The first inequality follows by monotonicity. Notice that,
\[\left\{(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma( \mathbf{n}_{1})}\in\mathsf{Cross}(k,\ell)\right\}\subset\bigcup_{v\in\Lambda_{k },\,w\in\mathrm{Ann}(\ell,\ell+\ell^{\nu})}\left\{v\longleftrightarrow w\text{ in }(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\right\} \\ \cup\left\{(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma (\mathbf{n}_{1})}\in\mathsf{Jump}(\ell,\ell+\ell^{\nu})\right\}.\]
The second event on the right-hand side above is handled using Lemma 6.13.
To handle the first event, we use the fact that the probability that \(v\) and \(w\) are connected in \((\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\) can be bounded by \(\langle\sigma_{v}\sigma_{w}\rangle_{\beta}^{2}\). Indeed, this result follows from a generalisation of the switching lemma that can be found in [1, Lemma 2.2]. Proceeding
as in the proof of Lemma 6.13, we get
\[\mathbf{P}_{\beta}^{xu,\emptyset}[v\longleftrightarrow w\text{ in }(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}]\\ \leq\sum_{\gamma:x\to u\text{ consistent}}\mathbf{P}_{\beta}^{xu}[\Gamma(\mathbf{n})=\gamma]\mathbf{P}_{\overline{\gamma}^{c},\mathbb{Z}^{d},\beta}^{\emptyset,\emptyset}[v\longleftrightarrow w\text{ in }(\mathbf{m}_{1}+\mathbf{m}_{2})\setminus\overline{\gamma}].\]
The above-mentioned generalisation of the switching lemma, together with Griffiths' inequality, yield
\[\mathbf{P}_{\overline{\gamma}^{c},\mathbb{Z}^{d},\beta}^{\emptyset,\emptyset}[v \longleftrightarrow w\text{ in }(\mathbf{m}_{1}+\mathbf{m}_{2})\setminus \overline{\gamma}]=\langle\sigma_{v}\sigma_{w}\rangle_{\overline{\gamma}^{c}, \beta}\langle\sigma_{v}\sigma_{w}\rangle_{\beta}\leq\langle\sigma_{v}\sigma_{ w}\rangle_{\beta}^{2}.\]
As a result,
\[\mathbf{P}_{\beta}^{xu,\emptyset}\left[\bigcup_{v\in\Lambda_{k},\, w\in\operatorname{Ann}(\ell,\ell+\ell^{\nu})}\Big{\{}v\longleftrightarrow w \text{ in }(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})} \Big{\}}\right] \leq\sum_{\begin{subarray}{c}v\in\Lambda_{k}\\ w\in\operatorname{Ann}(\ell,\ell+\ell^{\nu})\end{subarray}}\langle\sigma_{v} \sigma_{w}\rangle_{\beta}^{2}\] \[\leq C_{1}\frac{k^{4}}{\ell^{1-\nu}}\leq\frac{C_{1}}{\ell^{(1-\nu)/2}}.\]
This concludes the proof.
### Proof of the intersection property
Let \(d=4\). Assume that \(J\) satisfies (**A1**)-(**A6**). Let \(\beta\leq\beta_{c}\). Recall the definition of \(\mathcal{L}=\mathcal{L}(\beta,D)\) given at the beginning of the section: \(\ell_{0}=0\) and
\[\ell_{k+1}=\inf\{\ell\geq\ell_{k},\,B_{\ell}(\beta)\geq DB_{\ell_{k}}(\beta)\}. \tag{6.4}\]
The existence of regular scales and the sliding-scale infrared bound have the following interesting consequence on how fast the bubble diagram grows from one scale to the next; it can be seen as an improvement over the bound \(B_{L}(\beta)-B_{\ell}(\beta)\leq C_{0}\log(L/\ell)\).
**Lemma 6.16** (Scale to scale comparison of the bubble diagram, [1, Lemma 6.3]).: _Let \(d=4\). There exists \(C=C(d)>0\) such that for every \(\beta\leq\beta_{c}\), and for every \(1\leq\ell\leq L\leq L(\beta)\),_
\[B_{L}(\beta)\leq\left(1+C\frac{\log_{2}(L/\ell)}{\log_{2}(\ell)}\right)B_{ \ell}(\beta).\]
Proof.: If \(N\geq n\) and \(n\) is a weak regular scale, we may write,
\[B_{2N}(\beta)-B_{N}(\beta)\leq C_{1}N^{-4}\chi_{N/d}(\beta)^{2} \leq C_{2}\cdot n^{-4}\chi_{n}(\beta)^{2}\leq C_{3}\cdot n^{-4}\left(\chi_{2n} (\beta)-\chi_{n}(\beta)\right)^{2}\\ \leq C_{4}\left(B_{2n}(\beta)-B_{n}(\beta)\right),\]
where we successively used (**MMS2**), the sliding-scale infrared bound, the property (**P3**) of regular scales, and Cauchy-Schwarz inequality. There are \(\log_{2}(L/\ell)\) scales between \(\ell\) and \(L\), and at least \(c\log_{2}(\ell)\) regular scales between \(1\) and \(\ell\). Using the above computation,
\[B_{L}(\beta)-B_{\ell}(\beta)=\sum_{N\text{ scale between }\ell\text{ and }L/2}B_{2N}(\beta)-B_{N}(\beta)\\ \leq\frac{\log_{2}(L/\ell)}{c\log_{2}(\ell)}\sum_{\begin{subarray}{c}n\text{ regular scale}\\ \text{between }1\text{ and }\ell\end{subarray}}B_{2n}(\beta)-B_{n}(\beta)\leq\frac{\log_{2}(L/\ell)}{c\log_{2}(\ell)}B_{\ell}(\beta).\]
The above property has the following important consequence which ensures that the scales explode sufficiently fast. This will be used later to make sure there is "enough room" in the annuli \(\operatorname{Ann}(\ell_{k},\ell_{k+1})\).
**Remark 6.17** (Growth of \(\mathcal{L}\)).: _Using Lemma 6.16 we get that,_
\[\log_{2}(\ell_{k})\leq C\log_{2}(\ell_{k+1}/\ell_{k})\frac{B_{\ell_{k}}(\beta)}{B _{\ell_{k+1}}(\beta)-B_{\ell_{k}}(\beta)}\leq\log_{2}(\ell_{k+1}/\ell_{k})\frac {C}{D-1},\]
_so that,_
\[\ell_{k+1}\geq\ell_{k}^{(D-1)/C}.\]
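For instance (a hypothetical choice of constant, only meant as an illustration), taking \(D=1+2C\) gives

\[\ell_{k+1}\geq\ell_{k}^{(D-1)/C}=\ell_{k}^{2},\]

and more generally any polynomial gap \(\ell_{k+1}\geq\ell_{k}^{A}\) can be enforced by choosing \(D\geq 1+AC\).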
Recall that the event \(\mathsf{Jump}\) was defined in Definition 6.4. In the remainder of this section, we fix \(\nu\in(0,1)\) with \(\nu>\frac{d}{d+\varepsilon}\) such that Lemmas 6.5, 6.11, and 6.13 hold.
**Definition 6.18** (Intersection event).: _Let \(k\geq 1\) and \(y\notin\Lambda_{\ell_{k+2}}\). A pair of currents \((\mathbf{n},\mathbf{m})\) with \((\partial\mathbf{n},\partial\mathbf{m})=(\{0,y\},\{0,y\})\) realises the event \(I_{k}\) if the following properties are satisfied:_
1. _The restrictions of_ \(\mathbf{n}\) _and_ \(\mathbf{m}\) _to edges with both endpoints in_ \(\operatorname{Ann}(\ell_{k},\ell_{k+1}+\ell_{k+1}^{\nu})\) _contain a unique cluster "strongly crossing"_ \(\operatorname{Ann}(\ell_{k},\ell_{k+1})\)_, in the sense that it contains a vertex in_ \(\operatorname{Ann}(\ell_{k},\ell_{k}+\ell_{k}^{\nu})\) _and a vertex in_ \(\operatorname{Ann}(\ell_{k+1},\ell_{k+1}+\ell_{k+1}^{\nu})\)_._
2. _The two clusters described in_ \((i)\) _intersect._
Note that the event \(I_{k}\) is measurable in terms of the edges with both endpoints in the annulus \(\operatorname{Ann}(\ell_{k},\ell_{k+1}+\ell_{k+1}^{\nu})\).
The following lemma shows that intersections occur at every scale with a uniformly positive probability.
**Lemma 6.19** (Intersection property).: _Let \(d=4\). For \(D\) large enough, there exists \(\kappa>0\) such that for every \(\beta\leq\beta_{c}\), every \(k\geq 2\), and every \(y\notin\Lambda_{\ell_{k+2}}\) in a regular scale with \(1\leq|y|\leq L(\beta)\),_
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[(\mathbf{n}_{1}+\mathbf{n}_{3}, \mathbf{n}_{2}+\mathbf{n}_{4})\in I_{k}]\geq\kappa.\]
Proof.: We restrict ourselves to the case of \(y\) in a regular scale to be able to use the properties \((\mathbf{P1})\) and \((\mathbf{P2})\). Introduce intermediate scales \(\ell_{k}\leq n\leq m\leq M\leq N\leq\ell_{k+1}\) satisfying
\[\ell_{k}^{\frac{8}{1-\nu}+1}\geq n\geq\ell_{k}^{\frac{8}{1-\nu}},\qquad n^{ \frac{8}{1-\nu}+1}\geq m\geq n^{\frac{8}{1-\nu}},\]
\[M^{\frac{8}{1-\nu}+1}\geq N\geq M^{\frac{8}{1-\nu}},\qquad N^{\frac{8}{1-\nu} +1}\geq\ell_{k+1}\geq N^{\frac{8}{1-\nu}},\]
which is possible provided \(D\) is large enough by Remark 6.17. Define
\[\mathcal{M}:=\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(0)\cap\mathbf{C}_{ \mathbf{n}_{2}+\mathbf{n}_{4}}(0)\cap\operatorname{Ann}(m,M).\]
Using Cauchy-Schwarz inequality,
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|>0]\geq\frac{ \mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|]^{2}}{\mathbf{E} _{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|^{2}]}.\]
One has for some \(c_{1}>0\),
\[\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|] = \sum_{u\in\operatorname{Ann}(m,M)}\left(\frac{\langle\sigma_{0} \sigma_{u}\rangle_{\beta}\langle\sigma_{u}\sigma_{y}\rangle_{\beta}}{ \langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\right)^{2}\] \[\geq c_{1}(B_{M}(\beta)-B_{m-1}(\beta)),\]
where we used the regularity to compare \(\langle\sigma_{u}\sigma_{y}\rangle_{\beta}\) with \(\langle\sigma_{0}\sigma_{y}\rangle_{\beta}\). Using (4.4), for some \(c_{2}>0\),
\[\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|^{2}] \leq \sum_{u,v\in\operatorname{Ann}(m,M)}\left(\frac{\langle\sigma_{0} \sigma_{u}\rangle_{\beta}\langle\sigma_{u}\sigma_{v}\rangle_{\beta}\langle \sigma_{v}\sigma_{y}\rangle_{\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{ \beta}}+\frac{\langle\sigma_{0}\sigma_{v}\rangle_{\beta}\langle\sigma_{v} \sigma_{u}\rangle_{\beta}\langle\sigma_{u}\sigma_{y}\rangle_{\beta}}{\langle \sigma_{0}\sigma_{y}\rangle_{\beta}}\right)^{2}\] \[\leq c_{2}(B_{M}(\beta)-B_{m-1}(\beta))B_{2M}(\beta),\]
where we used once again the regularity assumption to compare \(\langle\sigma_{u}\sigma_{y}\rangle_{\beta}\) and \(\langle\sigma_{v}\sigma_{y}\rangle_{\beta}\) with \(\langle\sigma_{0}\sigma_{y}\rangle_{\beta}\). Using Lemma 6.16, we also get,
\[B_{M}(\beta)\geq\left(1+C\frac{\log_{2}(\ell_{k+1}/M)}{\log_{2}(M)}\right)^{-1} B_{\ell_{k+1}}(\beta)\geq\frac{1}{1+C_{1}}B_{\ell_{k+1}}(\beta),\]
where \(C\) is the constant of Lemma 6.16 and \(C_{1}=C_{1}(\nu)=C\left[\left(\frac{8}{1-\nu}+1\right)^{2}-1\right]>0\). Similarly,
\[B_{m-1}(\beta)\leq(1+C_{1})B_{\ell_{k}}(\beta)\leq\frac{1+C_{1}}{D}B_{\ell_{k+1 }}(\beta).\]
As a result, we obtain that for \(D\) large enough, there exists \(c_{3}>0\) such that
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|\neq 0]\geq c_{3}.\]
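For the reader's convenience, here is the bookkeeping behind this step (a sketch; we use \(B_{\ell}(\beta)\geq 1\) and (**IRB**) as in (6.2)):

\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|\neq 0]\geq\frac{c_{1}^{2}}{c_{2}}\frac{B_{M}(\beta)-B_{m-1}(\beta)}{B_{2M}(\beta)}\geq\frac{c_{1}^{2}}{c_{2}}\left(\frac{1}{1+C_{1}}-\frac{1+C_{1}}{D}\right)\frac{B_{\ell_{k+1}}(\beta)}{(1+C\log 2)B_{\ell_{k+1}}(\beta)}=:c_{3},\]

where we bounded \(B_{2M}(\beta)\leq B_{2\ell_{k+1}}(\beta)\leq(1+C\log 2)B_{\ell_{k+1}}(\beta)\); the right-hand side is positive as soon as \(D>(1+C_{1})^{2}\).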
To conclude, we must prove uniqueness of the "crossing clusters" in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\). We reduce the argument to the one used in the nearest-neighbour case by not allowing the currents to make big jumps. More precisely, define the jump event \(\mathsf{J}\) by
\[\mathsf{J}:=\bigcup_{p\in\{\ell_{k},\ell_{k+1}\}}\left\{\mathbf{n}_{1}+ \mathbf{n}_{3}\in\mathsf{Jump}(p,p+p^{\nu})\right\}\cup\left\{\mathbf{n}_{2}+ \mathbf{n}_{4}\in\mathsf{Jump}(p,p+p^{\nu})\right\}.\]
Using Lemma 6.5, we get that for some \(\eta>0\) and some constant \(C_{2}>0\) (provided \(\ell_{k+1}^{1+4/\varepsilon}\leq\ell_{k+2}\), which might require increasing \(D\)),
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[\mathsf{J}]\leq\frac{C_{2}}{ \ell_{k}^{\eta}}.\]
Note that on the complement of the above event, the cluster connecting \(0\) and \(y\) in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\) must go through \(\operatorname{Ann}(p,p+p^{\nu})\) for \(p\in\{\ell_{k},\ell_{k+1}\}\). In particular, it satisfies the "strong crossing" constraint in the definition of \(I_{k}\). Take \(D\) large enough so that there exists a constant \(c_{4}>0\) such that,
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|\neq 0,\,\mathsf{J}^{c }]\geq c_{4}.\]
If the event \(\{\mathcal{M}\neq\emptyset\}\cap\mathsf{J}^{c}\) occurs but not \(I_{k}\), then (see Figure 5) one of the following events must occur for the pair \((\mathbf{n}_{1},\mathbf{n}_{3})\) (or \((\mathbf{n}_{2},\mathbf{n}_{4})\)):
* \(\mathcal{F}_{1}:=\) the backbone \(\Gamma(\mathbf{n}_{1})\) of \(\mathbf{n}_{1}\) does a zigzag between scales \(\ell_{k}\) and \(n\), i.e. it belongs to \(\mathsf{ZZ}(0,y;\ell_{k},n,\infty)\),
* \(\mathcal{F}_{2}:=(\mathbf{n}_{1}+\mathbf{n}_{3})\setminus\overline{\Gamma( \mathbf{n}_{1})}\) belongs to \(\mathsf{Cross}(n+n^{\nu},m)\),
* \(\mathcal{F}_{3}:=\) the backbone \(\Gamma(\mathbf{n}_{1})\) of \(\mathbf{n}_{1}\) does a zigzag between scales \(N+N^{\nu}\) and \(\ell_{k+1}\), i.e. it belongs to \(\mathsf{ZZ}(0,y;N+N^{\nu},\ell_{k+1},\infty)\),
* \(\mathcal{F}_{4}:=(\mathbf{n}_{1}+\mathbf{n}_{3})\setminus\overline{\Gamma( \mathbf{n}_{1})}\) belongs to \(\mathsf{Cross}(M,N)\),
* \(\mathcal{F}_{5}:=\mathsf{Jump}(n,n+n^{\nu})\cup\mathsf{Jump}(N,N+N^{\nu})\).
Note that the event \(\mathcal{F}_{5}\) takes into account situations in which (for instance) \((\mathbf{n}_{1}+\mathbf{n}_{3})\setminus\overline{\Gamma(\mathbf{n}_{1})}\) contains a cluster "almost realising" \(\mathsf{Cross}(M,N)\) and \(\Gamma(\mathbf{n}_{1})\) "almost" performs \(\mathsf{ZZ}(0,y;N+N^{\nu},\ell_{k+1},\infty)\), and these two pieces are connected by a long open edge of \(\overline{\Gamma(\mathbf{n}_{1})}\setminus\Gamma(\mathbf{n}_{1})\) which jumps over \(\operatorname{Ann}(N,N+N^{\nu})\).
Using Corollaries 6.9 and 6.15 we get the existence of \(C,\eta>0\) such that,
\[\mathbf{P}_{\beta}^{0y,\emptyset}[\mathcal{F}_{1}]\leq\frac{C}{n^{\eta}}, \qquad\mathbf{P}_{\beta}^{0y,\emptyset}[\mathcal{F}_{2}]\leq\frac{C}{m^{\eta}},\]
\[\mathbf{P}_{\beta}^{0y,\emptyset}[\mathcal{F}_{3}]\leq\frac{C}{\ell_{k+1}^{ \eta}},\qquad\mathbf{P}_{\beta}^{0y,\emptyset}[\mathcal{F}_{4}]\leq\frac{C}{N ^{\eta}}.\]
Moreover, as a consequence of Lemma 6.5, we also get that \(\mathbf{P}_{\beta}^{0y,\emptyset}[\mathcal{F}_{5}]\leq C/n^{\eta}\).
As a result, taking \(D\) large enough, we get that the sum of the probabilities of the five events for the pairs \((\mathbf{n}_{1},\mathbf{n}_{3})\) and \((\mathbf{n}_{2},\mathbf{n}_{4})\) does not exceed \(c_{4}/2\) so that, setting \(\kappa:=c_{4}/2\),
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[I_{k}]\geq\kappa.\]
### Proof of the mixing
We now turn to the proof of the mixing property. Recall that \(J\) satisfies (**A1**)-(**A6**).
**Theorem 6.20** (Mixing property).: _Let \(d=4\) and \(s\geq 1\). There exist \(\gamma,c>0\) and \(C=C(s)>0\), such that for every \(1\leq t\leq s\), every \(\beta\leq\beta_{c}\), every \(1\leq n^{\gamma}\leq N\leq L(\beta)\), every \(x_{i}\in\Lambda_{n}\) and \(y_{i}\notin\Lambda_{N}\) \((i\leq t)\), and all events \(E\) and \(F\) depending on the restriction of \((\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\) to edges with endpoints within \(\Lambda_{n}\) and outside \(\Lambda_{N}\) respectively,_
\[\left|\mathbf{P}_{\beta}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots,\emptyset}[E\cap F]-\mathbf{P}_{\beta}^{x_{1}y_{1},\ldots,x_{t}y_{t}, \emptyset,\ldots,\emptyset}[E]\mathbf{P}_{\beta}^{x_{1}y_{1},\ldots,x_{t}y_{t}, \emptyset,\ldots,\emptyset}[F]\right|\\ \leq C\left(\log\frac{N}{n}\right)^{-1/2}. \tag{6.5}\]
_Furthermore, for every \(x_{1}^{\prime},\ldots,x_{t}^{\prime}\in\Lambda_{n}\) and \(y_{1}^{\prime},\ldots,y_{t}^{\prime}\notin\Lambda_{N}\), we have that_
\[\left|\mathbf{P}_{\beta}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots,\emptyset}[E]-\mathbf{P}_{\beta}^{x_{1}y_{1}^{\prime},\ldots,x_{t}y_{ t}^{\prime},\emptyset,\ldots,\emptyset}[E]\right|\leq C\left(\log\frac{N}{n} \right)^{-1/2}, \tag{6.6}\]
\[\Big{|}\mathbf{P}_{\beta}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots,\emptyset}[F]-\mathbf{P}_{\beta}^{x_{1}^{\prime}y_{1},\ldots,x_{t}^{\prime}y_{t},\emptyset,\ldots,\emptyset}[F]\Big{|}\leq C\left(\log\frac{N}{n}\right)^{-1/2}. \tag{6.7}\]
Fix \(\beta\leq\beta_{c}\). Fix two integers \(t,s\) satisfying \(1\leq t\leq s\). Introduce integers \(m,M\) such that \(n\leq m\leq M\leq N\), \(m/n=(N/n)^{\mu/2}\), and \(N/M=(N/n)^{1-\mu}\) for \(\mu\) small to be fixed. For \(\mathbf{x}=(x_{1},\ldots,x_{t})\) and \(\mathbf{y}=(y_{1},\ldots,y_{t})\), write:
\[\mathbf{P}_{\beta}^{\mathbf{xy}}:=\mathbf{P}_{\beta}^{x_{1}y_{1},\ldots,x_{t}y _{t},\emptyset,\ldots,\emptyset},\qquad\mathbf{P}_{\beta}^{\mathbf{xy}, \emptyset}:=\mathbf{P}_{\beta}^{\mathbf{xy}}\otimes\mathbf{P}_{\beta}^{ \emptyset,\ldots,\emptyset},\]
where \(\mathbf{P}_{\beta}^{\emptyset,\ldots,\emptyset}\) is the law of a sum of \(s\) independent sourceless currents that we denote by \((\mathbf{n}_{1}^{\prime},\ldots,\mathbf{n}_{s}^{\prime})\).
If \(p\geq 1\), define for \(y\notin\Lambda_{2dp}\),
\[\mathbb{A}_{y}(p):=\left\{u\in\operatorname{Ann}(p,2p)\::\:\forall x\in \Lambda_{p/d},\:\langle\sigma_{x}\sigma_{y}\rangle_{\beta}\leq\left(1+C_{0} \frac{|x-u|}{|y|}\right)\langle\sigma_{u}\sigma_{y}\rangle_{\beta}\right\},\]
where \(C_{0}\) is the constant in the definition of regular scales. Note that if \(y\) is in a regular scale, then \(\mathbb{A}_{y}(p)=\operatorname{Ann}(p,2p)\).
Now, introduce the set \(\mathcal{K}\) of regular scales \(k\) between \(m\) and \(M/2\), chosen so that the \(2^{k}\) (\(k\in\mathcal{K}\)) differ pairwise by a multiplicative factor of at least \(C_{0}\) (this will be useful later to apply \((\mathbf{P4})\)). By the existence of regular scales of Proposition 3.28, we may assume that \(|\mathcal{K}|\geq c_{1}\log(N/n)\) for a sufficiently small \(c_{1}=c_{1}(\mu)>0\). Introduce \(\mathbf{U}:=\prod_{i=1}^{t}\mathbf{U}_{i}\), where
\[\mathbf{U}_{i}:=\frac{1}{|\mathcal{K}|}\sum_{k\in\mathcal{K}}\frac{1}{A_{x_{i },y_{i}}(2^{k})}\sum_{u\in\mathbb{A}_{y_{i}}(2^{k})}\mathbb{1}[u\stackrel{{ \mathbf{n}_{i}+\mathbf{n}_{i}^{\prime}}}{{\longleftrightarrow}}x_{i}],\]
and,
\[a_{x,y}(u):=\frac{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}\langle\sigma_{u} \sigma_{y}\rangle_{\beta}}{\langle\sigma_{x}\sigma_{y}\rangle_{\beta}},\qquad A _{x,y}(p):=\sum_{u\in\mathbb{A}_{y}(p)}a_{x,y}(u).\]
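The normalisation is tailored to the switching lemma: for every \(1\leq i\leq t\), \(k\in\mathcal{K}\) and \(u\in\mathbb{A}_{y_{i}}(2^{k})\), \((\mathbf{SL})\) gives

\[\mathbf{P}_{\beta}^{x_{i}y_{i},\emptyset}\left[u\stackrel{{\mathbf{n}_{i}+\mathbf{n}_{i}^{\prime}}}{{\longleftrightarrow}}x_{i}\right]=\frac{\langle\sigma_{x_{i}}\sigma_{u}\rangle_{\beta}\langle\sigma_{u}\sigma_{y_{i}}\rangle_{\beta}}{\langle\sigma_{x_{i}}\sigma_{y_{i}}\rangle_{\beta}}=a_{x_{i},y_{i}}(u).\]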
Summing the above identity over \(u\in\mathbb{A}_{y_{i}}(2^{k})\) and averaging over \(k\in\mathcal{K}\) shows that \(\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}[\mathbf{U}_{i}]=1\); since the pairs \((\mathbf{n}_{i},\mathbf{n}_{i}^{\prime})\) are independent, the switching lemma \((\mathbf{SL})\) thus gives \(\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}[\mathbf{U}]=1\). We begin by importing a concentration inequality whose proof essentially relies on the definition of \(\mathbb{A}_{y_{i}}(2^{k})\) and the properties of regular scales (see Footnote 15 and [1, Proposition 6.6]).
Footnote 15: This is the only place where we need the property \((\mathbf{P4})\) of regular scales. It also heavily relies on \((\mathbf{P3})\).
**Lemma 6.21** (Concentration of \(\mathbf{U}\)).: _For all \(\gamma>2\), there exists \(C=C(d,t,\gamma)>0\) such that for all \(n\) sufficiently large satisfying \(n^{\gamma}\leq N\leq L(\beta)\),_
\[\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}[(\mathbf{U}-1)^{2}]\leq\frac{C}{ \log(N/n)}.\]
We now fix \(\gamma>2\) (it will be taken large enough later). Using Cauchy-Schwarz inequality together with Lemma 6.21, we find \(C_{1}=C_{1}(d,t,\gamma)>0\) such that
\[\Big{|}\mathbf{P}_{\beta}^{\mathbf{xy}}[E\cap F]-\mathbf{E}_{\beta }^{\mathbf{xy},\emptyset}\big{[}\mathbf{U}\mathbb{1}[(\mathbf{n}_{1},\ldots, \mathbf{n}_{s})\in E\cap F]\big{]}\Big{|}\\ \leq\sqrt{\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}[(\mathbf{U} -1)^{2}]}\leq\frac{C_{1}}{\sqrt{\log(N/n)}}. \tag{6.8}\]
At this stage of the proof, we need to analyse \(\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}\big{[}\mathbf{U}\mathbb{1}[( \mathbf{n}_{1},\ldots,\mathbf{n}_{s})\in E\cap F]\big{]}\). By definition of \(\mathbf{U}\), this term can be rewritten as a weighted sum of terms of the form \(\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}\big{[}\prod_{i=1}^{t}\mathbb{1}[u_{i }\stackrel{{\mathbf{n}_{i}+\mathbf{n}_{i}^{\prime}}}{{ \longleftrightarrow}}y_{i}]\mathbb{1}[(\mathbf{n}_{1},\ldots,\mathbf{n}_{s}) \in E\cap F]\big{]}\) where \(u_{i}\in\mathbb{A}_{y_{i}}(2^{k_{i}})\) for some \(k_{i}\in\mathcal{K}\). It would be very tempting to try to apply the switching lemma directly to turn the measure \(\mathbf{P}_{\beta}^{\mathbf{xy},\emptyset}\) into a measure \(\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}\) (up to a renormalisation weight). This would suggest that
the occurrences of \(E\) and \(F\) are essentially due to \((\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\) and \((\mathbf{n}_{1}^{\prime},\ldots,\mathbf{n}_{s}^{\prime})\) respectively under the measure \(\mathbf{P}_{\beta}^{\mathbf{xu,uy}}\). However, the occurrence of \(E\cap F\) is only a function of \((\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\), which makes a direct use of the switching lemma impossible. Nevertheless, we can still use the switching principle. This motivates the introduction of the following event.
**Definition 6.22**.: _Let \(\mathbf{u}=(u_{1},\ldots,u_{t})\) with \(u_{i}\in\operatorname{Ann}(m,M)\) for every \(i\). Introduce the event \(\mathcal{G}(u_{1},\ldots,u_{t})=\mathcal{G}(\mathbf{u})\) defined as follows: for every \(i\leq s\), there exists \(\mathbf{k}_{i}\leq\mathbf{n}_{i}+\mathbf{n}_{i}^{\prime}\) such that \(\mathbf{k}_{i}=0\) on \(\Lambda_{n}\), \(\mathbf{k}_{i}=\mathbf{n}_{i}+\mathbf{n}_{i}^{\prime}\) outside \(\Lambda_{N}\), \(\partial\mathbf{k}_{i}=\{u_{i},y_{i}\}\) for \(i\leq t\), and \(\partial\mathbf{k}_{i}=\emptyset\) for \(t<i\leq s\)._
By the switching principle (**SP**), one has
\[\mathbf{P}_{\beta}^{\mathbf{xy},\emptyset}\left[(\mathbf{n}_{1}, \ldots,\mathbf{n}_{s})\in E\cap F,u_{i}\stackrel{{\mathbf{n}_{i} +\mathbf{n}_{i}^{\prime}}}{{\longleftrightarrow}}y_{i},\,\forall 1\leq i\leq t,\, \mathcal{G}(\mathbf{u})\right]\\ =\left(\prod_{i=1}^{t}a_{x_{i},y_{i}}(u_{i})\right)\mathbf{P}_{ \beta}^{\mathbf{xu,uy}}\left[(\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\in E,( \mathbf{n}_{1}^{\prime},\ldots,\mathbf{n}_{s}^{\prime})\in F,\,\mathcal{G}( \mathbf{u})\right].\]
The trivial identity,
\[\mathbf{P}_{\beta}^{\mathbf{xu,uy}}[(\mathbf{n}_{1},\ldots,\mathbf{ n}_{s})\in E,(\mathbf{n}_{1}^{\prime},\ldots,\mathbf{n}_{s}^{\prime})\in F]\\ =\mathbf{P}_{\beta}^{\mathbf{xu}}[(\mathbf{n}_{1},\ldots,\mathbf{ n}_{s})\in E]\mathbf{P}_{\beta}^{\mathbf{uy}}[(\mathbf{n}_{1}^{\prime},\ldots, \mathbf{n}_{s}^{\prime})\in F]\]
suggests proving that, under \(\mathbf{P}_{\beta}^{\mathbf{xu,uy}}\), the event \(\mathcal{G}(\mathbf{u})\) occurs with high probability (note that the identity holds because, under \(\mathbf{P}_{\beta}^{\mathbf{xu,uy}}\), the collections \((\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\) and \((\mathbf{n}_{1}^{\prime},\ldots,\mathbf{n}_{s}^{\prime})\) are independent). This motivates the following result.
**Lemma 6.23**.: _Let \(d=4\). There exist \(C,\epsilon>0\), \(\gamma=\gamma(\epsilon)>0\) large enough and \(\mu=\mu(\epsilon)>0\) small enough such that for every \(n^{\gamma}\leq N\leq L(\beta)\), and every \(\mathbf{u}\) with \(u_{i}\in\mathbb{A}_{y_{i}}(2^{k_{i}})\) with \(k_{i}\in\mathcal{K}\) for \(1\leq i\leq t\),_
\[\mathbf{P}_{\beta}^{\mathbf{xy},\emptyset}\left[u_{i}\stackrel{{ \mathbf{n}_{i}+\mathbf{n}_{i}^{\prime}}}{{\longleftrightarrow}}y_{i},\, \forall 1\leq i\leq t,\,\mathcal{G}(\mathbf{u})\right]\left(\prod_{i=1}^{t}a_{x_ {i},y_{i}}(u_{i})\right)^{-1}=\mathbf{P}_{\beta}^{\mathbf{xu,uy}}[\mathcal{G} (\mathbf{u})^{c}]\leq C\left(\frac{n}{N}\right)^{\epsilon}.\]
Proof.: Below, \(C=C(d)>0\) may change from line to line16. The equality follows from an application of the switching lemma (**SL**). If we write \(\mathcal{G}(\mathbf{u})=\cap_{1\leq i\leq s}G_{i}\) (where the definition of \(G_{i}\) is implicit), then \(H_{i}\cap F_{i}\subset G_{i}\) where,
Footnote 16: In fact, \(C\) will also depend on \(\beta_{c}\) (or more precisely on a lower bound on \(\beta_{c}\)).
\[H_{i}:=\{\text{Ann}(M,N)\text{ is not crossed by a cluster in }\mathbf{n}_{i}\}=\{\mathbf{n}_{i}\notin\mathsf{Cross}(M,N)\},\]
and
\[F_{i}:=\{\text{Ann}(n,m)\text{ is not crossed by a cluster in }\mathbf{n}_{i}^{\prime}\}=\{ \mathbf{n}_{i}^{\prime}\notin\mathsf{Cross}(n,m)\}.\]
Indeed, if \(H_{i}\cap F_{i}\) occurs, we may define \(\mathbf{k}_{i}\) as the sum of the restriction of \(\mathbf{n}_{i}\) to the clusters intersecting \(\Lambda_{N}^{c}\) and the restriction of \(\mathbf{n}_{i}^{\prime}\) to the clusters intersecting \(\Lambda_{m}^{c}\). In the following, we assume that \(1\leq i\leq t\). The argument for other values of \(i\) is easier and follows from the first case.
Introduce intermediate scales \(n\leq r\leq m\leq M\leq R\leq N\) with \(r,R\) chosen below.
\(\bullet\)**Bound on \(H_{i}.\)** Following the ideas developed in the proof of Lemma 6.19, we define,
\[\mathsf{J}_{i}:=\{\mathbf{n}_{i}\in\mathsf{Jump}(R-R^{\nu},R+R^{\nu})\}.\]
Notice that17,
Footnote 17: We need to create a “forbidden area” \(\operatorname{Ann}(R-R^{\nu},R+R^{\nu})\) between the two parts of the annulus \(\operatorname{Ann}(M,N)\) to rule out the situation in which \(\overline{\Gamma(\mathbf{n}_{i})}\setminus\Gamma(\mathbf{n}_{i})\) connects an “almost successful” excursion of \(\Gamma(\mathbf{n}_{i})\) (in the sense that it almost crossed \(\operatorname{Ann}(M,R-R^{\nu})\)), with an “almost successful” excursion of \(\mathbf{n}_{i}\setminus\overline{\Gamma(\mathbf{n}_{i})}\). This is very similar to the argument used in the proof of Lemma 6.19.
\[\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}[H_{i}^{c}]\leq \mathbf{P}_{\beta}^{\mathbf{xu}}[\Gamma(\mathbf{n}_{i})\in\mathsf{ZZ}(x_{i},u_ {i};M,R-R^{\nu},\infty)]\\ +\mathbf{P}_{\beta}^{\mathbf{xu}}[\mathbf{n}_{i}\setminus\overline {\Gamma(\mathbf{n}_{i})}\in\mathsf{Cross}(R+R^{\nu},N)]+\mathbf{P}_{\beta}^{ \mathbf{xu}}[\mathbf{J}_{i}].\]
Assume \(R=N^{\iota}\) where \(\iota>\mu\) is chosen in such a way that \(M^{6/(1-\nu)}\leq R\) and \(R^{8/(1-\nu)}\leq N\) (note that this is possible if we choose \(\mu\) sufficiently small and \(\gamma\) sufficiently large since \(M\leq N^{\mu+1/\gamma}\)). This choice allows us to use Corollaries 6.12 and 6.15 to get that for some \(\eta>0\),
\[\mathbf{P}_{\beta}^{\mathbf{xu}}[\mathbf{n}_{i}\in\mathsf{ZZ}(x_{i},u_{i};M,R -R^{\nu},\infty)]\leq\frac{C}{R^{\eta}},\qquad\mathbf{P}_{\beta}^{\mathbf{xu} }[\mathbf{n}_{i}\setminus\overline{\Gamma(\mathbf{n}_{i})}\in\mathsf{Cross}(R +R^{\nu},N)]\leq\frac{C}{N^{\eta}}.\]
Moreover, at the cost of diminishing the value of \(\iota\) (and hence also modifying \(\mu\) and \(\gamma\)) to ensure that \(R^{1+4/\varepsilon}\leq N\), we may use Lemma 6.5 to argue the existence of \(\eta^{\prime}>0\) such that
\[\mathbf{P}_{\beta}^{\mathbf{xu}}[\mathbf{J}_{i}]\leq\frac{C}{R^{\eta^{\prime}}}.\]
Putting all the pieces together, we get for some \(\epsilon>0\),
\[\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}[H_{i}^{c}]\leq C\left(\frac{n}{ N}\right)^{\epsilon}.\]
\(\bullet\)**Bound on \(F_{i}.\)** We proceed similarly for \(F_{i}\), although we now encounter an additional difficulty: we cannot rule out the possibility that \(\mathbf{n}_{i}^{\prime}\) jumps above the annulus \(\operatorname{Ann}(r-r^{\nu},r+r^{\nu})\); we can only rule it out in the complement of \(\overline{\Gamma(\mathbf{n}_{i}^{\prime})}\). We set \(r=(n^{2}m)^{1/3}\). Recall that \(m\geq N^{\mu/2}\). We claim that
\[\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}[F_{i}^{c}]\leq \mathbf{P}_{\beta}^{\mathbf{uy}}[\Gamma(\mathbf{n}_{i}^{\prime})\in\mathsf{ ZZ}(u_{i},y_{i};r+r^{\nu},m,\infty)]\\ +\mathbf{P}_{\beta}^{\mathbf{uy}}[\mathbf{n}_{i}^{\prime}\setminus \overline{\Gamma(\mathbf{n}_{i}^{\prime})}\in\mathsf{Cross}(n,r-r^{\nu})]+ \mathbf{P}_{\beta}^{\mathbf{uy}}[K_{i}], \tag{6.9}\]
where \(K_{i}\) is the event that there exists \(a\in\Lambda_{r-r^{\nu}}\) and \(b\notin\Lambda_{r+r^{\nu}}\) such that \((\mathbf{n}_{i}^{\prime})_{a,b}\geq 2\) and \(\{a,b\}\in\overline{\Gamma(\mathbf{n}_{i}^{\prime})}\setminus\Gamma(\mathbf{n }_{i}^{\prime})\).
Indeed, if none of the events corresponding to the two first probabilities on the right-hand side of (6.9) occur, the only way we can find a cluster crossing \(\operatorname{Ann}(n,m)\) is if \(\overline{\Gamma(\mathbf{n}_{i}^{\prime})}\setminus\Gamma(\mathbf{n}_{i}^{ \prime})\) has a long (even) open edge which jumps above \(\operatorname{Ann}(r-r^{\nu},r+r^{\nu})\) (see Figure 6).
Using (**IRB**) to get \(\langle\sigma_{u_{i}}\sigma_{v}\rangle_{\beta}\leq Cm^{-2}\) and the assumption that \(u_{i}\in\mathbb{A}_{y_{i}}(2^{k_{i}})\) to get that \(\langle\sigma_{v}\sigma_{y_{i}}\rangle_{\beta}\leq C\langle\sigma_{u_{i}} \sigma_{y_{i}}\rangle_{\beta}\), we obtain
\[\mathbf{P}_{\beta}^{\mathbf{uy}}[\Gamma(\mathbf{n}_{i}^{\prime })\in\mathsf{ZZ}(u_{i},y_{i};r+r^{\nu},m,\infty)] \leq \sum_{v\in\Lambda_{r+r^{\nu}}}\frac{\langle\sigma_{u_{i}}\sigma_ {v}\rangle_{\beta}\langle\sigma_{v}\sigma_{y_{i}}\rangle_{\beta}}{\langle \sigma_{u_{i}}\sigma_{y_{i}}\rangle_{\beta}}\] \[\leq C\frac{r^{4}}{m^{2}}=C\frac{n^{8/3}}{m^{2/3}}\leq C\frac{N^{ \frac{8}{3\gamma}}}{N^{\frac{\mu}{3}}}.\]
Moreover, using Corollary 6.15 (which requires that \(n^{8/(1-\nu)}\leq r\) and hence decreases the values of \(\mu\) and \(1/\gamma\)), there exists \(\zeta>0\) such that
\[\mathbf{P}_{\beta}^{\mathbf{uy}}[\mathbf{n}_{i}^{\prime}\setminus\overline{ \Gamma(\mathbf{n}_{i}^{\prime})}\in\mathsf{Cross}(n,r-r^{\nu})]\leq\frac{C}{ r^{\zeta}}.\]
We conclude the proof with the bound on \(K_{i}\). Suppose that \(\mathbf{n}_{i}^{\prime}\) satisfies \((\mathbf{n}_{i}^{\prime})_{a,b}\geq 2\) for some \(a\in\Lambda_{r-r^{\nu}}\) and \(b\notin\Lambda_{r+r^{\nu}}\), with \(\{a,b\}\in\overline{\Gamma(\mathbf{n}_{i}^{\prime})}\setminus\Gamma(\mathbf{n}_{i}^{\prime})\) the earliest such edge. We consider the map \(\mathbf{n}_{i}^{\prime}\mapsto\mathbf{m}_{i}^{\prime}\) where \((\mathbf{m}_{i}^{\prime})_{a,b}=(\mathbf{n}_{i}^{\prime})_{a,b}-1\) and \(\mathbf{m}_{i}^{\prime}\) coincides with \(\mathbf{n}_{i}^{\prime}\) everywhere else. This maps \(\mathbf{n}_{i}^{\prime}\) to a current \(\mathbf{m}_{i}^{\prime}\) with sources \(\{u_{i},y_{i}\}\Delta\{a,b\}\) (\(b\) might coincide with \(u_{i}\) or \(y_{i}\)) such that the backbone \(\Gamma(\mathbf{m}_{i}^{\prime})\) always connects \(u_{i}\) and \(b\) (by definition of the exploration). Hence, using the chain rule for backbones,
\[\mathbf{P}_{\beta}^{\mathbf{u}\mathbf{y}}[K_{i}]\leq\sum_{ \begin{subarray}{c}a\in\Lambda_{r-r^{\nu}}\\ b\notin\Lambda_{r+r^{\nu}}\end{subarray}}\beta J_{a,b}\frac{\langle\sigma_{\{u _{i},y_{i}\}\Delta\{a,b\}}\rangle_{\beta}}{\langle\sigma_{u_{i}}\sigma_{y_{i} }\rangle_{\beta}}\mathbf{P}_{\beta}^{\{u_{i},y_{i}\}\Delta\{a,b\}}[\Gamma( \mathbf{m}_{i}^{\prime})\text{ connects }u_{i}\text{ and }b]\\ \leq\sum_{\begin{subarray}{c}a\in\Lambda_{r-r^{\nu}}\\ b\notin\Lambda_{r+r^{\nu}}\end{subarray}}\beta J_{a,b}\frac{\langle\sigma_{u _{i}}\sigma_{b}\rangle_{\beta}\langle\sigma_{a}\sigma_{y_{i}}\rangle_{\beta}} {\langle\sigma_{u_{i}}\sigma_{y_{i}}\rangle_{\beta}}.\]
Since \(u_{i}\in\mathbb{A}_{y_{i}}(2^{k_{i}})\), \(\langle\sigma_{a}\sigma_{y_{i}}\rangle_{\beta}\leq C\langle\sigma_{u_{i}} \sigma_{y_{i}}\rangle_{\beta}\). Hence,
\[\beta\sum_{\begin{subarray}{c}a\in\Lambda_{r-r^{\nu}}\\ b\notin\Lambda_{r+r^{\nu}}\end{subarray}}J_{a,b}\frac{\langle\sigma_{u_{i}} \sigma_{b}\rangle_{\beta}\langle\sigma_{a}\sigma_{y_{i}}\rangle_{\beta}}{ \langle\sigma_{u_{i}}\sigma_{y_{i}}\rangle_{\beta}}\leq C\beta\sum_{ \begin{subarray}{c}a\in\Lambda_{r-r^{\nu}}\\ b\notin\Lambda_{r+r^{\nu}}\end{subarray}}J_{a,b}\langle\sigma_{u_{i}}\sigma_{ b}\rangle_{\beta}.\]
We distinguish two-cases according to whether \(b\) is close to \(u_{i}\) or not. By (6.1) and (**IRB**),
\[\beta\sum_{\begin{subarray}{c}a\in\Lambda_{r-r^{\nu}}\\ b\in\Lambda_{|u_{i}|/2}(u_{i})\end{subarray}}J_{a,b}\langle\sigma_{u_{i}} \sigma_{b}\rangle_{\beta}\leq Cr^{4}\sum_{|x|\geq m/4}J_{0,x}\leq\frac{Cr^{4}} {m^{2+\varepsilon}}.\]
Again, using (6.1) and (**IRB**),
\[\sum_{\begin{subarray}{c}a\in\Lambda_{r-r^{\nu}}\\ b\notin\Lambda_{r+r^{\nu}}\cup\Lambda_{|u_{i}|/2}(u_{i})\end{subarray}}J_{a,b} \langle\sigma_{u_{i}}\sigma_{b}\rangle_{\beta}\leq\frac{Cr^{4}}{m^{2}}\sum_{ |x|\geq r^{\nu}}J_{0,x}\leq\frac{Cr^{4-\nu(2+\varepsilon)}}{m^{2}}.\]
By definition of \(r\) and \(m/n\), we get the existence of \(\zeta^{\prime}>0\) such that
\[\mathbf{P}_{\beta}^{\mathbf{u}\mathbf{y}}[K_{i}]\leq\frac{C}{m^{\zeta^{\prime}}}.\]
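For concreteness, here is one admissible bookkeeping for \(\zeta^{\prime}\) (the precise exponents are unimportant): since \(r=(n^{2}m)^{1/3}\), the two previous bounds read

\[\frac{Cr^{4}}{m^{2+\varepsilon}}=\frac{Cn^{8/3}}{m^{2/3+\varepsilon}},\qquad\frac{Cr^{4-\nu(2+\varepsilon)}}{m^{2}}\leq\frac{Cr^{4}}{m^{2}}=\frac{Cn^{8/3}}{m^{2/3}},\]

and since \(n\leq N^{1/\gamma}\) and \(m\geq N^{\mu/2}\), both right-hand sides are at most \(CN^{8/(3\gamma)-\mu/3}\leq Cm^{-\mu/6}\) as soon as \(\gamma\geq 16/\mu\).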
As a result, if we choose \(\mu\) and \(1/\gamma\) sufficiently small, we get that for some \(\epsilon^{\prime}>0\),
\[\mathbf{P}_{\beta}^{\mathbf{x}\mathbf{u},\mathbf{u}\mathbf{y}}[F_{i}^{c}]\leq C \left(\frac{n}{N}\right)^{\epsilon^{\prime}}.\]
We now turn to the proof of Theorem 6.20.
Proof of Theorem 6.20.: Introduce the coefficients \(\delta(\mathbf{u},\mathbf{x},\mathbf{y})\) defined by,
\[\delta(\mathbf{u},\mathbf{x},\mathbf{y}):=\mathbbm{1}[\exists(k_{1},\ldots,k_{t })\in\mathcal{K}^{t},\;\mathbf{u}\in\mathbb{A}_{y_{1}}(2^{k_{1}})\times\ldots \times\mathbb{A}_{y_{t}}(2^{k_{t}})]\prod_{i=1}^{t}\frac{a_{x_{i},y_{i}}(u_{i}) }{|\mathcal{K}|A_{x_{i},y_{i}}(2^{k_{i}})}.\]
Note that
\[\sum_{\begin{subarray}{c}(k_{1},\ldots,k_{t})\in\mathcal{K}^{t}\\ \mathbf{u}\in\mathbb{A}_{y_{1}}(2^{k_{1}})\times\ldots\times\mathbb{A}_{y_{t}}(2 ^{k_{t}})\end{subarray}}\delta(\mathbf{u},\mathbf{x},\mathbf{y})=1.\]
Equation (6.8) together with Lemma 6.23 yields,
\[\left|\mathbf{P}_{\beta}^{\mathbf{xy}}[E\cap F]-\sum_{\mathbf{u}} \delta(\mathbf{u},\mathbf{x},\mathbf{y})\mathbf{P}_{\beta}^{\mathbf{xu}}[E] \mathbf{P}_{\beta}^{\mathbf{uy}}[F]\right|\\ \leq\frac{C_{1}}{\sqrt{\log(N/n)}}+C_{2}\left(\frac{n}{N}\right)^ {\epsilon}\leq\frac{C_{3}}{\sqrt{\log(N/n)}}, \tag{6.10}\]
as long as \(N\geq n^{\gamma}\) where \(\gamma>2\) is given by Lemma 6.23. We begin by proving (6.6) when \(y_{i},y_{i}^{\prime}\) are in regular scales (but not necessarily the same ones). Applying the above inequality once for \(\mathbf{y}\) and once for \(\mathbf{y}^{\prime}\) with the event \(E\) and \(F=\Omega_{\mathbb{Z}^{d}}\),
\[\left|\mathbf{P}_{\beta}^{\mathbf{xy}}[E]-\mathbf{P}_{\beta}^{ \mathbf{xy}^{\prime}}[E]\right|\leq\left|\sum_{\mathbf{u}}(\delta(\mathbf{u},\mathbf{x},\mathbf{y})-\delta(\mathbf{u},\mathbf{x},\mathbf{y}^{\prime})) \mathbf{P}_{\beta}^{\mathbf{xu}}[E]\right|+\frac{2C_{3}}{\sqrt{\log(N/n)}}. \tag{6.11}\]
Since all the \(y_{i},y_{i}^{\prime}\) are in regular scales, one has \(\mathbb{A}_{y_{i}}(2^{k_{i}})=\mathbb{A}_{y_{i}^{\prime}}(2^{k_{i}})=\text{ Ann}(2^{k_{i}},2^{k_{i}+1})\). Moreover, using Property (**P2**) of regular scales,
\[\left|\delta(\mathbf{u},\mathbf{x},\mathbf{y})-\delta(\mathbf{u}, \mathbf{x},\mathbf{y}^{\prime})\right|\leq C_{4}\left(\frac{M}{N}\right) \delta(\mathbf{u},\mathbf{x},\mathbf{y})\leq C_{5}\left(\frac{n}{N}\right)^{1 -\mu}\delta(\mathbf{u},\mathbf{x},\mathbf{y}),\]
where \(\mu\) is also given by Lemma 6.23. Indeed, in that case \(\delta(\mathbf{u},\mathbf{x},\mathbf{y})\) and \(\delta(\mathbf{u},\mathbf{x},\mathbf{y}^{\prime})\) are both close to
\[\prod_{i\leq t}\frac{\langle\sigma_{x_{i}}\sigma_{u_{i}}\rangle}{|\mathcal{K }|\sum_{v_{i}\in\text{Ann}(2^{k_{i}},2^{k_{i}+1})}\langle\sigma_{x_{i}}\sigma _{v_{i}}\rangle}.\]
This gives (6.6) in that case. Now, assume that \(N\geq n^{2(\gamma/\mu)+1}\) (so that \(m\geq n^{\gamma}\)). Consider \(\mathbf{z}=(z_{1},\ldots,z_{t})\) with \(z_{i}\in\text{Ann}(m,M)\) in a regular scale. Also, pick \(\mathbf{y}\), on which we make no assumption. We have,
\[\left|\mathbf{P}_{\beta}^{\mathbf{xy}}[E]-\mathbf{P}_{\beta}^{ \mathbf{xz}}[E]\right| = \left|\mathbf{P}_{\beta}^{\mathbf{xy}}[E]-\sum_{\mathbf{u}}\delta( \mathbf{u},\mathbf{x},\mathbf{y})\mathbf{P}_{\beta}^{\mathbf{xz}}[E]\right|\] \[\leq \left|\mathbf{P}_{\beta}^{\mathbf{xy}}[E]-\sum_{\mathbf{u}}\delta( \mathbf{u},\mathbf{x},\mathbf{y})\mathbf{P}_{\beta}^{\mathbf{xu}}[E]\right|+ \frac{C_{6}}{\sqrt{\log(m/n)}}\] \[\leq \frac{C_{7}}{\sqrt{\log(N/n)}},\]
where in the second line we used (6.11) with \((\mathbf{y},\mathbf{y}^{\prime})=(\mathbf{u},\mathbf{z})\) together with the fact that \(m/n=(N/n)^{\mu/2}\geq n^{\gamma}\), and in the third line we used (6.10) with \(F=\Omega_{\mathbb{Z}^{d}}\). This gives (6.6).
The same argument works for (6.7) for every \(\mathbf{x},\mathbf{x}^{\prime},\mathbf{y}\), noticing that for every regular \(\mathbf{u}\) for which \(\delta(\mathbf{u},\mathbf{x},\mathbf{y})\neq 0\), using once again \((\mathbf{P}2)\),
\[\left|\delta(\mathbf{u},\mathbf{x},\mathbf{y})-\delta(\mathbf{u},\mathbf{x}^{ \prime},\mathbf{y})\right|\leq C_{8}\left(\frac{n}{m}\right)\delta(\mathbf{u}, \mathbf{x},\mathbf{y})\leq C_{9}\left(\frac{n}{N}\right)^{\mu/2}\delta( \mathbf{u},\mathbf{x},\mathbf{y}).\]
To get (6.5) we repeat the same line of reasoning. We start by applying (6.10). Then, we replace each \(\mathbf{P}_{\beta}^{\mathbf{xu}}[E]\) by \(\mathbf{P}_{\beta}^{\mathbf{xy}}[E]\) using the above reasoning. Finally, replace \(\mathbf{P}_{\beta}^{\mathbf{uy}}[F]\) by \(\mathbf{P}_{\beta}^{\mathbf{xy}}[F]\) using the same methods.
### Proof of the clustering bound
With the intersection property and the mixing statement, we are now in a position to conclude. We will use the following (natural) monotonicity property of the current measures.
**Proposition 6.24** (Monotonicity in the number of sources, [1, Corollary A.2]).: _For every \(\beta>0\), for every \(x,y,z,t\in\mathbb{Z}^{d}\) and every set \(S\subset\mathbb{Z}^{d}\), one has,_
\[\mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{C}_{\mathbf{n}_{1}+ \mathbf{n}_{3}}(0)\cap\mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4}}(0)\cap S =\emptyset]\\ \leq\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathbf{C}_{ \mathbf{n}_{1}+\mathbf{n}_{3}}(0)\cap\mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4 }}(0)\cap S=\emptyset].\]
Proof of Proposition 6.2.: Fix \(\gamma>2\) sufficiently large so that Theorem 6.20 holds. Recall from Remark 6.17 that we may choose \(D=D(\gamma)\) sufficiently large in the definition of \(\mathcal{L}\) such that \(\ell_{k+1}\geq\ell_{k}^{\gamma}\).
We may assume \(u=0\). Since \(x,y\) are at distance at least \(2\ell_{K}\) of each other, one of them must be at distance at least \(\ell_{K}\) of \(u\). Without loss of generality we assume that this is the case of \(x\) and make the same assumption about \(z\). Let \(\delta>0\) to be fixed below.
Let \(\mathcal{S}_{K}^{(\delta)}\) denote the set of subsets of \(\{2\delta K,\ldots,K-3\}\) which contain only even integers. If \(\{\mathbf{M}_{u}(\mathcal{I};\mathcal{L},K)<\delta K\}\) occurs, then there must be \(S\in\mathcal{S}_{K}^{(\delta)}\) with \(|S|\geq(1/2-2\delta)K\) such that \(\mathfrak{B}_{S}\) occurs, where \(\mathfrak{B}_{S}\) is the event that the clusters of \(0\) in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\) do not intersect in any of the annuli \(\operatorname{Ann}(\ell_{i},\ell_{i+1})\) for \(i\in S\). Using the monotonicity property recalled above,
\[\mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{M}_{u}(\mathcal{I}; \mathcal{L},K)<\delta K] \leq\sum_{\begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbf{P}_{\beta}^{0x,0z,0y,0t}[ \mathfrak{B}_{S}]\] \[\leq\sum_{\begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset }[\mathfrak{B}_{S}]. \tag{6.12}\]
Let \(\mathsf{J}\) be the event defined by
\[\mathsf{J}:=\bigcup_{k=2\delta K}^{K-3}\{\mathbf{n}_{1}+\mathbf{n}_{3}\in \mathsf{Jump}(\ell_{k},\ell_{k+k^{\nu}})\}\cup\{\mathbf{n}_{2}+\mathbf{n}_{4} \in\mathsf{Jump}(\ell_{k},\ell_{k+k^{\nu}})\}.\]
Using Lemma 6.5 and Remark 6.17, if \(D\) is large enough, there exist \(C_{0},C_{1},\eta>0\) such that
\[\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathsf{J}]\leq\frac{C_{0}K}{\ell_{\delta K}^{\eta}}\leq C_{1}e^{-\eta 2^{\delta K}}. \tag{6.13}\]
Fix some \(S\in\mathcal{S}_{K}^{(\delta)}\). Let \(\mathfrak{A}_{S}\) be the event that none of the events \(I_{k}\) (defined in Definition 6.18) occur for \(k\in S\).
We now make a crucial observation: if \(k\in S\) and \(\mathsf{J}^{c}\) occurs, the events \(I_{k}\) and \(\mathfrak{B}_{S}\) are incompatible. Indeed, the occurrence of \(\mathsf{J}^{c}\cap I_{k}\) ensures that the only cluster of \(\mathbf{n}_{1}+\mathbf{n}_{3}\) (resp. \(\mathbf{n}_{2}+\mathbf{n}_{4}\)) crossing \(\operatorname{Ann}(\ell_{k},\ell_{k+1})\) is the cluster of \(0\). Hence, \((\mathfrak{B}_{S}\cap\mathsf{J}^{c})\subset\mathfrak{A}_{S}\). Using (6.13),
\[\mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{M}_{u}(\mathcal{I};\mathcal{L},K)< \delta K]\leq\sum_{\begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbf{P}_{\beta}^{0x,0z,\emptyset, \emptyset}[\mathfrak{A}_{S}]+C_{1}e^{-\eta 2^{\delta K}}\binom{(1/2-2\delta)K}{2 \delta K}.\]
Standard binomial estimates give that the second term on the right-hand side above is bounded by \(C_{2}2^{-\delta K}\). We are left with the study of \(\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathfrak{A}_{S}]\).
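To justify the binomial estimate, one can argue as follows: since \(\binom{a}{b}\leq 2^{a}\),

\[C_{1}e^{-\eta 2^{\delta K}}\binom{(1/2-2\delta)K}{2\delta K}\leq C_{1}2^{K/2}e^{-\eta 2^{\delta K}}\leq C_{2}2^{-\delta K},\]

the last inequality holding because \(e^{-\eta 2^{\delta K}}\) decays faster than any exponential in \(K\).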
Since \(\ell_{K}\leq L(\beta)\), we know by Proposition 3.28 that there exists \(y\in\operatorname{Ann}(\ell_{K-1},\ell_{K})\) in a regular scale. The event \(\mathfrak{A}_{S}\) depends on edges with both endpoints in \(\Lambda_{\ell_{K-2}}\). Using the mixing property together with Remark 6.17, we get that
\[\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathfrak{A}_{S}]\leq\mathbf{ P}_{\beta}^{0y,0y,\emptyset,\emptyset}[\mathfrak{A}_{S}]+\frac{C_{3}}{(\log\ell_{K-1} )^{c}}\leq\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[\mathfrak{A}_{S}]+C_ {4}\left(\frac{D}{\log 2}\right)^{-c(K-1)}.\]
Let \(s=\max S\). Using once again the mixing property between \(n=\ell_{s-1}\) and \(N=\ell_{s}\), we get,
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[\mathfrak{A}_{S}] \leq\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[I_{s}^{c}]\mathbf{P}_{ \beta}^{0y,0y,\emptyset,\emptyset}[\mathfrak{A}_{S\setminus\{s\}}]+\frac{C_{3 }}{(\log\ell_{s}/\ell_{s-1})^{c}}\\ \leq(1-\kappa)\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[ \mathfrak{A}_{S\setminus\{s\}}]+C_{5}(1-\kappa)^{|S|-1},\]
where \(\kappa\) is given by Lemma 6.19 and where we chose \(D\) large enough. Iterating the above yields,
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[\mathfrak{A}_{S}]\leq C_{6}(1- \kappa)^{|S|}.\]
Going back to (6.12), we get that
\[\mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{M}_{u}(\mathcal{I};\mathcal{L},K)< \delta K]\leq C_{7}\binom{(1/2-2\delta)K}{2\delta K}(1-\kappa)^{(1/2-2\delta) K}+C_{2}2^{-\delta K}\leq C_{8}2^{-\delta K},\]
for \(\delta>0\) sufficiently small.
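For completeness, here is one way to check the last inequality. Using \(\binom{a}{b}\leq(ea/b)^{b}\),

\[\binom{(1/2-2\delta)K}{2\delta K}(1-\kappa)^{(1/2-2\delta)K}\leq\exp\Big(K\Big[2\delta\log\frac{e}{4\delta}+(1/2-2\delta)\log(1-\kappa)\Big]\Big),\]

and as \(\delta\to 0\) the bracket tends to \(\frac{1}{2}\log(1-\kappa)<0\); in particular it is smaller than \(-\delta\log 2\) for \(\delta\) sufficiently small.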
### Proof of Corollary 1.8
Now that we have obtained the improved tree diagram bound, the proof of Corollary 1.8 follows the strategy used in Section 5, although some additional technical difficulties appear due to the lack of knowledge on the growth of \(B_{L}(\beta)\). We refer to [1] for the details of the proof.
Proof of Corollary 1.8.: We import notations from the proof of Theorem 5.5. Applying the improved tree diagram bound we obtain
\[S(\beta,L,f)\leq C\sum_{\begin{subarray}{c}x\in\mathbb{Z}^{d}\\ x_{1},x_{2},x_{3},x_{4}\in\Lambda_{r_{f}L}\end{subarray}}\frac{\langle\sigma_{x }\sigma_{x_{1}}\rangle_{\beta}\langle\sigma_{x}\sigma_{x_{2}}\rangle_{\beta} \langle\sigma_{x}\sigma_{x_{3}}\rangle_{\beta}\langle\sigma_{x}\sigma_{x_{4}} \rangle_{\beta}}{B_{L(x_{1},x_{2},x_{3},x_{4})}(\beta)^{c}\Sigma_{L}(\beta)^{2 }}, \tag{6.14}\]
where \(L(x_{1},x_{2},x_{3},x_{4})\) is the minimal distance between the \(x_{i}\). We now fix \(a\in(0,1)\). The strategy consists in splitting the right-hand side of (6.14) according to the following four possibilities: \(x\in\Lambda_{dr_{f}L}\) and \(L(x_{1},x_{2},x_{3},x_{4})\leq L^{a}\), \(x\notin\Lambda_{dr_{f}L}\) and \(L(x_{1},x_{2},x_{3},x_{4})\leq L^{a}\), \(x\in\Lambda_{dr_{f}L}\) and \(L(x_{1},x_{2},x_{3},x_{4})>L^{a}\), \(x\notin\Lambda_{dr_{f}L}\) and \(L(x_{1},x_{2},x_{3},x_{4})>L^{a}\).
From there the conclusion builds on the same tools as the ones used in Section 5, and also on Lemma 6.16.
**Remark 6.25**.: _In [1], the proof of this result crucially relies on a sharp sliding-scale infrared bound which leads to a sharp bound in Lemma 6.16. This shows how remarkable this bound is since it yields a quantitative decay even though we do not even know that \(B_{L}(\beta_{c})\) explodes. This argument breaks down in the next section (for \(d\in\{2,3\}\)) due to the lack of such a sharp result._
## 7. Reflection positive Ising models in dimension \(1\leq d\leq 3\) satisfying \(d_{\rm eff}=4\)
We now explain how to adapt the above strategy to the remaining "marginal case": \(d_{\rm eff}=4>d\). As seen in Section 5 (and more precisely in Remark 5.3), for algebraically decaying RP interactions \(J_{x,y}=C|x-y|_{1}^{-d-\alpha}\), this corresponds to choosing \(d-2(\alpha\wedge 2)=0\).
We will assume that \(J\) satisfies (**A1**)-(**A5**) with the complementary assumption18 that there exist \(\mathbf{c},\mathbf{C}>0\) such that, for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\),
Footnote 18: This assumption is a little more restrictive than before since we now require some regularity property on the interaction. This restriction is essentially technical and we believe that the proof which follows should hold under more general assumptions.
\[\frac{\mathbf{c}}{|x|^{d+\boldsymbol{\alpha}(d)}}\leq J_{0,x}\leq\frac{ \mathbf{C}}{|x|^{d+\boldsymbol{\alpha}(d)}}, \tag{7.1}\]
where \(\boldsymbol{\alpha}(d)=d/2\).
Our goal is to prove the following result, which is a slightly stronger version of Theorem 1.9.
**Theorem 7.1** (Improved tree diagram bound).: _Let \(1\leq d\leq 3\). Assume that \(J\) satisfies (**A1**)-(**A5**) together with (7.1). There exists \(C>0\) such that the following holds: for all \(\beta\leq\beta_{c}\), there exists an increasing function \(\phi_{\beta}:\mathbb{R}\to\mathbb{R}_{>0}\) such that for all \(x,y,z,t\in\mathbb{Z}^{d}\) at mutual distance at least \(L\) from each other with \(1\leq L\leq L(\beta)\),_
\[|U_{4}^{\beta}(x,y,z,t)|\leq\frac{C}{\phi_{\beta}(B_{L}(\beta))}\sum_{u\in \mathbb{Z}^{d}}\langle\sigma_{x}\sigma_{u}\rangle_{\beta}\langle\sigma_{y} \sigma_{u}\rangle_{\beta}\langle\sigma_{z}\sigma_{u}\rangle_{\beta}\langle \sigma_{t}\sigma_{u}\rangle_{\beta}.\]
_If \(B(\beta_{c})=\infty\), one has \(\phi_{\beta_{c}}(t)\to\infty\) as \(t\to\infty\)._
**Remark 7.2**.: _We could replace \(\phi_{\beta}(B_{L}(\beta))\) by \(B_{L}(\beta)^{c}\) (as in Corollary 1.8) provided we could prove the following (under the assumptions of Theorem 7.1): there exists \(C>0\) such that, if \(1\leq n\leq N\leq L(\beta)\),_
\[\frac{\chi_{N}(\beta)}{N^{\gamma(d)}}\leq C\frac{\chi_{n}(\beta)}{n^{\gamma(d) }},\]
_where \(\gamma(2)=1\) and \(\gamma(3)=3/2\). This will become more transparent below._
_When \(d=1\), such a sharp result is obtained thanks to the exact knowledge of the decay of the two-point function._
From this result, we will obtain
**Corollary 7.3**.: _We keep the assumptions of Theorem 7.1. Then, for \(\sigma\in(0,d/2)\),_
\[\lim_{\beta\nearrow\beta_{c}}g_{\sigma}(\beta)=0.\]
_As a consequence, for \(\beta=\beta_{c}\), every sub-sequential scaling limit of the model is Gaussian._
To prove the improved tree diagram bound, we will extend the strategy of Section 6 and prove a mixing statement together with the intersection property. The proof relies heavily on a finer analysis of the geometry of the clusters than the one carried out in Section 6.2 (see Remark 6.10); this analysis is in fact valid in wider generality (i.e. also when \(d_{\mathrm{eff}}>4\)). Hence, we begin by proving the mixing statement in its most general form.
In Sections 7.1, 7.2, and 7.3 we consider an interaction \(J\) on \(\mathbb{Z}^{d}\) (\(d\geq 1\)) satisfying (**A1**)-(**A5**) together with the following assumption: there exist \(\mathbf{c}_{1},\mathbf{C}_{1}>0\) such that, for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\),
\[\frac{\mathbf{c}_{1}}{|x|^{d+\alpha}} \leq J_{0,x} \leq\frac{\mathbf{C}_{1}}{|x|^{d+\alpha}}, (\mathbf{Assumption}_{\alpha})\]
where \(\alpha>0\) will be specified below. By the results of Section 3, this implies the existence of \(\mathbf{C}_{2}>0\) such that: for all \(\beta\leq\beta_{c}\), for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\),
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta} \leq\frac{\mathbf{C}_{2}}{\beta_{c}|x|^{d-\alpha\wedge 2}(\log|x|)^{ \delta_{2,\alpha}}}.\] ( \[\mathbf{IRB}_{\alpha}\] )
Moreover, using Proposition 3.25, we find that there exists \(\mathbf{C}_{3}>0\) such that: for all \(\beta\leq\beta_{c}\), for all \(x\in\mathbb{Z}^{d}\setminus\{0\}\) with \(1\leq|x|\leq L(\beta)\),
\[\langle\sigma_{0}\sigma_{x}\rangle_{\beta} \geq\frac{\mathbf{C}_{3}}{\beta|x|^{d-1}f_{\alpha}(|x|+1)},\] ( \[\mathbf{LB}_{\alpha}\] )
where, for \(t>1\), \(f_{\alpha}(t):=1\) if \(\alpha>1\), \(f_{1}(t):=\log t\), and \(f_{\alpha}(t):=t^{1-\alpha}\) if \(\alpha\in(0,1)\).
### Existence of weak regular scales
In the case \(d\geq 3\), the existence of regular scales was proved in Section 3. For \(d=2\), the proof of (**P4**) failed. The reason behind this is purely technical: the sliding-scale infrared bound is not optimal in dimension \(2\) since we expect the growth of \(\chi_{n}(\beta_{c})\) to be smaller than \(n^{2}\). We can circumvent this technical difficulty by allowing ourselves a "weaker" property (\(\mathbf{P4}^{\prime}\)) in the definition of a regular scale.
**Definition 7.4** (Weak regular scales).: _Fix \(c,C>0\). An annular region \(\mathrm{Ann}(n/2,8n)\) is said to be \((c,C)\)-weak regular if it satisfies the properties_ (**P1**)_-_(**P3**) _and_
* \((\mathbf{P4^{\prime}})\) _For every \(x\in\Lambda_{n}\) and \(y\notin\Lambda_{C(\log n)^{2}n}\), \(S_{\beta}(y)\leq\frac{1}{2}S_{\beta}(x)\)._
_A scale \(k\) is said to be weak regular if \(n=2^{k}\) is such that \(\mathrm{Ann}(n/2,8n)\) is \((c,C)\)-weak regular; a vertex \(x\in\mathbb{Z}^{d}\) is said to be in a weak regular scale if it belongs to an annulus \(\mathrm{Ann}(n,2n)\) with \(n=2^{k}\) and \(k\) a weak regular scale._
**Proposition 7.5** (Existence of weak regular scales).: _Let \(d\geq 1\). Assume that \(J\) satisfies_ (**A1**)_-_(**A5**) _and_ (**Assumption\({}_{\alpha}\)**) _where \(\alpha>0\) if \(d\geq 3\), \(\alpha\in(0,1]\) if \(d=2\), and \(\alpha\in(0,1)\) if \(d=1\). Let \(\gamma>2\). There exist \(c_{0},c_{1},C_{0}>0\) such that for every \(\beta\leq\beta_{c}\), and every \(1\leq n^{\gamma}\leq N\leq L(\beta)\), there are at least \(c_{1}\log_{2}\left(\frac{N}{n}\right)\)\((c_{0},C_{0})\)-weak regular scales between \(n\) and \(N\)._
Proof.: The cases \(d\geq 3\), and \(d\in\{1,2\}\) with \(\alpha\in(0,1)\) were already settled in Propositions 3.28 and 3.29 since (**P4\({}^{\prime}\)**) is weaker than (**P4**). We only need to take care of the case
\(d=2\) and \(\alpha=1\). Using \((\mathbf{LB}_{\alpha})\) together with \((\mathbf{IRB}_{\alpha})\), we get that for \(x\in\mathbb{Z}^{d}\) with \(2\leq|x|\leq L(\beta)\),
\[\frac{c_{1}}{\beta|x|(\log|x|)}\leq\langle\sigma_{0}\sigma_{x} \rangle_{\beta}\leq\frac{C_{1}}{\beta|x|}. \tag{7.2}\]
Using (7.2) and the assumption19\(1\leq n^{\gamma}\leq N\), we get the existence of \(c_{2},c_{3},c_{4}>0\) such that,
Footnote 19: In fact \(n\leq N/\log N\) is enough.
\[\chi_{N}(\beta)\geq\frac{c_{2}}{\beta}\frac{N}{\log N}=\frac{c_{2 }}{\beta}\left(\frac{N}{n}\right)\frac{n}{\log N}\geq c_{3}\left(\frac{N}{n} \right)\frac{1}{\log N}\chi_{n}(\beta)\geq c_{4}\left(\frac{N}{n}\right)^{1/2} \chi_{n}(\beta).\]
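Two elementary facts are implicitly used here. First, by the upper bound in (7.2) (adjusting constants, and recalling \(\beta\leq\beta_{c}\)), in \(d=2\),

\[\chi_{n}(\beta)\leq C+\sum_{2\leq|x|\leq 2n}\frac{C_{1}}{\beta|x|}\leq\frac{C^{\prime}n}{\beta},\]

which justifies the middle inequality. Second, since \(N\geq n^{\gamma}\) with \(\gamma>2\), one has \(N/n\geq N^{1-1/\gamma}\geq N^{1/2}\), so that \((N/n)^{1/2}\geq N^{1/4}\geq\log N\) for \(N\) large, which justifies the last one.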
Using Theorem 3.18, we find \(r,c_{5}>0\) and independent of \(n,N\), such that there are at least \(c_{5}\log_{2}(N/n)\) scales \(m=2^{k}\) between \(n\) and \(N\) such that
\[\chi_{rm}(\beta)\geq\chi_{16dm}(\beta)+\chi_{m}(\beta). \tag{7.3}\]
We prove that such an \(m\) is a \((c_{0},C_{0})\)-weak regular scale for a good choice of \(c_{0},C_{0}\). The proof of \((\mathbf{P1})\)-\((\mathbf{P3})\) follows the same line as before. Now, using (7.2), we get that for \(1\leq\ell\leq L\) with \(\ell\leq L(\beta)\),
\[\frac{\chi_{L}(\beta)}{L}\leq C_{2}\frac{\log\ell}{\ell}\chi_{ \ell}(\beta). \tag{7.4}\]
Let \(R\geq 1\). Using the same strategy we used to obtain (3.12) and replacing the sliding-scale infrared bound by (7.4), we get for \(y\notin\Lambda_{dR(\log m)^{2}m}\) and \(x\in\Lambda_{m}\),
\[|\Lambda_{R(\log m)^{2}m}|S_{\beta}(y)\leq\chi_{R(\log m)^{2}m}( \beta)\leq C_{3}R(\log m)^{3}\chi_{m}(\beta)\leq C_{4}R(\log m)^{3}m^{2}S_{ \beta}(x),\]
which implies that
\[S_{\beta}(y)\leq\frac{C_{5}}{R\log m}S_{\beta}(x)\leq\frac{1}{ 2}S_{\beta}(x),\]
if \(R\) is large enough. This concludes the proof.
### Properties of the current
In this subsection, we let \(d\geq 1\) and assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) together with \((\mathbf{Assumption}_{\alpha})\) with \(d-2(\alpha\wedge 2)\geq 0\). As explained in Section 5, this choice corresponds to \(d_{\mathrm{eff}}\geq 4\).
We import the notations from Section 6.2. The main difficulty below will come from the fact that \(\mathsf{Jump}(k,k+k^{\nu})\) (for \(\nu<1\)) now occurs with high probability (if \(\alpha\in(0,2]\)). However, at the price of considering thicker annuli, we can keep a similar statement.
Many of the computations done here are very similar to what was done in Section 6.2 so we only present the main changes and omit the trivial modifications.
**Lemma 7.6**.: _Let \(d\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(\alpha>0\). Let \(\epsilon>0\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), \(y\in\mathbb{Z}^{d}\) in a weak regular scale, with \(1\leq|y|\leq L(\beta)\), and for all \(k\geq 1\) such that \(k^{4}\leq|y|\),_
\[\mathbf{P}_{\beta}^{0y,\emptyset}[\mathsf{Jump}(k,k^{1+\epsilon})]\leq\frac{ C}{k^{\eta}}.\]
Proof.: We repeat the strategy of proof of Lemma 6.5. Lemma 6.3 yields
\[\mathbf{P}_{\beta}^{0y,\emptyset}[\mathsf{Jump}(k,k^{1+\epsilon} )]\leq 2\beta\sum_{u\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}}J_{u,v} \Bigg{(}\langle\sigma_{u}\sigma_{v}\rangle_{\beta}\\ +\frac{\langle\sigma_{0}\sigma_{u}\rangle_{\beta}\langle\sigma_{ v}\sigma_{y}\rangle_{\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}+\frac{ \langle\sigma_{0}\sigma_{v}\rangle_{\beta}\langle\sigma_{u}\sigma_{y}\rangle_{ \beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\Bigg{)}=:A_{1}+A_{2}+A_{3}.\]
Using \((\mathbf{IRB}_{\alpha})\) and \((\mathbf{Assumption}_{\alpha})\),
\[A_{1}\leq C_{1}\sum_{u\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}}\frac{1}{|u-v|^{d+\alpha+d-\alpha\wedge 2}}\leq C_{2}\frac{k^{d}}{k^{(1+\epsilon)(d+\alpha-\alpha\wedge 2)}}.\]
Using \((\mathbf{P1})\) of weak regular scales, together with \((\mathbf{IRB}_{\alpha})\) and \((\mathbf{Assumption}_{\alpha})\), we similarly obtain
\[A_{2}\leq C_{3}\frac{k^{d}}{k^{(1+\epsilon)(d+\alpha-\alpha\wedge 2)}}.\]
Finally, proceeding as in the proof of Lemma 6.5, using additionally \((\mathbf{LB}_{\alpha})\),
\[\beta\sum_{u\in\Lambda_{k},\,v\in\Lambda_{|y|/2}(y)}J_{u,v}\frac{\langle \sigma_{0}\sigma_{u}\rangle_{\beta}\langle\sigma_{v}\sigma_{y}\rangle_{\beta} }{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\leq C_{5}\frac{k^{\alpha\wedge 2 }|y|^{d-1}f_{\alpha}(|y|)|y|^{\alpha\wedge 2}}{|y|^{d+\alpha}},\]
where the right-hand side is bounded by \(C_{5}k^{\alpha}/|y|^{\alpha}\) for \(\alpha\in(0,1)\), by \(C_{5}k\log(|y|)/|y|\) for \(\alpha=1\), and by \(C_{5}k^{\alpha\wedge 2}/|y|\) for \(\alpha>1\). Hence, if \(k^{4}\leq|y|\), this term is always bounded by \(C_{4}/|y|^{\delta}\) for some \(\delta=\delta(\alpha)>0\).
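Explicitly, one may take \(\delta(\alpha)=\min(3\alpha/4,1/2)\): when \(k^{4}\leq|y|\),

\[\frac{k^{\alpha}}{|y|^{\alpha}}\leq|y|^{-3\alpha/4},\qquad\frac{k\log|y|}{|y|}\leq\frac{\log|y|}{|y|^{3/4}}\leq\frac{C}{|y|^{1/2}},\qquad\frac{k^{\alpha\wedge 2}}{|y|}\leq|y|^{(\alpha\wedge 2)/4-1}\leq\frac{1}{|y|^{1/2}}.\]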
Using again \((\mathbf{P1})\) as in Lemma 6.5,
\[\beta\sum_{u\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}\cup\Lambda_{|y|/2 }(y)}J_{u,v}\frac{\langle\sigma_{0}\sigma_{u}\rangle_{\beta}\langle\sigma_{v} \sigma_{y}\rangle_{\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\leq C_{ 6}\frac{k^{\alpha\wedge 2}}{k^{(1+\epsilon)\alpha}}.\]
This concludes the proof.
**Remark 7.7**.: _The main contribution to the above probability comes from the term \(A_{2}\) pictured in Figure 4._
Despite being equipped with a very weak version of Lemma 6.5, we can still obtain a version of Corollary 6.9 in our context.
**Corollary 7.8** (No zigzag for the backbone).: _Let \(d\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(d-2(\alpha\wedge 2)\geq 0\). Fix \(\nu\in(0,1)\) and \(\epsilon>0\). There exist \(C,\eta>0\) such that, for all \(\beta\leq\beta_{c}\), for all \(k,\ell\geq 1\) and \(y\in\mathbb{Z}^{d}\) in a weak regular scale with \(k^{[2d/(1-\nu)]\vee[2d/(\nu\alpha)]}\leq\ell\) and \(\ell^{4}\leq|y|\),_
\[\mathbf{P}_{\beta}^{0y}[\Gamma(\mathbf{n}_{1})\in\mathsf{ZZ}(0,y;k,\ell,\infty)]\leq\frac{C}{\ell^{\eta}}.\]
Proof.: Notice that
\[\mathsf{ZZ}(0,y;k,\ell,\infty)\subset\mathsf{ZZ}(0,y;k,\ell,\ell^{1+\epsilon} )\cup\mathsf{Jump}(\ell,\ell^{1+\epsilon}).\]
Using Lemma 7.6 we find \(C_{1},\eta>0\) such that
\[\mathbf{P}_{\beta}^{0y}[\mathsf{Jump}(\ell,\ell^{1+\epsilon})]\leq\frac{C_{1}}{\ell^{\eta}}.\]
If \(\mathsf{ZZ}(0,y;k,\ell,\ell^{1+\epsilon})\) occurs, there are two possibilities: either the backbone actually visits \(\mathrm{Ann}(\ell,\ell+\ell^{\nu})\) before hitting \(\Lambda_{k}\), an event we denote by \(\mathsf{B}_{1}\); or it does not in which case there must be an open edge which jumps from \(\Lambda_{\ell}\) to \(\mathrm{Ann}(\ell+\ell^{\nu},\ell^{1+\epsilon})\), an event we denote by \(\mathsf{B}_{2}\). By the chain rule for the backbone, we find that,
\[\mathbf{P}_{\beta}^{0y}[\mathsf{B}_{1}]\leq\sum_{\begin{subarray}{c}u\in \mathrm{Ann}(\ell,\ell+\ell^{\nu})\\ v\in\Lambda_{k}\end{subarray}}\frac{\langle\sigma_{0}\sigma_{u}\rangle_{\beta} \langle\sigma_{u}\sigma_{v}\rangle_{\beta}\langle\sigma_{v}\sigma_{y}\rangle_ {\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta}}\leq C_{2}\frac{k^{d}\ell^{d- 1+\nu}}{\ell^{2d-2(\alpha\wedge 2)}}\leq\frac{C_{2}}{\ell^{(1-\nu)/2}},\]
where we used \((\mathbf{IRB}_{\alpha})\), the property \((\mathbf{P2})\) of weak regular scales to compare \(\langle\sigma_{v}\sigma_{y}\rangle_{\beta}\) and \(\langle\sigma_{0}\sigma_{y}\rangle_{\beta}\), and the hypothesis \(d-2(\alpha\wedge 2)\geq 0\). Using \((\mathbf{B}_{2})\), we see that for a one-step walk
\(\gamma:a\to b\), one has \(\rho(\gamma)\leq\rho_{\{a,b\}}(\gamma)=\tanh(\beta J_{a,b})\). Combining this observation with the chain rule,
\[\mathbf{P}_{\beta}^{0y}[\mathbf{B}_{2}]\leq\sum_{\begin{subarray}{c}a\in\Lambda_{ t}\\ b\in\operatorname*{Ann}(\ell+\ell^{\nu},\ell^{1+\epsilon})\\ c\in\Lambda_{k}\end{subarray}}\frac{\langle\sigma_{0}\sigma_{a}\rangle_{ \beta}\tanh(\beta J_{a,b})\langle\sigma_{b}\sigma_{c}\rangle_{\beta}\langle \sigma_{c}\sigma_{y}\rangle_{\beta}}{\langle\sigma_{0}\sigma_{y}\rangle_{\beta} }\leq\frac{C_{3}k^{d}\ell^{\alpha\wedge 2}}{\ell^{d-\alpha\wedge 2}\ell^{\nu\alpha}},\]
where we used (**P2**) to compare \(\langle\sigma_{c}\sigma_{y}\rangle_{\beta}\) and \(\langle\sigma_{0}\sigma_{y}\rangle_{\beta}\), \((\mathbf{IRB}_{\alpha})\) to argue that
\[\sum_{c\in\Lambda_{k}}\langle\sigma_{b}\sigma_{c}\rangle_{\beta}\leq C_{4} \frac{k^{d}}{\ell^{d-\alpha\wedge 2}},\]
and \((\mathbf{IRB}_{\alpha})\) once again with \((\mathbf{Assumption}_{\alpha})\) to get
\[\sum_{\begin{subarray}{c}a\in\Lambda_{\ell}\\ b\notin\Lambda_{\ell+\ell^{\nu}}\end{subarray}}\langle\sigma_{0}\sigma_{a} \rangle_{\beta}\tanh(\beta J_{a,b})\leq C_{5}\frac{\ell^{\alpha\wedge 2}}{\ell^{ \alpha\nu}}.\]
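Both displayed estimates reduce to radial sums, which we record as a sanity check (the logarithmic gain in \((\mathbf{IRB}_{\alpha})\) is simply discarded): by \((\mathbf{IRB}_{\alpha})\),

\[\sum_{a\in\Lambda_{\ell}}\langle\sigma_{0}\sigma_{a}\rangle_{\beta}\leq C\sum_{p=1}^{C\ell}p^{d-1}\cdot p^{-(d-\alpha\wedge 2)}\leq C\ell^{\alpha\wedge 2},\]

while by \((\mathbf{Assumption}_{\alpha})\), for fixed \(a\in\Lambda_{\ell}\) and \(b\notin\Lambda_{\ell+\ell^{\nu}}\) (so that \(|a-b|\geq\ell^{\nu}\)), \(\tanh(\beta J_{a,b})\leq\beta J_{a,b}\) and \(\sum_{b:|b-a|\geq\ell^{\nu}}\beta J_{a,b}\leq C\sum_{p\geq\ell^{\nu}}p^{-1-\alpha}\leq C\ell^{-\nu\alpha}\).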
This concludes the proof.
We can also obtain the corresponding modification of Lemma 6.11.
**Lemma 7.9**.: _Let \(d\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(d-2(\alpha\wedge 2)\geq 0\). Let \(\epsilon>0\). There exist \(C,\eta>0\) such that, for all \(\beta\leq\beta_{c}\), for \(n<m\leq M\leq k\) with \(1\leq M^{2/\epsilon}\leq k\leq L(\beta)\), for all \(x\in\Lambda_{n}\), and all \(u\in\operatorname*{Ann}(m,M)\),_
\[\mathbf{P}_{\beta}^{xu,\emptyset}[\operatorname*{Jump}(k,k^{1+\epsilon})]\leq \frac{C}{k^{\eta}}.\]
Proof.: We repeat the proof of Lemma 6.11. Using Lemma 6.3,
\[\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}}\mathbf{ P}_{\beta}^{xu,\emptyset}[\mathbf{n}_{w,v}\geq 1]\\ \leq 2\beta\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k^{1+ \epsilon}}}J_{w,v}\left(\langle\sigma_{w}\sigma_{v}\rangle_{\beta}+\frac{ \langle\sigma_{x}\sigma_{w}\rangle_{\beta}\langle\sigma_{v}\sigma_{u}\rangle_ {\beta}}{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}}+\frac{\langle\sigma_{x} \sigma_{v}\rangle_{\beta}\langle\sigma_{w}\sigma_{u}\rangle_{\beta}}{\langle \sigma_{x}\sigma_{u}\rangle_{\beta}}\right).\]
Using \((\mathbf{Assumption}_{\alpha})\) and \((\mathbf{IRB}_{\alpha})\),
\[\beta\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}}J_{w,v}\langle \sigma_{w}\sigma_{v}\rangle_{\beta}\leq C_{1}k^{d}\sum_{p\geq k^{1+\epsilon}} \frac{p^{d-1}}{p^{d+\alpha+d-(\alpha\wedge 2)}}\leq\frac{C_{2}}{k^{d\epsilon}}. \tag{7.5}\]
Then, using \((\mathbf{LB}_{\alpha})\) (which is licit since \(1\leq|x-u|\leq L(\beta)\)) together with \((\mathbf{Assumption}_{\alpha})\) and \((\mathbf{IRB}_{\alpha})\), we get
\[\beta\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}}J_{ w,v}\frac{\langle\sigma_{x}\sigma_{v}\rangle_{\beta}\langle\sigma_{w}\sigma_{u} \rangle_{\beta}}{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}} \leq \beta^{2}C_{3}M^{d-1}f_{\alpha}(M)\sum_{\begin{subarray}{c}w\in \Lambda_{k}\\ v\notin\Lambda_{k^{1+\epsilon}}\end{subarray}}J_{w,v}\langle\sigma_{x}\sigma_{ v}\rangle_{\beta}\langle\sigma_{w}\sigma_{u}\rangle_{\beta}\] \[\leq C_{4}M^{d-1}f_{\alpha}(M)k^{\alpha\wedge 2}\sum_{v\notin\Lambda_{k^{1+ \epsilon}}}|v|^{-(d-\alpha\wedge 2)}J_{0,v}\] \[\leq C_{5}M^{d}k^{\alpha\wedge 2-d(1+\epsilon)}.\]
Finally, with the same reasoning we also get
\[\sum_{w\in\Lambda_{k},\,v\notin\Lambda_{k^{1+\epsilon}}}J_{w,v}\frac{\langle \sigma_{x}\sigma_{w}\rangle_{\beta}\langle\sigma_{v}\sigma_{u}\rangle_{\beta} }{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}}\leq C_{6}M^{d}k^{\alpha\wedge 2-d(1+ \epsilon)}.\]
Now, if \(M^{2/\epsilon}\leq k\), using that \(d-2(\alpha\wedge 2)\geq 0\), we can choose \(\eta=d\epsilon/2\) and \(C\) a sufficiently large constant.
**Corollary 7.10**.: _Let \(d\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(d-2(\alpha\wedge 2)\geq 0\). There exist \(\nu\in(0,1)\) and \(C,\epsilon,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(n<m\leq M\leq k\) with \(1\leq M^{[2(1+\epsilon)/\epsilon]\vee[2d/(1-\nu)]}\leq k\leq L(\beta)\), for all \(x\in\Lambda_{n}\) and all \(u\in\mathrm{Ann}(m,M)\),_
\[\mathbf{P}_{\beta}^{xu}[\mathsf{ZZ}(x,u;M,k,\infty)]\leq\frac{C}{k^{\eta}}.\]
Proof.: We repeat the strategy used to get Corollary 7.8. Let \(\nu\in(0,1)\) and \(\epsilon>0\) to be fixed below. Notice that,
\[\mathsf{ZZ}(x,u;M,k,\infty)\subset\big[\mathsf{ZZ}(x,u;M,k,k^{1+\epsilon})\cap(\mathsf{Jump}(k^{1/(1+\epsilon)},k))^{c}\big]\\ \cup\mathsf{Jump}(k^{1/(1+\epsilon)},k)\cup\mathsf{Jump}(k,k^{1+\epsilon}).\]
Using Lemma 7.9 together with the hypothesis on \(M\) and \(k\), we find \(C_{1},\eta>0\) such that
\[\mathbf{P}_{\beta}^{xu}[\mathsf{Jump}(k^{1/(1+\epsilon)},k)\cup\mathsf{ Jump}(k,k^{1+\epsilon})]\leq\frac{C_{1}}{k^{\eta}}.\]
We handle \(\mathsf{ZZ}(x,u;M,k,k^{1+\epsilon})\cap(\mathsf{Jump}(k^{1/(1+\epsilon)},k))^ {c}\) as we did in the proof of Corollary 7.8 by splitting it into two events \(\mathsf{B}_{1}\) and \(\mathsf{B}_{2}\) according to whether or not the backbone reaches \(\mathrm{Ann}(k,k+k^{\nu})\). Using the chain rule together with \((\mathbf{LB}_{\alpha})\) and \((\mathbf{IRB}_{\alpha})\),
\[\mathbf{P}_{\beta}^{xu}[\mathsf{B}_{1}] \leq \sum_{v\in\mathrm{Ann}(k,k+k^{\nu})}\frac{\langle\sigma_{x} \sigma_{v}\rangle_{\beta}\langle\sigma_{v}\sigma_{u}\rangle_{\beta}}{\langle \sigma_{x}\sigma_{u}\rangle_{\beta}}\] \[\leq \frac{C_{2}M^{d-1}f_{\alpha}(M)k^{d-1+\nu}}{k^{2d-2(\alpha\wedge 2 )}}\] \[\leq \frac{C_{2}M^{d}}{k^{1-\nu}}\leq\frac{C_{2}}{k^{(1-\nu)/2}},\]
where we used the assumption that \(d\geq 2(\alpha\wedge 2)\).
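Spelling out the exponent bookkeeping in the last display: since \(f_{\alpha}(t)\leq t\) for \(t>1\), the hypothesis \(M^{2d/(1-\nu)}\leq k\) gives \(M^{d-1}f_{\alpha}(M)\leq M^{d}\leq k^{(1-\nu)/2}\), and \(d-2(\alpha\wedge 2)\geq 0\) yields

\[\frac{M^{d-1}f_{\alpha}(M)k^{d-1+\nu}}{k^{2d-2(\alpha\wedge 2)}}\leq k^{(1-\nu)/2}\,k^{-(d-2(\alpha\wedge 2))-(1-\nu)}\leq k^{-(1-\nu)/2}.\]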
Figure 7. An illustration of the occurrence of the event \(\mathsf{B}_{2}\) defined in the proof of Corollary 7.10. The “exclusion zone” \(\operatorname{Ann}(k,k+k^{\nu})\) is represented in red. The green dashed line illustrates the long open edge which jumps above it.
It remains to analyse \(\mathsf{B}_{2}\). In that case, the backbone has to jump above \(\operatorname{Ann}(k,k+k^{\nu}):\) it goes from \(x\) to a point in \(\operatorname{Ann}(k^{1/(1+\epsilon)},k)\) (recall that we excluded jumps above \(\operatorname{Ann}(k^{1/(1+\epsilon)},k)\)), then jumps in \(\operatorname{Ann}(k+k^{\nu},k^{1+\epsilon})\), before finally hitting \(u\) (see Figure 7).
Using the chain rule for the backbone as we did in the proof of Corollary 7.8 together with \((\mathbf{Assumption}_{\alpha})\), \((\mathbf{LB}_{\alpha})\), and \((\mathbf{IRB}_{\alpha})\), we get
\[\mathbf{P}_{\beta}^{xu}[\mathsf{B}_{2}] \leq \sum_{\begin{subarray}{c}a\in\operatorname{Ann}(k^{1/(1+\epsilon)},k)\\ b\in\operatorname{Ann}(k+k^{\nu},k^{1+\epsilon})\end{subarray}}\frac{\langle\sigma_{x}\sigma_{a}\rangle_{\beta}\tanh(\beta J_{a,b})\langle\sigma_{b}\sigma_{u}\rangle_{\beta}}{\langle\sigma_{x}\sigma_{u}\rangle_{\beta}}\] \[\leq \frac{C_{3}M^{d-1}f_{\alpha}(M)k^{d}k^{d(1+\epsilon)}}{k^{\frac{d-\alpha\wedge 2}{1+\epsilon}}k^{\nu(d+\alpha)}k^{d-\alpha\wedge 2}}.\]
Now, recall that \(M^{d}\leq k^{d\epsilon}\), so that we may find \(\zeta=\zeta(\epsilon,\nu)>0\) such that
\[\mathbf{P}_{\beta}^{xu}[\mathsf{B}_{2}]\leq\frac{C_{3}k^{2d+3d\epsilon}}{k^{\frac{d-\alpha\wedge 2}{1+\epsilon}}k^{\nu(d+\alpha)}k^{d-\alpha\wedge 2}}\leq\frac{C_{3}}{k^{\zeta}},\]
provided \(\epsilon>0\) is small enough, and \(\nu\) is sufficiently close to \(1\). This concludes the proof.
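As a sanity check on the exponent, the previous display decays like \(k^{-\zeta}\) with

\[\zeta=\frac{d-\alpha\wedge 2}{1+\epsilon}+\nu(d+\alpha)+(d-\alpha\wedge 2)-2d-3d\epsilon\xrightarrow[\epsilon\to 0,\,\nu\to 1]{}d+\alpha-2(\alpha\wedge 2)\geq\alpha>0,\]

where the last inequality uses \(d\geq 2(\alpha\wedge 2)\); hence \(\zeta>0\) indeed holds for \(\epsilon\) small enough and \(\nu\) close enough to \(1\).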
Finally, as we did in Section 6.2, we conclude this subsection with some properties concerning the current \(\mathbf{n}\setminus\overline{\Gamma(\mathbf{n})}\).
**Lemma 7.11**.: _Let \(d\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(\alpha>0\). Let \(\epsilon>0\). There exist \(C,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(k\geq 1\), for all \(x,y\in\mathbb{Z}^{d}\),_
\[\mathbf{P}_{\beta}^{xy}[\mathbf{n}\setminus\overline{\Gamma(\mathbf{n})}\in \mathsf{Jump}(k,k^{1+\epsilon})]\leq\mathbf{P}_{\beta}^{xy,\emptyset}[( \mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in \mathsf{Jump}(k,k^{1+\epsilon})]\leq\frac{C}{k^{\eta}}.\]
Proof.: Proceeding exactly as in the proof of Corollary 6.15 we find that
\[\mathbf{P}_{\beta}^{xy,\emptyset}[(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus \overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Jump}(k,k^{1+\epsilon})]\leq 2\sum_{u \in\Lambda_{k},\;v\not\in\Lambda_{k^{1+\epsilon}}}\beta J_{u,v}\langle\sigma_{ u}\sigma_{v}\rangle_{\beta}.\]
This last sum is then smaller than \(Ck^{-d\epsilon}\) as shown in (7.5).
Recall that the event \(\mathsf{Cross}\) was defined in Definition 6.14.
**Corollary 7.12**.: _Let \(d\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(d-2(\alpha\wedge 2)\geq 0\). There exist \(\nu\in(0,1)\) and \(C,\epsilon,\eta>0\) such that for all \(\beta\leq\beta_{c}\), for all \(k,\ell\geq 1\) with \(k^{2d/(1-\nu)}\leq\ell\), for all \(x,u\in\mathbb{Z}^{d}\),_
\[\mathbf{P}_{\beta}^{xu}[\mathbf{n}\setminus\overline{\Gamma(\mathbf{n})}\in \mathsf{Cross}(k,\ell)]\leq\mathbf{P}_{\beta}^{xu,\emptyset}[(\mathbf{n}_{1}+ \mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Cross}(k, \ell)]\leq\frac{C}{\ell^{\eta}}.\]
Proof.: We use the ideas developed in the proofs of Lemma 6.13 and Corollary 7.10. Let \(\nu\in(0,1)\) and \(\epsilon>0\) to be fixed below. Start by writing,
\[\{(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Cross}(k,\ell)\}\subset[(\mathsf{B}_{1}\cup\mathsf{B}_{2})\cap\mathsf{J}^{c}]\cup\mathsf{J},\]
where \(\mathsf{B}_{1}\) is the event that \((\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\) crosses \(\operatorname{Ann}(k,\ell)\) by passing through \(\operatorname{Ann}(\ell,\ell+\ell^{\nu})\), \(\mathsf{B}_{2}\) is the complement of \(\mathsf{B}_{1}\) in \(\{(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in \mathsf{Cross}(k,\ell)\}\), and \(\mathsf{J}=\{(\mathbf{n}_{1}+\mathbf{n}_{2})\setminus\overline{\Gamma( \mathbf{n}_{1})}\in\mathsf{Jump}(\ell^{1/(1+\epsilon)},\ell)\}\cup\{(\mathbf{n} _{1}+\mathbf{n}_{2})\setminus\overline{\Gamma(\mathbf{n}_{1})}\in\mathsf{Jump}( \ell,\ell^{1+\epsilon})\}\).
Using Lemma 7.11, we get the existence of \(C_{1},\eta_{1}>0\) such that
\[\mathbf{P}_{\beta}^{xu,\emptyset}[\mathsf{J}]\leq\frac{C_{1}}{\ell^{\eta_{1}}}.\]
Using \((\mathbf{IRB}_{\alpha})\) together with what was done in the proof of Corollary 6.15,
\[\mathbf{P}^{xu,\emptyset}_{\beta}[\mathsf{B}_{1},\,\mathsf{J}^{c}]\leq\sum_{\begin{subarray}{c}v\in\mathrm{Ann}(\ell,\ell+\ell^{\nu})\\ w\in\Lambda_{k}\end{subarray}}\langle\sigma_{w}\sigma_{v}\rangle_{\beta}^{2}\leq\frac{C_{2}\ell^{d-1+\nu}k^{d}}{\ell^{2d-2(\alpha\wedge 2)}}\leq\frac{C_{2}}{\ell^{(1-\nu)/2}},\]
where we used that \(d-2(\alpha\wedge 2)\geq 0\). Finally, using a similar strategy as in the proof of Lemma 6.13,
\[\mathbf{P}^{xu,\emptyset}_{\beta}[\mathsf{B}_{2},\,\mathsf{J}^{c}]\leq\sum_{\begin{subarray}{c}\gamma:x\to u\\ \text{consistent}\end{subarray}}\sum_{\begin{subarray}{c}a\in\Lambda_{k}\\ b\in\mathrm{Ann}(\ell^{1/(1+\epsilon)},\ell)\\ c\in\mathrm{Ann}(\ell+\ell^{\nu},\ell^{1+\epsilon})\end{subarray}}\mathbf{P}^{xu}_{\beta}[\Gamma(\mathbf{n})=\gamma]\,\mathbf{P}^{\emptyset,\emptyset}_{\overline{\gamma}^{c},\,\mathbb{Z}^{d},\beta}[a\leftrightarrow b\ \text{in}\ \overline{\gamma}^{c},\,(\mathbf{n}_{1}+\mathbf{n}_{2})_{b,c}\geq 1].\]
Using the generalisation of the switching lemma mentioned in the proof of Corollary 6.15 together with Griffiths' inequality,
\[\mathbf{P}^{\emptyset,\emptyset}_{\overline{\gamma}^{c},\, \mathbb{Z}^{d},\beta}[a\leftrightarrow b\,\,\mathrm{in}\,\,\overline{\gamma}^ {c},\,(\mathbf{n}_{1}+\mathbf{n}_{2})_{b,c}\geq 1] = \langle\sigma_{a}\sigma_{b}\rangle_{\overline{\gamma}^{c},\, \beta}\langle\sigma_{a}\sigma_{b}\rangle_{\beta}\mathbf{P}^{ab,ab}_{\overline{ \gamma}^{c},\,\mathbb{Z}^{d},\beta}[(\mathbf{n}_{1}+\mathbf{n}_{2})_{b,c} \geq 1]\] \[\leq \beta J_{b,c}\langle\sigma_{a}\sigma_{c}\rangle_{\overline{\gamma }^{c},\,\beta}\langle\sigma_{a}\sigma_{b}\rangle_{\beta}+\beta J_{b,c}\langle \sigma_{a}\sigma_{b}\rangle_{\overline{\gamma}^{c},\,\beta}\langle\sigma_{a} \sigma_{c}\rangle_{\beta}\] \[\leq 2\langle\sigma_{a}\sigma_{b}\rangle_{\beta}\beta J_{b,c} \langle\sigma_{c}\sigma_{a}\rangle_{\beta}.\]
We obtained,
\[\mathbf{P}^{xu,\emptyset}_{\beta}[\mathsf{B}_{2},\,\,\mathrm{J}^{c}]\leq 2 \sum_{\begin{subarray}{c}a\in\Lambda_{k}\\ b\in\mathrm{Ann}(\ell^{1/(1+\epsilon)},\ell)\\ c\in\mathrm{Ann}(\ell+\ell^{\nu},\ell^{1+\epsilon})\end{subarray}}\langle \sigma_{a}\sigma_{b}\rangle_{\beta}\beta J_{b,c}\langle\sigma_{c}\sigma_{a} \rangle_{\beta}.\]
Now, using \((\mathbf{Assumption}_{\alpha})\) and \((\mathbf{IRB}_{\alpha})\),
\[\mathbf{P}^{xu,\emptyset}_{\beta}[\mathsf{B}_{2},\,\,\mathrm{J}^{c}]\leq \frac{C_{3}k^{d}\ell^{d}\ell^{d(1+\epsilon)}}{\ell^{(d-\alpha\wedge 2)}\ell^{\frac{d- \alpha\wedge 2}{1+\epsilon}}\ell^{\nu(d+\alpha)}}.\]
We can now use that \(k^{d}\leq\ell^{(1-\nu)}\) and choose \(\epsilon>0\) sufficiently small, and \(\nu\) sufficiently close to \(1\) to conclude.
### Mixing property for \(d_{\text{eff}}\geq 4\)
The goal of this subsection is to prove the following result.
**Theorem 7.13** (Mixing property for \(d_{\text{eff}}\geq 4\)).: _Let \(d\geq 1\) and \(s\geq 1\). Assume that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(d-2(\alpha\wedge 2)\geq 0\). There exist \(\gamma,c,C>0\) such that for every \(1\leq t\leq s\), every \(\beta\leq\beta_{c}\), every \(1\leq n^{\gamma}\leq N\leq L(\beta)\), every \(x_{i}\in\Lambda_{n}\) and \(y_{i}\notin\Lambda_{N}\) \((i\leq t)\), and all events \(E\) and \(F\) depending on the restriction of \((\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\) to edges with endpoints within \(\Lambda_{n}\) and outside \(\Lambda_{N}\) respectively,_
\[\left|\mathbf{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots, \emptyset}_{\beta}[E\cap F]-\mathbf{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots,\emptyset}_{\beta}[E]\mathbf{P}^{x_{1}y_{1},\ldots,x_{t}y_{t}, \emptyset,\ldots,\emptyset}_{\beta}[F]\right|\\ \leq C\left(\frac{\log(N/n)}{\log\log(N/n)}\right)^{-1/2}. \tag{7.6}\]
_Furthermore, for every \(x_{1}^{\prime},\ldots,x_{t}^{\prime}\in\Lambda_{n}\) and \(y_{1}^{\prime},\ldots,y_{t}^{\prime}\notin\Lambda_{N}\), we have that_
\[\left|\mathbf{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots, \emptyset}_{\beta}[E]-\mathbf{P}^{x_{1}y_{1}^{\prime},\ldots,x_{t}y_{t}^{ \prime},\emptyset,\ldots,\emptyset}_{\beta}[E]\right|\leq C\left(\frac{\log(N/ n)}{\log\log(N/n)}\right)^{-1/2}, \tag{7.7}\]
\[\left|\mathbf{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots, \emptyset}_{\beta}[F]-\mathbf{P}^{x_{1}^{\prime}y_{1},\ldots,x_{t}^{\prime}y_{t },\emptyset,\ldots,\emptyset}_{\beta}[F]\right|\leq C\left(\frac{\log(N/n)}{\log \log(N/n)}\right)^{-1/2}. \tag{7.8}\]
We follow the strategy employed above and import all the notations from Section 6.4. Fix \(\beta\leq\beta_{c}\). Fix two integers \(t,s\) satisfying \(1\leq t\leq s\). Introduce integers \(m,M\) such that \(n\leq m\leq M\leq N\), \(m/n=(N/n)^{\mu/2}\), and \(N/M=(N/n)^{1-\mu}\), for \(\mu>0\) small, to be fixed below.
We fix \(\nu\in(0,1)\) and \(\epsilon>0\) such that Corollaries 7.10 and 7.12 hold.
Introduce the set \(\mathcal{K}\) of \((c_{0},C_{0})\)-weak regular scales \(k\) between \(m\) and \(M/2\) such that the values \(2^{k}\), \(k\in\mathcal{K}\), pairwise differ by a multiplicative factor of at least \(2C_{0}\log(N/n)^{2}\). By Proposition 7.5, we may assume that \(|\mathcal{K}|\geq c_{1}\frac{\log(N/n)}{\log\log(N/n)}\) for a sufficiently small \(c_{1}=c_{1}(\mu)>0\). Recall that \(\mathbf{U}\) was defined in Section 6.4.
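Let us sketch why such a choice of \(\mathcal{K}\) is possible. Since \(M/m=(N/n)^{\mu/2}\), Proposition 7.5 (whose hypothesis can be relaxed to \(m\leq(M/2)/\log(M/2)\), as noted in the footnote of its proof) provides at least \(c\mu\log_{2}(N/n)\) weak regular scales between \(m\) and \(M/2\); keeping one scale out of every \(O(\log\log(N/n))\) consecutive ones is enough to enforce the required multiplicative separation \(2C_{0}\log(N/n)^{2}\), and retains at least \(c_{1}\frac{\log(N/n)}{\log\log(N/n)}\) of them.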
The property \((\mathbf{P4^{\prime}})\) of weak regular scales allows us to prove,
**Lemma 7.14** (Concentration of \(\mathbf{U}\)).: _For all \(\gamma>2\), there exists \(C_{1}=C_{1}(d,t,\gamma)>0\) such that for all \(n\) sufficiently large satisfying \(n^{\gamma}\leq N\leq L(\beta)\),_
\[\mathbf{E}_{\beta}^{\mathbf{xy},\emptyset}[(\mathbf{U}-1)^{2}]\leq C_{1}\frac {\log\log(N/n)}{\log(N/n)}.\]
The only place where the argument needs to be adapted is located in the proof of Lemma 6.23, which bounds the occurrence of \(\mathcal{G}(\mathbf{u})^{c}\) (where \(\mathcal{G}(\mathbf{u})\) was defined in Definition 6.22). The remainder of the subsection concerns the extension of this lemma to our setup.
**Lemma 7.15**.: _We keep the assumptions of Theorem 7.13. There exist \(C,\delta>0\), \(\gamma=\gamma(\delta)>0\) large enough and \(\mu=\mu(\delta)>0\) small enough such that for every \(n^{\gamma}\leq N\leq L(\beta)\), and every \(\mathbf{u}\) with \(u_{i}\in\mathbb{A}_{y_{i}}(2^{k_{i}})\) with \(m\leq 2^{k_{i}}\leq M/2\) for every \(1\leq i\leq t\),_
\[\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}[\mathcal{G}(\mathbf{u})^{c}]\leq C \left(\frac{n}{N}\right)^{\delta}.\]
Proof.: Recall that we have fixed the values of \(\epsilon\) and \(\nu\). We follow the notations used in the proof of Lemma 6.23.
Recall that \(\mathcal{G}(\mathbf{u})=\cap_{1\leq i\leq s}G_{i}\), and \(H_{i}\cap F_{i}\subset G_{i}\) where
\[H_{i}=\{\mathbf{n}_{i}\notin\mathsf{Cross}(M,N)\},\qquad F_{i}=\{\mathbf{n}_{i }^{\prime}\notin\mathsf{Cross}(n,m)\}.\]
Introduce intermediate scales \(n\leq r\leq m\leq M\leq R\leq N\) with \(r,R\) chosen below.
\(\bullet\)**Bound on \(H_{i}\)**. Define,
\[\mathsf{J}_{i}:=\bigcup_{p\in\left\{R,N^{1/(1+\epsilon)},N\right\}}\{\mathbf{ n}_{i}\in\mathsf{Jump}(p,p^{1+\epsilon})\}.\]
Notice that,
\[\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}[H_{i}^{c}]\leq \mathbf{P}_{\beta}^{\mathbf{xu}}[\Gamma(\mathbf{n}_{i})\in\mathsf{ZZ}(x_{i},u_ {i};M,R,\infty)]\\ +\mathbf{P}_{\beta}^{\mathbf{xu}}[\mathbf{n}_{i}\setminus \overline{\Gamma(\mathbf{n}_{i})}\in\mathsf{Cross}(R^{1+\epsilon},N)]+\mathbf{ P}_{\beta}^{\mathbf{xu}}[\mathsf{J}_{i}].\]
Assume \(R=N^{\iota}\) where \(\iota>\mu\) will be fixed below, and recall that \(M\leq N^{\mu+1/\gamma}\). We might decrease \(\mu\) and increase \(\gamma\) to ensure that \((2/\epsilon)(\mu+1/\gamma)\leq\iota\). As a result, we may use Lemma 7.9 to obtain \(C_{1},\eta_{1}>0\) such that
\[\mathbf{P}_{\beta}^{\mathbf{xu}}[\mathsf{J}_{i}]\leq\frac{C_{1}}{R^{\eta_{1}}}.\]
Moreover, thanks to Corollaries 7.10 and 7.12, if we additionally require that20:
Footnote 20: This might decrease \(\mu\) and increase \(\gamma\).
\[M^{[2(1+\epsilon)/\epsilon]\vee[2d/(1-\nu)]}\leq R,\qquad R^{2d(1+\epsilon)/(1-\nu)}\leq N,\]
we find \(C_{2},\eta_{2}>0\) such that,
\[\mathbf{P}_{\beta}^{\mathbf{xu}}[\Gamma(\mathbf{n}_{i})\in\mathsf{ZZ}(x_{i},u_ {i};M,R,\infty)]\leq\frac{C_{2}}{R^{\eta_{2}}},\qquad\mathbf{P}_{\beta}^{ \mathbf{xu}}[\mathbf{n}_{i}\setminus\overline{\Gamma(\mathbf{n}_{i})}\in \mathsf{Cross}(R^{1+\epsilon},N)]\leq\frac{C_{2}}{R^{\eta_{2}}}.\]
\(\bullet\)**Bound on \(F_{i}.\)** We follow the exact same strategy as in the proof of Lemma 6.23. The modifications are similar to what was done for the bound on \(H_{i}\). Again, we replace Corollary 6.15 by Corollary 7.12 and choose the values of \(\mu\) and \(\gamma\) accordingly.
We set \(r=m^{\kappa}\) with \(2\kappa d(1+\epsilon)\leq\alpha\wedge 2\). Recall that \(m\geq N^{\mu/2}\). We find that
\[\mathbf{P}_{\beta}^{\mathbf{xu},\mathbf{uy}}[F_{i}^{c}]\leq \mathbf{P}_{\beta}^{\mathbf{uy}}[\Gamma(\mathbf{n}_{i}^{\prime})\in\mathsf{ ZZ}(u_{i},y_{i};r^{1+\epsilon},m,\infty)]\\ +\mathbf{P}_{\beta}^{\mathbf{uy}}[\mathbf{n}_{i}^{\prime}\setminus \overline{\Gamma(\mathbf{n}_{i}^{\prime})}\in\mathsf{Cross}(n,r)]+\mathbf{P}_ {\beta}^{\mathbf{uy}}[\widetilde{K}_{i}],\]
where \(\widetilde{K}_{i}\) is the event that there exists \(a\in\Lambda_{r}\) and \(b\notin\Lambda_{r^{1+\epsilon}}\) such that \((\mathbf{n}_{i}^{\prime})_{a,b}\geq 2\) and \(\{a,b\}\in\overline{\Gamma(\mathbf{n}_{i}^{\prime})}\setminus\Gamma( \mathbf{n}_{i}^{\prime})\).
Using \((\mathbf{IRB}_{\alpha})\) and the assumption that \(u_{i}\in\mathbb{A}_{y_{i}}(2^{k_{i}})\) to get that \(\langle\sigma_{v}\sigma_{y_{i}}\rangle_{\beta}\leq C_{3}\langle\sigma_{u_{i}} \sigma_{y_{i}}\rangle_{\beta}\), we obtain
\[\mathbf{P}_{\beta}^{\mathbf{uy}}[\Gamma(\mathbf{n}_{i}^{\prime}) \in\mathsf{ZZ}(u_{i},y_{i};r^{1+\epsilon},m,\infty)] \leq \sum_{v\in\Lambda_{r^{1+\epsilon}}}\frac{\langle\sigma_{u_{i}} \sigma_{v}\rangle_{\beta}\langle\sigma_{v}\sigma_{y_{i}}\rangle_{\beta}}{ \langle\sigma_{u_{i}}\sigma_{y_{i}}\rangle_{\beta}}\] \[\leq C_{4}\frac{r^{d(1+\epsilon)}}{m^{d-\alpha\wedge 2}}\leq\frac{C_{4} }{m^{(\alpha\wedge 2)/2}},\]
where we used that \(d-\alpha\wedge 2\geq\alpha\wedge 2\). Moreover, using Corollary 7.12 (which requires that \(n^{2d/(1-\nu)}\leq r\) and hence decreases the values of \(\mu\) and \(1/\gamma\)), there exists \(\zeta>0\) such that
\[\mathbf{P}_{\beta}^{\mathbf{uy}}[\mathbf{n}_{i}^{\prime}\setminus\overline{ \Gamma(\mathbf{n}_{i}^{\prime})}\in\mathsf{Cross}(n,r)]\leq\frac{C_{5}}{r^{ \zeta}}.\]
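For completeness, the last inequality in the zigzag bound above is the following exponent count, which uses only \(r=m^{\kappa}\) with \(2\kappa d(1+\epsilon)\leq\alpha\wedge 2\), and \(d-\alpha\wedge 2\geq\alpha\wedge 2\):

\[\frac{r^{d(1+\epsilon)}}{m^{d-\alpha\wedge 2}}=m^{\kappa d(1+\epsilon)-(d-\alpha\wedge 2)}\leq m^{(\alpha\wedge 2)/2-(\alpha\wedge 2)}=\frac{1}{m^{(\alpha\wedge 2)/2}}.\]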
We conclude the proof with the bound on \(\widetilde{K}_{i}\). Proceeding as in the proof of Lemma 6.23,
\[\mathbf{P}_{\beta}^{\mathbf{uy}}[\widetilde{K_{i}}]\leq C_{6}\sum_{\begin{subarray} {c}a\in\Lambda_{r}\\ b\notin\Lambda_{r^{1+\epsilon}}\end{subarray}}\beta J_{a,b}\langle\sigma_{u_{i} }\sigma_{b}\rangle_{\beta}.\]
Using \((\mathbf{Assumption}_{\alpha})\) and \((\mathbf{IRB}_{\alpha})\) we obtain \(\zeta^{\prime}>0\) such that
\[\mathbf{P}_{\beta}^{\mathbf{uy}}[\widetilde{K}_{i}]\leq\frac{C_{7}}{m^{\zeta^{\prime}}}.\]
This concludes the proof.
We are now in a position to conclude.
Proof of Theorem 7.13.: The proof follows the exact same lines as the proof of Theorem 6.20 except that we replace Lemma 6.23 by Lemma 7.15.
### Proof of Theorem 7.1
The sliding-scale infrared bound is less precise here than in the case \(d=4\), which slightly weakens the result. The three cases of interest differ slightly from one another and are therefore treated in separate sections.
#### 7.4.1. The case \(d=3\)
We assume that \(d=3\) and that \(J\) satisfies \((\mathbf{A1})\)-\((\mathbf{A5})\) and \((\mathbf{Assumption}_{\alpha})\) with \(\alpha=3/2\).
Let us first observe that the sliding-scale infrared bound of Theorem 3.18 is not sharp in this setup. Indeed, we expect the finite-volume susceptibility to grow like \(n^{3/2}\) (below \(L(\beta)\)). \((\mathbf{LB}_{\alpha})\) and \((\mathbf{IRB}_{\alpha})\) can (almost) make up for this lack of precision as they yield the existence of \(C>0\) such that for \(1\leq n\leq N\leq L(\beta)\),
\[\frac{\chi_{N}(\beta)}{N^{3/2}}\leq C\sqrt{n}\frac{\chi_{n}(\beta)}{n^{3/2}}. \tag{7.9}\]
Recall that \((\mathbf{IRB}_{\alpha})\) still gives \(B_{L}(\beta)-B_{\ell}(\beta)\leq C_{0}\log(L/\ell)\) in our setup. However, (7.9) not being sharp, we modify Lemma 6.16 accordingly.
**Lemma 7.16**.: _There exists \(C>0\) such that for every \(\beta\leq\beta_{c}\), and for every \(1\leq\ell\leq L\leq L(\beta)\),_
\[B_{L}(\beta)\leq\left(1+C\frac{\log_{2}(L/\ell)}{\log_{2}(\ell)}\ell\right)B_{ \ell}(\beta).\]
Proof.: We repeat the proof of Lemma 6.16 except that we replace the use of Theorem 3.18 by (7.9).
We define a (possibly finite) sequence \(\mathcal{L}_{3}=\mathcal{L}_{3}(\beta,D)\) by \(\ell_{0}=0\) and
\[\ell_{k+1}=\inf\left\{\ell\geq\ell_{k},\;B_{\ell}(\beta)\geq D\cdot(\ell_{k}+1 )\cdot B_{\ell_{k}}(\beta)\right\}.\]
We also define a sequence \(\mathcal{U}_{3}=\mathcal{U}_{3}(\beta,D)\) by \(u_{k}=\ell_{3k}\) for \(k\geq 0\).
**Remark 7.17**.: _Note that the sequence \(\mathcal{L}_{3}\) grows much faster than \(\mathcal{L}\) introduced in Section 6. The reason why we need an additional sequence \(\mathcal{U}_{3}\) is technical and will become transparent later._
We begin with a technical result on \(\mathcal{L}_{3}\).
**Lemma 7.18** (Growth of \(\mathcal{L}_{3}\)).: _There exist \(c,C>0\) such that, for all \(k\geq 1\),_
\[\prod_{i=0}^{k-1}[D\cdot(\ell_{i}+1)]\leq B_{\ell_{k}}(\beta)\leq C\prod_{i=0} ^{k-1}[D\cdot(\ell_{i}+1)],\]
_and, as long as \(\ell_{k+1}\leq L(\beta)\),_
\[\ell_{k+1}\geq\ell_{k}^{cD}.\]
Proof.: We repeat the argument used to study \(\mathcal{L}\) in Section 6. The lower bound is immediate and for the upper bound, using that \(B_{L}(\beta)-B_{\ell}(\beta)\leq C_{0}\log(L/\ell)\), for \(k\geq 1\),
\[B_{\ell_{k}-1}(\beta) \leq D(\ell_{k-1}+1)B_{\ell_{k-1}}(\beta)\leq D(\ell_{k-1}+1)\left(B_{\ell_{k-1}-1}(\beta)-C_{0}\log\left(1-\frac{1}{\ell_{k-1}}\right)\right)\] \[\leq \left(\prod_{i=1}^{k-1}[D\cdot(\ell_{i}+1)]\right)B_{\ell_{1}-1}(\beta)+C_{0}\sum_{i=1}^{k-1}\frac{\prod_{j=k-i}^{k-1}[D\cdot(\ell_{j}+1)]}{\ell_{k-i}}\] \[\leq C\left(\prod_{i=0}^{k-1}[D\cdot(\ell_{i}+1)]\right),\]
for \(C\) large enough (independent of \(D\) and \(k\)). We conclude by noticing that
\[B_{\ell_{k}}(\beta)\leq B_{\ell_{k}-1}(\beta)+C_{0}\log 2.\]
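(For completeness, the "immediate" lower bound is the induction

\[B_{\ell_{k}}(\beta)\geq D\cdot(\ell_{k-1}+1)\,B_{\ell_{k-1}}(\beta)\geq\cdots\geq\left(\prod_{i=0}^{k-1}[D\cdot(\ell_{i}+1)]\right)B_{\ell_{0}}(\beta),\]

where each step uses the defining property of \(\mathcal{L}_{3}\), together with \(B_{\ell_{0}}(\beta)=B_{0}(\beta)=1\) in the Ising normalisation.)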
As for the growth of \(\mathcal{L}_{3}\), we use Lemma 7.16 and proceed as in Remark 6.17 to get that
\[\log_{2}(\ell_{k})\leq C\ell_{k}\log_{2}(\ell_{k+1}/\ell_{k})\frac{B_{\ell_{k} }(\beta)}{B_{\ell_{k+1}}(\beta)-B_{\ell_{k}}(\beta)}\leq C\ell_{k}\log_{2}( \ell_{k+1}/\ell_{k})\frac{1}{(D\ell_{k}-1)},\]
which yields, for some \(c_{1}>0\), \(\ell_{k+1}\geq\ell_{k}^{c_{1}D}\). This concludes the proof.
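(To unpack the last step of the proof: for \(k\geq 1\), rearranging the previous display and using \(D\ell_{k}-1\geq D\ell_{k}/2\), valid as soon as \(D\ell_{k}\geq 2\), gives

\[\log_{2}(\ell_{k+1})\geq\log_{2}(\ell_{k+1}/\ell_{k})\geq\frac{D\ell_{k}-1}{C\ell_{k}}\log_{2}(\ell_{k})\geq\frac{D}{2C}\log_{2}(\ell_{k}),\]

so that one may take \(c_{1}=1/(2C)\).)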
**Remark 7.19**.: _The second part of the above statement is in some sense the most important one since it ensures that there is "room" between successive scales. This has been used before to apply the results of Section 6.2 and Theorem 6.20. It will also be useful in our case, and it explains the introduction of the additional multiplicative factor \(\ell_{k}\) in the definition of \(\mathcal{L}_{3}\). This choice backfires when we try to estimate \(B_{\ell_{k}}(\beta)\)._
Our goal now is to prove,
**Proposition 7.20** (Clustering bound for \(d=3\) and \(\alpha=3/2\)).: _For \(D\) large enough, there exists \(\delta=\delta(D)>0\) such that for all \(\beta\leq\beta_{c}\), for all \(K>3\) with \(u_{K+1}\leq L(\beta)\), and for all \(v,x,y,z,t\in\mathbb{Z}^{3}\) with mutual distance between \(x,y,z,t\) larger than \(2u_{K}\),_
\[\mathbf{P}_{\beta}^{vx,vz,vy,vt}[\mathbf{M}_{v}(\mathcal{I};\mathcal{U}_{3},K)<\delta K]\leq 2^{-\delta K}.\]
We first see how this result implies Theorem 7.1.
Proof of Theorem 7.1 for \(d=3\).: We follow the proof of Section 6. The only change occurs in the connection between \(L\) and \(B_{L}(\beta)\) when \(L=2u_{K}\). Using Lemma 7.18, we find that
\[B_{L}(\beta)\leq B_{\ell_{3K+1}}(\beta)\leq C\left(\prod_{i=0}^{3K}[D\cdot(\ell_{i}+1)]\right)=:\Pi_{\beta,D}(3K),\]
so if \(\Phi=\Phi_{\beta,D}\) is defined for \(t\geq 1\) by:
\[\Phi(t):=\inf\{k\geq 0,\,\Pi_{\beta,D}(3k)\geq t\}\wedge\left(\lfloor N_{0}( \mathcal{L}_{3})/3\rfloor+1\right),\]
where \(N_{0}(\mathcal{L}_{3})\) is the index of the last element of \(\mathcal{L}_{3}\) (possibly equal to \(\infty\)), we find that \(K\geq\Phi(B_{L}(\beta))\). This gives the result, setting \(\phi_{\beta}(t):=2^{-\delta\Phi_{\beta,D}(t)/5}\).
As before, Proposition 7.20 will follow from a combination of Theorem 7.13 and an intersection property. We begin by modifying the definition of the intersection event.
Below, we fix \(\nu\in(0,1)\) and \(\epsilon>0\) such that the results of Section 7.2 hold.
**Definition 7.21** (Intersection event for \(d=3\) and \(\alpha=3/2\)).: _Let \(k\geq 1\) and \(y\notin\Lambda_{u_{k+2}}\). A pair of currents \((\mathbf{n},\mathbf{m})\) with \((\partial\mathbf{n},\partial\mathbf{m})=(\{0,y\},\{0,y\})\) realises the event \(\widetilde{I}_{k}\) if the following properties are satisfied:_
1. _The restrictions of_ \(\mathbf{n}\) _and_ \(\mathbf{m}\) _to edges with both endpoints in_ \(\operatorname{Ann}(\ell_{3k},\ell_{3k+3}^{1+\epsilon})\) _contain a unique cluster "strongly crossing"_ \(\operatorname{Ann}(\ell_{3k},\ell_{3k+3}^{1+\epsilon})\)_, in the sense that it contains a vertex in_ \(\operatorname{Ann}(\ell_{3k},\ell_{3k}^{1+\epsilon})\) _and a vertex in_ \(\operatorname{Ann}(\ell_{3k+3},\ell_{3k+3}^{1+\epsilon})\)_._
2. _The two clusters described in_ \((i)\) _intersect._
**Lemma 7.22** (Intersection property for \(d=3\) and \(\alpha=3/2\)).: _For \(D\) large enough, there exists \(\kappa>0\) such that for every \(\beta\leq\beta_{c}\), every \(k\geq 2\), and every \(y\notin\Lambda_{u_{k+2}}\) in a weak regular scale with \(1\leq|y|\leq L(\beta)\),_
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[(\mathbf{n}_{1}+\mathbf{n}_{3 },\mathbf{n}_{2}+\mathbf{n}_{4})\in\widetilde{I}_{k}]\geq\kappa.\]
Proof.: We repeat the two-step proof done in the preceding section. Introduce intermediate scales \(u_{k}=\ell_{3k}\leq n\leq m\leq M\leq N\leq\ell_{3k+3}=u_{k+1}\) with \(n=\sqrt{\ell_{3k}\ell_{3k+1}}\), \(N=\sqrt{\ell_{3k+2}\ell_{3k+3}}\), \(m=\ell_{3k+1}\), and \(M=\ell_{3k+2}\). Keeping the same notations as in the proof of Lemma 6.19, one has for some \(c_{1}>0\),
\[\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|]\geq c_{1}(B_{M} (\beta)-B_{m-1}(\beta)),\]
and for some \(c_{2}>0\)
\[\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|^{2}]\leq c_{2}(B_ {M}(\beta)-B_{m-1}(\beta))B_{2M}(\beta).\]
Now, by definition of \(\mathcal{L}_{3}\), one has \(B_{M}(\beta)\geq D(\ell_{3k+1}+1)B_{m}(\beta)\) so that \(B_{M}(\beta)-B_{m-1}(\beta)\geq\frac{B_{M}(\beta)}{2}\) for \(D\) large enough. Moreover, by \((\mathbf{IRB}_{\alpha})\), \(B_{2M}(\beta)\leq B_{M}(\beta)+C\log 2\). As a result, we may find \(c_{3}>0\) such that,
\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|>0]\geq c_{3}.\]
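For the reader's convenience, this is the usual second-moment argument: by the Cauchy-Schwarz inequality,

\[\mathbf{P}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|>0]\geq\frac{\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|]^{2}}{\mathbf{E}_{\beta}^{0y,0y,\emptyset,\emptyset}[|\mathcal{M}|^{2}]}\geq\frac{c_{1}^{2}}{c_{2}}\cdot\frac{B_{M}(\beta)-B_{m-1}(\beta)}{B_{2M}(\beta)}\geq\frac{c_{1}^{2}}{2c_{2}}\cdot\frac{B_{M}(\beta)}{B_{M}(\beta)+C\log 2},\]

and the last ratio is bounded below uniformly since \(B_{M}(\beta)\geq B_{0}(\beta)=1\).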
The conclusion of the proof follows the same lines as in Lemma 6.19: we replace Lemma 6.5 by Lemma 7.6, and Corollaries 6.9 and 6.15 by Corollaries 7.8 and 7.12. The proof is enabled by Lemma 7.18 which ensures that the different scales are sufficiently "distanced" when \(D\) is large enough.
We are now equipped to prove Proposition 7.20.
Proof of Proposition 7.20.: Fix \(\gamma>2\) sufficiently large so that Theorem 7.13 holds. Recall from Lemma 7.18 that we may choose \(D=D(\gamma)\) sufficiently large in the definition of \(\mathcal{L}_{3}\) such that \(\ell_{k+1}\geq\ell_{k}^{\gamma}\). We may assume \(v=0\). Since \(x,y\) are at distance at least \(2u_{K}\) from each other, one of them must be at distance at least \(u_{K}\) from \(v\). Without loss of generality, we assume that this is the case for \(x\), and we make the same assumption about \(z\). Let \(\delta>0\) to be fixed below. Recall that \(\mathcal{S}_{K}^{(\delta)}\) denotes the set of subsets of \(\{2\delta K,\ldots,K-3\}\) which contain only even integers.
As in the proof of Proposition 6.2,
\[\mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{M}_{0}(\mathcal{I};\mathcal{U}_{3},K)<\delta K]\leq\sum_{\begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathfrak{B}_{S}].\]
Let \(\mathsf{J}\) be the event defined by
\[\mathsf{J}:=\bigcup_{k=2\delta K}^{K-3}\{\mathbf{n}_{1}+\mathbf{n}_{3}\in \mathsf{Jump}(u_{k},u_{k}^{1+\epsilon})\}\cup\{\mathbf{n}_{2}+\mathbf{n}_{4} \in\mathsf{Jump}(u_{k},u_{k}^{1+\epsilon})\}.\]
Using Lemmas 7.6 and 7.18, if \(D\) is large enough, there exist \(C_{0},C_{1},\eta>0\) such that
\[\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\mathsf{J}]\leq \frac{C_{0}K}{\ell_{3\delta K}^{\eta}}\leq C_{1}e^{-\eta 2^{\delta K}}. \tag{7.10}\]
Fix some \(S\in\mathcal{S}_{K}^{(\delta)}\). Let \(\widetilde{\mathfrak{A}}_{S}\) be the event that none of the events \(\widetilde{I}_{k}\) (defined in Definition 7.21) occur for \(k\in S\). As above, if \(k\in S\) and \(\mathsf{J}^{c}\) occurs, the events \(\widetilde{I}_{k}\) and \(\mathfrak{B}_{S}\) are incompatible. Using (7.10),
\[\mathbf{P}_{\beta}^{0x,0z,0y,0t}[\mathbf{M}_{0}(\mathcal{I};\mathcal{U}_{3},K)<\delta K]\leq\sum_{\begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbf{P}_{\beta}^{0x,0z,\emptyset,\emptyset}[\widetilde{\mathfrak{A}}_{S}]+C_{1}e^{-\eta 2^{\delta K}}\binom{(1/2-2\delta)K}{2\delta K}.\]
From this point, the analysis follows the exact same lines as before and we refer to the proof of Proposition 6.2 for the rest of the argument.
#### 7.4.2. The case \(d=2\)
We now assume that \(d=2\) and that \(J\) satisfies (**A1**)-(**A5**) and (**Assumption\({}_{\alpha}\)**) with \(\alpha=1\).
As before, the sliding-scale infrared bound of Theorem 3.18 is not sharp since we expect the finite-volume susceptibility to grow like \(n\) (below \(L(\beta)\)). Proposition 3.25 and (**IRB\({}_{\alpha}\)**) yield the existence of \(C>0\) such that for \(1\leq n\leq N\leq L(\beta)\),
\[\frac{\chi_{N}(\beta)}{N}\leq C\log n\frac{\chi_{n}(\beta)}{n}. \tag{7.11}\]
**Lemma 7.23**.: _There exists \(C>0\) such that for every \(\beta\leq\beta_{c}\), and for every \(1\leq\ell\leq L\leq L(\beta)\),_
\[B_{L}(\beta)\leq\left(1+C\frac{\log_{2}(L/\ell)}{\log_{2}(\ell)} \log\ell\right)B_{\ell}(\beta).\]
Proof.: We repeat the proof of Lemma 7.16, this time using (7.11).
We define a (possibly finite) sequence \(\mathcal{L}_{2}=\mathcal{L}_{2}(\beta,D)\) by \(\ell_{0}=1\) and
\[\ell_{k+1}=\inf\left\{\ell\geq\ell_{k},\,B_{\ell}(\beta)\geq D\cdot(\log(\ell _{k})+1)\cdot B_{\ell_{k}}(\beta)\right\}.\]
We also define a sequence \(\mathcal{U}_{2}=\mathcal{U}_{2}(\beta,D)\) by \(u_{k}=\ell_{3k}\) for \(k\geq 0\). Adapting the proof of Lemma 7.18 to our setup, we obtain,
**Lemma 7.24** (Growth of \(\mathcal{L}_{2}\)).: _There exist \(c,C>0\) such that, for all \(k\geq 1\),_
\[\prod_{i=0}^{k-1}[D\cdot(\log(\ell_{i})+1)]\leq B_{\ell_{k}}(\beta)\leq C\prod_{ i=0}^{k-1}[D\cdot(\log(\ell_{i})+1)],\]
_and as long as \(\ell_{k+1}\leq L(\beta)\),_
\[\ell_{k+1}\geq\ell_{k}^{cD}.\]
The second part of Theorem 1.9 will follow from the following proposition.
**Proposition 7.25** (Clustering bound for \(d=2\) and \(\alpha=1\)).: _For \(D\) large enough, there exists \(\delta=\delta(D)>0\) such that for all \(\beta\leq\beta_{c}\), for all \(K>3\) with \(u_{K+1}\leq L(\beta)\), and for all \(v,x,y,z,t\in\mathbb{Z}^{2}\) with mutual distance between \(x,y,z,t\) larger than \(2u_{K}\),_
\[\mathbf{P}_{\beta}^{vx,vz,vy,vt}[\mathbf{M}_{v}(\mathcal{I};\mathcal{U}_{2},K)<\delta K]\leq 2^{-\delta K}.\]
Proof.: The proof follows the exact same lines as the proof of Proposition 7.20 (in particular, we keep the same intersection event \(\widetilde{I}_{k}\)).
As above, this result easily implies Theorem 7.1 for \(d=2\).
Proof of Theorem 7.1 for \(d=2\).: We follow the proof of Section 7.4.1. Using Lemma 7.24, we find that
\[B_{L}(\beta)\leq B_{\ell_{3K+1}}(\beta)\leq C\prod_{i=0}^{3K}[D\cdot(\log(\ell _{i})+1)]=:\Pi_{\beta,D}^{\prime}(3K),\]
so that \(K\geq\Phi^{\prime}(B_{L}(\beta))\) where \(\Phi^{\prime}=\Phi^{\prime}_{\beta,D}\) is defined for \(t\geq 1\) by:
\[\Phi^{\prime}(t):=\inf\{k\geq 0,\,\Pi^{\prime}(3k)\geq t\}\wedge(\lfloor N_{0} (\mathcal{L}_{2})/3\rfloor+1)\,.\]
This concludes the proof.
#### 7.4.3. The case \(d=1\)
Finally, we treat the case \(d=1\). Assume that \(J\) satisfies (**A1**)-(**A5**) and (**Assumption\({}_{\alpha}\)**) with \(\alpha=1/2\). The results of Section 3 give the correct rate of decay for \(S_{\beta}\): there exist \(c,C>0\) such that for all \(x\in\mathbb{Z}^{d}\) with \(1\leq|x|\leq L(\beta)\),
\[\frac{c}{|x|^{1/2}}\leq\langle\sigma_{0}\sigma_{x}\rangle_{\beta}\leq\frac{C} {|x|^{1/2}}. \tag{7.12}\]
This observation greatly simplifies the proof21 and allows us to proceed as in Section 6.
Footnote 21: To avoid writing yet another proof of triviality we simply import the results of Section 6. However, note that the knowledge of the critical exponent \(\eta\) yields a shorter proof of the improved tree diagram bound, see [1, Section 4].
As before, define a (possibly finite) sequence \(\mathcal{L}_{1}=\mathcal{L}_{1}(\beta,D)\) by \(\ell_{0}=0\) and
\[\ell_{k+1}=\inf\left\{\ell\geq\ell_{k},\,B_{\ell}(\beta)\geq DB_{\ell_{k}}( \beta)\right\}.\]
With the above work, it is easy to obtain,
**Proposition 7.26** (Clustering bound for \(d=1\) and \(\alpha=1/2\)).: _For \(D\) large enough, there exists \(\delta=\delta(D)>0\) such that for all \(\beta\leq\beta_{c}\), for all \(K>3\) with \(\ell_{K+1}\leq L(\beta)\), and for all \(u,x,y,z,t\in\mathbb{Z}\) with mutual distance between \(x,y,z,t\) larger than \(2\ell_{K}\),_
\[\mathbf{P}_{\beta}^{ux,uz,uy,ut}[\mathbf{M}_{u}(\mathcal{I};\mathcal{L}_{1},K)<\delta K]\leq 2^{-\delta K}.\]
From this result and (7.12), we can obtain Theorem 7.1 with \(\phi_{\beta}(B_{L}(\beta))=(\log L)^{c}\) for some constant \(c>0\).
### Proof of Corollary 7.3
Proof of Corollary 7.3.: We use the same strategy as in Appendix D. In particular, we will use Proposition D.1. Recall that \(\alpha=d/2\). Let \(\sigma\in(0,d/2)\) so that \(\xi_{\sigma}(\beta)\) is well defined for all \(\beta<\beta_{c}\). We begin by noticing that there exists \(C>0\) such that, for \(\beta<\beta_{c}\),
\[\chi(\beta)\leq C\xi_{\sigma}(\beta)^{d/2}. \tag{7.13}\]
Indeed, if \(K>0\), \((\mathbf{IRB}_{\alpha})\) implies that for some \(C_{1}=C_{1}(d)>0\),
\[\chi_{K\xi_{\sigma}(\beta)}(\beta)\leq C_{1}(K\xi_{\sigma}(\beta))^{d/2}.\]
Moreover, using Proposition D.1,
\[\chi(\beta)-\chi_{K\xi_{\sigma}(\beta)}(\beta)\leq C_{2}\frac{\chi(\beta)}{K ^{\sigma}}.\]
Combining the two last inequalities, and choosing \(K\) large enough, we obtain (7.13).
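Explicitly: for any \(K>0\), the two displays give

\[\chi(\beta)\leq\chi_{K\xi_{\sigma}(\beta)}(\beta)+C_{2}\frac{\chi(\beta)}{K^{\sigma}}\leq C_{1}K^{d/2}\xi_{\sigma}(\beta)^{d/2}+\frac{\chi(\beta)}{2},\]

as soon as \(K\) satisfies \(C_{2}K^{-\sigma}\leq 1/2\); rearranging yields (7.13) with \(C=2C_{1}K^{d/2}\).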
Now, assume that we are given \(1\leq L\leq L(\beta)\). Write,
\[0\leq g_{\sigma}(\beta)\leq A_{1}+A_{2},\]
where
\[A_{1}:=-\frac{1}{\chi(\beta)^{2}\xi_{\sigma}(\beta)^{d}}\sum_{ \begin{subarray}{c}x,y,z\in\mathbb{Z}^{d}\\ L(0,x,y,z)\leq L\end{subarray}}\ U_{4}^{\beta}(0,x,y,z),\qquad A_{2}:=-\frac{ 1}{\chi(\beta)^{2}\xi_{\sigma}(\beta)^{d}}\sum_{\begin{subarray}{c}x,y,z\in \mathbb{Z}^{d}\\ L(0,x,y,z)>L\end{subarray}}\ U_{4}^{\beta}(0,x,y,z).\]
Using the (standard) tree diagram bound (4.7), we get that
\[A_{1}\leq C_{3}L^{d}\frac{\chi(\beta)}{\xi_{\sigma}(\beta)^{d}}\leq C_{4}L^{d }\xi_{\sigma}(\beta)^{-d/2}.\]
Moreover, using Theorem 7.1 this time,
\[A_{2}\leq\frac{C_{5}}{\phi_{\beta}(B_{L}(\beta))}\frac{\chi(\beta)^{2}}{\xi_ {\sigma}(\beta)^{d}}\leq\frac{C_{6}}{\phi_{\beta}(B_{L}(\beta))}.\]
As a result, for any \(L\geq 1\),
\[\limsup_{\beta\nearrow\beta_{c}}g_{\sigma}(\beta)\leq\frac{C_{6}}{\phi_{ \beta_{c}}(B_{L}(\beta_{c}))}.\]
Hence, if \(B(\beta_{c})=\infty\), one obtains the result by taking \(L\) to infinity. If \(B(\beta_{c})<\infty\), we may conclude using Theorem D.2.
## 8. Extension of the results to models in the Griffiths-Simon class
In this section, we extend the results of Sections 6 and 7 to the case of single-site measures in the GS class. Let us mention that using Remark 5.7 we can adapt the proof of Theorem 1.3 to the case of measures in the GS class.
We focus on the results of Section 6 and briefly explain in Section 8.4 how similar considerations allow us to extend the results of Section 7. Let \(J\) be an interaction satisfying \((\mathbf{A1})\)-\((\mathbf{A6})\), and \(\rho\) be a measure in the GS class.
Since the measure \(\rho\) might have unbounded support, we will have to be careful in the derivation of the diagrammatic bounds. More precisely, to be able to take weak limits in \(\rho\), we will need to write them in a spin-dimension balanced way. Let us give a concrete example. The tree diagram bound (4.7) has four spins on the left and four pairs of spins on the right. As such, it is not spin balanced. We can obtain a balanced version of this inequality by "site-splitting" each term in which an Ising spin is repeated, using Lemma 3.6. The resulting bound is given in (5.4). Though they are more complicated,
the diagrammatic bounds obtained via this procedure have the advantage of being spin-balanced. An alternative route22 would be to divide by \(\langle\tau_{0}^{2}\rangle_{\rho,\beta}\).
Footnote 22: The two methods are comparable through (3.4).
Below, we let \(U_{4}^{\rho,\beta}\) be the corresponding four-point Ursell function for the field variable \(\tau\), and for \(n\geq 0\),
\[B_{n}(\rho,\beta):=\sum_{x\in\Lambda_{n}}\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}^{2}.\]
Also, we will use the definitions of \(\beta_{c}(\rho)\) and \(L(\rho,\beta)\) that were introduced in Section 3.
We will prove the following result, which in particular covers the case of the \(\varphi^{4}\) lattice models by Proposition 2.2.
**Theorem 8.1**.: _Let \(d=4\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A6)}\). Let \(\kappa>0\). There exist \(c,C>0\) such that, for all \(\rho\) in the GS class satisfying \(\beta_{c}(\rho)\geq\kappa\), for all \(\beta\leq\beta_{c}(\rho)\), for all \(x,y,z,t\in\mathbb{Z}^{4}\) at mutual distance at least \(L\) with \(1\leq L\leq L(\rho,\beta)\),_
\[|U_{4}^{\rho,\beta}(x,y,z,t)|\\ \leq C\left(\frac{B_{0}(\rho,\beta)}{B_{L}(\rho,\beta)}\right)^{ c}\sum_{u\in\mathbb{Z}^{4}}\sum_{u^{\prime},u^{\prime\prime}\in\mathbb{Z}^{4}} \langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}\beta J_{u,u^{\prime}}\langle\tau_ {u^{\prime}}\tau_{y}\rangle_{\rho,\beta}\langle\tau_{z}\tau_{u}\rangle_{\rho,\beta}\beta J_{u,u^{\prime\prime}}\langle\tau_{u^{\prime\prime}}\tau_{t} \rangle_{\rho,\beta}. \tag{8.1}\]
As for the Ising model, we can deduce from Theorem 1.13 and Proposition 4.6 the following triviality statement for measures in the GS class.
**Corollary 8.2**.: _Let \(d=4\). Assume that \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A6)}\). Let \(\kappa>0\). There exist \(C,c,\gamma>0\) such that, for any \(\rho\) in the GS class satisfying \(\beta_{c}(\rho)\geq\kappa\), for all \(\beta\leq\beta_{c}(\rho)\), \(1\leq L\leq L(\rho,\beta)\), \(f\in\mathcal{C}_{0}(\mathbb{R}^{d})\), and \(z\in\mathbb{R}\),_
\[\left|\langle\exp\left(zT_{f,L,\beta}(\tau)\right)\rangle_{\rho, \beta}-\exp\left(\frac{z^{2}}{2}\langle T_{f,L,\beta}(\tau)^{2}\rangle_{\rho, \beta}\right)\right|\\ \leq\exp\left(\frac{z^{2}}{2}\langle T_{|f|,L,\beta}(\tau)^{2} \rangle_{\rho,\beta}\right)\frac{C\|f\|_{\infty}^{4}r_{f}^{\gamma}z^{4}}{( \log L)^{c}}.\]
We will extend the results to the GS class using the following strategy.
1. Fix a measure \(\rho_{0}\) of the Ising-type in the GS class, i.e. a measure that falls into \((i)\) of Definition 2.1. Prove that \(\rho_{0}\) satisfies Theorem 1.13 with constants \(c,C>0\) which only depend on \(\beta_{c}(\rho_{0})\). To do so, we prove an analogous version of the intersection clustering bound that was derived in Proposition 6.2. We proceed as above by first proving that big jumps occur with small probability, and then by obtaining a version of Lemma 6.19, together with a mixing statement as in Theorem 6.20.
2. Take any \(\rho\) in the GS class that is obtained as a weak limit of measures \((\rho_{k})_{k\geq 1}\) of the type \((i)\) in Definition 2.1. Prove that the statement available for each \(k\geq 1\) passes to the limit \(k\to\infty\). This requires a control of \((L(\rho_{k},\beta))_{k\geq 1}\) and \((\beta_{c}(\rho_{k}))_{k\geq 1}\), together with an "infinite volume" version of the GS approximation in the sense that: for all \(\beta<\beta_{c}(\rho)\), for all \(x,y\in\mathbb{Z}^{d}\), \[\lim_{k\to\infty}\langle\tau_{x}\tau_{y}\rangle_{\rho_{k},\beta}=\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}.\]
In Section 8.1, we prove Theorem 1.13 for measures of the Ising type in the GS class modulo an intermediate result (Proposition 8.3) that is similar to Proposition 6.2. In Section 8.2, we implement Step 2 of the above strategy and extend the result to all measures in the GS class. In Section 8.3, we prove Proposition 8.3. Finally, in Section 8.4 we explain how this strategy can also be used to extend the results of Section 7.
### Proof of the improved tree diagram bound for measures of the Ising type in the GS class
Fix \(\rho\) in the GS class of the Ising-type, and \(\beta<\beta_{c}(\rho)\). The measure \(\langle\cdot\rangle_{\rho,\beta}\) can be represented as an Ising measure on \(\mathbb{Z}^{d}\times K_{N}\) that we still denote by \(\langle\cdot\rangle_{\rho,\beta}\). In that case, we can identify \(\tau_{x}\) with averages of the form
\[\sum_{i=1}^{N}Q_{i}\sigma_{(x,i)},\]
where \(Q_{i}\geq 0\) for \(1\leq i\leq N\). For \(x\in\mathbb{Z}^{d}\), we will denote \(\mathcal{B}_{x}:=\{(x,i),\,1\leq i\leq N\}\). This point of view allows us to use the random current representation. We introduce a measure \(\mathbb{P}^{xy}_{\Lambda,\rho,\beta}\) on \(\Omega_{\Lambda\times K_{N}}\) which we define via the following two-step procedure (see the display after the two steps below):
* first, we sample two integers \(1\leq i,j\leq N\) with probability \[\frac{Q_{i}Q_{j}\langle\sigma_{(x,i)}\sigma_{(y,j)}\rangle_{\rho,\beta}}{ \langle\tau_{x}\tau_{y}\rangle_{\Lambda,\rho,\beta}},\]
* then, sample a current according to the "usual" current measure \(\mathbf{P}^{\{(x,i),(y,j)\}}_{\rho,\beta}\) introduced in Section 4.
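In other words, schematically (keeping the finite-volume conventions exactly as written in the two steps above), \(\mathbb{P}^{xy}_{\Lambda,\rho,\beta}\) is the mixture

\[\mathbb{P}^{xy}_{\Lambda,\rho,\beta}=\sum_{i,j=1}^{N}\frac{Q_{i}Q_{j}\langle\sigma_{(x,i)}\sigma_{(y,j)}\rangle_{\rho,\beta}}{\langle\tau_{x}\tau_{y}\rangle_{\Lambda,\rho,\beta}}\,\mathbf{P}^{\{(x,i),(y,j)\}}_{\rho,\beta}.\]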
It is also possible to define the infinite volume version of the above measure that we will denote \(\mathbb{P}^{xy}_{\rho,\beta}\). Samples of \(\mathbb{P}^{xy}_{\rho,\beta}\) are random currents with random sources in \(\mathcal{B}_{x}\) and \(\mathcal{B}_{y}\). The interest of this measure lies in the fact that it is better suited for the derivation of bounds on connection probabilities in terms of the correlation functions of the field variable \(\tau\). These bounds can be directly imported from [1] and are recalled in Appendix C.
Define a sequence \(\mathcal{L}\) similarly to (6.4), this time using \(B_{\ell}(\rho,\beta)\). Call \(\mathcal{I}_{u}\) the set of vertices \(v\in\mathbb{Z}^{d}\) such that \(\mathcal{B}_{v}\) is connected in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\) to \(\mathcal{B}_{u}\). With this definition, we now consider _coarse intersections_ instead of proper intersections. This point of view is better for the analysis that follows.
For models in the GS class of the Ising type, the clustering bound takes the following form.
**Proposition 8.3** (Clustering bound for models in the GS class).: _Let \(d=4\). Assume that \(J\) satisfies_ (**A1**)_-_(**A6**). Let \(\kappa>0\). For \(D\) large enough, there exists \(\delta=\delta(D,\kappa)>0\) such that for every \(\rho\) in the GS class of the Ising-type with \(\beta_{c}(\rho)\geq\kappa\), for every \(\beta<\beta_{c}(\rho)\), every \(K>3\) with \(\ell_{K+1}\leq L(\rho,\beta)\), every \(x,y,z,t\in\mathbb{Z}^{d}\) with mutual distance between \(x,y,z,t\) larger than \(2\ell_{K}\), every \(u,u^{\prime},u^{\prime\prime}\in\mathbb{Z}^{d}\) with \(u^{\prime},u^{\prime\prime}\)\(J\)-neighbours23 of \(u\) satisfying \(|u-x|,|u-z|\geq\ell_{K}\),_
Footnote 23: In the sense that \(J_{u,u^{\prime}},J_{u,u^{\prime\prime}}>0\).
\[\mathbb{P}^{ux,uz,u^{\prime}y,u^{\prime\prime}t}_{\rho,\beta}[\mathbf{M}_{u}(\mathcal{I}^{\prime}_{u};\mathcal{L},K)<\delta K]\leq 2^{-\delta K},\]
_where \(\mathcal{I}^{\prime}_{u}\) is the set of vertices \(v\in\mathbb{Z}^{d}\) such that \(\mathcal{B}_{v}\) is connected in24\(\mathbf{n}_{1}+\mathbf{n}_{3}+\delta_{(\partial\mathbf{n}_{1}\cap\mathcal{B}_{u},\partial\mathbf{n}_{3}\cap\mathcal{B}_{u^{\prime}})}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}+\delta_{(\partial\mathbf{n}_{2}\cap\mathcal{B}_{u},\partial\mathbf{n}_{4}\cap\mathcal{B}_{u^{\prime\prime}})}\) to \(\mathcal{B}_{u}\)._
Footnote 24: Here, for \(u,v\in\mathbb{Z}^{d}\) such that \(J_{u,v}>0\), \(\delta_{(u,v)}\) denotes the current identically equal to \(0\) except on the pair \(\{u,v\}\) where it is equal to \(1\).
**Remark 8.4**.: _The reason why we have \(\mathcal{I}^{\prime}_{u}\) instead of \(\mathcal{I}_{u}\) in the bound above is technical. This is a consequence of the form the switching lemma takes in that context, as seen in [1, Lemma A.7]._
**Remark 8.5**.: _In fact, the same result holds with \(L^{(\alpha)}(\rho,\beta)\), \(\alpha\in(0,1)\), instead of \(L(\rho,\beta)\) (with a change of the parameter25 \(\delta\)). This will be useful below._
Footnote 25: One way to see this is to observe that the constant \(c\) in Proposition 3.23 depends on \(\alpha\).
We postpone the derivation of this bound to the next section and now explain how one can derive Theorem 1.13 for measures \(\rho\) of the Ising type.
Proof of Theorem 1.13 for a measure \(\rho\) of the Ising type in the GS class.: As for the case of the Ising model, one can show (by summing (4.7) over points in \(\mathcal{B}_{x},\mathcal{B}_{y},\mathcal{B}_{z}\) and \(\mathcal{B}_{t}\)) that
\[|U_{4}^{\rho,\beta}(x,y,z,t)|\leq 2\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta} \langle\tau_{z}\tau_{t}\rangle_{\rho,\beta}\mathbb{P}_{\rho,\beta}^{xy,zt, \emptyset,\emptyset}[\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(\partial\mathbf{ n}_{1})\cap\mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4}}(\partial\mathbf{n}_{2}) \neq\emptyset],\]
where \(\mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(\partial\mathbf{n}_{1})\) and \(\mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4}}(\partial\mathbf{n}_{2})\) refer to the clusters in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\) of the (random) sources \(\partial\mathbf{n}_{1}\) and \(\partial\mathbf{n}_{2}\) respectively. As above, we may find \(c_{0}>0\) such that if \(x,y,z,t\) are at mutual distance at least \(L\), there exists \(K=K(L)\) such that \(K\geq c_{0}\log(B_{L}(\rho,\beta)/B_{0}(\rho,\beta))\) and \(2\ell_{K}\leq L\). The rest of the proof is conceptually identical to what was done before, except that now we look at coarse intersections. Let \(D\) be large enough so that Proposition 8.3 holds for some \(\delta=\delta(D,\kappa)>0\). Using Markov's inequality together with (C.1),
\[\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}\langle\tau_{z}\tau_{t }\rangle_{\rho,\beta}\mathbb{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}[| \mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(\partial\mathbf{n}_{1})\cap \mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4}}(\partial\mathbf{n}_{2})|\geq 2^{\delta K/5}]\\ \leq 2^{-\delta K/5}\sum_{u,u^{\prime},u^{\prime\prime}\in\mathbb{Z }^{d}}\langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}\beta J_{u,u^{\prime}}\langle \tau_{u^{\prime}}\tau_{y}\rangle_{\rho,\beta}\langle\tau_{z}\tau_{u}\rangle_{ \rho,\beta}\beta J_{u,u^{\prime\prime}}\langle\tau_{u^{\prime\prime}}\tau_{t} \rangle_{\rho,\beta}.\]
Now, using Lemma 6.1, write
\[\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}\langle\tau_{z}\tau_{t }\rangle_{\rho,\beta}\mathbb{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}[0<| \mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}}(\partial\mathbf{n}_{1})\cap\mathbf{ C}_{\mathbf{n}_{2}+\mathbf{n}_{4}}(\partial\mathbf{n}_{2})|<2^{\delta K/5}]\\ \leq\sum_{u\in\mathbb{Z}^{d}}\mathbb{P}_{\rho,\beta}^{xy,zt, \emptyset,\emptyset}[\partial\mathbf{n}_{1}\stackrel{{\mathbf{n}_{ 1}+\mathbf{n}_{3}}}{{\longleftrightarrow}}\mathcal{B}_{u},\;\partial \mathbf{n}_{2}\stackrel{{\mathbf{n}_{2}+\mathbf{n}_{4}}}{{ \longleftrightarrow}}\mathcal{B}_{u},\;\mathbf{M}_{u}(\mathcal{I}_{u}; \mathcal{L},K)<\delta K].\]
Notice that by hypothesis \(u\) must satisfy \(|u-x|\vee|u-y|\geq\ell_{K}\) and \(|u-z|\vee|u-t|\geq\ell_{K}\). Hence, the upper bound above can be rewritten as a sum of four terms, one for each case. We assume without loss of generality that \(|u-x|\geq\ell_{K}\) and \(|u-z|\geq\ell_{K}\).
Using a proper formulation of the switching lemma in that context [1, Lemma A.7], we get
\[\mathbb{P}_{\rho,\beta}^{xy,zt,\emptyset,\emptyset}[\partial \mathbf{n}_{1}\stackrel{{\mathbf{n}_{1}+\mathbf{n}_{3}}}{{ \longleftrightarrow}}\mathcal{B}_{u},\;\partial\mathbf{n}_{2}\stackrel{{ \mathbf{n}_{2}+\mathbf{n}_{4}}}{{\longleftrightarrow}}\mathcal{B}_{u},\; \mathbf{M}_{u}(\mathcal{I}_{u};\mathcal{L},K)<\delta K]\\ \leq\sum_{u^{\prime},u^{\prime\prime}\neq u}\langle\tau_{x}\tau _{u}\rangle_{\rho,\beta}\beta J_{u,u^{\prime}}\langle\tau_{u^{\prime}}\tau_{y} \rangle_{\rho,\beta}\langle\tau_{z}\tau_{u}\rangle_{\rho,\beta}\beta J_{u,u^{ \prime\prime}}\langle\tau_{u^{\prime\prime}}\tau_{t}\rangle_{\rho,\beta} \mathbb{P}_{\rho,\beta}^{ux,uz,u^{\prime}y,u^{\prime\prime}t}[\mathbf{M}_{u}( \mathcal{I}_{u}^{\prime};\mathcal{L},K)<\delta K].\]
We then conclude using Proposition 8.3. We obtained the existence of \(c,C>0\) such that: for all \(\rho\) in the GS class of the Ising type such that \(\beta_{c}(\rho)\geq\kappa\), for all \(\beta<\beta_{c}(\rho)\), for all \(x,y,z,t\in\mathbb{Z}^{4}\) at mutual distance at least \(L\) with \(L\leq L(\rho,\beta)\), (8.1) holds. We can then extend the result to \(\beta_{c}(\rho)\) by a continuity argument26 together with the observation that \(L(\rho,\beta)\to\infty\) as \(\beta\to\beta_{c}(\rho)\).
Footnote 26: Here we use the left-continuity of the two-point and four-point correlation functions together with (IRB) which uniformly bounds the two-point function for \(\beta\leq\beta_{c}(\rho)\).
**Remark 8.6**.: _Using Remark 8.5 we see that we also obtained the same result replacing \(L(\rho,\beta)\) by \(L^{(\alpha)}(\rho,\beta)\), for some \(\alpha\in(0,1)\). Note that this affects the constants \(c,C\) in Theorem 1.13._
### Extension of the tree diagram bound to weak limits of Ising type measures
The goal of this section is to extend the improved tree diagram bound to the entire GS class of measures. The result will be a consequence of the following proposition.
**Proposition 8.7**.: _Let \(\rho\) be a measure in the GS class. There exists a sequence of measures \((\rho_{k})_{k\geq 1}\) of the Ising type in the GS class such that:_
1. \((\rho_{k})_{k\geq 1}\) _converges weakly to_ \(\rho\)_,_
2. \(\liminf\beta_{c}(\rho_{k})\geq\beta_{c}(\rho)\)_,_
3. _for every_ \(\beta<\beta_{c}(\rho)\)_, for every_ \(x,y,z,t\in\mathbb{Z}^{d}\)_,_ \[\lim_{k\to\infty}\langle\tau_{x}\tau_{y}\rangle_{\rho_{k},\beta}=\langle\tau_{x} \tau_{y}\rangle_{\rho,\beta},\qquad\lim_{k\to\infty}\langle\tau_{x}\tau_{y}\tau_ {z}\tau_{t}\rangle_{\rho_{k},\beta}=\langle\tau_{x}\tau_{y}\tau_{z}\tau_{t} \rangle_{\rho,\beta},\]
4. _for every_ \(\beta>0\)_,_ \(\liminf L^{(1/4)}(\rho_{k},\beta)\geq L(\rho,\beta)\)_._
Assuming this result, and knowing Theorem 1.13 for Ising type measures, we can easily extend Theorem 1.13 to all models in the GS class.
Proof of Theorem 1.13.: Fix \(\rho\) in the GS class which is a weak limit of Ising type measures in the GS class, and which satisfies \(\beta_{c}(\rho)\geq\kappa\). Let also \((\rho_{k})_{k\geq 1}\) be given by Proposition 8.7. By property (2) of the same proposition, there exists \(k_{0}\geq 0\) such that for \(k\geq k_{0}\), \(\beta_{c}(\rho_{k})\geq\kappa/2\). Since the tree diagram bound holds uniformly over Ising type measures \(\rho^{\prime}\) satisfying \(\beta_{c}(\rho^{\prime})\geq\kappa/2\), and using Remark 8.6, there exist \(C,c>0\) such that for all \(k\geq k_{0}\), for all \(\beta\leq\beta_{c}(\rho_{k})\), for all \(x,y,z,t\) at mutual distance at least \(L\) with \(1\leq L\leq L^{(1/4)}(\rho_{k},\beta)\),
\[|U_{4}^{\rho_{k},\beta}(x,y,z,t)|\] \[\leq C\left(\frac{B_{0}(\rho_{k},\beta)}{B_{L}(\rho_{k},\beta)} \right)^{c}\sum_{u,u^{\prime},u^{\prime\prime}\in\mathbb{Z}^{d}}\langle\tau_{ x}\tau_{u}\rangle_{\rho_{k},\beta}\beta J_{u,u^{\prime}}\langle\tau_{u^{\prime}} \tau_{y}\rangle_{\rho_{k},\beta}\langle\tau_{z}\tau_{u}\rangle_{\rho_{k}, \beta}\beta J_{u,u^{\prime\prime}}\langle\tau_{u^{\prime\prime}}\tau_{t} \rangle_{\rho_{k},\beta}. \tag{8.2}\]
Fix \(\beta<\beta_{c}(\rho)\) and \(1\leq L\leq L(\rho,\beta)\). By properties (2) and (4) of Proposition 8.7, there exists \(k_{1}\geq k_{0}\) such that \(\beta\leq\beta_{c}(\rho_{k})\) and \(L\leq L^{(1/4)}(\rho_{k},\beta)\) for \(k\geq k_{1}\). As a result, (8.2) holds with \(\beta\) and \(L\) for \(k\geq k_{1}\). We now use (3) to pass the inequality to the limit. Using (**IRB**), we know that there exists \(C=C(d)>0\) such that for all \(u,v\in\mathbb{Z}^{d}\), for \(k\geq k_{1}\)
\[\langle\tau_{u}\tau_{v}\rangle_{\rho_{k},\beta}\leq\frac{C}{\beta_{c}(\rho)|J ||u-v|^{d-2}}.\]
This justifies passing to the limit in (8.2) and yields the result. We extend the result to \(\beta=\beta_{c}(\rho)\) by a continuity argument as above.
We now prove Proposition 8.7. We split the statement into lemmas.
**Lemma 8.8**.: _Assume that \((\rho_{k})_{k\geq 1}\) converges weakly to \(\rho\). Then,_
\[\liminf\beta_{c}(\rho_{k})\geq\beta_{c}(\rho).\]
Proof.: Assume \(\beta>\liminf\beta_{c}(\rho_{k})\). If \(S\subset\mathbb{Z}^{d}\) is a finite set containing \(0\),
\[\lim_{k\to\infty}\varphi_{\rho_{k},\beta}(S)=\varphi_{\rho,\beta}(S). \tag{8.3}\]
Since \(\beta>\liminf\beta_{c}(\rho_{k})\), one has that for \(k\) large enough \(\beta\geq\beta_{c}(\rho_{k})\), and hence27 \(\varphi_{\rho_{k},\beta}(S)\geq 1\) (using the same argument as in Remark 3.22), so that
Footnote 27: Otherwise the susceptibility would be finite at \(\beta\).
\[\varphi_{\rho,\beta}(S)\geq 1.\]
Since this holds for any finite set \(S\) containing \(0\), one has \(\beta\geq\beta_{c}(\rho)\).
**Lemma 8.9**.: _Assume that \((\rho_{k})_{k\geq 1}\) converges weakly to \(\rho\). Then, for all \(\beta>0\), for all \(\alpha\in(0,1/2)\)_
\[L(\rho,\beta)\leq\liminf L^{(\alpha)}(\rho_{k},\beta).\]
Proof.: If \(\liminf L^{(\alpha)}(\rho_{k},\beta)=\infty\) then the upper bound is trivial. Otherwise, fix \(n>(2d)\liminf L^{(\alpha)}(\rho_{k},\beta)\). There exists \(S\subset\mathbb{Z}^{d}\) finite and containing \(0\) with \(\operatorname{rad}(S)\leq 2n\) such that for all \(k\) sufficiently large,
\[\varphi_{\rho_{k},\beta}(S)<\alpha.\]
Using (8.3), we get that \(\varphi_{\rho,\beta}(S)\leq\alpha<1/2\), so that \(n\geq(2d)L(\rho,\beta)\).
**Lemma 8.10**.: _Let \(d=4\). Assume that \((\rho_{k})_{k\geq 1}\) converges weakly to \(\rho\). Let \(\beta<\beta_{c}(\rho).\) For every \(x,y,z,t\in\mathbb{Z}^{4}\),_
\[\lim_{k\to\infty}\langle\tau_{x}\tau_{y}\rangle_{\rho_{k},\beta}=\langle\tau_{ x}\tau_{y}\rangle_{\rho,\beta},\qquad\lim_{k\to\infty}\langle\tau_{x}\tau_{y} \tau_{z}\tau_{t}\rangle_{\rho_{k},\beta}=\langle\tau_{x}\tau_{y}\tau_{z}\tau_{ t}\rangle_{\rho,\beta}.\]
Proof.: We only prove the first part of the statement; the second part follows by a similar argument. Let \(x,y\in\mathbb{Z}^{d}\). It is sufficient to show that
\[\lim_{n\to\infty}\sup_{k\geq 1}\left(\langle\tau_{x}\tau_{y}\rangle_{\rho_{k}, \beta}-\langle\tau_{x}\tau_{y}\rangle_{\Lambda_{n},\rho_{k},\beta}\right)=0. \tag{8.4}\]
Fix \(n\) large enough such that \(x,y\in\Lambda_{n}\). Recall that \(\rho_{k}\) is defined by averages on \(K_{N_{k}}\) for some \(N_{k}\geq 1\). Using the switching lemma,
\[\langle\tau_{x}\tau_{y}\rangle_{\rho_{k},\beta}-\langle\tau_{x}\tau_{y}\rangle_{\Lambda_{n},\rho_{k},\beta}=\langle\tau_{x}\tau_{y}\rangle_{\rho_{k},\beta}\mathbb{P}_{\mathbb{Z}^{d},\Lambda_{n},\rho_{k},\beta}^{xy,\emptyset}\left[\partial\mathbf{n}_{1}\cap\mathcal{B}_{x}\stackrel{{(\mathbf{n}_{1}+\mathbf{n}_{2})_{|\Lambda_{n}\times K_{N_{k}}}}}{{\nleftrightarrow}}\partial\mathbf{n}_{1}\cap\mathcal{B}_{y}\right]\\ \leq\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}\mathbb{P}_{\rho_{k},\beta}^{xy}\left[\partial\mathbf{n}\cap\mathcal{B}_{x}\stackrel{{\mathbf{n}_{|\Lambda_{n}\times K_{N_{k}}}}}{{\nleftrightarrow}}\partial\mathbf{n}\cap\mathcal{B}_{y}\right].\]
Let \(\ell:=|x|+|y|\) and introduce the event \(\mathsf{ZZGS}_{k}(x,y;\ell,n,\infty)\) that the backbone of \(\mathbf{n}\) goes from \(\partial\mathbf{n}\cap\mathcal{B}_{x}\) to \(\partial\mathbf{n}\cap\mathcal{B}_{y}\) by exiting \(\Lambda_{n}\times K_{N_{k}}\). As explained below, it is possible to extend the proof of Corollary 6.12 to this setup to get that there exist \(\eta,C_{1}>0\) such that for all \(n\) large enough, for all \(k\geq 1\),
\[\mathbb{P}_{\rho_{k},\beta}^{xy}[\mathsf{ZZGS}_{k}(x,y;\ell,n,\infty)]\leq \frac{C_{1}}{n^{\eta}}.\]
The observation that \(\left\{\partial\mathbf{n}\cap\mathcal{B}_{x}\stackrel{{\mathbf{n}_{|\Lambda_{n}\times K_{N_{k}}}}}{{\nleftrightarrow}}\partial\mathbf{n}\cap\mathcal{B}_{y}\right\}\subset\mathsf{ZZGS}_{k}(x,y;\ell,n,\infty)\) gives (8.4).
Proof of Proposition 8.7.: If \(\rho\) falls into \((i)\) of Definition 2.1, the statement is trivial. Otherwise, fix any sequence \((\rho_{k})_{k\geq 1}\) of Ising type measures in the GS class that converges weakly to \(\rho\). Using the three lemmas above, we verify that \((\rho_{k})_{k\geq 1}\) satisfies all the desired properties.
### Derivation of the intersection clustering bound for models in the GS class
We now turn to the proof of Proposition 8.3. The proof follows the exact same lines as for the Ising case and is reduced to the adaptation of the results of Section 6.2, together with an extension of the intersection property of Lemma 6.19, and the mixing statement of Theorem 6.20. We fix \(d=4\) and an interaction \(J\) which satisfies (**A1**)-(**A6**).
We start by excluding the existence of "big jumps" in our context. As it turns out, the results of Section 6.2 directly follow from the following adaptation of Lemma 6.3.
**Lemma 8.11**.: _Let \(\beta>0\). Let \(\rho\) be of the Ising type in the GS class. For \(x,y,u,v\in\mathbb{Z}^{d}\),_
\[\mathbb{P}_{\rho,\beta}^{xy,\emptyset}[\exists i,j,\,\mathbf{n }_{(u,i),(v,j)}\geq 1]\\ \leq\beta J_{u,v}\left(2\langle\tau_{u}\tau_{v}\rangle_{\rho, \beta}+\frac{\langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}\langle\tau_{v}\tau_ {y}\rangle_{\rho,\beta}}{\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}}+\frac{ \langle\tau_{x}\tau_{v}\rangle_{\rho,\beta}\langle\tau_{u}\tau_{y}\rangle_{\rho,\beta}}{\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}}\right).\]
At this stage of the proof, the arguments essentially build on what was done in the Ising case together with a proper adaptation of the proofs, as already explained in [1]. Below we explain the main changes in the proofs and refer to [1] for more details. We first state the intersection property for Ising-type models in the GS class.
**Lemma 8.12** (Intersection property for models in the GS class).: _Let \(\kappa>0\). For \(D=D(\kappa)>0\) large enough, there exists \(\delta=\delta(\kappa)>0\) such that for every \(\rho\) of the Ising type in
the GS class satisfying \(\beta_{c}(\rho)\geq\kappa\), every \(\beta\leq\beta_{c}(\rho)\), every \(k\geq 2\), and every \(y\notin\Lambda_{\ell_{k+2}}\) in a regular scale with \(1\leq|y|\leq L(\rho,\beta)\),_
\[\mathbb{P}^{0y,0y,\emptyset,\emptyset}_{\rho,\beta}[(\mathbf{n}_{1}+\mathbf{n}_{3},\mathbf{n}_{2}+\mathbf{n}_{4})\in I_{k}(0)]\geq\delta,\]
_where \(I_{k}(0)\) is defined similarly to the intersection event of Definition 6.18, except that we now ask that the clusters of \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\) coarse-intersect in the sense that there exists \(v\in\operatorname{Ann}(\ell_{k},\ell_{k+1})\) such that \(\mathcal{B}_{v}\) is connected to \(\mathcal{B}_{0}\) in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\)._
Proof.: We keep the notations introduced in the proof of Lemma 6.19. Define,
\[\mathcal{M}:=\sum_{v\in\operatorname{Ann}(m,M)}\sum_{i,i^{\prime}}Q_{i}^{2} \mathbbm{1}[\partial\mathbf{n}_{1}\stackrel{{\mathbf{n}_{1}+ \mathbf{n}_{3}}}{{\longleftrightarrow}}(v,i)]Q_{i^{\prime}}^{2}\mathbbm{1} [\partial\mathbf{n}_{2}\stackrel{{\mathbf{n}_{2}+\mathbf{n}_{4}} }{{\longleftrightarrow}}(v,i^{\prime})].\]
The extra \(Q_{i}^{2},Q_{i^{\prime}}^{2}\) terms allow us to rewrite moments of \(\mathcal{M}\) in terms of correlation functions of the field variables \((\tau_{z})_{z\in\mathbb{Z}^{d}}\). Using a computation similar to the one for the Ising model, together with the results of Proposition C.1, we get \(c_{1},C_{1}>0\) such that
\[\mathbb{E}^{0y,0y,\emptyset,\emptyset}_{\rho,\beta}[|\mathcal{M}|]\geq c_{1}(B_{M}(\rho,\beta)-B_{m-1}(\rho,\beta)),\]
\[\mathbb{E}^{0y,0y,\emptyset,\emptyset}_{\rho,\beta}[|\mathcal{M}|^{2}]\leq C_{1}B_{\ell_{k+1}}(\rho,\beta)^{2}.\]
As above, we deduce, for some \(c_{2}>0\),
\[\mathbb{P}^{0y,0y,\emptyset,\emptyset}_{\rho,\beta}[\mathcal{M}>0]\geq c_{2}.\]
The second part of the proof consists in making the intersection event local. We proceed exactly as we did for the Ising case, by first excluding the possibility of jumping any of the intermediate scales, and by then repeating the analysis that led to the bounds on the events \(\mathcal{F}_{1},\ldots,\mathcal{F}_{5}\). At this stage one needs to be careful in the use of the infrared bound: bounds involving \(\beta|J|\) are required. This ensures that the bound on the intersection probability we end up with does not depend on \(\rho\).
**Theorem 8.13** (Mixing property for models in the GS class).: _Let \(d=4\). Let \(\kappa>0\) and \(s\geq 1\). There exist \(\gamma,c,C>0\), such that for every \(\rho\) of the Ising-type in the GS class satisfying \(\beta_{c}(\rho)\geq\kappa\), for every \(1\leq t\leq s\), every \(\beta\leq\beta_{c}(\rho)\), every \(n^{\gamma}\leq N\leq L(\rho,\beta)\), every \(x_{i}\in\Lambda_{n}\) and \(y_{i}\notin\Lambda_{N}\)\((i\leq t)\), and all events \(E\) and \(F\) depending on the restriction of \((\mathbf{n}_{1},\ldots,\mathbf{n}_{s})\) to edges with endpoints within \(\Lambda_{n}\) and outside \(\Lambda_{N}\) respectively,_
\[\left|\mathbb{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots, \emptyset}_{\rho,\beta}[E\cap F]-\mathbb{P}^{x_{1}y_{1},\ldots,x_{t}y_{t}, \emptyset,\ldots,\emptyset}_{\rho,\beta}[E]\mathbb{P}^{x_{1}y_{1},\ldots,x_{t}y _{t},\emptyset,\ldots,\emptyset}_{\rho,\beta}[F]\right|\\ \leq C\left(\log\frac{N}{n}\right)^{-1/2}. \tag{8.5}\]
_Furthermore, for every \(x_{1}^{\prime},\ldots,x_{t}^{\prime}\in\Lambda_{n}\) and \(y_{1}^{\prime},\ldots,y_{t}^{\prime}\notin\Lambda_{N}\), we have that_
\[\left|\mathbb{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots, \emptyset}_{\rho,\beta}[E]-\mathbb{P}^{x_{1}y_{1}^{\prime},\ldots,x_{t}y_{t} ^{\prime},\emptyset,\ldots,\emptyset}_{\rho,\beta}[E]\right|\leq C\left(\log \frac{N}{n}\right)^{-1/2}, \tag{8.6}\] \[\left|\mathbb{P}^{x_{1}y_{1},\ldots,x_{t}y_{t},\emptyset,\ldots, \emptyset}_{\rho,\beta}[F]-\mathbb{P}^{x_{1}^{\prime}y_{1},\ldots,x_{t}^{ \prime}y_{t},\emptyset,\ldots,\emptyset}_{\rho,\beta}[F]\right|\leq C\left( \log\frac{N}{n}\right)^{-1/2}. \tag{8.7}\]
Proof.: The main modification in the proof comes in the definition of \(\mathbf{U}_{i}\):
\[\mathbf{U}_{i}:=\frac{1}{|\mathcal{K}|}\sum_{k\in\mathcal{K}}\frac{1}{A_{x_{i}, y_{i}}(2^{k})}\sum_{u\in\Lambda_{y_{i}}(2^{k})}\sum_{j=1}^{N}Q_{j}^{2} \mathbbm{1}[(u,j)\stackrel{{\mathbf{n}_{i}+\mathbf{n}_{i}^{ \prime}}}{{\longleftrightarrow}}\partial\mathbf{n}_{i}],\]
where
\[a_{x,y}(u):=\frac{\langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}\langle\tau_{u}\tau_{y}\rangle_{\rho,\beta}}{\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}},\qquad A_{x,y}(2^{k}):=\sum_{u\in\Lambda_{y}(2^{k})}a_{x,y}(u).\]
Note that, as above, the extra term \(Q_{j}^{2}\) allows us to express the moments in terms of the field variables \((\tau_{z})_{z\in\mathbb{Z}^{d}}\). Also, in the derivation of an analogue of Lemma 6.23, one will have to be careful to use infrared bounds involving \(\beta|J|\).
We are now in a position to prove Proposition 8.3.
Proof of Proposition 8.3.: The proof follows the exact same lines as for the Ising case, except that we need to take some care with the monotonicity property we want to use. We keep the notations introduced in the proof of Proposition 6.2. Let \(\delta>0\) to be fixed later. Let \(S\in\mathcal{S}_{K}^{(\delta)}\). Let \(\mathfrak{B}_{S}\) (resp. \(\mathfrak{B}_{S}^{\prime}\)) be the event that the clusters of \(\mathcal{B}_{u}\) in \(\mathbf{n}_{1}+\mathbf{n}_{3}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}\) (resp. in \(\mathbf{n}_{1}+\mathbf{n}_{3}+\delta_{(\partial\mathbf{n}_{1}\cap\mathcal{B}_{u},\partial\mathbf{n}_{3}\cap\mathcal{B}_{u^{\prime}})}\) and \(\mathbf{n}_{2}+\mathbf{n}_{4}+\delta_{(\partial\mathbf{n}_{2}\cap\mathcal{B}_{u},\partial\mathbf{n}_{4}\cap\mathcal{B}_{u^{\prime\prime}})}\)) do not coarse-intersect in any of the annuli \(\operatorname{Ann}(\ell_{i},\ell_{i+1})\) for \(i\in S\). Then, using an adaptation of the monotonicity argument of Proposition 6.24 to our context (see Proposition C.2),
\[\mathbb{P}_{\rho,\beta}^{ux,uz,u^{\prime}y,u^{\prime\prime}t}[ \mathbf{M}_{u}(\mathcal{I}_{u}^{\prime};\mathcal{L},K)<\delta K]\leq\sum_{ \begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbb{P}_{\rho,\beta}^{ux,uz,u^{\prime}y,u^{\prime\prime}t}[\mathfrak{B}_{S}^{\prime}]\\ \leq\sum_{\begin{subarray}{c}S\in\mathcal{S}_{K}^{(\delta)}\\ |S|\geq(1/2-2\delta)K\end{subarray}}\mathbb{P}_{\rho,\beta}^{ux,uz,\emptyset, \emptyset}[\mathfrak{B}_{S}]. \tag{8.8}\]
The rest of the proof is identical to what was done in Section 6.
### Extension of the results of Section 7
We now briefly explain how to extend the results of Section 7 to models in the GS class. The strategy is very similar to what was done above, so we only present the main modifications in the proofs. We begin by discussing the modifications involved in the proofs of the results obtained in Sections 7.1 and 7.2.
Let \(d\geq 1\). We fix an interaction \(J\) on \(\mathbb{Z}^{d}\) satisfying (**A1**)-(**A5**) and (**Assumption\({}_{\alpha}\)**) with \(d-2(\alpha\wedge 2)\geq 0\). In that setup, we get that for any \(\rho\) in the GS class: if \(\beta\leq\beta_{c}(\rho)\) and \(x\in\mathbb{Z}^{d}\setminus\{0\}\),
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\leq\frac{C}{\beta_{c}(\rho)|x|^{d- \alpha\wedge 2}(\log|x|)^{\delta_{2,\alpha}}}. \tag{8.9}\]
The first important observation is to notice that, although stated for the Ising model, the results of Section 7.1 extend _mutatis mutandis_ to every single-site measure \(\rho\) in the GS class thanks to Proposition 3.25.
Similarly, we may extend the results of Section 7.2 to all measures \(\rho\) of the Ising type in the GS class by using Lemma 8.11 and (8.9). With these tools, it is possible to extend the results of Section 7 to measures of the Ising-type in the GS class by using the same strategy as in Section 8.3.
The extension to all measures in the GS class uses again the approximation step of Section 8.2. The only non-trivial modification concerns Proposition 8.7, and more precisely Lemma 8.10. We will prove the following result.
**Lemma 8.14**.: _Let \(1\leq d\leq 3\). Assume that \(J\) satisfies (**A1**)-(**A5**) and (**Assumption\({}_{\alpha}\)**) with \(d-2(\alpha\wedge 2)\geq 0\). Let \(\rho\) be a measure in the GS class, and let \((\rho_{k})_{k\geq 1}\) be a sequence of measures of the Ising type which converges weakly to \(\rho\). Let \(\beta<\beta_{c}(\rho)\). For every \(x,y,z,t\in\mathbb{Z}^{d}\),_
\[\lim_{k\to\infty}\langle\tau_{x}\tau_{y}\rangle_{\rho_{k},\beta}=\langle\tau_ {x}\tau_{y}\rangle_{\rho,\beta},\qquad\lim_{k\to\infty}\langle\tau_{x}\tau_{y} \tau_{z}\tau_{t}\rangle_{\rho_{k},\beta}=\langle\tau_{x}\tau_{y}\tau_{z}\tau_ {t}\rangle_{\rho,\beta}.\]
Proof.: Again, we only prove the first part of the statement. We follow the proof of Lemma 8.10. As before, if \(\ell:=|x|+|y|\), the key observation is that \(\left\{\partial\mathbf{n}\cap\mathcal{B}_{x}\stackrel{{\mathbf{n}_{|\Lambda_{n}\times K_{N_{k}}}}}{{\nleftrightarrow}}\partial\mathbf{n}\cap\mathcal{B}_{y}\right\}\) is included in the event \(\mathsf{ZZGS}_{k}(x,y;\ell,n,\infty)\). However, as explained above, we can extend the
results of Section 7.2, and in particular Corollary 7.10, to obtain a bound on the probability of the latter event. This is enough to conclude.
## Appendix A Spectral representation of reflection positive Ising models
The aim of this appendix is to prove Theorem 3.11. We use the notations of Section 3. In what follows, \(\rho\) is a measure in the GS class. We assume that \(J\) satisfies (**A1**)-(**A5**). We will make good use of the spectral theorem (see [1]), which we will apply to diagonalize the shift operator \(T\) given by,
\[T:x\in\mathbb{Z}^{d}\mapsto x+(1,0,\dots,0).\]
Before that, we introduce some notations and an appropriate Hilbert space.
Let \(\beta>0\). Let \(\mathbf{e}_{1}=(1,0,\dots,0)\). Let \(\Sigma\) be the hyperplane orthogonal to \(\mathbf{e}_{1}\) passing through \(0\). Let \(\Theta\) be the reflection through \(\Sigma\). Notice that \(\Sigma\) cuts \(\mathbb{Z}^{d}\) into two half-spaces \(\Lambda_{+}\) and \(\Lambda_{-}\) with \(\Lambda_{+}\cap\Lambda_{-}=\Sigma\). Let \(\mathcal{A}_{+}\) be the algebra generated by local functions with support in \(\Lambda_{+}\). Reflection positivity with respect to \(\Theta\) implies that for all \(f\in\mathcal{A}_{+}\),
\[\langle\overline{\Theta(f)}f\rangle_{\rho,\beta}\geq 0.\]
We define a positive semi-definite bilinear form on \(\mathcal{A}_{+}\) by: for all \(f,g\in\mathcal{A}_{+}\),
\[(f,g):=\langle\overline{\Theta(f)}g\rangle_{\rho,\beta}.\]
Quotienting \(\mathcal{A}_{+}\) by the kernel of \((\cdot,\cdot)\) and completing the resulting space, one obtains a Hilbert space \((\mathcal{H},(\cdot,\cdot))\). We denote by \(\|.\|\) the norm on this Hilbert space and by \(\|.\|^{\mathrm{op}}\) the associated operator norm. The shift \(T\) in the \(\mathbf{e}_{1}\) direction defines an operator on \(\mathcal{H}\), whose properties are described in the next proposition; its proof can be found in [1, 10].
**Proposition A.1** (Properties of \(T\)).: _The shift operator \(T:\mathcal{H}\to\mathcal{H}\) has the following properties,_
1. \(T\) _is self-adjoint,_
2. \(T\) _is positive,_
3. \(T\) _is bounded._
In what follows we introduce many classical objects in the study of bounded self-adjoint operators in a Hilbert space. For all the definitions we refer to [1]. We are now in a position to apply the spectral theorem [1, Theorem 7.12].
**Proposition A.2**.: _There exists a unique projection valued measure \(\mu^{T}\) such that_
\[T=\int_{\sigma(T)}\lambda\mathrm{d}\mu^{T}(\lambda).\]
**Remark A.3**.: _One has that \(\sigma(T)\subset[0,1]\)._
We also state two propositions (which can be found in chapter 7 of [1]) that will allow us to make good use of the preceding proposition.
**Proposition A.4**.: _Let \(f:\sigma(T)\to\mathbb{C}\) be a bounded measurable function. Then,_
\[f(T)=\int_{\sigma(T)}f(\lambda)\mathrm{d}\mu^{T}(\lambda).\]
**Proposition A.5**.: _If \(f:[0,1]\to\mathbb{C}\) is a bounded measurable function, and \(\psi\in\mathcal{H}\), there exists a (positive) real-valued measure \(\mu_{\psi}\) such that_
\[\left(\psi,\left(\int_{0}^{1}f(\lambda)\mathrm{d}\mu^{T}(\lambda)\right)\psi \right)=\int_{0}^{1}f\mathrm{d}\mu_{\psi}.\]
**Remark A.6**.: _The measure \(\mu_{\psi}\) is given, for every Borel set \(E\), by_
\[\mu_{\psi}(E)=(\psi,\mu^{T}(E)\psi).\]
Recall that for \(f,g\in\mathcal{H}\), the truncated correlation of \(f\) and \(g\) is given by
\[\langle f;g\rangle_{\rho,\beta}:=\langle fg\rangle_{\rho,\beta}-\langle f \rangle_{\rho,\beta}\langle g\rangle_{\rho,\beta}.\]
**Proposition A.7** (Representation of truncated correlation functions).: _For all \(f\in\mathcal{H}\), and all \(n\geq 0\), there exists \(f_{\perp}\in\mathcal{H}\) such that,_
\[\langle\overline{\Theta(f)};T^{n}f\rangle_{\rho,\beta}=(f_{\perp},T^{n}f_{ \perp}).\]
Proof.: Let \(\mathbf{1}\in\mathcal{H}\) be such that \(T\mathbf{1}=\mathbf{1}\). By definition, for all \(f\in\mathcal{H}\), \(\langle f\rangle_{\rho,\beta}=(\mathbf{1},f)\). Thus,
\[\langle\overline{\Theta(f)};T^{n}f\rangle_{\rho,\beta}=(f,T^{n}f)-(\mathbf{1 },f)(\mathbf{1},f).\]
Let \(P_{\perp}\) denote the orthogonal projection onto \(\mathrm{Vect}(\mathbf{1})^{\perp}\). Letting
\[f_{\perp}:=P_{\perp}f=f-(\mathbf{1},f)\mathbf{1},\]
we find that
\[\langle\overline{\Theta(f)};T^{n}f\rangle_{\rho,\beta}=(f_{\perp},T^{n}f_{ \perp}).\]
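(For completeness, the computation behind the last display, assuming for simplicity that all quantities are real and that \((\mathbf{1},\mathbf{1})=1\): expanding \(f_{\perp}=f-(\mathbf{1},f)\mathbf{1}\) and using \(T\mathbf{1}=\mathbf{1}\) together with the self-adjointness of \(T\),

\[(f_{\perp},T^{n}f_{\perp})=(f,T^{n}f)-(\mathbf{1},f)(f,T^{n}\mathbf{1})-(\mathbf{1},f)(\mathbf{1},T^{n}f)+(\mathbf{1},f)^{2}(\mathbf{1},T^{n}\mathbf{1})=(f,T^{n}f)-(\mathbf{1},f)^{2},\]

since \((\mathbf{1},T^{n}f)=(T^{n}\mathbf{1},f)=(\mathbf{1},f)\) and \((f,T^{n}\mathbf{1})=(f,\mathbf{1})=(\mathbf{1},f)\).)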
We are now in a position to prove the main result of this section.
Proof of Theorem 3.11.: Let \(\beta\leq\beta_{c}(\rho)\). Apply Proposition A.5 to \(f:x\in[0,1]\mapsto x^{n}\) for \(n\geq 0\), and \(\psi=V_{\perp}\) where \(V=\sum_{x_{\perp}\in\mathbb{Z}^{d-1}}v_{x_{\perp}}\tau_{(0,x_{\perp})}\in \mathcal{H}\) to get
\[(V_{\perp},T^{n}V_{\perp})=\int_{0}^{1}\lambda^{n}\mathrm{d}\mu_{V_{\perp}}( \lambda).\]
Using Proposition A.7, we obtain
\[\langle\overline{\Theta(V)};T^{n}V\rangle_{\rho,\beta}=\int_{0}^{1}\lambda^{n}\mathrm{d}\mu_{V_{\perp}}(\lambda).\]
Now, notice that \(\langle\overline{\Theta(V)};T^{n}V\rangle_{\rho,\beta}\) is exactly the left-hand side of (3.6). Moreover, considering the push-forward of \(\mu_{V_{\perp}}\) under the map \(a\in[0,1]\mapsto-\log a\in\mathbb{R}^{+}\cup\{\infty\}\), that we denote \(\mu_{v,\beta}\),
\[\int_{0}^{1}\lambda^{n}\mathrm{d}\mu_{V_{\perp}}(\lambda)=\int_{0}^{\infty}e^ {-an}\mathrm{d}\mu_{v,\beta}(a),\]
and the result follows for all \(n\in\mathbb{Z}\) using that \(\langle\cdot\rangle_{\rho,\beta}\) is invariant under \(T\).
**Remark A.8**.: _Note that one may have_
\[\mu_{v,\beta}(\{\infty\})>0,\]
_which is exactly equivalent to the fact that \(\xi(\rho,\beta)=\infty\)._
We now present the proof of the monotonicity property of the two-point function's Fourier transform.
Proof of Proposition 3.16.: First, notice that
\[\widehat{S}_{\rho,\beta}^{(\mathrm{mod})}(p)=2\sum_{\begin{subarray}{c}x\in \mathbb{Z}^{d}\\ x_{1}+x_{2}=0[2]\end{subarray}}e^{ip\cdot x}S_{\rho,\beta}(x).\]
We follow the proof of Theorem 3.11 and keep the same notations. This time we introduce the operator \(T^{\prime}:x\mapsto x+(1,1,0,\ldots,0)\). Let \(R\) be the reflection with respect to the hyperplane \(\Sigma^{\prime}\) orthogonal to \(\mathbf{e}_{1}+\mathbf{e}_{2}\) passing through \(0\). \(\Sigma^{\prime}\) cuts \(\mathbb{Z}^{d}\) into two half-spaces \(\Lambda^{\prime}_{+}\) and \(\Lambda^{\prime}_{-}\) with \(\Lambda^{\prime}_{+}\cap\Lambda^{\prime}_{-}=\Sigma^{\prime}\). Let \(\mathcal{A}^{\prime}_{+}\) be the algebra generated by local functions with support in \(\Lambda^{\prime}_{+}\). Reflection positivity with respect to \(R\) implies that for all \(f\in\mathcal{A}^{\prime}_{+}\),
\[\langle\overline{R(f)}f\rangle_{\rho,\beta}\geq 0.\]
We define a positive semi-definite bilinear form on \(\mathcal{A}^{\prime}_{+}\) by : for all \(f,g\in\mathcal{A}^{\prime}_{+}\),
\[(f,g):=\langle\overline{R(f)}g\rangle_{\rho,\beta}.\]
Quotienting \(\mathcal{A}^{\prime}_{+}\) by the kernel of \((\cdot,\cdot)\) and completing the resulting space, one obtains a Hilbert space \((\mathcal{H}^{\prime},(\cdot,\cdot))\). Then, \(T^{\prime}\) can be seen as an operator on \(\mathcal{H}^{\prime}\). Using the same arguments as in Proposition A.1, we also have that \(T^{\prime}\) is a self-adjoint, bounded, and positive operator on \(\mathcal{H}^{\prime}\). Just as in Theorem 3.11, we obtain that for all \(v:\mathbb{Z}^{d-1}\to\mathbb{C}\) in \(\ell^{2}(\mathbb{Z}^{d-1})\), there exists a positive measure \(\mu^{\prime}_{v,\beta}\) such that, for all \(n\in\mathbb{Z}\),
\[\sum_{(e,x_{\flat}),(e^{\prime},y_{\flat})\in\mathbb{Z}^{d-1}}v_{(e,x_{\flat} )}\overline{v_{(e^{\prime},y_{\flat})}}S_{\rho,\beta}(((e-e^{\prime})+n,-(e-e^ {\prime})+n,x_{\flat}-y_{\flat}))=\int_{0}^{\infty}e^{-a|n|}\mathrm{d}\mu^{ \prime}_{v,\beta}(a).\]
Fix \(p_{\flat}=(p_{3},\ldots,p_{d})\). Let \(q\in\mathbb{R}\) which will be fixed later. Considering the sequence of \(\ell^{2}\) functions given for \(L\geq 1\) by
\[v_{(e,x_{\flat})}^{(L)}=\frac{e^{iqe}e^{ip_{\flat}\cdot x_{\flat}}}{\sqrt{| \Lambda_{L}^{(d-1)}|}}\mathbb{1}_{(e,x_{\flat})\in\Lambda_{L}^{(d-1)}},\]
we get, just like in the proof of Corollary 3.12, that there exists a positive measure \(\mu^{\prime}_{q,p_{\flat},\beta}\) such that for \(r\in\mathbb{R}\),
\[\sum_{(n,e,z_{\flat})\in\mathbb{Z}^{d}}e^{irn+iqe+ip_{\flat}\cdot z_{\flat}}S_ {\rho,\beta}(e+n,-e+n,z_{\flat})=\int_{0}^{\infty}\frac{e^{a}-e^{-a}}{\mathcal{ E}_{1}(r)+\big{(}e^{a/2}-e^{-a/2}\big{)}^{2}}\mathrm{d}\mu^{\prime}_{q,p_{ \flat},\beta}(a).\]
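The kernel on the right-hand side arises from summing the geometric series: assuming \(\mathcal{E}_{1}(r)=2(1-\cos r)\), which is consistent with the formula above, one has for \(a>0\)

\[\sum_{n\in\mathbb{Z}}e^{irn}e^{-a|n|}=\frac{1-e^{-2a}}{1-2e^{-a}\cos r+e^{-2a}}=\frac{e^{a}-e^{-a}}{e^{a}+e^{-a}-2\cos r}=\frac{e^{a}-e^{-a}}{\mathcal{E}_{1}(r)+\big{(}e^{a/2}-e^{-a/2}\big{)}^{2}}.\]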
Taking \(r=p_{1}+p_{2}\) and \(q=p_{1}-p_{2}\), we get that
\[\sum_{\begin{subarray}{c}x\in\mathbb{Z}^{d}\\ x_{1}+x_{2}=0[2]\end{subarray}}e^{ip\cdot x}S_{\rho,\beta}(x)=\int_{0}^{\infty }\frac{e^{a}-e^{-a}}{\mathcal{E}_{1}(p_{1}+p_{2})+\big{(}e^{a/2}-e^{-a/2}\big{)} ^{2}}\mathrm{d}\mu^{\prime}_{p_{1}-p_{2},p_{\flat},\beta}(a).\]
Notice that in the formula above, one can use the symmetries of \(\widehat{S}_{\rho,\beta}^{(\mathrm{mod})}(p)\) to change \(p_{2}\) into \(-p_{2}\). As a result, we obtain that
\[\widehat{S}_{\rho,\beta}^{(\mathrm{mod})}(p)=2\int_{0}^{\infty}\frac{e^{a}-e ^{-a}}{\mathcal{E}_{1}(p_{1}-p_{2})+\big{(}e^{a/2}-e^{-a/2}\big{)}^{2}}\mathrm{ d}\mu^{\prime}_{p_{1}+p_{2},p_{\flat},\beta}(a).\]
This yields the result using the monotonicity of \(u\in[0,\pi]\mapsto\mathcal{E}_{1}(u)\), as in the proof of Corollary 3.15.
## Appendix B The backbone representation of the Ising model
We recall the definition and some of the main properties of the backbone representation of the Ising model. This representation is closely related to the random current representation and was first introduced in [1, 1], and later used to capture fine properties of the Ising model [1, 1]. We refer to these papers for more details about this object. We will use the notations of Section 4.
We fix a finite subset \(\Lambda\) of \(\mathbb{Z}^{d}\) and fix any ordering of \((\{u,v\})_{u,v\in\Lambda}\) that we denote \(\prec\).
**Definition B.1**.: _Let \(\mathbf{n}\in\Omega_{\Lambda}\). Assume that \(\partial\mathbf{n}=\{x,y\}\). The backbone of \(\mathbf{n}\), denoted \(\Gamma(\mathbf{n})\), is the unique oriented and edge self-avoiding path from \(x\) to \(y\) supported on pairs \(\{u,v\}\) with \(\mathbf{n}_{u,v}\) odd which is minimal for \(\prec\)._
_The backbone \(\Gamma(\mathbf{n})\) can be obtained via the following exploration process:_
1. _Let_ \(x_{0}=x\)_. The first edge_ \(\{x,x_{1}\}\) _of_ \(\Gamma(\mathbf{n})\) _is the earliest one of all the edges emerging from_ \(x\) _with_ \(\mathbf{n}_{x,x_{1}}\) _odd._
2. _Each edge_ \(\{x_{i},x_{i+1}\}\) _is the earliest of all edges emerging from_ \(x_{i}\) _that have not been cancelled previously, and for which the flux number is odd._
3. _The path stops when it reaches a site from which there are no more non-cancelled edges with odd flux number available. This always happens at a source of_ \(\mathbf{n}\) _(in that case_ \(y\)_)._
_We let \(\overline{\Gamma(\mathbf{n})}\) be the set of explored edges (this set is made of the \(\{x_{i},x_{i+1}\}\) together with all cancelled edges)._
_A path \(\gamma:x\to y\) (viewed as a sequence of steps) is said to be consistent if no step of the sequence uses a bond cancelled by a previous step._
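Since the exploration process in Definition B.1 is fully algorithmic, it can be written out in a few lines. The following Python sketch is only an illustration: the graph encoding (`flux` as a dictionary of edge fluxes, `edges_at` listing each site's edges in the fixed ordering \(\prec\)) and the precise cancellation rule (leaving a site cancels all earlier edges at that site) are our own conventions, chosen to match the description above.

```python
def backbone(flux, edges_at, source):
    """Trace the backbone of a current `flux` (dict: frozenset({u, v}) -> int)
    starting from `source`. `edges_at[u]` lists the edges at site u, earliest
    first in the fixed ordering. Assumes `flux` has exactly two sources, one
    of which is `source`. Returns the oriented path as a list of sites."""
    cancelled = set()
    path = [source]
    x = source
    while True:
        # earliest non-cancelled edge at x with odd flux number
        candidates = [e for e in edges_at[x]
                      if flux.get(e, 0) % 2 == 1 and e not in cancelled]
        if not candidates:          # no edge available: x is the other source
            return path
        step = candidates[0]
        # leaving x through `step` cancels all earlier edges at x
        for e in edges_at[x]:
            if e == step:
                break
            cancelled.add(e)
        cancelled.add(step)         # never reuse an edge (edge self-avoiding)
        (y,) = set(step) - {x}      # the other endpoint of the edge
        path.append(y)
        x = y
```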
One can write
\[\langle\sigma_{x}\sigma_{y}\rangle_{\Lambda,\beta}=\sum_{\gamma:x\to y \text{ consistent}}\rho_{\Lambda}(\gamma),\] (B.1)
where for a consistent path \(\gamma:x\to y\),
\[\rho_{\Lambda}(\gamma):=\frac{\sum_{\partial\mathbf{n}=\partial\gamma}w_{ \beta}(\mathbf{n})\mathbbm{1}[\Gamma(\mathbf{n})=\gamma]}{\sum_{\partial \mathbf{n}=\emptyset}w_{\beta}(\mathbf{n})}.\]
The backbone representation has the following useful properties:
1. If \(\gamma\) is a consistent path and \(E\) is a subset of edges of \(\Lambda\) such that \(\overline{\gamma}\cap E^{c}=\emptyset\), then \[\rho_{\Lambda}(\gamma)\leq\rho_{E}(\gamma).\] (B.2)
2. If a consistent path \(\gamma\) is the concatenation of \(\gamma_{1}\) and \(\gamma_{2}\) (which we denote by \(\gamma=\gamma_{1}\circ\gamma_{2}\)), \[\rho_{\Lambda}(\gamma)=\rho_{\Lambda}(\gamma_{1})\rho_{\Lambda\setminus \overline{\gamma_{1}}}(\gamma_{2}).\] (B.3)
This last property has the following consequence.
**Proposition B.2** (Chain rule for the backbone).: _Let \(x,y,u,v\in\Lambda\). Then,_
\[\mathbf{P}^{xy}_{\Lambda,\beta}[\Gamma(\mathbf{n})\text{ passes through $u$ first and then through }v]\leq\frac{\langle\sigma_{x}\sigma_{u}\rangle_{\Lambda,\beta}\langle\sigma_{u }\sigma_{v}\rangle_{\Lambda,\beta}\langle\sigma_{v}\sigma_{y}\rangle_{\Lambda,\beta}}{\langle\sigma_{x}\sigma_{y}\rangle_{\Lambda,\beta}}.\]
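For the reader's convenience, here is a sketch of the standard derivation of Proposition B.2 from (B.1)-(B.3). Decomposing the backbone as \(\gamma=\gamma_{1}\circ\gamma_{2}\circ\gamma_{3}\) with \(\gamma_{1}:x\to u\), \(\gamma_{2}:u\to v\), \(\gamma_{3}:v\to y\), and applying first the multiplicativity (B.3) and then the monotonicity (B.2) to replace the depleted domains by \(\Lambda\),

\[\mathbf{P}^{xy}_{\Lambda,\beta}[\Gamma(\mathbf{n})\text{ passes through }u\text{ first and then through }v]\leq\frac{1}{\langle\sigma_{x}\sigma_{y}\rangle_{\Lambda,\beta}}\sum_{\gamma_{1}:x\to u}\sum_{\gamma_{2}:u\to v}\sum_{\gamma_{3}:v\to y}\rho_{\Lambda}(\gamma_{1})\rho_{\Lambda}(\gamma_{2})\rho_{\Lambda}(\gamma_{3}),\]

and each of the three sums is bounded via (B.1) by the corresponding two-point function (the consistency constraints only decrease the sums).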
**Remark B.3**.: _The backbone expansion is very close to the Brydges-Frohlich-Spencer random walk expansion introduced in [1], although it appears to be less canonical._
## Appendix C Properties of currents for models of the Ising-type in the GS class
We recall a few classical bounds that can be found in [1, Appendix A.4]. We keep the notations introduced in Section 8. Fix a measure \(\rho\) of the Ising type in the GS class, and \(\beta>0\).
**Proposition C.1**.: _For every distinct \(x,y,u,v\in\mathbb{Z}^{d}\),_
\[\mathbb{P}^{xy,\emptyset}_{\rho,\beta}[\partial\mathbf{n}_{1}\stackrel{{ \mathbf{n}_{1}+\mathbf{n}_{2}}}{{\longleftrightarrow}}\mathcal{B}_{u}]\leq \sum_{u^{\prime}\in\mathbb{Z}^{d}}\frac{\langle\tau_{x}\tau_{u}\rangle_{\rho, \beta}(\beta J_{u,u^{\prime}})\langle\tau_{u^{\prime}}\tau_{y}\rangle_{\rho, \beta}}{\langle\tau_{x}\tau_{y}\rangle_{\rho,\beta}},\] (C.1)
_and_
\[\mathbb{P}^{\emptyset,\emptyset}_{\rho,\beta}[\mathcal{B}_{x}\stackrel{{ \mathbf{n}_{1}+\mathbf{n}_{2}}}{{\longleftrightarrow}}\mathcal{B}_{y}]\leq \sum_{x^{\prime},y^{\prime}\in\mathbb{Z}^{d}}\langle\tau_{x}\tau_{y}\rangle_{ \rho,\beta}(\beta J_{y,y^{\prime}})\langle\tau_{y^{\prime}}\tau_{x^{\prime} }\rangle_{\rho,\beta}\beta J_{x^{\prime},x}.\] (C.2)
_Moreover,_
\[\mathbb{P}^{xy,\emptyset}_{\rho,\beta}[\partial\mathbf{n}_{1} \stackrel{{\mathbf{n}_{1}+\mathbf{n}_{2}}}{{\longleftrightarrow}} \mathcal{B}_{u},\mathcal{B}_{v}]\leq\sum_{u^{\prime},v^{\prime}\in\mathbb{Z}^{d }}\frac{\langle\tau_{x}\tau_{u}\rangle_{\rho,\beta}(\beta J_{u,u^{\prime}}) \langle\tau_{u^{\prime}}\tau_{v}\rangle_{\rho,\beta}(\beta J_{v,v^{\prime}}) \langle\tau_{v^{\prime}}\tau_{y}\rangle_{\rho,\beta}}{\langle\tau_{x}\tau_{y} \rangle_{\rho,\beta}}\\ +\frac{\langle\tau_{x}\tau_{v}\rangle_{\rho,\beta}(\beta J_{v,v^ {\prime}})\langle\tau_{v^{\prime}}\tau_{u}\rangle_{\rho,\beta}(\beta J_{u,u^{ \prime}})\langle\tau_{u^{\prime}}\tau_{y}\rangle_{\rho,\beta}}{\langle\tau_{x} \tau_{y}\rangle_{\rho,\beta}}\] (C.3)
In the spirit of Proposition 6.24 we also have the following result.
**Proposition C.2** (Monotonicity in the number of sources for the GS class).: _For every \(x,y,z,t\in\mathbb{Z}^{d}\), every \(u,u^{\prime},u^{\prime\prime}\) with \(u^{\prime},u^{\prime\prime}\)\(J\)-neighbours of \(u\), and every \(S\subset\mathbb{Z}^{d}\times K_{N}\),_
\[\mathbb{P}^{ux,uz,u^{\prime}y,u^{\prime\prime}t}_{\rho,\beta}[ \mathbf{C}_{\mathbf{n}_{1}+\mathbf{n}_{3}+\delta_{(\partial\mathbf{n}_{1} \cap\mathcal{B}_{u},\partial\mathbf{n}_{3}\cap\mathcal{B}_{u^{\prime}})}}( \partial\mathbf{n}_{1})\cap\mathbf{C}_{\mathbf{n}_{2}+\mathbf{n}_{4}+\delta_{ (\partial\mathbf{n}_{2}\cap\mathcal{B}_{u},\partial\mathbf{n}_{4}\cap\mathcal{ B}_{u^{\prime\prime}})}}(\partial\mathbf{n}_{2})\cap S=\emptyset]\] \[\leq\mathbb{P}^{ux,uz,\emptyset,\emptyset}_{\rho,\beta}[\mathbf{C} _{\mathbf{n}_{1}+\mathbf{n}_{3}}(\partial\mathbf{n}_{1})\cap\mathbf{C}_{ \mathbf{n}_{2}+\mathbf{n}_{4}}(\partial\mathbf{n}_{2})\cap S=\emptyset].\]
## Appendix D Triviality and finiteness of the Bubble diagram
In this appendix, we prove that models in the GS class for which the bubble diagram is finite at criticality behave trivially. This provides an alternative proof to the results of Section 5 but it also captures more cases (for instance we may apply it to the case of algebraically decaying RP interactions for \(d=4\) and \(\alpha=2\)).
Below we fix a measure \(\rho\) in the GS class and, as in Section 3, we denote the spin-field by \(\tau\). The correlation length of order \(\sigma>0\), mentioned in the introduction, is given by
\[\xi_{\sigma}(\rho,\beta):=\left(\frac{\sum_{x\in\mathbb{Z}^{d}}|x|^{\sigma} \langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}}{\chi(\rho,\beta)}\right)^{1/ \sigma}.\]
We assume that we are given an interaction \(J\) on \(\mathbb{Z}^{d}\) satisfying (**A1**)-(**A5**) and such that the above quantity can be defined for \(\sigma\) small enough throughout the critical phase.
The following result can be found in [22] and is a direct consequence of the Messager-Miracle-Sole inequality.
**Proposition D.1**.: _Let \(\beta<\beta_{c}(\rho)\). There exists a constant \(C>0\) such that for all \(x\in\mathbb{Z}^{d}\),_
\[\langle\tau_{0}\tau_{x}\rangle_{\rho,\beta}\leq C\frac{\chi(\rho,\beta)\xi_{ \sigma}(\rho,\beta)^{\sigma}}{(1+|x|)^{d+\sigma}}.\]
Proof.: Using (**MMS2**),
\[|\{y\notin\Lambda_{|x|/(2d)},\,S_{\rho,\beta}(y)\geq S_{\rho,\beta}(x)\}|\geq C _{1}(1+|x|)^{d},\]
for some \(C_{1}>0\). As a consequence, since every such \(y\) satisfies \(|y|\geq|x|/(2d)\) and \(S_{\rho,\beta}(y)\geq S_{\rho,\beta}(x)\),
\[\chi(\rho,\beta)\xi_{\sigma}(\rho,\beta)^{\sigma}=\sum_{y\in \mathbb{Z}^{d}}|y|^{\sigma}S_{\rho,\beta}(y)\geq C_{2}(1+|x|)^{d+\sigma}S_{ \rho,\beta}(x),\]
which concludes the proof.
Recall that the _renormalised coupling constant_ of order \(\sigma\) is defined by,
\[g_{\sigma}(\rho,\beta):=-\frac{1}{\chi(\rho,\beta)^{2}\xi_{\sigma}(\rho,\beta) ^{d}}\sum_{x,y,z\in\mathbb{Z}^{d}}U^{\rho,\beta}_{4}(0,x,y,z).\]
**Theorem D.2** (The bubble condition implies triviality).: _Let \(d\geq 2\). For a reflection positive model in \(\mathbb{Z}^{d}\) with an interaction \(J\) satisfying the above conditions and such that_
\[B(\rho,\beta_{c}(\rho))=\sum_{x\in\mathbb{Z}^{d}}\langle\tau_{0} \tau_{x}\rangle^{2}_{\rho,\beta_{c}(\rho)}<\infty,\]
_one has,_
\[\lim_{\beta\nearrow\beta_{c}(\rho)}g_{\sigma}(\rho,\beta)=0.\]
Proof.: Using the tree diagram bound (4.7), we get
\[0\leq g_{\sigma}(\rho,\beta)\leq 2\frac{\chi(\rho,\beta)^{2}}{\xi_{ \sigma}(\rho,\beta)^{d}}.\] (D.1)
Now, take \(L\gg\varepsilon>0\) to be fixed later. Write
\[\chi(\rho,\beta)=\underbrace{\chi_{\varepsilon\xi_{\sigma}(\rho,\beta)}(\rho, \beta)}_{(1)}+\underbrace{\left(\chi_{L\xi_{\sigma}(\rho,\beta)}(\rho,\beta)- \chi_{\varepsilon\xi_{\sigma}(\rho,\beta)}(\rho,\beta)\right)}_{(2)}+ \underbrace{\left(\chi(\rho,\beta)-\chi_{L\xi_{\sigma}(\rho,\beta)}(\rho,\beta )\right)}_{(3)}.\]
Using Cauchy-Schwarz inequality one gets for \(C_{1}>0\),
\[(1)\leq C_{1}\varepsilon^{d/2}\xi_{\sigma}(\rho,\beta)^{d/2}\sqrt{B(\rho, \beta_{c}(\rho))},\]
and for \(C_{2}=C_{2}(L,\varepsilon)>0\),
\[(2)\leq C_{2}\xi_{\sigma}(\rho,\beta)^{d/2}\sqrt{B_{L\xi_{\sigma}(\rho,\beta)} (\rho,\beta)-B_{\varepsilon\xi_{\sigma}(\rho,\beta)}(\rho,\beta)}.\]
Moreover, using Proposition D.1, we get that for \(C_{3}>0\),
\[(3)\leq C_{3}\frac{\xi_{\sigma}(\rho,\beta)^{d/2}}{L^{\sigma}}\frac{\chi(\rho,\beta)}{\xi_{\sigma}(\rho,\beta)^{d/2}}.\]
Putting all the pieces together we get that,
\[\frac{\chi(\rho,\beta)}{\xi_{\sigma}(\rho,\beta)^{d/2}}\leq C_{1}\sqrt{B(\rho,\beta_{c})}\varepsilon^{d/2}+C_{2}\sqrt{B_{L\xi_{\sigma}(\rho,\beta)}(\rho, \beta)-B_{\varepsilon\xi_{\sigma}(\rho,\beta)}(\rho,\beta)}+\frac{C_{3}}{L^{ \sigma}}\frac{\chi(\rho,\beta)}{\xi_{\sigma}(\rho,\beta)^{d/2}}.\]
Fix \(L>0\) large enough so that \(\frac{C_{3}}{L^{\sigma}}<1\). Using the left-continuity of the two-point function, and the fact that \(\xi_{\sigma}(\rho,\beta)\to\infty\) as \(\beta\nearrow\beta_{c}(\rho)\) together with the monotone convergence theorem, we get that
\[B_{L\xi_{\sigma}(\rho,\beta)}(\rho,\beta),B_{\varepsilon\xi_{\sigma}(\rho, \beta)}(\rho,\beta)\underset{\beta\nearrow\beta_{c}(\rho)}{\longrightarrow}B (\rho,\beta_{c}(\rho)),\]
so that for all \(\varepsilon>0\),
\[\limsup_{\beta\to\beta_{c}(\rho)}\frac{\chi(\rho,\beta)}{\xi_{\sigma}(\rho, \beta)^{d/2}}\leq\frac{C_{1}\sqrt{B(\rho,\beta_{c}(\rho))}}{1-\frac{C_{3}}{L^ {\sigma}}}\varepsilon^{d/2},\]
which yields the result using (D.1).
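For the reader's convenience, the Cauchy-Schwarz step behind the bound on (1) reads, writing \(\chi_{L}\) for the susceptibility restricted to the box \(\Lambda_{L}\) and \(\xi_{\sigma}=\xi_{\sigma}(\rho,\beta)\),

\[(1)=\sum_{x\in\Lambda_{\varepsilon\xi_{\sigma}}}S_{\rho,\beta}(x)\leq|\Lambda_{\varepsilon\xi_{\sigma}}|^{1/2}\Big{(}\sum_{x\in\mathbb{Z}^{d}}S_{\rho,\beta}(x)^{2}\Big{)}^{1/2}\leq C_{1}\varepsilon^{d/2}\xi_{\sigma}^{d/2}\sqrt{B(\rho,\beta)},\]

together with \(B(\rho,\beta)\leq B(\rho,\beta_{c}(\rho))\) for \(\beta\leq\beta_{c}(\rho)\), which follows from the monotonicity of the two-point function in \(\beta\); the bound on (2) is obtained in the same way on the corresponding annulus.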
**Remark D.3**.: _The above result could be extended to more general models in the GS class: if \(J\) satisfies \(\mathbf{(A1)}\)-\(\mathbf{(A4)}\), and if we consider an interaction \(J\) for which both the bubble condition and the MMS inequalities hold (or more precisely \(\mathbf{(MMS2)}\)), then the renormalised coupling constant vanishes at criticality. Using the proof of the MMS inequality of [1], we may then extend our result to finite-range interactions28._
Footnote 28: Recall that these interactions are not reflection positive in general.
|
2309.15584 | Nonperturbative aspects of two-dimensional $T\bar{T}$-deformed scalar
theory from functional renormalization group | We study $T\bar{T}$-deformed $O(N)$ scalar field theory in two-dimensional
spacetime using the functional renormalization group. We derive the $\beta$
functions for the couplings in the system and explore the fixed points. In
addition to the Gaussian (trivial) fixed point, we find a nontrivial fixed
point at which a new universality class exists. The deformation parameter
becomes relevant at the nontrivial fixed point. Therefore, the $T\bar
T$-deformed scalar field theory in two-dimensional spacetime could be defined
as a nonperturbatively renormalizable theory. | Jie Liu, Junichi Haruna, Masatoshi Yamada | 2023-09-27T11:35:12Z | http://arxiv.org/abs/2309.15584v3 | Non-perturbative aspects of two-dimensional \(T\bar{T}\)-deformed scalar theory from functional renormalization group
###### Abstract
The \(T\bar{T}\)-deformed scalar field theory in two-dimensional spacetime is studied by using the functional renormalization group. We derive the beta functions for the couplings in the system and explore the fixed points. In addition to the Gaussian (trivial) fixed point, we find a non-trivial fixed point at which a new universality class exists and, in particular, the deformation parameter becomes relevant. Therefore, the \(T\bar{T}\)-deformed scalar field theory in two-dimensional spacetime could be defined as a non-perturbatively renormalizable theory.
## I Introduction
Quantum field theory (QFT) is the central mathematical language for describing the dynamics of quantum particles. In general, however, most models of QFT are not exactly solvable, even in small spacetime dimensions. Recently, the \(T\bar{T}\)-deformation of two-dimensional QFT [1; 2] has attracted attention as an integrable deformation at the quantum level, in the sense that the energy spectra of the deformed theory are obtained exactly. See, e.g., Ref. [3] for a review. The \(T\bar{T}\)-deformed action of the massive \(O(N)\) vector model is given, at the lowest order of the deformation parameter, by
\[S=\int\mathrm{d}^{2}x\left[\frac{1}{2}(\partial_{\mu}\vec{\phi})^{2}-\frac{m ^{2}}{2}\vec{\phi}^{2}+\alpha\det(T_{\mu\nu})\right], \tag{1}\]
with \(\vec{\phi}=(\phi^{1},\cdots,\phi^{N})\) and the energy-momentum tensor
\[T_{\mu\nu}=\partial_{\mu}\vec{\phi}\cdot\partial_{\nu}\vec{\phi}-\frac{\eta_{ \mu\nu}}{2}\left((\partial_{\rho}\vec{\phi})^{2}-m^{2}\vec{\phi}^{2}\right). \tag{2}\]
Here, \(\eta_{\mu\nu}=\mathrm{diag}(-1,1)\) is the flat metric and \(\alpha\) is called the deformation parameter with mass-dimension \(-2\). The deformation parameter is a canonically irrelevant coupling in the infrared (IR) regime. Hence, the theory (1) is perturbatively non-renormalizable. In this sense, the \(T\bar{T}\)-deformation is also called the "irrelevant" deformation.
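To make the canonical counting explicit (this is standard dimensional analysis, not a result specific to Refs. [1; 2]): since \([\alpha]=-2\), the dimensionless combination is \(\tilde{\alpha}_{k}=k^{2}\alpha\) at RG scale \(k\), whose canonical flow is

\[\partial_{t}\tilde{\alpha}_{k}=2\tilde{\alpha}_{k}+\cdots\qquad\Longrightarrow\qquad\tilde{\alpha}_{k}\propto\left(\frac{k}{\Lambda}\right)^{2}\to 0\quad(k\to 0),\]

so that \(\tilde{\alpha}_{k}\) shrinks in the IR and grows toward the UV, which is the sense in which the deformation is "irrelevant".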
The \(T\bar{T}\)-deformed theories have several attractive features. One is a relation with the string action. It has been shown in Ref. [4] that, with an appropriate change of variables and large \(\alpha\), the deformed massless \(O(N)\) vector model (1) can be written in the form of the Nambu-Goto action in an \((N+2)\)-dimensional target space in the static gauge. The inverse of the deformation parameter, \(\alpha^{-1}\), is identified with the string tension.
Another noteworthy fact is that the deformed action can be written as a scalar theory coupled to gravity in two-dimensional spacetime. To see this, we first rewrite the determinant term in Eq. (1) by introducing an auxiliary symmetric tensor field \(C_{\mu\nu}\) such that within the path integral formalism
\[\alpha\det(T_{\mu\nu})=-\frac{1}{2}T_{\mu\nu}C^{\mu\nu}+\frac{1}{8\alpha}\det (C_{\mu\nu}). \tag{3}\]
Thus, the determinant term is decomposed into the interactions between the scalar field \(\vec{\phi}\) and the auxiliary tensor field \(C_{\mu\nu}\). Here, the tensor field is decomposed as \(C_{\mu\nu}=\gamma_{\mu\nu}+C\delta_{\mu\nu}/2\) with the trace mode \(C=\delta^{\mu\nu}C_{\mu\nu}\) and the traceless mode \(\gamma_{\mu\nu}\) (which satisfies \(\delta^{\mu\nu}\gamma_{\mu\nu}=0\)). Defining a new tensor field \(g^{\mu\nu}\equiv(\delta^{\mu\nu}-\gamma^{\mu\nu})/(1+C)\), the action (1) can be rewritten as
\[S=\int\mathrm{d}^{2}x\sqrt{-g}\left[\frac{1}{2}g^{\mu\nu}\partial_{\mu}\vec{ \phi}\cdot\partial_{\nu}\vec{\phi}-\frac{m^{2}}{2}\vec{\phi}^{2}+\frac{1}{8 \alpha}\det(C_{\mu\nu})\right], \tag{4}\]
where \(\sqrt{-g}=[-\det(g^{\mu\nu})]^{-\frac{1}{2}}\).
In the classical action (1) or (4), there is no kinetic term for the tensor field; i.e., \(C_{\mu\nu}\) (or equivalently \(g^{\mu\nu}\)) is a non-dynamical field. From the equations of motion for \(C\) and \(\gamma_{\mu\nu}\), these fields are regarded as composite operators:
\[C\sim\vec{\phi}^{2},\qquad\gamma_{\mu\nu}\sim\partial_{\mu}\vec{\phi}\cdot \partial_{\nu}\vec{\phi}-\frac{\delta_{\mu\nu}}{2}\partial_{\rho}\vec{\phi} \cdot\partial_{\rho}\vec{\phi}\,. \tag{5}\]
Thus, the scalar field provides the leading dynamics: its quantum fluctuations induce an infinite number of effective interactions and make \(C_{\mu\nu}\) dynamical.
The deformation parameter plays a crucial role in these aspects of the deformed action (1). In the limit of \(\alpha\to 0\) (corresponding to infinite string tension), Eq. (1) boils down to a simple free scalar theory as a QFT model. When \(\alpha\) is large, the degrees of freedom of \(C_{\mu\nu}\) are expected to become dynamical, as mentioned above, and the system tends to describe a string-like object. Therefore, the change of \(\alpha\) may connect between QFT and string theory. This picture is widely inferred from the fact that \(\alpha\) is canonically irrelevant and shrinks to zero in the low-energy regime while it grows in the high-energy regime.
However, in deformed theories, there is an issue of negative norm states for \(C_{\mu\nu}\). The large-\(N\) analysis for
the action (1) has been carried out in Refs. [5; 6] and has shown that the quantum loop effects of the scalar field induce the kinetic term of \(C_{\mu\nu}\) with a negative sign. This fact implies that the \(T\bar{T}\)-deformed theories are ill-defined in the large-\(N\) limit.
Understanding the features of the \(T\bar{T}\)-deformed theory is expected to lead to deep insights into both QFT and string theory. The \(T\bar{T}\)-deformation was originally proposed in the context of studies on quantum integrable systems. In addition to the methods for integrable systems, such as the Bethe ansatz [1; 2] and the \(S\)-matrix bootstrap [7], earlier studies on the \(T\bar{T}\)-deformation have mainly relied on perturbation theory [8; 9; 10; 11], the methods of large-\(N\) expansion [5; 6], and holography [12; 13; 14; 15]. In this Letter, we perform a non-perturbative analysis of the \(T\bar{T}\)-deformed \(O(N)\) scalar theory (1) using the functional renormalization group [16]. In particular, we aim to investigate the impact of the non-perturbative dynamics of \(C_{\mu\nu}\), which cannot be captured by the methods mentioned above. We derive the renormalization group (RG) equations for an effective theory of Eq. (1) and then analyze their fixed-point structure.
## II Effective action for \(T\bar{T}\)-deformed scalar theory
For the study of renormalization group (RG) flows of the \(T\bar{T}\)-deformed scalar field theory in two dimensions, the central method is the Wetterich equation [17] which is formulated as a functional partial differential equation for the scale-dependent (one-particle irreducible) effective action \(\Gamma_{k}\):
\[\partial_{t}\Gamma_{k}=\frac{1}{2}\text{Tr}\,\left[\left(\Gamma_{k}^{(2)}+ \mathcal{R}_{k}\right)^{-1}\cdot\partial_{t}\mathcal{R}_{k}\right]. \tag{6}\]
Here, \(k\) is the ultraviolet (UV) cutoff scale and \(\partial_{t}=k\partial_{k}\) is the dimensionless scale derivative. \(\Gamma_{k}^{(2)}\) is the full two-point function obtained by the second functional derivative with respect to the super-field \(\Phi\), namely \(\Gamma_{k}^{(2)}(p)=\delta^{2}\Gamma_{k}/\delta\Phi(-p)\delta\Phi(p)\); Tr acts on all spaces on which \(\Phi\) is defined, such as momentum and \(O(N)\) space; and \(\mathcal{R}_{k}(p)\) is the regulator function realizing the Wilsonian coarse-graining procedure. In this work, we use the Litim cutoff function [18] for the regulator, i.e., \(\mathcal{R}_{k}(p)=(k^{2}-p^{2})\theta(k^{2}-p^{2})\).
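As an illustration of how the momentum integrals close with this regulator (a standard evaluation, independent of the ansatz below): for a single massive scalar mode in \(d=2\), using \(p^{2}+\mathcal{R}_{k}(p)=k^{2}\) for \(p^{2}\leq k^{2}\) and \(\partial_{t}\mathcal{R}_{k}(p)=2k^{2}\theta(k^{2}-p^{2})\),

\[\frac{1}{2}\int\frac{\mathrm{d}^{2}p}{(2\pi)^{2}}\,\frac{\partial_{t}\mathcal{R}_{k}(p)}{p^{2}+\mathcal{R}_{k}(p)+m_{k}^{2}}=\frac{1}{2}\,\frac{2k^{2}}{k^{2}+m_{k}^{2}}\,\frac{\pi k^{2}}{(2\pi)^{2}}=\frac{k^{4}}{4\pi(k^{2}+m_{k}^{2})}.\]

Dimensionless integrals of this type constitute the threshold functions denoted \(\mathcal{I}_{i}\) below.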
Now, we make an appropriate ansatz for the effective action. In this work, we are mainly interested in the "dynamicalization" of \(C_{\mu\nu}\) and the RG flow of the deformation parameter. Hence, our ansatz for the effective action in two-dimensional Euclidean spacetime is made to be
\[\Gamma_{k}=\int\mathrm{d}^{2}x\left[\frac{1}{2}(\partial_{\mu} \vec{\phi})^{2}+\frac{m_{k}^{2}}{2}\vec{\phi}^{2}+\frac{\kappa_{k}}{2}T_{\mu \nu}C^{\mu\nu}+\Lambda_{k}+\lambda_{k}C\right.\] \[+\left.\frac{Z_{C,k}}{2}(\partial_{\rho}C^{\mu\nu})^{2}-\frac{1}{ 8\alpha_{k}}\det(C^{\mu\nu})+\beta_{k}C_{\mu\nu}C^{\mu\nu}\right]\!. \tag{7}\]
Here, the energy-momentum tensor \(T_{\mu\nu}\) takes the same form as in Eq. (2), with the mass parameter \(m_{k}\). The parameters \(\Lambda_{k}\) (corresponding to the cosmological constant) and \(\lambda_{k}\) are induced by quantum effects but do not participate in the dynamics. Note that the invariance of the vacuum \(|\Omega\rangle\) under translations and Lorentz transformations results in \(\langle\Omega|\gamma_{\mu\nu}|\Omega\rangle=0\), and thus no term linear in \(\gamma_{\mu\nu}\) appears in the effective action (7). The (dimensionless) field renormalization factor \(Z_{C,k}\) describes the dynamicalization of \(C_{\mu\nu}\). For \(Z_{C,k}=0\), \(C_{\mu\nu}\) has no propagating degrees of freedom, while the use of the local potential approximation (LPA) [19; 20], \(Z_{C,k}=1\), implies that \(C_{\mu\nu}\) is _a priori_ dynamical. The field renormalization \(C_{\mu\nu}\to Z_{C,k}^{-1/2}C_{\mu\nu}\) involves the anomalous dimension \(\eta_{C}=-\partial_{t}Z_{C,k}/Z_{C,k}\), which contributes to the beta functions for interactions involving \(C_{\mu\nu}\), such as \(\alpha_{k}\) and \(\beta_{k}\).
Note that the determinant formula allows us to write \(2\det(C_{\mu\nu})=\epsilon_{\mu\rho}\epsilon_{\nu\sigma}C_{\mu\nu}C_{\rho \sigma}=(\delta^{\mu\nu}\delta^{\rho\sigma}-\delta^{\mu\rho}\delta^{\sigma \nu})C_{\mu\nu}C_{\rho\sigma}=C^{2}/2-\gamma_{\mu\nu}\gamma^{\mu\nu}\), while one has \(C_{\mu\nu}C^{\mu\nu}=C^{2}/2+\gamma_{\mu\nu}\gamma^{\mu\nu}\). Therefore, the terms \(\det(C^{\mu\nu})\) and \(C_{\mu\nu}C^{\mu\nu}\) in Eq. (7) can be written as linear combinations of \(C^{2}\) and \(\gamma_{\mu\nu}\gamma^{\mu\nu}\). Due to Eq. (5), higher power terms of \(C_{\mu\nu}\), such as \((\partial_{\rho}C^{\mu\nu})^{2}\) and \(C_{\mu\nu}C^{\mu\nu}\), correspond to higher derivative operators.
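Explicitly, these identities follow by inserting the decomposition \(C_{\mu\nu}=\gamma_{\mu\nu}+\frac{C}{2}\delta_{\mu\nu}\), with \(\delta^{\mu\nu}\gamma_{\mu\nu}=0\) and \(\delta_{\mu\nu}\delta^{\mu\nu}=2\):

\[C_{\mu\nu}C^{\mu\nu}=\gamma_{\mu\nu}\gamma^{\mu\nu}+\frac{C^{2}}{2},\qquad 2\det(C_{\mu\nu})=C^{2}-C_{\mu\nu}C^{\mu\nu}=\frac{C^{2}}{2}-\gamma_{\mu\nu}\gamma^{\mu\nu}.\]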
Applying Eq. (6) to Eq. (7), we obtain the flow equations for the couplings, which we denote here symbolically by \(g_{i,k}\). To analyze the structure of fixed points, we need to define dimensionless couplings \(\tilde{g}_{i,k}=k^{-d_{i}}g_{i,k}\), with \(k\) the RG scale and \(d_{i}\) the canonical mass dimension of \(g_{i,k}\). Then, we obtain \(\partial_{t}\tilde{g}_{i,k}=\beta_{i}(\{\tilde{g}_{k}\})\), where \(\{\tilde{g}_{k}\}\) denotes the set of dimensionless couplings and \(\beta_{i}\) is the beta function for \(\tilde{g}_{i,k}\). The beta functions typically take the following form:
\[\partial_{t}\tilde{g}_{i,k}=\beta_{i}(\{\tilde{g}_{k}\})=-d_{i}\tilde{g}_{i,k}+ B_{i,k}(\{\tilde{g}_{k}\}), \tag{8}\]
where \(B_{i,k}(\{\tilde{g}_{k}\})\) denotes quantum loop corrections to the beta functions of the coupling \(\tilde{g}_{i,k}\). The fixed points \(\tilde{g}_{i,k}^{*}\) are obtained by looking for zero points in the beta functions: \(\beta_{i}(\{\tilde{g}_{k}^{*}\})=0\) for all \(i\).
Once a fixed point is found, one can analyze the flows of couplings around the fixed point. Performing the Taylor expansion of the beta function up to the linear order, i.e. \(\beta_{i}(\{\tilde{g}_{k}\})\approx\partial\beta_{i}/\partial g_{j,k}|_{\tilde{g}_ {k}=\tilde{g}_{k}^{*}}(\tilde{g}_{j,k}-\tilde{g}_{j,k}^{*})\equiv-T_{ij}(\tilde{g} _{j,k}-\tilde{g}_{j,k}^{*})\), the solution to the RG equations reads
\[\tilde{g}_{i,k}=\tilde{g}_{i,k}^{*}+\sum_{j}C_{j}V_{i}^{j}\left(\frac{k}{\Lambda} \right)^{-\theta_{j}}, \tag{9}\]
where \(V_{i}^{j}\) is the matrix diagonalizing the stability matrix \(T_{ij}\), and \(C_{j}\) are constant coefficients given at a reference scale \(\Lambda\). The critical exponents \(\theta_{j}\) are the eigenvalues of \(T_{ij}\) and play a crucial role in the energy scaling of the coupling constants \(\tilde{g}_{i}\) around the fixed point. A coupling constant with a positive critical exponent grows for \(k\to 0\)
and is called relevant. On the other hand, the irrelevant coupling constant with the negative critical exponent shrinks towards the fixed point for \(k\to 0\). Conversely, in the continuum limit \(k\to\infty\), relevant couplings converge to the fixed point, while irrelevant couplings diverge. To avoid such a divergence, we need fine-tuning for irrelevant couplings so that they do not deviate from the fixed point. This flow behavior means that relevant couplings are free parameters in the continuum limit; thus, a continuous and renormalizable theory can be constructed at a fixed point with a finite number of relevant couplings.
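The linearized analysis just described is easy to automate. The following Python sketch uses a toy two-coupling system, invented purely for illustration (it is not the beta functions of Eq. (7)), to show the generic workflow: find zeros of the beta functions and read off critical exponents as eigenvalues of the stability matrix.

```python
import numpy as np
from scipy.optimize import fsolve

def beta(g):
    """Toy beta functions for two dimensionless couplings (illustrative only,
    not the beta functions of Eq. (7)): canonical scaling plus schematic
    quantum corrections."""
    g1, g2 = g
    return np.array([-2.0 * g1 + g1 * g2,   # beta_1 = -2 g1 + g1 g2
                     g2 ** 2 - g1])         # beta_2 = g2^2 - g1

# Non-trivial fixed point: start the root search away from the origin.
g_star = fsolve(beta, x0=[4.5, 2.5])        # converges to (4, 2) here

# Stability matrix T_ij = -d(beta_i)/d(g_j) at the fixed point,
# estimated by central finite differences.
eps = 1e-6
T = np.zeros((2, 2))
for j in range(2):
    dg = np.zeros(2)
    dg[j] = eps
    T[:, j] = -(beta(g_star + dg) - beta(g_star - dg)) / (2.0 * eps)

theta = np.linalg.eigvals(T)                # critical exponents
print(g_star, theta)                        # Re(theta) > 0 means relevant
```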
In particular, at the Gaussian fixed point \(\tilde{g}^{*}_{i,k}=0\) that characterizes the perturbation theory, we have \(V^{j}_{i}=\delta^{j}_{i}\) and \(\theta_{i}=d_{i}\) for \(\tilde{g}_{i,k}\). Hence, from the dimensional analysis of couplings, one can judge the renormalizability of a system as usual. In the system (7) at the Gaussian fixed point (\(\tilde{g}^{*}_{i,k}=0\)), one has
\[\theta_{\Lambda} =2, \theta_{\lambda} =2, \theta_{m^{2}} =2,\] \[\theta_{\kappa} =0, \theta_{\alpha} =-2, \theta_{\beta} =2. \tag{10}\]
Note that \(\tilde{\kappa}_{k}\) has a vanishing critical exponent and is called a marginal coupling; whether it is marginally relevant or irrelevant is determined by higher-order quantum corrections. Next, we study the possibility of a non-trivial fixed point in the system (7) and the associated critical exponents.
## III RG Flows and Fixed Point Structure
The beta functions for the system (7) can be derived using the Wetterich equation (6). Their explicit forms are too long to be shown here, so we refrain from writing them down. Instead, we discuss the structure of the beta functions and a mechanism for obtaining a non-trivial fixed point.
The coupling \(\kappa_{k}\) gives a crucial interaction that transmits the dynamics of the scalar field to the tensor field. Switching off \(\kappa_{k}\) decouples the scalar sector from the tensor one and makes the system a free theory. Therefore, we start by looking at the beta function of \(\tilde{\kappa}_{k}(=Z_{C,k}^{-1/2}\kappa_{k})\). The canonical dimension of \(\tilde{\kappa}_{k}\) is zero, so that quantum corrections give a non-zero beta function. Within the effective action (7), all quantum corrections are proportional to \(\tilde{\kappa}_{k}^{3}\). Hence, a non-trivial fixed point value of \(\tilde{\kappa}_{k}\) is not obtained from its beta function. However, since the operator \(T_{\mu\nu}C^{\mu\nu}\) includes the kinetic term and the mass term of \(\vec{\phi}\), the beta function of \(\tilde{\kappa}_{k}\) receives different powers of \(m_{k}^{2}\). Consequently, a non-trivial fixed point \(\tilde{m}_{k}^{2*}\) is found from the beta function for \(\tilde{\kappa}_{k}\).
For a fixed finite value of \(\tilde{m}_{k}^{2*}\) found from the zero of the beta function for \(\tilde{\kappa}_{k}\), we obtain an associated finite value \(\tilde{\kappa}_{k}^{*}\) from the competition between the canonical scaling and the quantum effects in the beta function for \(\tilde{m}_{k}^{2}\). More specifically, the beta function for \(\tilde{m}_{k}^{2}\) takes the form \(\beta_{m^{2}}=-2\tilde{m}_{k}^{2}+\tilde{\kappa}_{k}^{2}\mathcal{I}_{m^{2}}(\tilde{m}_{k}^{2},\tilde{\alpha}_{k},\tilde{\beta}_{k})\), where the \(\mathcal{I}_{i}\) denote threshold functions. For a finite value of \(\tilde{m}_{k}^{2*}\), there exists a non-vanishing value of \(\tilde{\kappa}_{k}\) such that \(\beta_{m^{2}}=0\), due to the cancellation between \(-2\tilde{m}_{k}^{2*}\) and \(\tilde{\kappa}_{k}^{*2}\mathcal{I}_{m^{2}}(\tilde{m}_{k}^{2*},\tilde{\alpha}_{k}^{*},\tilde{\beta}_{k}^{*})\). Once a finite value \(\tilde{\kappa}_{k}^{*}\) is found, non-trivial fixed points for \(\tilde{\alpha}_{k}\) and \(\tilde{\beta}_{k}\) are obtained in a similar way. Note that the threshold functions give finite values for fixed values of the couplings.
We first explore non-trivial fixed points in the case of the LPA, i.e., \(Z_{C,k}=1\), for which \(\eta_{C}=0\). Table 1 shows the fixed points found for \(N=1,2,3\). For \(N>3\), no reliable non-trivial fixed point was found. This fact implies that such a fixed point is not accessible in the large-\(N\) analysis. Including the finite anomalous dimension \(\eta_{C}\) slightly modifies the fixed-point values from the LPA. The value of \(\eta_{C}\) at the fixed point is much smaller than one, indicating that the derivative expansion is a valid approximation for the effective action (7).
The critical exponents at the fixed points in Table 1 are summarized in Table 2. Note here that the imaginary parts of \(\theta_{3}\) and \(\theta_{4}\) imply the strong mixing between \(\tilde{m}_{k}^{2}\) and \(\tilde{\kappa}_{k}\). Indeed, such an imaginary part of critical exponents is often observed in asymptotically safe gravity; see, e.g., Ref. [21]. Although in general critical exponents at a non-trivial fixed point are eigenvalues of linear combinations of the original basis, it is convenient to investigate the diagonal parts of the stability matrix \(T_{ij}\) on the coupling basis \(\{\tilde{g}_{i}\}=\{\tilde{\Lambda}_{k},\tilde{\lambda}_{k},\tilde{m}_{k}^{2 },\tilde{\kappa}_{k},\tilde{\alpha}_{k},\tilde{\beta}_{k}\}\) in order to roughly identify the critical exponents with the original basis. For example, for \(N=1\) and with finite \(\eta_{C}\), we have \(\text{diag}(T)\approx(2,\,1.88,\,-6.83,\,-3.39,\,1.75,\,1.75)\). From this fact, the critical exponents \((\theta_{1},\theta_{2},\theta_{3},\theta_{4},\theta_{5},\theta_{6})\) correspond to approximately \((\theta_{\Lambda},\theta_{\lambda},\theta_{m^{2}},\theta_{\kappa},\theta_{ \alpha},\theta_{\beta})\), respectively.
It turns out that the couplings involving the scalar field, \(\tilde{m}_{k}^{2}\) and \(\tilde{\kappa}_{k}\), become irrelevant, while those involving the tensor field, \(\tilde{\alpha}_{k}\) and \(\tilde{\beta}_{k}\), become relevant. Therefore, the tensor field \(C_{\mu\nu}\) (or \(\gamma_{\mu\nu}\) and \(C\)) provides the effective degrees of freedom at low energy.
The flow diagram for the \(N=1\) case with finite \(\eta_{C}\) in the \((\tilde{\beta}_{k},\tilde{\alpha}_{k})\) plane is displayed in Fig. 1, where the arrows indicate flows from the UV to the IR direction and the purple and red points are the non-trivial and Gaussian fixed points, respectively. A separatrix is shown as the green line. To plot it, we have used the fixed-point values of \(\tilde{\kappa}_{k}\) and \(\tilde{m}_{k}^{2}\), for which the Gaussian fixed point is shifted from \(\tilde{\beta}_{k}^{*}=\tilde{\alpha}_{k}^{*}=0\) to \(\tilde{\beta}_{k}^{*}=-0.239\) and \(\tilde{\alpha}_{k}^{*}=0\). In other words, Fig. 1 displays the two-dimensional subspace of \(\tilde{\alpha}_{k}\) and \(\tilde{\beta}_{k}\) at fixed values of \(\tilde{\kappa}_{k}\) and \(\tilde{m}_{k}^{2}\) within the four-dimensional theory space.
It can be seen from Fig. 1 that there are four different phases in the \(\tilde{\beta}_{k}\)-\(\tilde{\alpha}_{k}\) plane around the non-trivial fixed point. In particular, depending on the boundary condition for these couplings, the deformation parameter grows towards the IR direction. This behavior contrasts with the flows around the Gaussian fixed point.
## IV Summary and discussion
In this Letter, we have performed a functional renormalization group study of the two-dimensional \(T\bar{T}\)-deformed scalar field theory. As seen from Eq. (10) and the flow diagram in Fig. 1, the \(T\bar{T}\)-deformation term \(\det T_{\mu\nu}\) is irrelevant around the Gaussian fixed point, so we cannot define a continuum quantum field theory with \(T\bar{T}\) interactions there. This result means that the ordinary perturbative analysis is no longer valid for the \(T\bar{T}\)-deformed theory.
The novel finding in this work is the existence of the non-trivial UV fixed point. This finding may lead to defining the \(T\bar{T}\)-deformed theory in a non-perturbative and renormalizable way, as an asymptotically safe theory around the non-trivial fixed point. In addition, it may provide a new picture of the \(T\bar{T}\)-deformed theory. In particular, the fact that the deformation parameter \(\alpha_{k}\) becomes relevant at the non-trivial fixed point may imply the existence of different phases. In the strong-coupling phase \(\alpha_{k}>\alpha_{k}^{*}\), \(\alpha_{k}\) becomes large along the RG flow toward the IR regime, while the flow of \(\alpha_{k}\) in the weak-coupling phase \(\alpha_{k}<\alpha_{k}^{*}\) converges to the Gaussian fixed point in the IR limit. In other words, depending on the value of the deformation parameter, the theory could show different behaviors in the IR regime. This result contrasts with the naive picture from perturbation theory, where the flow of \(\alpha_{k}\) around the Gaussian fixed point connects a free scalar field theory (\(\alpha_{k}\to 0\) in the IR regime) and the Nambu-Goto action (\(\alpha_{k}\to\infty\) in the UV regime).
\begin{table}
\begin{tabular}{c c c c c c c}
\hline
 & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) & \(\theta_{4}\) & \(\theta_{5}\) & \(\theta_{6}\) \\
\hline
\(N=1\) (LPA) & \(2\) & \(2\) & \(-3.22+36.6i\) & \(-3.22-36.6i\) & \(4.37\) & \(1.91\) \\
(w/ \(\eta_{C}\)) & \(2\) & \(1.88\) & \(-6.20+37.3i\) & \(-6.20-37.3i\) & \(4.02\) & \(1.68\) \\
\(N=2\) (LPA) & \(2\) & \(2\) & \(-2.69+80.6i\) & \(-2.69-80.6i\) & \(3.40\) & \(1.91\) \\
(w/ \(\eta_{C}\)) & \(2\) & \(1.94\) & \(-4.51+83.2i\) & \(-4.51-83.2i\) & \(3.34\) & \(1.82\) \\
\(N=3\) (LPA) & \(2\) & \(2\) & \(-2.41+211i\) & \(-2.41-211i\) & \(2.84\) & \(1.94\) \\
(w/ \(\eta_{C}\)) & \(2\) & \(1.98\) & \(-3.73+218i\) & \(-3.73-218i\) & \(2.88\) & \(1.91\) \\
\hline
\end{tabular}
\end{table}
Table 2: Critical exponents at the non-trivial fixed points listed in Table 1 for several values of \(N\). “LPA” is an abbreviation of the local potential approximation.
Once the theory is scale-invariant at the fixed point, it entails conformal invariance thanks to the c-theorem [22]. At the same time, this theory cannot describe the dynamics of the Nambu-Goldstone bosons associated with spontaneous breaking of the global \(O(N)\) symmetry, which is prohibited by the Coleman-Hohenberg-Mermin-Wagner theorem [23; 24; 25]. Therefore, a conformal field theory with global \(O(N)\) symmetry should describe this UV fixed point. Specifying this CFT in more detail is left for future work.
Another future direction is to study the stability of our results when increasing the truncation level, especially considering higher-order terms of the \(T\bar{T}\)-deformation. In this study, we consider the lowest-order term of the \(T\bar{T}\)-deformed massive vector model with respect to the deformation parameter \(\alpha\). However, the finite \(T\bar{T}\)-deformation of the free massless \(O(N)\)-vector model is the Nambu-Goto action. Thus, the relation between this UV fixed point and string theory is worth further investigating.
## Acknowledgements
The work of M. Y. is supported by the National Science Foundation of China (NSFC) under Grant No. 12205116 and the Seeds Funding of Jilin University.
|
2310.20474 | Critical Role of Artificially Intelligent Conversational Chatbot | Artificially intelligent chatbot, such as ChatGPT, represents a recent and
powerful advancement in the AI domain. Users prefer them for obtaining quick
and precise answers, avoiding the usual hassle of clicking through multiple
links in traditional searches. ChatGPT's conversational approach makes it
comfortable and accessible for finding answers quickly and in an organized
manner. However, it is important to note that these chatbots have limitations,
especially in terms of providing accurate answers as well as ethical concerns.
In this study, we explore various scenarios involving ChatGPT's ethical
implications within academic contexts, its limitations, and the potential
misuse by specific user groups. To address these challenges, we propose
architectural solutions aimed at preventing inappropriate use and promoting
responsible AI interactions. | Seraj A. M. Mostafa, Md Z. Islam, Mohammad Z. Islam, Fairose Jeehan, Saujanna Jafreen, Raihan U. Islam | 2023-10-31T14:08:07Z | http://arxiv.org/abs/2310.20474v1 | # Critical Role of Artificially Intelligent Conversational Chatbot
###### Abstract
Artificially intelligent chatbot, such as ChatGPT, represents a recent and powerful advancement in the AI domain. Users prefer them for obtaining quick and precise answers, avoiding the usual hassle of clicking through multiple links in traditional searches. ChatGPT's conversational approach makes it comfortable and accessible for finding answers quickly and in an organized manner. However, it is important to note that these chatbots have limitations, especially in terms of providing accurate answers as well as ethical concerns. In this study, we explore various scenarios involving ChatGPT's ethical implications within academic contexts, its limitations, and the potential misuse by specific user groups. To address these challenges, we propose architectural solutions aimed at preventing inappropriate use and promoting responsible AI interactions.
Keywords: AI, GPT, ChatGPT, Chatbot, Large Language Model (LLM), Language Model (LM), AI Generative Models, Ethics.
## 1 Introduction
Language Models (LMs) like ChatGPT are changing the way we interact with technology. These models use vast data and advanced algorithms to understand and generate human-like text, making interactions natural and engaging. ChatGPT offers practical benefits to users, helping them simplify and enhance various aspects of their daily activities in numerous ways.
ChatGPT is commonly used for quick information retrieval. Students use it for swift answers and assignments, academics and researchers probe its knowledge base across diverse topics, and developers get coding assistance. It benefits historians, scriptwriters, and storytellers, and even generates creative content like poems and songs. Despite its potential, ChatGPT has some downsides. It can be used unethically for hacking or for obtaining quick answers without real understanding, raising concerns about ethics, long-term learning, and critical thinking. Renaud et al. discuss the cyber threats posed by generative AI, which can be used to fool people [1]. Issues related to academic integrity, online exams, and students' skills are addressed in [2, 3, 4, 5]. Additionally, ChatPDF [14] is a tool that generates summaries, answers, and translations, which can be misused. Teenagers might also misuse the technology for inappropriate content.

Furthermore, there is potential for misuse in academia. ChatGPT can assist in writing academic articles and answering assignment or exam questions, which is concerning. Neumann et al. point out that plagiarism tools cannot detect ChatGPT-generated text [6]. Lo notes that conventional plagiarism detectors struggle with ChatGPT-generated text [7]. As an example, Ventayen et al. present a case study in which ChatGPT was asked to write an essay on an existing publication, which the 'Turnitin' plagiarism detection tool later found to have only minimal similarity to its source [8]. That educators struggle to identify sophisticated work generated by AI tools is a further concern for academic integrity [9]. Producing fake research with falsified results using such models is highly possible, as addressed in [10]. More relevant concerns are discussed in [11, 12, 13], where the aim is to explore integrating AI tools for academic and research assistance. Also, there are significant inaccuracies in historical data related to the global south, often referred to as the 'third world,' while errors concerning the western world seem minor. Although the language model acknowledges the potential for inaccuracies, established historical facts should be correct in its system. Otherwise, there is a risk of spreading misinformation that could lead to debate or worse consequences.
In this experiment, we closely examine ChatGPT, a conversational bot designed to communicate like a human. While many people may find it fascinating, our aim is to gain a deeper understanding of its functionality by investigating the following aspects. _i.) Conversations with ChatGPT._ We seek to understand how ChatGPT responds to various types of queries, including sensitive or inappropriate questions. Additionally, we explore methods to elicit responses from ChatGPT when it initially declines to provide an answer. _ii.) Misuse of ChatGPT-generated answers._ People may misuse ChatGPT by presenting false information convincingly, which can be problematic in research and academia. We aim to analyze the potential for misuse of ChatGPT and the creation of inaccurate knowledge. _iii.) Incorporating useful features into ChatGPT._ Our goal is to identify the valuable capabilities of ChatGPT within specific boundaries. We aim to propose methods for integrating ChatGPT into systems to enhance their functionality.
The goal of this experiment is not only to critique ChatGPT but also to find a potential way of providing information that is as correct as possible, since users increasingly rely on it. Our proposal involves incorporating an AI-based plugin that supports ethical practices for various age groups, reduces misuse, improves students' learning procedures, and maintains academic integrity in writing and publishing articles. The main aim is to aid in building a robust knowledge base and to discourage malpractice.
The rest of the paper is organized as follows. Section 2 explores the evolution of search paradigms, from the early days of the internet to the emergence of AI generative models. Section 3 discusses the GPT architecture and the interaction patterns of ChatGPT in relation to human users. Our research methodology and the experimental use cases are detailed in Section 4, along with a discussion of the critiques. In Section 5, we present our conceptual architecture, which integrates ChatGPT features, along with our reflections. Section 6 examines the practical use of ChatGPT, highlighting both its advantages and drawbacks. Finally, we conclude this paper in Section 7, outlining directions for future research.
## 2 Internet, Search Engines and AI Generative Models
Internet, search engines, and AI generative models represent three phases of the modern connected world. First, the internet originated as ArpaNet, through which remote connections between computers were established. The paradigm then shifted to information retrieval and interaction. Search engines, such as Google and Bing, are designed to search and navigate the vast web of information pages based on keyword queries, using indexing techniques. They provide links to existing content that caters to users' needs. AI generative models like ChatGPT, on the other hand, built on advanced natural language processing techniques, focus on understanding and generating human-like text responses. These models engage in dynamic, context-aware conversations and can generate content, explanations, and code, offering a more versatile and interactive approach to information retrieval and user engagement.
**Arpanet and Early Internet (1960s-1980s)**: In the early days of the internet, starting with Arpanet, the primary focus was on connecting computers for research and communication purposes. Information retrieval during this era was basic, relying on file directories and rudimentary keyword-based searches.
**The Emergence of Search Engines (1990s)**: The 1990s witnessed the emergence of search engines like Yahoo and early versions of Google. These search engines revolutionized the internet by making it easier for users to find and navigate web content. They primarily relied on keyword-based indexing and ranking to retrieve relevant web pages.
**Internet Search Maturity (2000s)**: In the early 2000s, Google became the dominant search engine, introducing innovations such as PageRank to improve search results. This period also saw the rise of alternative search engines like Bing, Yahoo, and Yandex, providing users with choices for web search.
**Semantic Search (Late 2000s-2010s)**: Semantic search gained prominence in the late 2000s as search engines aimed to understand the meaning behind user queries. Google's Knowledge Graph and the incorporation of structured data significantly enhanced search results. During this period, specialized semantic search engines, like Wolfram Alpha, emerged, focusing on providing context-aware and knowledge-based results.
**Domain-Specific Search (2010s)**: The 2010s witnessed the proliferation of domain-specific search engines tailored to niche areas. Stack Overflow became a primary resource for programmers seeking solutions to coding challenges. Meanwhile, Google Scholar and PubMed provided specialized access to academic and medical literature, catering to researchers and healthcare professionals. In the realm of music, Spotify focused on music discovery and personalized recommendations.
**AI Generative Models (2010s-Present[2023])**: The latter part of the 2010s marked the emergence of AI generative models like ChatGPT, built on the GPT architecture. These models, including ChatGPT, were trained on extensive text data and exhibited the capability for natural language understanding and generation. ChatGPT, in particular, represents a significant milestone in the evolution of conversational AI, enabling dynamic and context-aware interactions, and content generation.
## 3 ChatGPT - A Conversational Way of Communication
The GPT (Generative Pre-trained Transformer) model series, introduced by OpenAI, predates ChatGPT and includes models like GPT-1, GPT-2, and GPT-3, renowned for their text generation capabilities, with GPT-3 being the most prominent, debuting in June 2020. In contrast, ChatGPT is a specialized adaptation of the GPT series, crafted for natural-sounding conversations akin to human interactions. It was tailored for applications like chatbots, virtual assistants, and other conversational AI. GPT functions as a proficient digital writer, excelling in tasks such as translation, text summarization, and knowledge-based question-answering. It's your trusty companion for text-related chores, from content creation to information extraction. On the other hand, ChatGPT serves as your friendly conversational partner, making it ideal for interactive dialogues. Whether you seek a virtual assistant or crave authentic AI conversations, ChatGPT delivers human-like interactions in the digital realm. While GPT shines with text, ChatGPT is purpose-built for engaging and lifelike digital chats.
ChatGPT is a powerful language model known for its ability to hold human-like conversations. Unlike regular search engines such as Google, Bing, and Yandex, ChatGPT takes a more interactive approach to finding information. While search engines rely on keywords to find web content, ChatGPT talks with you in a natural way, providing detailed responses to your questions. It has a broad knowledge base that covers many topics, up to its last update in September 2021, making it suitable for various tasks.
However, ChatGPT has several limitations, such as potential biases and the possibility of providing incorrect answers [15, 16]. These include its inability to provide real-time information beyond its last update in September 2021, its lack of specialization in offering code-level details compared with dedicated code repositories such as GitHub, its general versatility compared to domain-specific platforms like YouTube, potential biases inherited from its training data, and the possibility of providing incorrect answers. While ChatGPT excels in dynamic conversations and general information provision, it is important to be aware of these limitations when using it for specific tasks.
## 4 Experimental Method and Case Studies
In this work, we experimented with ChatGPT by engaging in conversations about a wide variety of topics, presented below as use cases. We used both GPT-3.5 and GPT-4, including the paid versions, to conduct the experiments. The chats were conducted by users who possess expertise in their respective domains, including education providers, computer scientists, data enthusiasts, school- and college-level instructors, history researchers, and psychologists. They engaged in conversations with ChatGPT to understand its responses and to explore methods of eliciting desired responses. The purpose is not only to identify weaknesses or vulnerabilities specific to certain age groups or user categories but also to enhance these powerful engines for future use as useful plugins, ensuring safety and benefit for everyone.
In the following subsections, we will present four of the case studies we examined during the experiment, encompassing both unethical and ethical approaches in various areas. While ChatGPT's disclaimers explain its capabilities and limitations, we conducted these experiments to propose improvements due to its wide acceptance by users.
### Cracking a password
In this case study, we aim to understand how ChatGPT responds to questions that are not ethical. As an example, we asked ChatGPT how to crack the password of a router that is in a remote location and to which we do not have any physical access. Despite its initial warning about ethical issues, ChatGPT ultimately provided detailed information on how to crack a password. We discovered that a certain conversational approach with this bot can generate unethical answers. Figure 1 displays a generic reply; however, Figure 2 demonstrates a reply with a solution. We provide the entire chat history in [18].
Figure 1: Generic reply for unethical cases
Figure 2: A reply with a solution.
### Writing an article
In this case study, we attempted to write an article using ChatGPT. Our approach involved seeking information on how to write an article and then publishing it on an open-ended public portal. ChatGPT provided reasonably good answers and, when not, at least offered valuable clues. At some point, we also asked ChatGPT to generate sample data, plots, and the necessary tables, which it eventually produced. Once the data preparation was complete, ChatGPT proceeded to explain the dataset, plot, and table, along with an abstract, extended introduction, conclusion, and the other standard chapters required for a scientific publication. At this stage, we are not evaluating the correctness of the information but rather considering the ease of using ChatGPT to create a report by generating datasets that can be made ready for publication with minor modifications. The entire conversation is listed in reference [19], and example snapshots of responses are in Figures 3 and 4.
### Adult/dating sites
We further inquired about dating/adult sites to determine whether there are precautionary checks or procedures in place. In this case, we found that the initial response was quite generic. As a general pattern, the bot did not provide direct answers. However, through a detailed conversation with additional stories, we were able to get the replies. Figure 5 shows a reply stating that it cannot provide an answer like that. However, in Figure 6, we can see a reply with warnings (a change in text color). It is important to note that the reference link contains listings of adult sites, which may not be appropriate for all readers; ChatGPT did not allow us to create and share the link publicly, but we made it available in [20].
### Getting to know the history
In another case study, we prompted it to write something about Shakespeare, and it gave an astounding essay on him. When asked whether Shakespeare was a real person, it highlighted the existing debates and refuted the academic understanding and acceptance of Shakespeare. In addition, when prompted to write about WWI, it produced something acceptable. However, it can easily be misguided with confusing questions. It does not seem to produce correct information about historically important figures, such as Archduke Franz Ferdinand, whose assassination triggered WWI. For example, when asked how many times Franz Ferdinand married, it initially agreed that he had three marriages. When then prompted that he had four marriages, the software surprisingly corrected itself and said he indeed had four wives. At this point, it not only misguided the user but also created apparently false information on its own. With such a practice of creating and providing false information, it can hamper the study of history, or of any humanities-related field for that matter. The consequences can be dangerous. Figures 7 and 8 are some examples of this ambiguity, and the whole story is referenced in [21].

Figure 3: A response with data set generation

Figure 4: A reply with the desired paragraph

Figure 5: A generic reply

Figure 6: A reply with a list of sites

Figure 8: Another ambiguous reply
### Critiques
In this subsection, we discuss our experiences with ChatGPT, which reveal several interesting facts about how this conversational bot behaves. It can provide answers and even rephrase them in different ways depending on how you ask. The intriguing part is that it might agree with one approach but not give the same response in a different context.
* In subsection 4.1 we presented a scenario that is unethical in terms of personal information hacking or malpractice. Though it shows that the generic behavior is not to provide tips on unethical practices, users can successfully get a solution by having a more engaging conversation. Such practices on publicly accessible platforms are alarming for everyday use, especially in the era of deep fakes.
* In the next subsection 4.2, we tried to produce an article, including the creation of datasets, analyzing them, and generating presentable results for the report. It not only does that but also it can explain them in paragraphs of different sizes. This was a simple example we presented; however, it is capable of generating more sophisticated results and explaining them using better sentences, which is good for many online publications. Our concern is that such practice creates a fake knowledge base that is not desirable and not acceptable within the research community.
* We moved on to another use case, discussed in subsection 4.3, considering teenagers. People at different ages might be curious to look into things that are inappropriate. Our conversation pattern revealed that the bot provided answers accordingly, without verifying age or any other restrictions. Another important observation is that it keeps generating content once it starts and does not warn or remind the user about ethical practices. Again, this is worrying for young age groups.
* We considered another case, in subsection 4.4, discussing history. We understand that it may give accurate information when it has the information; however, sometimes it pretends to be right when it does not have enough information, and at the same time, it behaves confusingly when the user points out that the provided information is not right or that the user has better information. This pretending behavior may confuse users, especially if they are new to such platforms and looking for information on a particular topic like this.
It is important to note that such platforms are used by various types of users with diverse choices and queries. Considering this, the strategy for answering questions should not be uniform. For example, if a user wants help with HTML, the bot can keep providing alternative solutions if one does not work; however, in the case of historical queries, it cannot apply the same strategy when attempting to answer.
## 5 Integration of AI based Language Models: A proposal
In this section, we propose conceptual architectures to prevent the misuse of AI-based conversational chatbots. These architectures aim to prevent students from cheating during exams, ensure responsible usage of dating sites, and discourage the generation of articles using AI engines for unethical purposes. The core idea is to promote ethical and appropriate use of AI, thereby maintaining fairness and integrity in various applications. Below, we present conceptual ideas for a couple of use cases.
### Institutional involvement
The potential ways in which students can exploit AI chatbots like ChatGPT for cheating pose a significant challenge within educational settings. These methods, which include retrieving answers, generating essays, translation assistance, and plagiarism facilitation, undermine the integrity of assessments and evaluations. They not only compromise the fairness of exams but also raise concerns about the authenticity of students' knowledge and skills. As educational institutions increasingly adopt technology-enhanced learning and assessments, addressing this major problem becomes crucial to maintaining academic honesty and ensuring that evaluations genuinely reflect students' understanding and abilities. Effective strategies and safeguards are essential to curb these forms of academic dishonesty and promote a fair and accurate assessment of students' capabilities.
Addressing academic dishonesty involving AI chatbots like ChatGPT requires recognizing and countering several key challenges. One significant concern is the potential misuse of these chatbots for direct answer retrieval during exams. Students can ask the chatbot questions exactly as they appear in their exams, with the hope of receiving correct answers from the chatbot. By doing this, they can avoid the process of genuinely acquiring knowledge or understanding the subject matter, as they are simply seeking the answers to pass their exams without a deeper understanding of the material. Furthermore, there is a risk associated with essay generation. These chatbots can quickly produce coherent essays on various topics, potentially allowing students to submit well-structured essays even when they lack in-depth understanding or writing skills in the subject matter. Translation assistance is another vulnerability, particularly in language-based assessments. Students can input exam content in one language and rely on the chatbot for translations, thereby accessing information they may not understand in the original language. One notable aspect contributing to these vulnerabilities is the concept-based nature of AI chatbots. When students provide a concept or topic, chatbots generate responses based on similar concepts found in their training data, opening the door to relevant information that may not have been explicitly studied.

Figure 9: A conceptual architecture to monitor and assess online exams with the integration of AI generative models.
AI monitoring for academic integrity goes beyond tracking students' online activities during exams. It involves a comprehensive analysis of behavior to identify suspicious actions. This includes monitoring screen activity for things like switching between applications, tracking mouse and keyboard inputs for irregular patterns, and even using eye-tracking to check if students are reading external content. The system also looks at clipboard activities to spot attempts at copying from external sources. It analyzes response times, flags quick or excessively slow answers, watches for the use of multiple browsers or applications, and can recognize patterns resembling AI-generated content, such as chatbot responses. This approach ensures the detection of potential academic integrity violations, maintaining the exam's integrity. Instructors can enhance assessments by introducing open-ended problems, randomized questions, and adaptive testing, fostering a more personalized and equitable assessment environment and promoting deeper learning. In Figure 9 we propose a conceptual framework that may help in assessing fair exams with the help of AI based models where the students activity can be monitored for any inappropriateness.
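To make this monitoring component more concrete, the sketch below shows how a few of the signals described above (application switching, clipboard activity, and response times) could be combined into simple rule-based flags. This is a minimal illustration of the idea behind Figure 9, not a production proctoring system; the event names, thresholds, and `ExamSession` structure are all hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ExamSession:
    """Per-student activity log for one online exam (hypothetical schema)."""
    student_id: str
    events: list = field(default_factory=list)   # (timestamp_seconds, event_type)
    flags: list = field(default_factory=list)

def score_session(session: ExamSession,
                  max_app_switches: int = 3,
                  min_answer_gap: float = 10.0) -> list:
    """Apply illustrative rule-based checks and collect integrity flags."""
    switches = sum(1 for _, e in session.events if e == "app_switch")
    if switches > max_app_switches:
        session.flags.append("excessive application switching")
    if any(e == "clipboard_paste" for _, e in session.events):
        session.flags.append("clipboard paste detected")
    # Flag implausibly fast consecutive answers, one of the response-time checks.
    answer_times = [t for t, e in session.events if e == "answer_submitted"]
    if any(t1 - t0 < min_answer_gap for t0, t1 in zip(answer_times, answer_times[1:])):
        session.flags.append("suspiciously fast consecutive answers")
    return session.flags
```

In practice, such rule-based flags would feed into the instructor-facing side of the architecture, where a human makes the final integrity decision.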
### Publishing portals
In the realm of open-access journals, it is crucial to emphasize the significance and essential nature of publications. These publications serve as vital sources of knowledge for a wide range of users, including researchers, groups, and domain experts. Researchers and numerous other publishers use these publications as valuable references to enhance their own research endeavors. If publications were to be created with fabricated data and misleading information, it would cast doubts on the quality of research work, posing substantial risks to the research industry.

Figure 10: A conceptual architecture for a publication portal to assess a submission based on various factors.
ChatGPT-like models should come equipped with unique identities or tags for each conversation or generated answer. This feature would enable the content to be easily identified, thereby preventing or halting any fraudulent activities when necessary. For instance, as illustrated in Figure 10, we present an architecture where monitoring plugins can be seamlessly integrated with publishing portals to verify the authenticity of content generated using a language model. Additionally, the matching percentage could be thoroughly assessed, considering various factors, and a decision could be made to accept or reject it for initial review. While publishing portals can establish their own safeguards, the provision of such features should be embedded within the content generation platform, such as ChatGPT.
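As a rough sketch of how such per-answer identities could work, the snippet below tags each generated answer with a keyed hash that a portal-side plugin can later verify. This only catches verbatim reuse (any paraphrase breaks the tag); the key handling, function names, and the shared verification channel between the generation platform and the portal are all assumptions made for illustration.

```python
import hashlib
import hmac

PROVIDER_KEY = b"secret held by the generation platform"  # hypothetical key

def tag_answer(conversation_id: str, text: str) -> str:
    """Issue a verifiable identity tag for one generated answer."""
    message = f"{conversation_id}:{text}".encode()
    return hmac.new(PROVIDER_KEY, message, hashlib.sha256).hexdigest()

def verify_submission(conversation_id: str, text: str, claimed_tag: str) -> bool:
    """Portal-side check: does the submitted text match a tagged generated answer?"""
    return hmac.compare_digest(tag_answer(conversation_id, text), claimed_tag)
```

A real portal plugin would additionally need fuzzy similarity matching to compute the matching percentage mentioned above, since exact hashes cannot detect lightly edited text.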
### Misuse monitoring
ChatGPT has the potential to offer valuable resources, but it's crucial to ensure that these resources are used appropriately, especially when certain age groups, like teenagers, might misuse them on dating sites or for cheating in exams using Language Models (LLMs). While the model's capabilities are vast, maintaining appropriateness is equally significant.
A beneficial approach would involve ChatGPT providing resources only to closed groups, such as registered adults with verified identities or students engaging within an accepted level of conversation with AI-powered chatbots, as defined by authorities. Authoritative access might be necessary in some cases. For instance, teenagers could have access to chatbots connected to their parents, allowing parental notification and the ability to restrict access when inappropriate content arises. This way, specific age groups can learn what's right and wrong. Figure 11 illustrates an example of how an activity monitoring plugin can be integrated with authorized users to oversee and control activities when needed.

Figure 11: A conceptual architecture to monitor and take action against inappropriate behavior by specific users.
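The sketch below illustrates one way the gating logic of Figure 11 might look: requests are routed through a policy table keyed by a verified user group, with guardian notification for restricted topics. The group names, topics, and notification hook are hypothetical placeholders, not part of any existing API.

```python
POLICIES = {
    "teen":  {"blocked_topics": {"dating", "adult"}, "notify_guardian": True},
    "adult": {"blocked_topics": set(),               "notify_guardian": False},
}

def route_request(user_group: str, topic: str, guardian_contact: str = "") -> str:
    """Gate a chatbot request according to the user's verified group."""
    policy = POLICIES[user_group]
    if topic in policy["blocked_topics"]:
        if policy["notify_guardian"] and guardian_contact:
            print(f"notifying {guardian_contact}: blocked '{topic}' request")
        return "blocked by policy"
    return "forwarded to chatbot"
```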
### Reflections
ChatGPT should prioritize context when responding to queries, as understanding user perspectives leads to better outcomes. For instance, in subsection 4.4, ChatGPT's agreement about a user's mention of "another wife" was confusing. It is unclear whether ChatGPT is learning from users, pretending to be right, or assessing right and wrong answers for model improvement. The pattern of responses can sometimes lead to unwanted situations. To address this, ChatGPT should revise its response policies and restrict inappropriate queries.
In the realm of education, institutions should adopt comprehensive strategies and implement AI models with their own policies. This helps ensure that online interactions with ChatGPT align with educational goals and genuinely reflect students' abilities. We recommend that publishing platforms follow a similar approach to maintain research integrity.
ChatGPT has gained widespread acceptance due to its user-friendly nature, and its usage won't decline. Therefore, it's important for models like ChatGPT to prioritize fairness and usefulness for all user groups. We also suggest the inclusion of collaborative chat features for teamwork and the ability to export chats in various formats like PDF, Word, text, or LaTeX. Recognizing and following links, providing a search function within conversations, and offering auto-fill recommendations as users type questions can enhance productivity and improve user understanding of ChatGPT's thought process, enabling them to rephrase questions effectively.
## 6 ChatGPT Benefits and Drawbacks
In today's digital world, ChatGPT stands as a game-changer, reshaping how we find and use information. Unlike traditional search engines that hand out web links, ChatGPT engages in conversations, offering quick and precise answers. It is more like chatting with a human. What makes it truly unique is its ability to remember what we have talked about before, making interactions feel natural. People are increasingly turning to such AI-powered bots because they do not just give facts; they understand context and provide comprehensible answers.
We discuss the benefits and risks of ChatGPT within a limited scope based on our experiment. It may come with extensive opportunities and major risks that are not examined or discussed in this work.
### Benefits of ChatGPT
**Quick and Precise Reply:** ChatGPT provides very quick and precise answers to queries. It can format its replies with bullet points, numbering, and bold text, among other styles.
**Learning Aid:** ChatGPT can assist in education. It can be a learning companion for any learners with a wide variety of choices. It can simplify complexity and explain in plain, simple language. It can also describe core concepts in a more understandable manner and help students with their studies. The interactive and conversational nature of ChatGPT creates an engaging learning experience, enhancing comprehension and knowledge retention.
**Coding Assistance:** ChatGPT is great at coding and debugging. It is also adept at explaining code, function by function and line by line, which helps a learner understand and break down complex code into easier-to-follow steps. Developing new code is easy with ChatGPT.
**Quick Deployment:** ChatGPT helps in quick development as well. It can generate code swiftly for building a website using basic HTML, CSS, or Markdown, and can also assist with complex structures involving JS, ASP, PHP, etc., as an example.
**Collaborative Work:** ChatGPT contributes significantly to collaborative knowledge building. For example, it can assist in brainstorming with multiple users facing the same problem. The point is, asking the same question in many different ways brings out various possibilities, making it easy to find the best answer and discard incorrect replies.
**Multilinguality:** One of ChatGPT's standout features is its multilingual capability. It is able to translate from various languages and transform text into paragraphs as desired. This saves time and enhances productivity, especially for swift documentation.
**Grammar and Readability:** ChatGPT has the capability to correct grammar and ensure readability if asked, by providing multiple paragraphs. This feature is useful when preparing long documents.
### Risks of ChatGPT in general
While ChatGPT offers numerous benefits, it also comes with potential risks. The main concerns in this report are trust and ethical issues. In this work, we dig deeper to understand how it behaves in general and how it behaves when we steer a conversation by creating a situation. Having a conversation with ChatGPT may yield more results than general queries.
**Ready-Made Answers:** In our experiment, ChatGPT provides ready-made answers for generic questions. We tried with different user accounts from different locations but received pretty much the same answers, leading us to believe that it has ready-made responses.
**Misinformation:** ChatGPT tries to provide accurate information; however, it is not immune to false or inaccurate information. For example, historical events outside the USA region are prone to more errors. This might be due to the smaller amount of information it has processed for those areas. While it has excellent technical abilities, it suffers from inaccuracies in other types of information, such as places, wars, country-specific details, etc.
**Potential for Garbage Content:** The ease of content generation with ChatGPT can inadvertently contribute to the production of low-quality or misleading content. Users may misuse ChatGPT to generate articles, reports, or information that lack accuracy or authenticity, which can be detrimental to the credibility of online content.
**Misuse in Academia and Research:** In academic and research settings, there is a risk that ChatGPT-generated content may be used inappropriately. This includes the submission of ChatGPT-generated work as one's own, which undermines academic integrity and research ethics.
**Deployment of Fake Services:** Malicious actors may misuse ChatGPT to create fraudulent customer service bots, chatbots, or other automated systems with the intent to deceive or scam users. This can harm both individuals and organizations by eroding trust and causing financial losses.
**Bias and Fairness:** Another important consideration is the potential bias in ChatGPT's responses. ChatGPT's training data may contain biases present in the broader dataset, leading to biased or unfair responses. Efforts to mitigate bias are ongoing, but users should remain vigilant and critically assess the information they receive.
**Ethical Dilemmas:** Ethical issues may arise in how ChatGPT is used, particularly in cases where it generates content that promotes harm, hate speech, or misinformation. Decisions about the responsible use of ChatGPT and the content it generates should be guided by ethical considerations to ensure a positive impact on society.
In summary, while the capabilities are remarkable, each user must remain vigilant about the potential drawbacks, ethical concerns, and biases. Anyone may produce fake content that is not yet identifiable, which poses greater risks. This is not only challenging from a single-user perspective, but it can also be dangerous for groups of people such as celebrities, athletes, and startups.
## 7 Conclusion
Throughout this study, we examined the powerful AI-based conversational model across various use cases. It is evident that this tool can be extremely useful in many instances; however, we are also convinced that it can be exploited in certain use cases and become vulnerable when targeted at specific users. Our primary goal was to highlight these limitations in the context of ethical concerns, with the ultimate aim of enhancing ChatGPT's long-term effectiveness.
This is part of ongoing research in which we further aim to analyse the responses of ChatGPT and other relevant AI chatbots that take a similar approach. At the same time, we want to find out how LLMs can learn from user input, update their knowledge base accordingly, and gain the ability to correct themselves when necessary. |
2306.00042 | Graph-based methods coupled with specific distributional distances for
adversarial attack detection | Artificial neural networks are prone to being fooled by carefully perturbed
inputs which cause an egregious misclassification. These \textit{adversarial}
attacks have been the focus of extensive research. Likewise, there has been an
abundance of research in ways to detect and defend against them. We introduce a
novel approach of detection and interpretation of adversarial attacks from a
graph perspective. For an input image, we compute an associated sparse graph
using the layer-wise relevance propagation algorithm \cite{bach15}.
Specifically, we only keep edges of the neural network with the highest
relevance values. Three quantities are then computed from the graph which are
then compared against those computed from the training set. The result of the
comparison is a classification of the image as benign or adversarial. To make
the comparison, two classification methods are introduced: 1) an explicit
formula based on Wasserstein distance applied to the degree of node and 2) a
logistic regression. Both classification methods produce strong results which
lead us to believe that a graph-based interpretation of adversarial attacks is
valuable. | Dwight Nwaigwe, Lucrezia Carboni, Martial Mermillod, Sophie Achard, Michel Dojat | 2023-05-31T13:21:54Z | http://arxiv.org/abs/2306.00042v2 | # Graph-based methods coupled with specific distributional distances for adversarial attack detection
###### Abstract
Artificial neural networks are prone to being fooled by carefully perturbed inputs which cause an egregious misclassification. These _adversarial_ attacks have been the focus of extensive research. Likewise, there has been an abundance of research in ways to detect and defend against them. We introduce a novel approach of detection and interpretation of adversarial attacks from a graph perspective. For an image, benign or adversarial, we study how a neural network's architecture can induce an associated graph. We study this graph and introduce specific measures used to predict and interpret adversarial attacks. We show that graphs-based approaches help to investigate the inner workings of adversarial attacks.
## 1 Introduction
Artificial neural networks (ANN) are known to be prone to misclassifying carefully perturbed inputs [14]. These perturbed inputs, called adversarial, have been at the forefront of research in the machine learning community for the past decade. There is a lot of interest in creating new adversarial detection and defense methods, especially as this has consequences for a variety of real-world domains that rely on ANN for classification [8], [13], [31].
However, among the known methods, it is apparent that few of them, diverse as they are, study adversarial attacks from a graph-theory perspective. The objective of this paper is the exploration of adversarial attacks using graph-based methods. Indeed, the ANN structure can be described by a graph. In the most basic example, if one considers a standard feedforward ANN, then, in a graphical representation, the neurons are associated with vertices/nodes and the weights between them are associated with edges. One may take this representation as inspiration for studying ANNs from a graph perspective, although we stress that there is more than one way to obtain a graph from an ANN.
In [17], the authors provide a survey of the history of interactions between neuroscience and artificial intelligence and they note how much of the modern success in artificial intelligence can be traced to the understanding of or inspiration by biological systems. There is a line of research in neuroscience that studies the brain using elements of graph theory [4], and this provides some motivation for the use of graph-theoretic approaches to studying ANN.
In this document, we study the detection of adversarial examples using graphs. Given an input to the neural network, we compute an associated sparse graph. From this graph, we then use a combination of selected edges, an importance measure, and the degree of nodes to predict if the input is adversarial. In one of our approaches, logistic regression is used. Our second approach is statistical, being based on the Wasserstein distance applied to the degree of nodes. Lastly, we interpret the relative strength of attacks through our graph-based approach. An advantage of our detection methods is that they include a thresholding step which is non-differentiable, thereby precluding gradient masking [28] and making it difficult to mount adaptive attacks. As part of our studies we also provide benchmarks.
## 2 Background and related work
There have been some efforts in interpreting ANN in graph-theoretic ways. The authors of [32] study the existence and properties of _motifs_, clusters of neurons in ANN which appear often. In [18], they interpret ANN as a graph and study how MNIST and CIFAR datasets exhibit different distributions under defined quantities (e.g. node input strength, neuron strength). In [9], a topological study of ANN is made via its _functional graph_, a graph obtained from correlations of neurons. Other work [26],[23],[20] apply a similar topological view to studying ANN. Despite relating graphs to ANN, none of these articles demonstrate using graphs to detect adversarial examples, nor do they provide statistics on detection. An interesting use of graphs occurs in [6] where they are used to evaluate the robustness of an ANN, as opposed to adversarial detection. In [21] ("LID"), [19], [16], and [12], logistic regression is used to classify an input as benign or adversarial based on certain features, none of which are graph related. Statistical approaches can be found in [11] ("RSA") and [29], also neither of which use graph methods. In [11], the distances between class prototypes are used to determine if an input is adversarial, while in [29], the authors claim that adding noise to images affects the logits in such a way that adversarial inputs can be detected. Our methods extend and complement the previous methods by showing the power of graph theory perspectives from either a logistic regression or a pure statistics perspective. We also compare our methods with LID and RSA.
## 3 Graph generation and quantities of interest
To compute the associated graph \(\mathcal{G}\) for a neural network and input pair, we use layerwise relevance propagation [2], [24]. This algorithm allows one to assign quantities to neurons which can be interpreted as an indicator of
the influence that a neuron has on the output. We assume our graph to be directed. Following the notation in [24] for the LRP-\(\alpha\beta\) rule, signals are propagated from the output layer towards the input layers. For a neuron \(k\) in layer \(\ell+1\) that is connected to neuron \(i\) in layer \(\ell\), the propagated signal from \(k\) to \(i\) is defined to be
\[R_{i,k}^{\ell,\ell+1}=R_{k}^{\ell+1}\left(\alpha\frac{a_{i}\max(w_{ik},0)}{ \epsilon+\sum_{h}a_{h}\max(w_{hk},0)}-\beta\frac{a_{i}\min(w_{ik},0)}{\epsilon +\sum_{h}a_{h}\min(w_{hk},0)}\right) \tag{1}\]
where \(R_{k}^{\ell+1}\) is the relevance of neuron \(k\) in layer \(\ell+1\), \(a_{i}\) is the activation of neuron \(i\) in layer \(\ell\); \(w_{hk}\) is the weight between neurons \(h,k\); \(\epsilon\) is a small parameter; and \(\alpha-\beta=1\). The relevance of a neuron \(i\) in layer \(\ell\) is given by

\[R_{i}^{\ell}=\sum_{k}R_{i,k}^{\ell,\ell+1}. \tag{2}\]
To start the algorithm, one assigns the relevance of the output neurons of the neural network to be equal to the neural network output. Upon completion of the algorithm, we rank the pairwise-relevance scores \(\{R_{i,k}^{\ell,\ell+1}\}\) in descending order and keep the top 1%. Our thresholding is inspired by [4]. The edges corresponding to the retained scores become the edges of our induced graph \(\mathcal{G}\). One can compute various quantities from \(\mathcal{G}\). One such quantity is given by
\[I(v_{i})=\sum_{j\neq i}\frac{1}{2^{d(v_{i},v_{j})}} \tag{3}\]
where \(\{v_{i}\}_{i}\) is the set of nodes and \(d(v_{i},v_{j})\) is the distance between vertices \(v_{i},v_{j}\). We note that for the distance between adjacent nodes we use (1), and the distance between any pair of nodes is given by the path that corresponds to the shortest sum of distances of adjacent nodes. An intuitive meaning of (3) is that it gives more importance to a vertex that has many neighbors and short distances to them. This equation is inspired by closeness centrality [3] which is given by
\[C(v_{i})=\frac{1}{\sum_{j\neq i}d(v_{i},v_{j})}. \tag{4}\]
A difference between (3) and (4) is that the former is monotone in the cardinality of \(\{v_{i}\}_{i}\). For bipartite graphs, or "stacks" of bipartite graphs (one can think of multi-layer perceptrons in this fashion), a measure of closeness centrality tends not to be useful, hence the motivation for (3).
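As a concrete sketch of the graph construction and the importance measure above, the snippet below first keeps the top 1% of pairwise relevance scores as the edges of \(\mathcal{G}\), and then evaluates Eq. (3) with single-source shortest paths. We assume the relevance scores have already been computed by a separate LRP pass (not shown) and that the retained edge lengths are nonnegative, as Dijkstra's algorithm requires; unreachable pairs contribute zero, consistent with \(2^{-\infty}=0\).

```python
import numpy as np
import networkx as nx

def build_sparse_graph(edge_relevance: dict, keep_fraction: float = 0.01) -> nx.DiGraph:
    """Keep the top fraction of pairwise LRP scores (Eq. 1) as edges of G.

    edge_relevance maps (source_node, target_node) -> relevance score.
    """
    scores = np.sort(np.array(list(edge_relevance.values())))
    k = max(1, int(keep_fraction * len(scores)))
    threshold = scores[-k]                        # value of the k-th largest score
    g = nx.DiGraph()
    for (u, v), r in edge_relevance.items():
        if r >= threshold:
            g.add_edge(u, v, length=r)            # adjacent-node distance, per Eq. (1)
    return g

def node_importance(g: nx.DiGraph) -> dict:
    """Evaluate I(v_i) = sum_{j != i} 2^{-d(v_i, v_j)} from Eq. (3)."""
    importance = {}
    for v in g.nodes:
        dists = nx.single_source_dijkstra_path_length(g, v, weight="length")
        importance[v] = sum(2.0 ** (-d) for u, d in dists.items() if u != v)
    return importance
```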
Another quantity of interest is the degree of a vertex, which here is defined to be the difference between out-degree and in-degree:
\[\deg(v)=\deg_{out}(v)-\deg_{in}(v). \tag{5}\]
Our last quantity of interest consists of the values of certain edges of \(\mathcal{G}\). This allows us to incorporate some of \(\mathcal{G}\)'s topology. The edges we use are those that correspond to the last two layers of the original neural network. We only use these edges because using all edges would require a data structure of size \(O(n_{1}n_{2}\cdots n_{l})\), where \(n_{i}\) is the number of nodes in layer \(i\). Clearly, this requires an extensive amount of memory when sufficiently many of the \(n_{i}\) are large. One can see that in general, when using graph data, it is preferable, at least from a memory standpoint, to use a quantity whose size is much smaller than \(O(n_{1}n_{2}\cdots n_{l})\), for instance a dataset whose size is \(O(|V|)\), where \(V\) is the set of nodes. In fact, our use of degree and node importance (3), as computed for each node, meets this constraint.
In [15], the neurons just before the softmax layer are studied, which bears a similarity to our study of edge relevance. In that article, the authors use the said neurons to compare the robustness of non-human primate and human vision with regard to adversarial images. This lends further (biological) motivation to our use of edge relevance for the edges connecting the penultimate layer to the output layer. Since we apply a threshold to the edges of \(\mathcal{G}\), there are nodes of \(\mathcal{G}\) which are not adjacent to any edge. More generally, the edge sets among \(\{\mathcal{G}_{i}\}_{i}\) need not be the same, where \(\{\mathcal{G}_{i}\}_{i}\) represents a set of graphs induced from the same architecture. To enforce a consistent representation of the relevance of edges adjacent to the output layer, we create a weighted adjacency matrix of the same dimension as the adjacency matrix for nodes in the last two layers. The relevance values that are above the threshold are recorded as is, and those below this percentile are set to 0. The matrix is then flattened into a vector. This flattened vector is our third quantity of interest, and its nonzero components are given by (1), assuming the component is greater than the threshold.
Lastly, we note that it would be very difficult to create an adaptive attack to counter the methodology proposed here since our detection methods involve graph thresholding, a nondifferentiable operation.
## 4 A statistical test based on Wasserstein distances
The Wasserstein-1 distance between two probability distributions \(p\) and \(q\) defined on a measurable metric space \(\mathcal{X}\) is given by
\[\mathcal{W}(p,q)=\min_{\pi(x,y)\in\Pi}\int\left\|x-y\right\|_{1}d\pi(x,y) \tag{6}\]
where \(\Pi\) is the set of all measures on \(\mathcal{X}\times\mathcal{X}\) whose marginal distributions are given by \(p\) and \(q\). In the case when \(p\) consists of one sample \(x\) and \(q\) consists of discrete samples \((y_{i})_{i=1}^{N}\), then
\[\mathcal{W}(\delta_{x},q)=\frac{1}{N}\sum_{i}^{N}\|x-y_{i}\|_{1}. \tag{7}\]
where \(\delta_{x}\) is the distribution with support at \(x\). Wasserstein distances have been applied to machine learning in several ways. In [7], Wasserstein distances are used to compress data into a small dimensional subspace while maintaining a large distance from adversarial distributions. Other work [30] uses Wasserstein distances to create adversarial attacks.
Our goal in using Wasserstein distances is different than that in the examples mentioned. Our goal is to apply Wasserstein differences for benign and adversarial graph statistics in order to classify an input as benign or adversarial. The statistic we are concerned with is degree.
\begin{table}
\begin{tabular}{||c c||} \hline \hline Formula & Name \\ \hline \(R_{i,k}^{\ell,\ell+1}=R_{k}^{\ell+1}\left(\alpha\frac{a_{i}\max(w_{ik},0)}{\epsilon+\sum_{h}a_{h}\max(w_{hk},0)}-\beta\frac{a_{i}\min(w_{ik},0)}{\epsilon+\sum_{h}a_{h}\min(w_{hk},0)}\right)\) & edge relevance \\ \hline \(I(v_{i})=\sum_{j\neq i}\frac{1}{2^{d(v_{i},v_{j})}}\) & node importance \\ \hline \(\deg(v)=\deg_{out}(v)-\deg_{in}(v)\) & degree \\ \hline \end{tabular}
\end{table}
Table 1: Summary of relevant graph statistics. Edge relevance is restricted to the last layer.

Let \(\hat{\mathcal{B}}_{i}\) denote the empirical distribution of degree in the case when benign inputs are correctly classified as belonging to class \(i\). Similarly, let \(\hat{\mathcal{A}}_{i}\) denote the empirical distribution that corresponds to perturbed inputs which the model incorrectly classifies as belonging to class \(i\), and whose unperturbed image is correctly classified. For instance, since we are concerned with degree, the domain of the distribution function \(\hat{\mathcal{B}}_{i}\) consists of vectors whose dimension is equal to the number of nodes in the induced graphs. If for some input the model outputs class \(i\), we would like to know if the output was generated by a random variable with distribution \(\mathcal{B}_{i}\) or with distribution \(\mathcal{A}_{i}\), where the lack of a hat denotes the true distribution. As before, we first construct the graph \(\mathcal{G}\) for the sample and compute a sample degree vector, which we denote by the random variable \(\mathbf{Z}\). For a yet to be defined subset of nodes \(\mathcal{S}\), we define the following Wasserstein Sums Ratio (WSR) quantity:
\[\text{WSR}(\mathcal{S},\hat{\mathcal{A}}_{i},\hat{\mathcal{B}}_{i},\mathbf{Z},i)=\frac{\sum_{j\in\mathcal{S}}\mathcal{W}(\delta_{\mathbf{Z}_{j}},\hat{ \mathcal{B}}_{i}^{j})}{\sum_{j\in\mathcal{S}}\mathcal{W}(\delta_{\mathbf{Z}_{j }},\hat{\mathcal{A}}_{i}^{j})} \tag{8}\]
where the \(j\) in \(\hat{\mathcal{A}}_{i}^{j}\) refers to the empirical distribution for node \(j\), and similarly for \(\hat{\mathcal{B}}_{i}^{j}\). Equation (8) says that for each node that belongs to \(\mathcal{S}\), we compute Wasserstein-1 distances node-wise from the sample to the empirical distributions and we sum over the node indices, and compute the ratio. If the ratio is less than some threshold, we classify the input as benign, otherwise as adversarial. It may occur that the denominator of (8) is equal to 0, thus, in this case, a small term is added to the numerator and denominator. This can happen if the empirical distributions \(\{\hat{\mathcal{A}}_{i}^{j}\}_{j\in\mathcal{S}}\) only have support at a point. Lastly, we note that we could have also computed the Wasserstein distance in \(\mathbb{R}^{N}\), where \(N\) is the number of nodes in \(\mathcal{G}\). However, that is a more involved procedure. Using (7), we can write (8) as
\[\text{WSR}(\mathcal{S},\hat{\mathcal{A}},\hat{\mathcal{B}},\mathbf{Z},i)=\frac {\frac{1}{N_{\mathcal{B}_{i}^{j}}}\sum_{j\in\mathcal{S}}\sum_{k=1}^{N_{ \mathcal{B}_{i}^{j}}}\|\mathbf{Z}_{j}-y_{i}^{j}(k)\|_{1}}{\frac{1}{N_{\hat{ \mathcal{A}}_{i}^{j}}}\sum_{j\in\mathcal{S}}\sum_{k=1}^{N_{\hat{\mathcal{A}}_{ i}^{j}}}\|\mathbf{Z}_{j}-x_{i}^{j}(k)\|_{1}} \tag{9}\]
where \(y_{i}^{j}(k)\) is a sample from \(\hat{\mathcal{B}}_{i}^{j}\) and \(x_{i}^{j}(k)\) is a sample from \(\hat{\mathcal{A}}_{i}^{j}\), and \(N_{\mathcal{B}_{i}^{j}}\) is the number of samples in \(\hat{\mathcal{B}}_{i}^{j}\), respectively for \(\hat{\mathcal{A}}_{i}^{j}\). Lastly, we make the set \(\mathcal{S}\) as follows: we calculate
\[\Delta_{i}^{j}:=\mathbb{E}X_{i}^{j}-\mathbb{E}Y_{i}^{j} \tag{10}\]
where \(X_{i}^{j}\) has distribution \(\hat{\mathcal{A}}_{i}^{j}\) and \(Y_{i}^{j}\) has distribution \(\hat{\mathcal{B}}_{i}^{j}\) and \(\mathbb{E}\) is expected
value. We then create the set
\[\mathcal{S}:=\{j:\Delta_{i}^{j}<0\text{ for all }i\}. \tag{11}\]
The set \(\mathcal{S}\) identifies nodes where the mean of the benign distribution is greater than that of the adversarial distribution for all classes. Should it happen that \(\hat{\mathcal{A}}_{i}^{j}\) is empty for some \(j\) (we have experienced this only for one combination of model and attack), one may create a placeholder version of it by setting each entry to a very large negative value; the large negative value has the effect of removing the index \(j\) from consideration when making the set \(\mathcal{S}\). Algorithm 1 shows adversarial detection using WSR.
```
Input: neural network \(\mathcal{NN}\), image \(I\); \(\tau\), \(\mathcal{S}\), \(\hat{\mathcal{A}}_{i}^{j}\), \(\hat{\mathcal{B}}_{i}^{j}\) for all \(i\) and \(j\)
\(i\leftarrow\mathcal{NN}(I)\)
compute \(\mathcal{G}\) from \(I\) and \(\mathcal{NN}\)
compute node degree \(\mathbf{z}\) from \(\mathcal{G}\)
\(val\leftarrow\text{WSR}(\mathcal{S},\hat{\mathcal{A}},\hat{\mathcal{B}},\mathbf{z},i)\)
if \(val<\tau\) then classify \(I\) as benign, otherwise classify \(I\) as adversarial
```
**Algorithm 1** Adversarial detection using WSR (variant 1)
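Once the degree vector and the class-conditional empirical distributions are in hand, Eqs. (7)-(8) and Algorithm 1 reduce to a few lines of NumPy, as in the sketch below. The array shapes and the small \(\epsilon\) guard against an empty-support denominator are our own assumptions for illustration.

```python
import numpy as np

def w1_point_to_samples(z_j: float, samples: np.ndarray) -> float:
    """Wasserstein-1 distance between a point mass and an empirical law, Eq. (7)."""
    return float(np.abs(z_j - samples).mean())

def wsr(z: np.ndarray, benign: np.ndarray, adversarial: np.ndarray,
        node_set, eps: float = 1e-8) -> float:
    """Wasserstein Sums Ratio of Eq. (8) for the predicted class.

    benign and adversarial have shape (num_samples, num_nodes) and hold the
    empirical degree distributions for that class; z is the sample degree vector.
    """
    num = sum(w1_point_to_samples(z[j], benign[:, j]) for j in node_set)
    den = sum(w1_point_to_samples(z[j], adversarial[:, j]) for j in node_set)
    return (num + eps) / (den + eps)

def detect(z, benign, adversarial, node_set, tau: float) -> str:
    """Algorithm 1: threshold the ratio to classify the input."""
    return "benign" if wsr(z, benign, adversarial, node_set) < tau else "adversarial"
```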
The way we construct \(\mathcal{S}\) has the tendency to pick nodes that generalize well across all classes at the expense of nodes that specialize. In an alternative algorithm, we propose to use the specialized nodes. For a given output that is classified as class \(i\), we use \(\mathcal{S}_{i}=\{j:\Delta_{i}^{j}<0\}\). This can result in a more accurate test using our approach, but at the expense of a slightly longer computation, since there are more nodes to use for computations. The algorithm is shown in Algorithm 2.
```
Input: neural network \(\mathcal{NN}\), image \(I\); \(\tau_{i}\), \(\mathcal{S}_{i}\), \(\hat{\mathcal{A}}_{i}^{j}\), \(\hat{\mathcal{B}}_{i}^{j}\) for all \(i\) and \(j\)
\(i\leftarrow\mathcal{NN}(I)\)
compute \(\mathcal{G}\) from \(I\) and \(\mathcal{NN}\)
compute node degree \(\mathbf{z}\) from \(\mathcal{G}\)
\(val\leftarrow\text{WSR}(\mathcal{S}_{i},\hat{\mathcal{A}},\hat{\mathcal{B}},\mathbf{z},i)\)
if \(val<\tau_{i}\) then classify \(I\) as benign, otherwise classify \(I\) as adversarial
```
**Algorithm 2** Adversarial detection using WSR (variant 2)
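The node sets used by the two variants can be built directly from Eqs. (10)-(11), as sketched below; `benign[i]` and `adversarial[i]` are assumed to be arrays of per-node degree samples for inputs classified as class \(i\), an assumption about the data layout rather than a prescription from the paper.

```python
import numpy as np

def build_node_sets(benign: dict, adversarial: dict):
    """Return S (variant 1) and the per-class sets S_i (variant 2)."""
    num_nodes = next(iter(benign.values())).shape[1]
    keep_all = np.ones(num_nodes, dtype=bool)
    per_class = {}
    for i in benign:
        delta = adversarial[i].mean(axis=0) - benign[i].mean(axis=0)  # Eq. (10)
        per_class[i] = set(np.nonzero(delta < 0)[0])                  # S_i
        keep_all &= delta < 0
    return set(np.nonzero(keep_all)[0]), per_class                    # S, {S_i}
```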
## 5 Consistency
We would like to analyze under what conditions (8) is a faithful predictor. We treat the case of a finite-width ANN with sufficiently many neurons. A finite-width ANN has the property that the degree distribution has compact support, which implies that the Wasserstein distance between an empirical degree distribution and the true distribution is bounded, and the Wasserstein distance is continuous with respect to \(\|\cdot\|_{\infty}\). We begin our proof of consistency by showing that, given a real-valued random variable \(X\), an empirical distribution \(\hat{F}_{n}\) of some other real-valued random variable with true distribution \(F\), and a function \(G\) (whose arguments are a random variable and a distribution) that is bounded and uniformly continuous in the second argument with respect to \(\|\cdot\|_{\infty}\), we have
\[\mathbb{E}_{X}G(X,\hat{F}_{n})\xrightarrow{a.s.}\mathbb{E}_{X}G(X,F) \tag{12}\]
as \(n\to\infty\). To prove (12), it is sufficient to show that
\[G(X,\hat{F}_{n})\xrightarrow{a.s.}G(X,F)\ \forall x. \tag{13}\]
Under identical and independently distributed (iid) assumptions, the Glivenko-Cantelli lemma states that \(\|\hat{F}_{n}-F\|_{\infty}\xrightarrow{a.s.}0\). This combined with the uniform continuity of \(G\) in the second argument with respect to \(\|\cdot\|_{\infty}\) proves (13). To prove (12), we let \(h_{n}(x)=G(x,\hat{F}_{n})\) and \(h(x)=G(x,F)\). From (13) we have \(h_{n}(x)\xrightarrow{a.s.}h(x)\) for all \(x\) as \(n\to\infty\). We may combine this with the boundedness assumption to use the Lebesgue dominated convergence theorem, resulting in \(\lim_{n\to\infty}\mathbb{E}_{X}h_{n}(X)=\mathbb{E}_{X}\lim_{n\to\infty}h_{n}(X)=\mathbb{E}_{X}h(X)\) almost surely.
We now begin to analyze (8), and we start by supposing that our random variable \(\mathbf{Z}\) corresponds to the benign case. Let
\[\begin{split} U^{b}_{j,i}&=\mathcal{W}(\delta_{ \mathbf{Z}_{j}},\hat{\mathcal{B}}^{j}_{i})\\ U^{a}_{j,i}&=\mathcal{W}(\delta_{\mathbf{Z}_{j}}, \hat{\mathcal{A}}^{j}_{i}).\end{split} \tag{14}\]
For additional simplicity, let us assume that quantities defined in (14) are iid over the index \(j\). The iid assumption implies that
\[\mathbf{E}_{\mathbf{Z}_{j}}\mathcal{W}(\delta_{\mathbf{Z}_{j}},\hat{\mathcal{ B}}^{j}_{i})=:\mathbf{E}U^{b}_{i}\]
and
\[\mathbf{E}_{\mathbf{Z}_{j}}\mathcal{W}(\delta_{\mathbf{Z}_{j}},\hat{\mathcal{ A}}^{j}_{i})=:\mathbf{E}U^{a}_{i}\]
for all \(i\). By equation (12), the results we obtain going forward will hold for the population distribution with high probability, assuming our empirical distributions have enough samples. By the weak law of large numbers,
\[\left|\frac{\sum_{j=1}^{|\mathcal{S}|}\mathcal{W}(\delta_{\mathbf{Z}_{j}},\hat{ \mathcal{B}}_{i}^{j})}{|\mathcal{S}|}-\mathbf{E}U_{i}^{b}\right|<\epsilon_{1} \text{ as }|\mathcal{S}|\rightarrow\infty\]
Similarly,
\[\left|\frac{\sum_{j=1}^{|\mathcal{S}|}\mathcal{W}(\delta_{\mathbf{Z}_{j}},\hat {\mathcal{A}}_{i}^{j})}{|\mathcal{S}|}-\mathbf{E}U_{i}^{a}\right|<\epsilon_{2} \text{ as }|\mathcal{S}|\rightarrow\infty.\]
Then (8) is equal to
\[\frac{\sum_{j=1}^{|\mathcal{S}|}U_{j,i}^{b}}{\sum_{j=1}^{|\mathcal{ S}|}U_{j,i}^{a}} =\frac{|\mathcal{S}|\mathbf{E}U_{i}^{b}+|\mathcal{S}|\epsilon_{1} }{|\mathcal{S}|\mathbf{E}U_{i}^{a}+|\mathcal{S}|\epsilon_{2}}\] \[=\frac{\mathbf{E}U_{i}^{b}+\epsilon_{1}}{\mathbf{E}U_{i}^{a}+ \epsilon_{2}}\] \[\rightarrow\frac{\mathbf{E}U_{i}^{b}}{\mathbf{E}U_{i}^{a}}\text{ as } |\mathcal{S}|\rightarrow\infty \tag{15}\]
where \(\epsilon_{1}\) and \(\epsilon_{2}\) are \(o(1)\) as \(|\mathcal{S}|\rightarrow\infty\). If we consider the case when \(\mathbf{Z}\) is adversarial, we get a similar limit as in (15). Thus for consistency, we need the two limits to not be equal, thus we write
\[\frac{\mathbf{E}U_{i}^{b}}{\mathbf{E}U_{i}^{a}}<\frac{\mathbf{E}V_{i}^{b}}{ \mathbf{E}V_{i}^{a}} \tag{16}\]
where we use \(V\) to denote adversarial quantities. This is equivalent to \(\mathbf{E}U_{i}^{b}\)\(\mathbf{E}V_{i}^{a}<\mathbf{E}U_{i}^{a}\)\(\mathbf{E}V_{i}^{b}\). This is a realistic assumption for distributions with different means. A classification threshold, \(\tau\), is then picked such that
\[\frac{\mathbf{E}U_{i}^{b}}{\mathbf{E}U_{i}^{a}}<\tau<\frac{\mathbf{E}V_{i}^{b} }{\mathbf{E}V_{i}^{a}}. \tag{17}\]
An interesting example of (16) is the case in which \(\mathbf{E}U_{i}^{b}=\mathbf{E}V_{i}^{a}\) and \(\mathbf{E}U_{i}^{a}=\mathbf{E}V_{i}^{b}\) and where all terms do not equal 1. In this instance, (8) in the benign case will be the inverse of that in the adversarial case. Furthermore, neither ratio will equal 1. This happens when adversarial distributions are simply shifts of benign distributions.
## 6 Experimental details
### Architectures
We experiment with five models, two of which are detailed in Tables 2-3, while the other three are VGG-19, InceptionResnetV2, and MobileNet. VGG-19, InceptionResnetV2, and MobileNet are preloaded from Keras with ImageNet weights, and their last layers are replaced with three custom fully-connected layers with output sizes 4096, 1000, and 10, respectively. With respect to these three models, we only compute graph-based quantities from the custom layers. For models 1 and 2, we use all layers.
### Datasets
We trained our models on MNIST, CIFAR-10, and SVHN datasets. For each model we created adversarial examples using the Adversarial Robustness Toolbox [27]. For CIFAR-10 and SVHN, all images were enlarged to (224, 224, 3). Images were preprocessed using built-in Keras layers that handle input preprocessing.
\begin{table}
\begin{tabular}{||c c c||} \hline \hline layer type & output size & activation function \\ \hline \hline fully connected & 300 & ReLU \\ \hline fully connected & 200 & ReLU \\ \hline fully connected & 150 & ReLU \\ \hline fully connected & 150 & ReLU \\ \hline fully connected & 100 & sigmoid \\ \hline fully connected & 10 & softmax \\ \hline \end{tabular}
\end{table}
Table 2: Architecture of Model 1
### Attacks
We consider fast gradient sign method, [14], projected gradient descent [22], untargeted Carlini-Wagner L2 [5], DeepFool [25], Square [1], and Auto [10] attacks. Fast gradient sign method attacks were clipped when perturbations were outside a ball of radius 10% in the \(\ell^{\infty}\) norm. Projected gradient descent attacks were crafted using the same norm but with up to a 5% perturbation; the number of iterations was 40 except for InceptionResnetV2, MobileNet, and VGG19, in which 10 were used. Square and Auto attacks had the same norm and perturbation as projected gradient descent attacks. Optimization was done using ADAM with learning rate 0.01. For each attack we generated 10,000 adversarially perturbed images from 10,000 original (test data) images. In creating training data for the detection methods we introduce, approximately 14,000 samples were used, and the methods were compared on approximately 6,000 samples. For RSA the numbers are approximately the same. For LID, we used approximately 6,000 training and test samples each, with the exception of models 1 and 2 in which we used approximately 7,000 training and 3,000 test samples.
### Hyperparameters
The values of \(\alpha\) and \(\epsilon\) in our implementation of LRP-\(\alpha\beta\) are 2 and \(10^{-7}\), respectively. In our implementation of RSA we use \(M=8\), \(K=16\), and the layer used is the third from the output layer. For creating noisy samples in the LID algorithm, we use Gaussian noise of zero mean and variance 0.05. Also in our implementation of LID, we only use the last 10 layers, for computational ease.

\begin{table}
\begin{tabular}{||c c c||} \hline \hline layer type & output size & activation function \\ \hline \hline conv & 3 filters, kernel size (4,4) & identity \\ \hline maxpool & pool size=(2,2), strides=(2,2) & ReLU \\ \hline conv & 3 filters, kernel size (4,4) & identity \\ \hline maxpool & pool size=(2,2), strides=(2,2) & ReLU \\ \hline fully connected & 100 & ReLU \\ \hline fully connected & 10 & softmax \\ \hline \hline \end{tabular}
\end{table}
Table 3: Architecture of Model 2
## 7 Results and discussion
### Comparison of logistic regression approaches
In Tables 4(a), 4(b), and 4(c) we report the specificity (percentage of benign samples that are correctly detected) and sensitivity (percentage of adversarial samples that are correctly detected). One can see that the various graph statistics considered here can be strongly sensitive and specific predictors of adversarial attacks when using logistic regression. Among MobileNet, InceptionResnetV2, and VGG19, degree seems to be marginally the best predictor among our statistics. From the tables, we see that the worst performance occurs for Carlini-Wagner and DeepFool attacks. These two attacks are known to be among the most difficult to detect, so our results are consistent with this belief. In particular, for VGG19 and Carlini-Wagner, our classifier is able to almost always detect benign samples, but detects almost no adversarial examples.
Among models 1 and 2, degree is significantly the best predictor, while edge relevance for Model 2 is a poor predictor across all attacks, being unable to detect adversarial images. This is because the edge relevance for benign and adversarial samples is equal to 0. The largest relevances for Model 2 are found in layers closer to the input layer. During the thresholding process, the relevances for the edges corresponding to the output layer are set to 0 because they are relatively small. Lastly, in comparison to LID, our results are superior across almost all model/attack combinations.
### Comparison of statistical approaches
Tables 5(a), 5(b), and 5(c) show results in terms of AUROC (area under the receiver operating characteristic curve) for the various detection methods. In almost all cases, WSR2 provides more accurate predictions than WSR1. Further, both WSR variants outperform RSA. Model 1, in comparison to the other models, performs somewhat poorly under WSR1. This seems to be due to Model 1 having the smallest number of neurons, making the corresponding \(|\mathcal{S}|\) relatively small. On this note, we can also see from the tables that model/attack pairs with small \(|\mathcal{S}|\) tend to have worse results under WSR.
This is particularly noticeable in the case of Carlini-Wagner and Deepfool attacks under WSR1; this lower performance was also noted in our results using logistic regression.
Table 4: Comparison between logistic regression methods. The first and second quantities in each entry are the benign and adversarial detection rates, respectively. FGSM, PGD, CW2, DF, Sq, and AA represent fast gradient sign method, projected gradient descent, Carlini-Wagner L2, DeepFool, Square, and Auto attacks, respectively. Values are percentages.
We can use WSR and logistic regression in a complementary way. For instance, graph-based quantities generated from VGG19 and Carlini-Wagner attacks tend to be poorly classified with logistic regression. In contrast, WSR2 performs well in this case, and it can be used in place of logistic regression.
We considered using equation 5 from [29] as a baseline, perhaps in place of RSA, but chose not to, firstly because of the extremely long time needed for the source code to run, and secondly because our initial results suggested that this method gives poor accuracy, near 50%, which is much lower than the numbers the article states. In our effort to increase accuracy we experimented with different hyperparameters, including noise, but to no avail. This calls into question the usefulness and robustness of equation 5 in [29].
Figure 1: Empirical distributions for WSR1 for Model 2 (top row) and WSR2 for InceptionResnetV2 (bottom row). The top panel of each subplot shows WSR computed for adversarial examples, and the bottom panel shows the computation for benign examples. FGSM, PGD, CW2, DF, Sq, and AA represent fast gradient sign method, projected gradient descent, Carlini-Wagner L2, DeepFool, Square, and Auto attacks, respectively. For Model 2 and CW2, values above 200 are set to 200 for ease of display. Note that the benign and adversarial plots for Model 2 tend to agree with the remark made in section 5 about inverses.
### Nodal analysis
The distributions of node quantities are highly dependent on the model and attack. From the tables it can be seen that the AUROC for WSR decreases as the strength of the attack increases (we consider a partial order of increasing attack strength to be: fast gradient sign method, projected gradient descent, and Carlini-Wagner L2). We can relate this observation to how the cardinality of \(\mathcal{S}\) varies with the model/attack pair. The cardinality of \(\mathcal{S}\) can be seen in Tables 6(a), 6(b), and 6(c). For the CIFAR-10 and SVHN datasets, we observe that the cardinality tends to be much smaller for Carlini-Wagner L2 and DeepFool attacks, which seems to explain the lower accuracy achieved by WSR on these attacks. We recall from section 5 that the accuracy of WSR increases with \(|\mathcal{S}|\).
We also note that in some cases the benign distribution of WSR and the adversarial distribution of WSR are centered at points which are close to inverses. This seems to be the case for Model 2, as shown in figure 1. This is in agreement with an earlier remark in Section 5 about equation (8) having inverse values under benign and adversarial examples, assuming the benign and adversarial test statistics have the same distributions up to a shift.
Table 5: Comparison of AUROC for statistical detection methods. FGSM, PGD, CW2, DF, Sq, and AA represent fast gradient sign method, projected gradient descent, Carlini-Wagner L2, DeepFool, Square, and Auto attacks, respectively. WSR1 and WSR2 are WSR variants 1 and 2, respectively. Values are percentages.
Table 6: Cardinality of \(\mathcal{S}\) by model and attack. FGSM, PGD, CW2, DF, Sq, and AA represent fast gradient sign method, projected gradient descent, Carlini-Wagner L2, DeepFool, Square, and Auto attacks, respectively.
## 8 Conclusion
We have demonstrated that neural network architectures can be interpreted in a graph context from which we can use the statistics of graph-based quantities to detect adversarial attacks. We introduced three measures that we applied to our graphs and used them as predictors of adversarial attack. We showed that this approach can produce high detection performances with logistic regression. We also studied the distributions of node degree using a statistical test based on Wasserstein distances. We find it intriguing that a sparse graph encodes sufficient information about inputs to a neural network. We hope that the perspective introduced here will provide a different way of understanding adversarial attacks.
## 9 Acknowledgments
L. Carboni and D. Nwaigwe are the recipients of a grant from MIAI@Grenoble Alpes (ANR 19-P3IA-003). |
2305.19725 | Direct Learning-Based Deep Spiking Neural Networks: A Review | The spiking neural network (SNN), as a promising brain-inspired computational
model with binary spike information transmission mechanism, rich
spatially-temporal dynamics, and event-driven characteristics, has received
extensive attention. However, its intricately discontinuous spike mechanism
brings difficulty to the optimization of the deep SNN. Since the surrogate
gradient method can greatly mitigate the optimization difficulty and shows
great potential in directly training deep SNNs, a variety of direct
learning-based deep SNN works have been proposed and achieved satisfying
progress in recent years. In this paper, we present a comprehensive survey of
these direct learning-based deep SNN works, mainly categorized into accuracy
improvement methods, efficiency improvement methods, and temporal dynamics
utilization methods. In addition, we also divide these categorizations into
finer granularities further to better organize and introduce them. Finally, the
challenges and trends that may be faced in future research are prospected. | Yufei Guo, Xuhui Huang, Zhe Ma | 2023-05-31T10:32:16Z | http://arxiv.org/abs/2305.19725v4 | # Direct Learning-Based Deep Spiking Neural Networks: A Review
###### Abstract
The spiking neural network (SNN), as a promising brain-inspired computational model with binary spike information transmission mechanism, rich spatially-temporal dynamics, and event-driven characteristics, has received extensive attention. However, its intricately discontinuous spike mechanism brings difficulty to the optimization of the deep SNN. Since the surrogate gradient method can greatly mitigate the optimization difficulty and shows great potential in directly training deep SNNs, a variety of direct learning-based deep SNN works have been proposed and achieved satisfying progress in recent years. In this paper, we present a comprehensive survey of these direct learning-based deep SNN works, mainly categorized into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, we also divide these categorizations into finer granularities further to better organize and introduce them. Finally, the challenges and trends that may be faced in future research are prospected.
Spiking Neural Network, Brain-inspired Computation, Direct Learning, Deep Neural Network, Energy Efficiency, Spatial-temporal Processing
## 1 Introduction
The Spiking Neural Network (SNN) has been recognized as one of the brain-inspired neural networks due to its bio-mimicry of the brain neurons. It transmits information by firing binary spikes and can process the information in a spatial-temporal manner (Fang et al., 2021; Wu et al., 2019; Zhang et al., 2020; Wu et al., 2019; Zhang et al., 2020). This event-driven and spatial-temporal manner makes the SNN very efficient and good at handling temporal signals, thus receiving a lot of research attention, especially recently.
Despite the energy efficiency and spatial-temporal processing advantages, it is a challenge to train deep SNNs because the firing process of the SNN is non-differentiable, making it impossible to train SNNs via gradient-based optimization methods. At first, many works leveraged the spike-timing-dependent plasticity (STDP) approach (Lobov et al., 2020), which is inspired by biology, to update the SNN weights. However, STDP cannot yet help train large-scale networks, thus limiting the practical applications of the SNN. There are two widely used effective pathways to obtain deep SNNs up to now. First, the ANN-SNN conversion approach (Han and Roy, 2020; Li et al., 2021; Bu et al., 2022; Liu et al., 2022; Li and Zeng, 2022; Wang et al., 2020; Wang et al., 2022; Bu et al., 2023) converts a well-trained ANN to an SNN by replacing the activation function from ReLU with spiking activation. It provides a fast way to obtain an SNN. However, it is limited to the rate-coding scheme and ignores the rich temporal dynamic behaviors of SNNs. Second, the surrogate gradient (SG)-based direct learning approach (Fang et al., 2021; Wu et al., 2018; Guo et al., 2022; Li et al., 2021) tries to find an alternative differentiable surrogate function to replace the non-differentiable firing activity when doing back-propagation of the spiking neurons. Since SG can handle temporal data and provide decent performance with few time-steps on large-scale datasets, it has received more attention recently.
Considering the sufficient advantages and rapid development of the direct learning-based deep SNN, a comprehensive and systematic survey on this kind of work is essential. Previous related surveys (Wang et al., 2020; Zhang et al., 2022; Roy et al., 2019; Tavanaei et al., 2019; Ponulak and Kasinski, 2011; Yamazaki et al., 2022) have begun to classify existing works mainly based on the key components of SNNs: biological neurons, encoding methods, SNN structures, SNN learning mechanisms, software and hardware frameworks, datasets, and applications. Though such a classification is intuitive to general readers, it makes it difficult for them to grasp the challenges and the landmark works involved. In this survey, we instead provide a new perspective to summarize these related works, _i.e._, starting from analyzing the characteristics and difficulties of the SNN, and then classifying them into i) accuracy improvement methods, ii) efficiency improvement methods, and iii) temporal dynamics utilization methods, based on the solutions for the corresponding problems or the utilization of SNNs' advantages.
Further, these categories are divided into finer granularities: i) accuracy improvement methods are subdivided as improving representative capabilities and relieving training difficulties; ii) efficiency improvement methods are subdivided as network compression techniques and sparse SNNs; iii) temporal dynamics utilization methods are subdivided as sequential learning and cooperating with neuromorphic cameras. In addition to the classification by using strengths or overcoming weaknesses of SNNs, these recent methods can also be divided into the neuron level, network structure level, and training technique level, according to where these methods actually work. The classifications and main techniques of these methods are listed in Table 1 and Table 2. Finally, some promising future research directions are provided.
The organization of the remaining part is as follows: Section 2 introduces the preliminary for spiking neural networks and analyzes the characteristics and difficulties of the SNN. Section 3 presents the recent advances falling into the different categories. Section 4 points out future research trends and concludes the review.
## 2 Preliminary
Since neuron models are not the focus of this paper, here we briefly introduce the commonly used discretized Leaky Integrate-and-Fire (LIF) spiking neuron to show the basic characteristics and difficulties of SNNs, which can be formulated as
\[U_{l}^{t}=\tau U_{l}^{t-1}+\mathbf{W}_{l}O_{l-1}^{t},\qquad U_{l}^{t}<V_{\rm th}, \tag{1}\]
where \(U_{l}^{t}\) is the membrane potential at the \(t\)-th time-step for the \(l\)-th layer, \(O_{l-1}^{t}\) is the spike output from the previous layer, \(\mathbf{W}_{l}\) is the weight matrix at the \(l\)-th layer, \(V_{\rm th}\) is the firing threshold, and \(\tau\in(0,1]\) is a time leak constant for the membrane potential. When \(\tau\) is \(1\), the above equation degenerates to the Integrate-and-Fire (IF) spiking neuron.
**Characteristic 1**.: _Rich spatial-temporal dynamics. As seen from Equation 1, different from ANNs, SNNs enjoy unique spatial-temporal dynamics in the spiking neuron model._
Then, when the membrane potential exceeds the firing threshold, the neuron fires a spike and the potential falls back to the resting potential, as given by
\[O_{l}^{t}=\left\{\begin{array}{ll}1,&\text{if }U_{l}^{t}\geq V_{\rm th}\\ 0,&\text{otherwise}\end{array}\right.. \tag{2}\]
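To make Equations (1) and (2) concrete, below is a minimal NumPy sketch of a discretized LIF layer; the hard reset of fired neurons to a resting potential of zero is one common convention, and all parameter values are illustrative.

```python
import numpy as np

def lif_layer(x, W, tau=0.5, v_th=1.0):
    """Simulate Eqs. (1)-(2): a discretized LIF layer over T time-steps.

    x : binary spike inputs O_{l-1}^t, shape (T, n_in)
    W : weight matrix W_l, shape (n_out, n_in)
    Returns the binary spike outputs O_l^t, shape (T, n_out).
    """
    T, n_out = x.shape[0], W.shape[0]
    u = np.zeros(n_out)                  # membrane potential U_l^t
    spikes = np.zeros((T, n_out))
    for t in range(T):
        u = tau * u + W @ x[t]           # leaky integration, Eq. (1)
        fired = u >= v_th                # firing condition, Eq. (2)
        spikes[t] = fired.astype(float)
        u = np.where(fired, 0.0, u)      # reset fired neurons to rest
    return spikes

# Example: 4 time-steps, 3 inputs, 2 neurons, random binary input spikes.
rng = np.random.default_rng(0)
out = lif_layer(rng.integers(0, 2, size=(4, 3)).astype(float),
                rng.normal(size=(2, 3)))
```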
**Characteristic 2**.: _Efficiency. Since the output is a binary tensor, the multiplications of activations and weights can be replaced by additions, thus enjoying high energy efficiency. Furthermore, when there is no spike output generated, the neuron will keep silent. This event-driven mechanism can further save energy when implemented in neuromorphic hardware._
**Characteristic 3**.: _Limited representative ability. Obviously, transmitting information by quantizing the real-valued membrane potentials into binary output spikes introduces quantization error in SNNs, thus causing information loss (Guo et al., 2022b; Wang et al., 2023). Furthermore, the binary spike feature map at a single timestep cannot carry as much information as the real-valued feature maps in ANNs (Guo et al., 2022d)._ These two problems limit the representative ability of SNNs to some extent.
**Characteristic 4**.: _Non-differentiability. Another thorny problem in SNNs is the non-differentiability of the firing function._
To demonstrate this problem, we formulate the gradient at the layer \(l\) by the chain rule, given by
\[\frac{\partial L}{\partial\mathbf{W}_{l}}=\sum_{t}(\frac{\partial L}{\partial O _{l}^{t}}\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}+\frac{\partial L}{ \partial U_{l}^{t+1}}\frac{\partial U_{l}^{t+1}}{\partial U_{l}^{t}})\frac{ \partial U_{l}^{t}}{\partial\mathbf{W}_{l}}, \tag{3}\]
where \(\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}\) is the gradient of the firing function at the \(t\)-th time-step for the \(l\)-th layer, which is \(0\) almost everywhere and infinite at \(V_{\rm th}\). As a consequence, the gradient descent update \((\mathbf{W}_{l}\leftarrow\mathbf{W}_{l}-\eta\frac{\partial L}{\partial\mathbf{ W}_{l}})\) either freezes the weights or updates them to infinity.
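To illustrate the workaround that the surrogate gradient methods surveyed in Section 3 build on, the following is a minimal PyTorch sketch: the forward pass keeps the exact Heaviside firing of Equation 2, while the backward pass replaces \(\frac{\partial O_{l}^{t}}{\partial U_{l}^{t}}\) with the derivative of a sigmoid relaxation. The sigmoid shape and the sharpness \(k=5\) are illustrative choices, not taken from any particular paper.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside firing forward, sigmoid-derivative surrogate backward."""

    @staticmethod
    def forward(ctx, u, v_th=1.0, k=5.0):
        ctx.save_for_backward(u)
        ctx.v_th, ctx.k = v_th, k
        return (u >= v_th).float()                  # exact Eq. (2)

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        s = torch.sigmoid(ctx.k * (u - ctx.v_th))
        surrogate = ctx.k * s * (1.0 - s)           # replaces dO/dU
        return grad_out * surrogate, None, None     # no grads for v_th, k

u = torch.randn(8, requires_grad=True)
SpikeFn.apply(u).sum().backward()                   # gradients now flow to u
```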
Most existing direct learning-based SNN works focus on solving these difficulties or utilizing the advantages of SNNs. Boosting the representative ability and mitigating the non-differentiability can both improve the SNN's accuracy. From this perspective, we organize the recent advances in the SNN field into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods.
## 3 Recent Advances
In recent years, a variety of direct learning-based deep spiking neural networks have been proposed. Most of these methods focus on solving or utilizing the intrinsic disadvantages or advantages of SNNs. Based on this, in this section, we classify these methods into accuracy improvement methods, efficiency improvement methods, and temporal dynamics utilization methods. In addition, these classifications are also organized along different aspects with a comprehensive analysis. Table 1 and Table 2 summarize the surveyed SNN methods in the different categories.
Note that direct learning methods can be divided into time-based methods and activation-based methods, based on whether the gradient represents spike timing (time-based) or spike scale (activation-based) (Zhu et al., 2022c). In time-based methods, the gradients represent the direction in which the timing of a spike should be moved, _i.e._, leftward or rightward on the time axis. SpikeProp (Bohte et al., 2002) and its variants (Booij and tat Nguyen, 2005; Hong et al., 2019; Xu et al., 2013) all belong to this kind of method, and they adopt the negative inverse of the time derivative of the membrane potential function to approximate the derivative of spike timing with respect to the membrane potential. Since most time-based methods restrict each neuron to fire at most once, in (Zhou et al., 2021) the spike time is directly taken as the state of a neuron. Thus the relation of neurons can be modeled by the spike time, and the SNN can be trained similarly to an ANN. Though time-based methods enjoy less computation cost than activation-based methods and many works (Zhu et al., 2022; Zhang and Li, 2020) have greatly improved the accuracy of the field, it is still difficult to train deep time-based SNN models and apply them to large-scale datasets, _e.g._, ImageNet. Considering the limits of time-based methods and our aim of summarizing recent deep SNNs, we mainly focus on activation-based methods in this paper.
\begin{table}
\begin{tabular}{c l l c c c} \hline \hline
\multirow{2}{*}{**Type**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Key Technology**} & \multicolumn{3}{c}{**On the Level\({}^{\star}\)**} \\ \cline{4-6}
 & & & NL & NSL & TTL \\ \hline
\multirow{27}{*}{\begin{tabular}{c} Improving \\ representative \\ capabilities \end{tabular}} & LSNN (Bellec et al., 2018) & Adaptive threshold & ✓ & & \\
 & LTMD (Wang et al., 2022a) & Adaptive threshold & ✓ & & \\
 & BDETT (Ding et al., 2022) & Dynamic threshold & ✓ & & \\
 & PLIF (Fang et al., 2021b) & Learnable leak constant & ✓ & & \\
 & Plastic Synaptic Delays (Yu et al., 2022c) & Learnable leak constant & ✓ & & \\
 & Diet-SNN (Rathi and Roy, 2020) & Learnable leak constant \& threshold & ✓ & & \\
 & DS-ResNet (Feng et al., 2022) & Multi-firing \& act-before-add ResNet & ✓ & ✓ & \\
 & SNN-MLP (Li et al., 2022a) & Group LIF & ✓ & & \\
 & GLIF (Yao et al., 2022) & Unified gated LIF & ✓ & & \\
 & Augmented Spikes (Yu et al., 2022d) & Augmented spikes & ✓ & & \\
 & LIFB (Shen et al., 2023) & Leaky Integrate and Fire or Burst & ✓ & & \\
 & MT-SNN (Wang et al., 2023) & Multiple threshold approach & ✓ & & \\
 & SEW-ResNet (Fang et al., 2021a) & Act-before-add form-based ResNet & & ✓ & \\
 & MS-ResNet (Hu et al., 2021) & Pre-activation form-based ResNet & & ✓ & \\
 & AutoSNN (Na et al., 2022) & Neural architecture search & & ✓ & \\
 & SNASNet (Kim et al., 2022a) & Neural architecture search & & ✓ & \\
 & TA-SNN (Yao et al., 2021) & Attention mechanism & & ✓ & \\
 & STSC-SNN (Yu et al., 2022a) & Attention mechanism & & ✓ & \\
 & TCJA-SNN (Zhu et al., 2022b) & Attention mechanism & & ✓ & \\
 & Real Spike (Guo et al., 2022d) & Training-inference decoupled structure & & ✓ & \\
 & IM-Loss (Guo et al., 2022a) & Information maximization loss & & & ✓ \\
 & RecDis-SNN (Guo et al., 2022c) & Membrane potential distribution loss & & & ✓ \\
 & Distilling spikes (Kushawaha et al., 2021) & Knowledge distillation & & & ✓ \\
 & Local Tandem Learning (Yang et al., 2022) & Tandem learning & & & ✓ \\
 & sparse-KD (Xu et al., 2023a) & Knowledge distillation & & & ✓ \\
 & KDSNN (Xu et al., 2023b) & Knowledge distillation & & & ✓ \\
 & SNN distillation (Takuya et al., 2021) & Knowledge distillation & & & ✓ \\ \hline
\multirow{21}{*}{\begin{tabular}{c} Relieving \\ training \\ difficulties \end{tabular}} & SuperSpike (Zenke and Ganguli, 2018) & Fixed surrogate gradient & & & ✓ \\
 & LISNN (Cheng et al., 2020) & Fixed surrogate gradient & & & ✓ \\
 & IM-Loss (Guo et al., 2022a) & Dynamic surrogate gradient & & & ✓ \\
 & Gradual surrogate gradient (Chen et al., 2022) & Dynamic surrogate gradient & & & ✓ \\
 & Differentiable Spike (Li et al., 2021b) & Learnable surrogate gradient & & & ✓ \\
 & SpikeDHS (Leng et al., 2022) & Differentiable surrogate gradient search & & & ✓ \\
 & DSR (Meng et al., 2022) & Differentiation on spike representation & & & ✓ \\
 & NSNN (Ma et al., 2023) & Noise-driven learning rule & & & ✓ \\
 & ReL-PSP (Zhang et al., 2022c) & Rectified postsynaptic potential function & ✓ & & \\
 & SEW-ResNet (Fang et al., 2021a) & Act-before-add form-based ResNet & & ✓ & \\
 & MS-ResNet (Hu et al., 2021) & Pre-activation form-based ResNet & & ✓ & \\
 & NeuNorm (Wu et al., 2019c) & Constructive auxiliary feature maps & & & ✓ \\
 & tdBN (Zheng et al., 2021) & Threshold-dependent batch normalization & & & ✓ \\
 & BNTT (Kim and Panda, 2021) & Temporal batch normalization through time & & & ✓ \\
 & PSP-BN (Ikegawa et al., 2022) & Postsynaptic potential normalization & & & ✓ \\
 & TEBN (Duan et al., 2022) & Temporal effective batch normalization & & & ✓ \\
 & RecDis-SNN (Guo et al., 2022c) & Membrane potential distribution loss & & & ✓ \\
 & TET (Deng et al., 2022) & Temporal regularization loss & & & ✓ \\
 & Tandem learning (Wu et al., 2021a) & Tandem learning & & & ✓ \\
 & Progressive tandem learning (Wu et al., 2021b) & Progressive tandem learning & & & ✓ \\
 & Joint A-SNN (Guo et al., 2023) & Joint training of ANN and SNN & & & ✓ \\ \hline \hline
\end{tabular}
\({}^{\star}\) NL denotes Neuron Level, NSL denotes Network Structure Level, TTL denotes Training Technique Level.
\end{table}
Table 1: Overview of Direct Learning-Based Deep Spiking Neural Networks: Part I.
### Accuracy Improvement Methods
As aforementioned, the limited information capacity and the non-differentiability of the firing activity of the SNN cause accuracy loss across a wide range of tasks. Therefore, to mitigate this accuracy loss, a great number of methods devoted to improving the representative capabilities and relieving the training difficulties of SNNs have been proposed, achieving successful improvements in the past few years.
#### 3.1.1 Improving representative capabilities
Two problems reduce the representative ability of the SNN: the firing activity induces information loss, which has been proved in (Guo et al., 2022b), and binary spike maps suffer from limited information capacity, which has been proved in (Guo et al., 2022d). These problems can be mitigated on the neuron level, the network structure level, and the training technique level.
**On the neuron level.** A common way to boost the representative capability of the SNN is to make some hyper-parameters in the spiking neuron learnable. In LSNN (Bellec et al., 2018) and LTMD (Wang et al., 2022a), adaptive threshold spiking neurons were proposed to enhance the computing and learning capabilities of SNNs. Further, a novel bio-inspired dynamic energy-temporal threshold, which can be adjusted dynamically according to the input data, was introduced in BDETT (Ding et al., 2022). Some works adopted a learnable membrane time constant in the spiking neurons (Yin et al., 2020; Zimmer et al., 2019; Fang et al., 2021b; Luo et al., 2022; Yu et al., 2022c). Combining these two manners, Diet-SNN (Rathi and Roy, 2020) simultaneously adopted a learnable membrane leak and firing threshold.
There are also some works focusing on embedding more factors in the spiking neuron to improve its diversity. A multi-level firing (MLF) unit, which contains multiple LIF neurons with different threshold levels and thus can generate more quantization spikes, was proposed in DS-ResNet (Feng et al., 2022). A full-precision LIF to communicate between patches in a Multi-Layer Perceptron (MLP), including horizontal LIF and vertical LIF in different directions, was proposed in SNN-MLP (Li et al., 2022a); SNN-MLP used group LIF to extract better local features. In GLIF (Yao et al., 2022), to enlarge the representation space of spiking neurons, a unified gated leaky integrate-and-fire neuron was proposed to fuse different bio-features of different neuronal behaviors via embedded gating factors. In augmented spikes (Yu et al., 2022d), a special spiking neuron model was proposed to process augmented spikes, where additional information can be carried by spike strength and latency. This neuron model extends the computation with an additional dimension and thus could be of great significance for the representative ability of the SNN. In LIFB (Shen et al., 2023), a new spiking neuron model called Leaky Integrate and Fire or Burst was proposed. The neuron model exhibits three modes, including resting, regular spike, and burst spike, which significantly enriches its representative capability. Similar to LIFB, MT-SNN (Wang et al., 2023) proposed a multiple threshold approach that fires different spike modes to alleviate the quantization error, such that it can reach a high accuracy with fewer timesteps.
Different from these works, InfLoR-SNN (Guo et al., 2022b) proposed a membrane potential rectifier (MPR), which can adjust the membrane potential to a new value closer to the quantization spikes before the firing activity. MPR directly handles the quantization error problem in SNNs, thus improving the representative ability.
**On the network structure level.** To increase SNN diversity, some works advocate improving the SNN architecture. In SEW-ResNet (Fang et al., 2021a) and DS-ResNet (Feng et al., 2022), the widely used standard ResNet backbone is replaced by an activation-before-addition form-based ResNet. In this way, the blocks in the network fire positive integer spikes. The representation capability is undoubtedly increased; however, the event-driven and multiplication-addition transform advantages of SNNs are lost in the meantime. To solve this problem, MS-ResNet (Hu et al., 2021) adopted the pre-activation form-based ResNet, so that the spike-based convolution can be retained. The difference between these methods is shown in Figure 1. However, these SNN architectures are all manually designed. To design well-performing SNN models automatically, AutoSNN (Na et al., 2022) and SNASNet (Kim et al., 2022a) adopted the Neural Architecture Search (NAS) approach to find better SNN architectures, while TA-SNN (Yao et al., 2021), STSC-SNN (Yu et al., 2022a), and TCJA-SNN (Zhu et al., 2022b) leveraged learnable attention mechanisms to improve SNN performance.
Different from changing the network topology, Real Spike (Guo et al., 2022d) provides a training-inference decoupled structure. This method enhances the representation capacity of the SNN by learning real-valued spikes during training. In the inference phase, the rich representation capacity is transferred from the spiking neurons to the convolutions by a re-parameterization technique, and the real-valued spikes are transformed into binary spikes, thus maintaining the event-driven and multiplication-addition transform advantages of SNNs.
Besides, increasing the number of timesteps of an SNN will undoubtedly improve its accuracy too, as demonstrated in many works (Fang et al., 2021; Wu et al., 2018, 2019). To some extent, increasing the timestep is equivalent to increasing the neuron output bits along the temporal dimension, which increases the representation capability of the feature map (Feng et al., 2022). However, using more timesteps achieves better performance at the cost of increased inference time.
**On the training technique level.** Some works attempt to improve the representative capability of the SNN on the training technique level; these can be categorized as regularization and distillation. Regularization introduces an extra loss term to explicitly regularize the membrane potential or spike distribution, so as to retain more useful information in the network and indirectly help train it, as follows,
\[\mathcal{L}_{Total}=\mathcal{L}_{CE}+\lambda\mathcal{L}_{DL} \tag{4}\]
Figure 1: Different SNN ResNet architectures.
where \(\mathcal{L}_{CE}\) is the common cross-entropy loss, \(\mathcal{L}_{DL}\) is the distribution loss for learning the proper membrane potential or spike distribution, and \(\lambda\) is a coefficient balancing the two types of losses. IM-Loss (Guo et al., 2022a) argues that increasing the activation information entropy can reduce the quantization error, and proposed an information maximization loss function that maximizes the activation information entropy. In RecDis-SNN (Guo et al., 2022c), a loss for the membrane potential distribution that explicitly penalizes three undesired shifts was proposed. Though the work is not designed for reducing quantization error specifically, it still results in a bimodal membrane potential distribution, which has been proven to mitigate the quantization error problem.
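As a sketch of how Equation 4 is assembled in practice, the snippet below combines the cross-entropy term with a simple distribution-shaping penalty; the particular \(\mathcal{L}_{DL}\) used here is only an illustrative stand-in, not the exact loss of IM-Loss or RecDis-SNN.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, targets, membrane_potentials, lam=1e-3, v_th=1.0):
    """Eq. (4): L_Total = L_CE + lambda * L_DL.

    L_DL here is an illustrative stand-in: it pulls membrane potentials
    towards the two quantization levels {0, v_th} to reduce quantization
    error, in the spirit (but not the letter) of IM-Loss / RecDis-SNN.
    """
    ce = F.cross_entropy(logits, targets)
    u = membrane_potentials
    dl = torch.minimum(u ** 2, (u - v_th) ** 2).mean()
    return ce + lam * dl
```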
The distillation methodology aims to help train a small student model by transferring the knowledge of a larger trained teacher model, based on the consensus that the representative ability of a teacher model is better than that of the student model. Recently, some interesting works introducing the distillation method into the SNN domain were proposed. In (Kushawaha et al., 2021), a big teacher SNN model is used to guide the learning of a small SNN counterpart, while in (Yang et al., 2022; Takuya et al., 2021; Xu et al., 2023a,b), an ANN teacher is used to guide SNN student learning. Specifically, Local Tandem Learning (Yang et al., 2022) uses the intermediate feature representations of the ANN to supervise the learning of the SNN, whereas in sparse-KD (Xu et al., 2023a), the logit output of the ANN is adopted to guide the learning of the SNN. Furthermore, KDSNN (Xu et al., 2023b) and SNN distillation (Takuya et al., 2021) used both feature-based and logit-based information to distill the SNN.
#### 3.1.2 Relieving training difficulties
The non-differentiability of the firing function impedes direct training of deep SNNs. To handle this problem, using a surrogate gradient (SG) function for spiking neurons has recently received much attention. The SG method utilizes a differentiable surrogate function to replace the non-differentiable firing activity when calculating the gradient in back-propagation (Fang et al., 2021; Rathi and Roy, 2020; Wu et al., 2019; Neftci et al., 2019). Though the SG method can alleviate the non-differentiability problem, there is an obvious mismatch between the gradient of the firing function and the surrogate gradient, which easily leads to under-optimized SNNs with severe performance degradation. Intuitively, an elaborately designed surrogate gradient can help relieve the gradient mismatch in backward propagation; consequently, some works focus on designing better surrogate gradients. In addition, the gradient explosion/vanishing problem in SNNs is more severe than in ANNs, due to the adoption of tanh-like functions in most SG methods, and some works focus on handling this problem. Note that the methods in this section can also be classified as improvements on the neuron level, network structure level, and training technique level, as shown in Table 1. Nevertheless, to better introduce these works, we organize them as designing better surrogate gradients and relieving the gradient explosion/vanishing problem.
**Designing a better surrogate gradient (SG)**. Most earlier works adopt fixed SG-based methods to handle the non-differentiability problem. For example, the derivative of a truncated quadratic function, the derivative of a sigmoid, and a rectangular function were respectively adopted in (Bohte, 2011), (Zenke and Ganguli, 2018), and (Cheng et al., 2020). However, such a strategy limits the learning capacity of the network. To this end, a dynamic SG method was proposed in (Guo et al., 2022a; Chen et al., 2022), where the SG changes along with the training epochs as follows,
\[\varphi(x)=\frac{1}{2}\mathrm{tanh}(K(i)(x-V_{\mathrm{th}}))+\frac{1}{2} \tag{5}\]
where \(\varphi(x)\) is the backward approximation function for the firing activity and \(K(i)\) is a dynamic coefficient that changes along with the training epoch as follows,
\[K(i)=\frac{(10^{\frac{i}{N}}-10^{0})K_{\max}+(10^{1}-10^{\frac{i}{N}})K_{\min}}{9} \tag{6}\]
where \(K_{\min}\) and \(K_{\max}\) are the lower and upper bounds of \(K\), and \(i\) is the epoch index, running from \(0\) to \(N-1\). The function \(\varphi(x)\) and its gradient are shown in Figure 2. Driven by \(K(i)\), \(\varphi(x)\) gradually evolves towards the firing function, thus ensuring sufficient weight updates at the beginning and accurate gradients at the end of training. Nevertheless, the above SG methods are still designed manually. To find the optimal solution, the Differentiable Spike method, which can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation based on the finite difference technique, was proposed in (Li et al., 2021). Then, in (Leng et al., 2022), a differentiable SG search (DGS) method combined with the NAS technique was proposed to find optimized SGs for SNNs. Different from designing a better SG for the firing function, DSR (Meng et al., 2022) derived that the spiking dynamics of spiking neuron models can be represented as a sub-differentiable mapping and trained SNNs by the gradients of this mapping, thus avoiding the non-differentiability problem in SNN training. And NSNN (Ma et al., 2023) presented the noisy spiking neural network and the noise-driven learning rule (NDL) for the surrogate gradient.
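The schedule of Equations 5 and 6 can be sketched in a few lines; the bounds \(K_{\min}=1\) and \(K_{\max}=10\) below are illustrative.

```python
import numpy as np

def K(i, N, k_min=1.0, k_max=10.0):
    """Eq. (6): coefficient schedule over epochs i = 0, ..., N-1."""
    r = 10.0 ** (i / N)
    return ((r - 1.0) * k_max + (10.0 - r) * k_min) / 9.0

def phi(x, i, N, v_th=1.0):
    """Eq. (5): backward approximation of the firing function."""
    return 0.5 * np.tanh(K(i, N) * (x - v_th)) + 0.5

def phi_grad(x, i, N, v_th=1.0):
    """Derivative of phi, used as the surrogate gradient."""
    k = K(i, N)
    return 0.5 * k * (1.0 - np.tanh(k * (x - v_th)) ** 2)

# K(0, N) == k_min (smooth surrogate); K(i, N) grows towards k_max as
# i approaches N, so the surrogate sharpens towards the firing function.
```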
**Relieving the gradient explosion/vanishing problem**. The gradient explosion or vanishing problem is still severe in SG-only methods. There are three kinds of methods to solve this problem: using improved neurons or architectures, improved batch normalizations, and regularization. In (Zhang et al., 2022), a simple yet efficient rectified linear postsynaptic potential function (ReL-PSP) for spiking neurons, which helps handle the gradient explosion problem, was proposed. On the network architecture level, SEW-ResNet (Fang et al., 2021) showed that the standard spiking ResNet cannot achieve identity mapping and overcome the vanishing/explosion gradient problems, and advised using a ResNet with the activation-before-addition form. Recently, the pre-activation form-based ResNet was explored in MS-ResNet (Hu et al., 2021). This network topology can simultaneously handle the gradient explosion/vanishing problem and retain the advantages of the SNN.
Normalization approaches are widely used to train well-performing ANN models, and these approaches have also been introduced in the SNN field to handle the vanishing/explosion gradient problems.
Figure 2: The approximation function (left) under different values of the coefficient \(k\), and its corresponding gradient (right). The blue curves represent the firing function (left) and its true gradient (right).
For example, NeuNorm (Wu et al., 2019c) normalized the data along the channel dimension, like BN in ANNs, by constructing auxiliary feature maps. Threshold-dependent batch normalization (tdBN) (Zheng et al., 2021) considers SNN normalization from a temporal perspective and extends the scope of BN to the additional temporal dimension. Furthermore, some works (Kim and Panda, 2021; Ikegawa et al., 2022; Duan et al., 2022) argued that the distributions of different timesteps vary wildly, which has a negative impact when using shared parameters. Subsequently, temporal Batch Normalization Through Time (BNTT), postsynaptic potential normalization (PSP-BN), and temporal effective batch normalization (TEBN), which regulate the spike flows by utilizing separate sets of BN parameters at different timesteps, were proposed. Though adopting temporal BN parameters at different timesteps can yield better-performing SNN models, this kind of BN technique cannot fold the BN parameters into the weights and thus increases the computation and running time in the inference stage, which should also be noted.
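As a rough sketch of the shared-statistics variant, the following normalizes per channel over the pooled temporal, batch, and spatial dimensions and rescales by \(\alpha V_{\rm th}\), in the spirit of tdBN; BNTT, PSP-BN, and TEBN would instead keep separate BN parameters per timestep. The tensor layout and the factor \(\alpha\) here are our own assumptions.

```python
import torch

def td_batch_norm(x, gamma, beta, alpha=1.0, v_th=1.0, eps=1e-5):
    """Shared-statistics temporal BN in the spirit of tdBN.

    x: pre-activations of shape (T, B, C, H, W). Per-channel statistics
    are pooled over the temporal, batch, and spatial dimensions, and the
    normalized value is rescaled by alpha * v_th.
    """
    dims = (0, 1, 3, 4)                          # T, B, H, W
    mean = x.mean(dim=dims, keepdim=True)
    var = x.var(dim=dims, unbiased=False, keepdim=True)
    x_hat = alpha * v_th * (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, 1, -1, 1, 1) * x_hat + beta.view(1, 1, -1, 1, 1)
```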
Using a regularization loss can also mitigate the gradient explosion/vanishing problem. In RecDis-SNN (Guo et al., 2022c), a new perspective that further attributes the gradient explosion/vanishing difficulty of SNNs to three undesired shifts of the membrane potential distribution was presented. To avoid these undesired shifts, a membrane potential regularization loss was proposed in RecDis-SNN; this loss introduces no additional operations in the SNN inference phase. In TET (Deng et al., 2022), an extra temporal regularization loss was proposed to compensate for the loss of momentum in gradient descent with SG methods. With this loss, TET can converge to flatter minima with better generalizability.
Since ANNs are fully differentiable and can be trained with gradient descent, there are also some works utilizing an ANN to guide the SNN's optimization (Wu et al., 2021b,a; Guo et al., 2023). In (Wu et al., 2021a), a tandem learning framework was proposed, which consists of an SNN and an ANN sharing the same weights. In this framework, the spike count, as the discrete neural representation in the SNN, is presented to the coupled ANN activation function in the forward phase, and in the backward phase, error back-propagation is performed on the ANN to update the shared weights for both the SNN and the ANN. Furthermore, in (Wu et al., 2021b), a progressive tandem learning framework was proposed, which introduces a layer-wise learning method to fine-tune the shared network weights. Considering the difference between the ANN and the SNN, Joint A-SNN (Guo et al., 2023) developed a partial weight-sharing regime for the joint training of a weight-shared ANN and SNN, applying Singular Value Decomposition (SVD) to the weight parameters and keeping the same eigenvectors while separating the eigenvalues for the ANN and SNN.
### Efficiency Improvement Methods
An important reason why SNNs have received extensive attention recently is that they are seen as more energy efficient than ANNs, due to their event-driven computation mechanism and the replacement of energy-consuming weight multiplications with additions. Further exploring the efficiency advantages of SNNs, so that they can be applied to energy-constrained devices, is also a hot topic in the SNN field. These methods can be mainly categorized into network compression techniques and sparse SNNs.
#### 3.2.1 Network compression techniques
Network compression techniques have been widely used in ANNs. There are also some works applying these techniques in SNNs. In the literature, approaches for compressing deep SNNs can be classified into three categories: parameter pruning, NAS, and knowledge distillation.
**Parameter pruning**. Parameter pruning mainly focuses on eliminating the redundant parameters of a model by removing the uncritical ones. SNNs, unlike their non-spiking counterparts, include a temporal dimension. Taking temporal information into account, a spatial and temporal pruning of SNNs is proposed in [15]. Generally speaking, pruning causes accuracy degradation to some extent. To avoid this, SD-SNN [16] and Grad R [17] proposed pruning-regeneration methods for removing the redundancy in SNNs, inspired by the brain development plasticity mechanism. With synaptic regeneration, these works can effectively prevent and repair over-pruning. Recently, an interesting temporal pruning, which is specific to SNNs, was proposed in [15]. This method starts with an SNN of \(T\) timesteps and reduces \(T\) at every training iteration, resulting in a continuum of accurate and efficient SNNs, from \(T\) timesteps down to \(1\) timestep.
**Neural Architecture Search (NAS)**. Obviously, a carefully designed compact network can reduce the storage and computation complexity of SNNs. However, due to the limitations of human prior knowledge, it is difficult for people to step outside their original thinking paradigm and design an optimal compact model. Therefore, some works use NAS techniques to let an algorithm automatically design a compact neural architecture [14, 16]. Furthermore, in [16], the lottery ticket hypothesis was investigated, showing that dense SNN networks contain smaller SNN subnetworks, _i.e._, winning tickets, which can achieve performance comparable to the dense ones, and the smaller compact subnetwork is picked as the network to be used.
\begin{table}
\begin{tabular}{c l l c c c} \hline \hline
\multirow{2}{*}{**Type**} & \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Key Technology**} & \multicolumn{3}{c}{**On the Level\({}^{\star}\)**} \\ \cline{4-6}
 & & & NL & NSL & TTL \\ \hline
\multirow{12}{*}{\begin{tabular}{c} Network \\ compression \\ techniques \end{tabular}} & Spatio-Temporal Pruning [15] & Spatio-temporal pruning & & ✓ & \\
 & SD-SNN [16] & Pruning-regeneration method & & ✓ & \\
 & Grad R [17] & Pruning-regeneration method & & ✓ & \\
 & Temporal pruning [15] & Temporal pruning & & & ✓ \\
 & AutoSNN [14] & Neural architecture search & & ✓ & \\
 & SNASNet [16] & Neural architecture search & & ✓ & \\
 & Lottery Ticket Hypothesis [16] & Lottery ticket hypothesis & & & ✓ \\
 & Distilling spikes [14] & Knowledge distillation & & & ✓ \\
 & Local Tandem Learning [15] & Tandem learning & & & ✓ \\
 & sparse-KD [14] & Knowledge distillation & & & ✓ \\
 & KDSNN [14] & Knowledge distillation & & & ✓ \\
 & SNN distillation [14] & Knowledge distillation & & & ✓ \\ \hline
\multirow{5}{*}{Sparse SNNs} & ASNN [25] & Group of adaptive spiking neurons & ✓ & & \\
 & Correlation-based regularization [12] & Correlation-based regularizer & & & ✓ \\
 & SuperSpike [26] & Heterosynaptic regularization term & & & ✓ \\
 & RecDis-SNN [14] & Membrane potential distribution loss & & & ✓ \\
 & Low-activity SNN [17] & Regularization term & & & ✓ \\ \hline
\multirow{10}{*}{\begin{tabular}{c} Sequential \\ learning \end{tabular}} & Sequence approximation [27] & Dual-search-space optimization & & ✓ & \\
 & Sequential learning [18] & Improved recurrence dynamics & ✓ & & \\
 & SNN_HAR [10] & Spatio-temporal extraction & & ✓ & \\
 & Robust SNN [19] & Temporal penalty settings & & & ✓ \\
 & Tandem learning-based SNN model [16] & Tandem learning & & & ✓ \\
 & SG-based SNN model [16] & Surrogate gradient method & & & ✓ \\
 & Combination-based SNN [18] & Combination of many techniques & ✓ & & ✓ \\
 & Low-activity SNN [17] & Regularization term & & & ✓ \\
 & SNNCNN [14] & Combination of CNNs and SNNs & & ✓ & ✓ \\
 & RSNNs [16] & Offline supervised learning rule & & & ✓ \\ \hline
\multirow{12}{*}{\begin{tabular}{c} Cooperating with \\ neuromorphic \\ cameras \end{tabular}} & Adaptive-SpikeNet [14] & Learning neuronal dynamics & ✓ & & \\
 & StereoSpike [16] & Modified U-Net-like architecture & & ✓ & \\
 & SuperFast [17] & Event-enhanced frame interpolation & & ✓ & \\
 & E-SAI [17] & Synthetic aperture imaging method & & ✓ & \\
 & EVSNN [16] & Potential-assisted SNN & & ✓ & \\
 & Spiking-Fer [1] & Deep convolutional SNN & & ✓ & ✓ \\
 & Automotive Detection [18] & PLIF \& SG \& event encoding & ✓ & & ✓ \\
 & STNet [16] & Spiking transformer network & & ✓ & \\
 & LaneSNNs [16] & Offline supervised learning rule & & & ✓ \\
 & HALSIE [14] & Hybrid approach & & ✓ & \\
 & SpikeMS [15] & Spatio-temporal loss & & & ✓ \\
 & Event-based Pose Tracking [17] & Spiking Spatiotemporal Transformer & & ✓ & \\ \hline \hline
\end{tabular}
\({}^{\star}\) NL denotes Neuron Level, NSL denotes Network Structure Level, TTL denotes Training Technique Level.
\end{table}
Table 2: Overview of Direct Learning-Based Deep Spiking Neural Networks: Part II.
**Knowledge distillation**. Knowledge distillation methods aim at obtaining a compact model from a large model. In [21], a larger teacher SNN model is used to distill a smaller SNN model, and in [22, 23, 24], an ANN teacher with the same architecture is used to distill an SNN student.
#### 3.2.2 Sparse SNNs
Different from ANNs, SNNs transmit information by spike events, and computation occurs only when the neuron receives spike events. Benefiting from this event-driven computation mechanism, SNNs can greatly save energy and run efficiently when implemented on neuromorphic hardware. Hence, limiting the firing rate of spiking neurons to achieve a sparse SNN is also a widely used way to improve the efficiency of the SNN. These methods can limit the firing rate of the SNN on both the neuron level and the training technique level.
**On the neuron level.** In ASNN [25], an adaptive SNN based on a group of adaptive spiking neurons was proposed. These adaptive spiking neurons can efficiently optimize their firing rate using asynchronous pulsed Sigma-Delta coding.
**On the training technique level.** In [12], a correlation-based regularizer, which is incorporated into the loss function, was proposed to minimize the redundancies between the features at each layer for structural sparsity. Obviously, this method is beneficial for energy efficiency. SuperSpike [26] added a heterosynaptic regularization term to the learning rule of the hidden layer weights to avoid pathologically high firing rates. RecDis-SNN [14] incorporated a membrane potential loss into the SNN to regulate the membrane potential distribution to an appropriate range and avoid high firing rates. In [17], to enforce sparse spiking activity, an \(l_{1}\) or \(l_{2}\) regularization on the total number of spikes emitted by each layer was applied.
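A minimal sketch of such an activity penalty is given below; whether the penalty acts on raw spike counts or average firing rates, and how it is weighted against the task loss, varies across the cited works, so the choices here are illustrative.

```python
import torch

def spike_activity_penalty(layer_spikes, p=2):
    """l1/l2-style penalty on the spiking activity of each layer.

    layer_spikes: list of binary spike tensors, e.g. each of shape (T, B, ...).
    """
    penalty = 0.0
    for s in layer_spikes:
        rate = s.float().mean()          # average firing rate of the layer
        penalty = penalty + (rate if p == 1 else rate ** 2)
    return penalty

# Typical use: loss = task_loss + mu * spike_activity_penalty(all_spikes)
```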
### Temporal Dynamics Utilization Methods
Different from ANNs, SNNs enjoy rich temporal dynamics, which makes them more suitable for particular temporal tasks and for vision sensors with high temporal resolution, _e.g._, neuromorphic cameras, which capture temporally rich information asynchronously, inspired by the way eyes process information. Given these characteristics, a great number of methods falling into sequential learning and cooperating with neuromorphic cameras have been proposed for SNNs.
#### 3.3.1 Sequential learning
As aforementioned in Section 2, SNNs maintain a dynamic state in the neuron memory. In [18], the usefulness of the inherent recurrence dynamics of the SNN for sequential learning was demonstrated, showing that it can retain important information. Thus, SNNs show better performance on sequential learning compared to ANNs of similar scale in many works. In [27], a theoretical basis for function approximation was developed, showing that any spike-sequence-to-spike-sequence mapping function can be approximated by an SNN with one neuron per layer using skip-layer connections. Based on this result, a suitable SNN model for the classification of spatio-temporal data was designed. In [10], SNNs were leveraged to study the Human Activity Recognition (HAR) task. Since SNNs allow spatio-temporal extraction of features and enjoy low-power computation with binary spikes, they can reduce energy consumption by up to 94% while achieving better accuracy compared with homogeneous ANN counterparts. In [19], an interesting phenomenon was found: SNNs trained with appropriate temporal penalty settings are more robust against adversarial images than ANNs.
As speech is a common sequential signal, many preliminary works on speech recognition systems based on spiking neural networks have been explored (Tavanaei and Maida, 2017, 2018; Hao et al., 2020; Wu et al., 2020; Zhang et al., 2019; Wu et al., 2018, 2019b). In (Wu et al., 2020), a deep spiking neural network was trained by the tandem learning method to handle the large-vocabulary automatic speech recognition task. The experimental results demonstrated that the trained deep SNN could compete with its ANN counterpart while requiring as few as 0.68 times the total synaptic operations of its ANN counterpart. There are also some works training deep SNNs directly with SG methods for speech tasks. In (Ponghiran and Roy, 2022), inspired by the LSTM, a custom version of SNNs was defined that combines a forget gate with multi-bit outputs instead of binary spikes, yielding better accuracy than LSTMs but with 2\(\times\) fewer parameters. In (Bittar and Garner, 2022), spiking neural networks trained like recurrent neural networks, using only the standard surrogate gradient method, achieved promising results on speech recognition tasks, which shows the advantage of SNNs in handling this kind of task. In (Bittar and Garner, 2022), a combination of adaptation, recurrence, and surrogate gradient techniques for spiking neural networks was proposed, yielding light spiking architectures that are not only able to compete with ANN solutions but also retain a high degree of compatibility with them. In (Pellegrini et al., 2021), dilated convolution spiking layers and a new regularization term penalizing the averaged number of spikes were used to train low-activity supervised convolutional spiking neural networks. The results showed that the SNN models can reach an error rate very close to standard DNNs while being very energy efficient for speech tasks. In (Sadovsky et al., 2023), a new technique for speech recognition that combines convolutional neural networks with spiking neural networks was presented to create an SNNCNN model. The results showed that the combination of CNNs and SNNs outperforms both MLPs and ANNs, providing a new route to further improvements in the field. In (Yin et al., 2021), an activity-regularizing surrogate gradient method combined with recurrent networks of tunable and adaptive spiking neurons was proposed for SNNs, and the method performed well on the speech recognition task.
#### 3.3.2 Cooperating with neuromorphic cameras
Neuromorphic cameras, also called event-based cameras, have recently shown great potential for high-speed motion estimation owing to their ability to capture temporally rich information asynchronously. SNNs, with their spatio-temporal and event-driven processing mechanisms, are very suitable for handling such asynchronous data. Many excellent works combine SNNs and neuromorphic cameras to solve real-world large-scale problems. In (Hagenaars et al., 2021; Kosta and Roy, 2022), event-based optical flow estimation methods were presented. In StereoSpike (Rancon et al., 2021), a depth estimation method was provided. SuperFast (Gao et al., 2022) leveraged an SNN and an event camera to present an event-enhanced high-speed video frame interpolation method; SuperFast can generate very high frame rate (up to 5000 FPS) video from low frame rate (25 FPS) input video. Furthermore, based on a hybrid network composed of SNNs and ANNs, E-SAI (Yu et al., 2022) provided a novel synthetic aperture imaging method, which can see through dense occlusions and extreme lighting conditions using event data. And in EVSNN (Zhu et al., 2022), a novel event-based video reconstruction framework was proposed. To fully use the information from different modalities, HALSIE (Biswas et al., 2022) proposed a hybrid approach for semantic segmentation comprising dual encoders: an SNN branch to provide rich temporal cues from asynchronous events, and an ANN branch for extracting spatial information from regular frame data, simultaneously leveraging the image and event modalities.
There are also some works applying this technique to autonomous driving. In (Cordone et al., 2022), fast and efficient automotive object detection with spiking neural networks on automotive event data was proposed. In (Zhang et al., 2022), a spiking transformer network, STNet, which can dynamically extract and fuse information from both the temporal and spatial domains, was proposed for single object tracking using event data. Besides, since event cameras enjoy extremely low latency and high dynamic range, they can also be used to handle harsh environments, _i.e._, extreme lighting conditions or dense occlusions. LaneSNNs (Viale et al., 2022) presented an SNN-based approach for detecting the lanes marked on streets using event-based camera input. The experimental results show a very low power consumption of about 1 W, which can significantly increase the lifetime and autonomy of battery-driven systems.
Based on event-based cameras and SNNs, some works have attempted to advance behavioral recognition research. For example, Spiking-Fer (Barchid et al., 2023) proposed a new end-to-end deep convolutional SNN method to predict facial expressions. SpikeMS (Parameshwara et al., 2021) proposed a deep encoder-decoder SNN architecture and a novel spatio-temporal loss for motion segmentation using the event-based DVS camera as input. In (Zou et al., 2023), a dedicated end-to-end sparse deep SNN consisting of the Spike-Element-Wise (SEW) ResNet and a novel Spiking Spatiotemporal Transformer was proposed for event-based pose tracking. This method achieves a significant 80% reduction in FLOPs, demonstrating the superior advantage of SNNs in this kind of task.
## 4 Future Trends and Conclusions
Spiking neural networks, born from mimicking the information processing of brain neurons, enjoy many specific characteristics and show great potential in many tasks, but meanwhile suffer from many weaknesses. As a consequence, a number of direct learning-based deep SNN solutions for handling these disadvantages or utilizing the advantages of SNNs have been proposed recently. As summarized in this survey, these methods can be roughly categorized into i) accuracy improvement methods, ii) efficiency improvement methods, and iii) temporal dynamics utilization methods. Though successful milestones and progress have been achieved through these works, there are still many challenges in the field.
On the accuracy improvement aspect, the SNN still faces serious performance loss, especially for large networks and datasets. The main reasons might include:
* _Lack of measurement of information capacity:_ it is still unclear how to precisely calculate the information capacity of the spike maps, and what kind of neuron types or network topology is suitable for preserving information as it passes through the network, even after the firing function. We believe SNN neurons and architectures should not be copied wholesale from brains or ANNs. Specific designs regarding the characteristics of SNNs for preserving information should be explored. For instance, to increase the spiking neuron's representative ability, the binary spike {0, 1}, which is used to mimic activation or silence in the brain, can be replaced by a ternary spike {-1, 0, 1}; the information capacity of the spiking neuron is thus boosted, while the event-driven and multiplication-free operation advantages of the binary spike are still preserved. And, as aforementioned, the widely used standard ResNet backbone of ANNs is not suitable for SNNs, whereas the PreAct ResNet backbone performs better, since the membrane potential in the neurons before the firing function is added to the next block, so that complete information is transmitted simultaneously; for the standard ResNet backbone, only quantized information is transmitted. To further preserve information, adding the shortcut layer by layer in the PreAct ResNet backbone performed better in our experiments, which is much different from the architectures in ANNs and is a promising exploration direction.
* _Inherent optimization difficulties:_ it is still difficult to optimize the SNN in a discrete space. Even though many novel gradient estimators or approximation functions have been proposed, some huge obstacles remain in the field, such as the gradient explosion/vanishing problem: with increasing timesteps, this problem, along with the gradient errors, becomes more severe and makes the network hard to converge. Thus, how to completely eliminate the impact of this problem and directly train an SNN with large timesteps is still under exploration. We believe more theoretical studies and practical tricks will emerge to answer this question in the future.
It is also worth noting that accuracy is not the only criterion for SNNs; versatility is another key criterion, which measures whether a method can be used in practice. Some methods proposed in prior works are very versatile, such as the learnable spike factors proposed in Real Spike (Guo et al., 2022d), the membrane potential rectifier proposed in InfLoR-SNN (Guo et al., 2022b), and the temporal regularization loss proposed in TET (Deng et al., 2022), _etc_. These methods enjoy simple implementation and low coupling, and have thus become common, widely used practices to improve the accuracy of SNNs. Some methods improve the accuracy of SNNs by designing complex spiking neurons or specific architectures. Such improvements usually show a stronger ability to increase performance. However, as we have pointed out before, some of them suffer from complicated computation and even lose the energy-efficiency advantage, which violates the original intention of SNNs. Therefore, purely pursuing high accuracy without considering versatility has limited significance in practice. The balance between accuracy and versatility is an essential criterion that should be considered in subsequent SNN research.
On the efficiency improvement aspect, some prior works ignore an important fact: the event-driven paradigm and friendliness to neuromorphic hardware make SNNs much different from ANNs. When implemented on neuromorphic hardware, computation in the SNN occurs only if a spiking neuron receives spike events. Hence, the direct way to improve the efficiency of the SNN is to reduce the number of fired spikes, not to reduce the network size. Methods intended to improve the efficiency of SNNs by pruning inactive neurons, as is done in ANNs, may not make sense in this situation. We even think that, provided the SNN's size does not exceed the capacity of the neuromorphic hardware, enlarging the network while limiting the number of fired spikes at the same time may be a potential route to improving accuracy and efficiency simultaneously. In this way, different weights of the SNN may respond to different data, which is equivalent to improving the representative capability of the SNN. However, a more systematic study needs to be done in the future.
On the temporal dynamics utilization aspect, a great number of interesting methods have been proposed and have shown wide success. We think this is a very promising direction in the SNN field. Some explainable machine learning-related studies indicate that different network types follow different patterns and enjoy different advantages. In this sense, it might be more meaningful to dive deeply into the temporal dynamics of the SNN, rather than to pursue higher accuracy as with ANNs. Meanwhile, considering their respective advantages, how to use ANNs and SNNs together needs to be studied further.
Last but not least, more specialized applications for SNNs should still be explored. SNNs have been used widely in many fields, including neuromorphic cameras, the HAR task, speech recognition, and autonomous driving, as aforementioned, as well as object detection (Kim et al., 2020; Zhou et al., 2020), object tracking (Luo et al., 2020), image segmentation (Patel et al., 2021), robotics (Dupeyroux et al., 2021; Stagsted et al., 2020), _etc_., where some remarkable studies have recently applied SNNs. Nevertheless, compared to ANNs, their real-world applications are still very limited. Considering the unique efficiency advantage of SNNs, we think there is a great opportunity for applying SNNs in Green Artificial Intelligence (GAI), which has become an important subfield of Artificial Intelligence with notable practical value. We believe many studies focusing on using SNNs for GAI will emerge soon.
## Conflict of Interest Statement
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Author Contributions
Yufei Guo and Xuhui Huang wrote the paper, with Zhe Ma being an active contributor toward editing and revising the paper as well as supervising the project. All authors contributed to the article and approved the submitted version.
## Funding
This work is supported by grants from the National Natural Science Foundation of China under contracts No.12202412 and No.12202413.
|
2310.20312 | Multi-state models for double transitions associated with parasitism in
biological control | Competition between parasitoids can reduce the success of pest control in
biological programs using two species as bio-control agents or when multiple
species exploit the same host crop. Parasitoid foraging behavior and the
ability to identify already parasitized hosts affect the efficacy of parasitoid
species as bio-agents to regulate pest insects. We evaluated the behavioural
changes of parasitoids according to the quality of hosts ({\it i.e.},
previously parasitised or not), and the characterisation of these transitions
over time via multi-state models. We evaluated the effects of previous
parasitism of the brown stinkbug {\it Euschistus heros} eggs on the parasitism
rate of the species {\it Trissolcus basalis} and {\it Telenomus podisi}. We
successively modelled the choice of eggs (with three possibilities: non
parasitised eggs, eggs previously parasitised by {\it T. podisi}, and eggs
previously parasitised by {\it T. basalis}) and the conditional behaviour given
the choice (walking, drumming, ovipositing or marking the chosen egg). We
consider multi-state models in two successive stages to calculate double
transition probabilities, and the statistical methodology is based on the
maximum likelihood procedure. Using the Cox model and assuming a stationary
process, we verified that the treatment effect was significant for the choice,
indicating that the two parasitoid species have different choice patterns. For
the second stage, i.e. behaviour given the choice, the results also showed the
influence of the species on the conditional behaviour, especially for
previously parasitised eggs. Specifically, {\it T.podisi} avoids intraspecific
competition and makes decisions faster than {\it T. basalis}. In this work, we
emphasise the methodological contribution with multi-state models, especially
in the context of double transitions. | Idemauro Antonio Rodrigues de Lara, Gabriel Rodrigues Palma, Victor José Bon, Carolina Reigada, Rafael de Andrade Moral | 2023-10-31T09:37:58Z | http://arxiv.org/abs/2310.20312v1 | # Multi-state models for double transitions associated with parasitism in biological control
###### Abstract
Competition between parasitoids can reduce the success of pest control in biological programs using two species as bio-control agents or when multiple species exploit the same host crop. Parasitoid foraging behavior and the ability to identify already parasitized hosts affect the efficacy of parasitoid species as bio-agents to regulate pest insects. We evaluated the behavioural changes of parasitoids according to the quality of hosts (_i.e._, previously parasitised or not), and the characterisation of these transitions over time via multi-state models. We evaluated the effects of previous parasitism of the brown stinkbug _Euschistus heros_ eggs on the parasitism rate of the species _Trissolcus basalis_ and _Telenomus podisi_. We successively modelled the choice of eggs (with three possibilities: non parasitised eggs, eggs previously parasitised by _T. podisi_, and eggs previously parasitised by _T. basalis_) and the conditional behaviour given the choice (walking, drumming, ovipositing or marking the chosen egg). We consider multi-state models in two successive stages to calculate double transition probabilities, and the statistical methodology is based on the maximum likelihood procedure. Using the Cox model and assuming a stationary process, we verified that the treatment effect was significant for the choice, indicating that the two parasitoid species have different choice patterns. For the second stage, i.e. behaviour given the choice, the results also showed the influence of the species on the conditional behaviour, especially for previously parasitised eggs. Specifically, _T. podisi_ avoids intraspecific competition and makes decisions faster than _T. basalis_. In this work, we emphasise the methodological contribution with multi-state models, especially in the context of double transitions.
## Original Article

Keywords: Stochastic processes; entomological data; foraging behaviour; stationarity; likelihood procedure; transition intensities.
## 1 Introduction
Entomology has an important role in agricultural sciences, specifically because it includes studying and understanding insect-insect and plant-insect interactions, which can be used to improve agricultural production [17, 27]. Among the many types of studies conducted by entomologists, here we focus on studies related to biological
control. This involves utilising living organisms to reduce the population of a target pest species. One example is the host-parasitoid system, where a parasitoid species is used to control the population of a pest species that serves as a host (e.g. _Dichelops melacanthus_, _Euschistus heros_, and _Podisus nigrispinus_[25]).
Several insects that play an important role in the parasitism of insect pests have been reported in the literature, including the parasitoids _Tamarixia radiata_, _Telenomus podisi_, _Trissolcus basalis_, and several species of the genus _Trichogramma_[6]. By studying their controlling capabilities in laboratory and field conditions, we may enhance the efficacy of biological control and consequently reduce the economic damage caused by insect pests. A direct contribution of experiments related to the biology of parasitoids is the estimation of parasitism rates. Several factors can affect them, including hyperparasitism, where a parasitoid of a different species parasitises an egg that has already been parasitised, constituting a competitive interaction [32]. However, hyperparasitism can also happen unintentionally, when the parasitoid does not detect eggs already parasitised by its own species.
It is common for an entomological study to be longitudinal, both in field and laboratory-based settings. Moreover, recorded responses are often categorical. In such cases, the responses may represent behaviours or choices the insects make in different scenarios. An example was presented by [22] to understand the movement patterns of female adults of _Diaphorina citri_, a pest of citrus plantations, with the preference for different potted plant positions as the response variable.
The parasitoid reproductive behaviour and/or host quality discrimination by competing parasitoid species, conditional on the host being previously parasitised or not, can be evaluated by recording insect behaviour data (i.e., drumming, ovipositing and marking host eggs) over time. In such cases, the responses may represent behaviours or choices the insects make under different scenarios of host quality.
It is known that the analysis of longitudinal categorical data can be done using Generalized Linear Models ([1], [29]), such as marginal, mixed effects and transition models [15]. Each of these models has its particularities, which depend specifically on the design and objectives of the study. In Entomology, for example, the interaction between species can be measured from changes in behaviour over time under certain experimental conditions. Marginal and mixed effects models cannot describe these changes over time. In contrast, transition models are very useful to describe the occurrences from one state to the next and also to assess the effect of the experimental design conditions ([33], [11]).
Transition models are based on stochastic processes, and a classical reference is [30], which distinguishes classes of discrete-time and continuous-time models. When the process is in continuous time, they are also known as state-space models ([23], [13]). In the discrete case, we limit ourselves to evaluating the transition probabilities, assuming equally spaced time occasions. In the continuous case, there are options for inference with respect to time, defined by infinitesimal parameters or intensity rates. Thus, not only are the probabilities of state changes described, but also the intensity with which these changes occur can be modelled, which is more informative.
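As a minimal illustration of the continuous-time case, the snippet below computes the transition probability matrix \(P(t)=\exp(Qt)\) from an intensity matrix \(Q\) for a time-homogeneous three-state process; the entries of \(Q\) are hypothetical, and in practice the intensities are estimated from data by maximum likelihood.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensity matrix Q for a three-state process: off-diagonal
# entries are the transition intensities q_{rs} and each row sums to zero.
Q = np.array([[-0.6,  0.4,  0.2],
              [ 0.3, -0.5,  0.2],
              [ 0.1,  0.3, -0.4]])

def transition_probabilities(Q, t):
    """P(t) = expm(Q t) for a time-homogeneous (stationary) process:
    entry (r, s) is the probability of being in state s at time t,
    given state r at time 0."""
    return expm(Q * t)

P = transition_probabilities(Q, 1.5)   # each row of P sums to 1
```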
The focus of this work is on a continuous-time transition model (state-space model) motivated by a biological problem arising from an experiment involving the parasitoids _Telenomus podisi_ and _Trissolcus basalis_, which are useful for pest control in soybean [3].
The egg parasitoids _Telenomus podisi_ and _Trissolcus basalis_ (Hymenoptera: Scelionidae) are important natural enemies used in biological control programs for different species of soybean pest bugs [10]. These parasitoid species are termed generalists because they attack different host species, including the eggs of the brown stink bug _Euschistus heros_ (Fabricius, 1798) (Hemiptera: Pentatomidae) [8].
The use of _T. podisi_ and _T. basalis_ as biological control agents for soybean bugs has occurred through mass releases of these parasitoids in Brazil, and in some regions both species can also be found in crops [5]. In both situations, competitive interactions between species can be frequent during foraging for hosts, with consequences for pest control and for the maintenance of these natural enemies in the soybean agroecosystem after parasitoid releases [9].
Competition for hosts can be reduced when parasitoid females are able to discriminate between already parasitised and non-parasitised hosts. This ability to discriminate host conditions is more frequent at the intraspecific level [18]. The identification of previously parasitised hosts is carried out using chemical and physical cues (semiochemicals and infochemicals) [18] to avoid the occurrence of multiparasitism or superparasitism, which directly affect the quality and quantity of the offspring's resources and can lead to the death of the developing parasitoid larvae [19].
Knowledge of the effects arising from competition between parasitoids must be taken into account when defining strategies for the use of these agents in biological pest control programs [16]. Understanding the biological and behavioural aspects, as well as the biodiversity and distribution of parasitoid species in a given location, is therefore relevant to developing successful biological control programs [7]. In this context, it is also necessary to use appropriate statistical methods, which allow the study of species behaviours over time while simultaneously evaluating intraspecific competition. In our motivational study, the parasitoids could make two successive choices: first, the choice of an egg type (non-parasitised, parasitised by their own species, or parasitised by the opposing species), and then, after choosing an egg, a behaviour (marking, ovipositing or drumming on the egg); the insects therefore presented double transitions over time. The main goal is thus to present an extension of multi-state models accommodating successive transitions of the parasitoids, as a methodological contribution to understanding the pattern of preferences and behaviours of these species.
The remainder of this article is structured as follows: in Section 2, we present our motivational case study; fundamentals of stochastic processes and multi-state models are presented in Section 3; results are presented and discussed in Section 4; a biological discussion of the results is given in Section 5; and finally our concluding remarks are made in Section 6.
## 2 Case study
The rearing of the stinkbug and parasitoid species in the laboratory started with insects provided by the Insect Biology Laboratory, Department of Entomology and Acarology, at the University of Sao Paulo - USP/ESALQ, in 2021. The rearing of _E. heros_ was carried out according to a methodology adapted from [24]. Interactions between parasitoid females of the species _T. podisi_ or _T. basalis_ and eggs of the stinkbug _E. heros_ took place in experimental arenas, represented by Petri dishes (\(15\times 2\) cm). A total of 12 eggs were made available to each female parasitoid, divided into 3 groups: 4 eggs previously parasitised by females of _T. podisi_, 4 eggs parasitised by _T. basalis_, and 4 unparasitised, as illustrated in Figure 1. For the observations, the following behaviours were defined and quantified: a) walking; b) drumming; c) ovipositing; and d) marking (Figure 2). Each female was observed for 35 minutes, and ten replicates were performed for each parasitoid species.
## 3 Methods
### Brief review: Stochastic processes and State-space models
The methodological procedures are centred on stochastic processes (Markov chains) and maximum likelihood estimation. As a basis for the central ideas of this work, we present below a review of concepts related to continuous-time stochastic processes. For more details, see [21].
**Definition 3.1**.: A _stochastic process_ is a random phenomenon governed by probabilistic laws that can occur in time or space. It can be denoted by \(\{Y_{t},t\in\tau\}\), where \(Y_{t}\) is the random variable associated to the phenomenon indexed by \(t\), which takes values in \(\tau\), the time (or space) when (where) the process was observed. Depending on the nature of the set \(\tau\), the process can be discrete or continuous. Here, we consider that \(Y_{t}\in S\), \(S=\{1,2,\ldots,k\}\) is the state space, a set that represents nominal categories (discrete response), and \(\tau=[0,t)\) is a time interval, so the process is continuous in time.

Figure 1: Experimental scenarios for quantifying the success of parasitism in the presence of eggs previously parasitised by _Trissolcus basalis_, by _Telenomus podisi_, and not parasitised, in the absence of competition (Adapted from [2])

Figure 2: Behaviours exhibited by parasitoids during parasitism and photos of _Telenomus podisi_: a) walking; b) drumming/putting the ovipositor out; c) ovipositing; and d) marking the egg (chemical signalling) (Adapted from [2])
According to Definition 3.1, if \(y_{0}\) is the initial state of an individual observed at time \(t=0\), it can move to any state in \(S\), i.e.:
\[Y(t):\left\{\begin{array}{ll}y_{0},&0\leq t<t_{1}\\ y_{1},&t_{1}\leq t<t_{2}\\ y_{2},&t_{2}\leq t<t_{3}\\ \vdots&\vdots\\ y_{s},&t_{s}\leq t<t_{s+1}\end{array}\right.\]
where \(y_{0},y_{1},\ldots,y_{s}\in S\); consecutive observed states need not differ, i.e. it is not necessary that \(y_{j}\neq y_{j+1}\). We assume that, in any finite interval, the process has a finite number of jumps for each state \(a\in S\). It can happen that an individual enters a state and does not leave it; if this is valid for all individuals, we call it an absorbing state.
**Definition 3.2**.: Consider a stochastic process as defined in 3.1, that is, a continuous-time process with discrete \(S\), typically also called a "jump process". A stochastic process governed by the following law of conditional probability:
\[\pi_{ab}(s,t)=P(Y_{(t)}=b\mid Y_{(s)}=a)=P(Y_{(t)}=b\mid Y_{(t_{0})}=y_{0},\ldots,Y_{(t_{n})}=y_{n},Y_{(s)}=a), \tag{1}\]
\(\forall s<t\in\tau\) and \(a,b\in S\), is defined as a Markovian process. The _Markov property_ given by Equation (1) defines that the probability of a future event, given all history, depends only on the last state. Moreover, [12] clarified that these probabilities can be homogeneous over time, and for the continuous case we have
\[\pi_{ab}(t)=P(Y_{(t+s)}=b\mid Y_{(s)}=a),\;\;\forall\;\;s<t\in\tau\;\;\mbox{ and}\;\;a,b\in S,\]
which can be represented, in matricial notation, by \(\mathbf{P}(s,t)=\mathbf{P}(t)\).
It is also assumed that, at each time interval of the process, for every non-absorbing state \(a\in S\) there is a distribution function \(F_{a}(t)\) for positive values that characterises the time until the next event. Then, assuming stationarity, i.e., \(\pi_{ab}(t)=P(Y_{t+s}=b\mid Y_{s}=a)\), we can show that:
\[\frac{1-F_{a}(t+s)}{1-F_{a}(s)}=1-F_{a}(t),\;\;\;s,t\geq 0, \tag{2}\]
and a distribution that satisfies this condition (2) is the exponential. Therefore, we may write \(F_{a}(t)=1-\exp(-\theta_{a}t)\) if \(t\geq 0\), and consequently we have:
\[\frac{\partial\pi_{ab}(t)}{\partial t}=-\theta_{a}\pi_{ab}(t)+\theta_{a}\sum_ {c\neq a}\pi_{ac}\pi_{cb}(t),\]
where \(\theta_{ab}=\left.\frac{\partial\pi_{ab}(t)}{\partial t}\right|_{t=0}\) for all \(a,b\in S\). This defines the transition intensities
\[\theta_{ab}:\left\{\begin{array}{ll}-\theta_{a},&\mbox{if }a=b\\ \theta_{a}\pi_{ab}(t),&\mbox{if }a\neq b\end{array}\right..\]
If \(a\) is an absorbing state, then \(\theta_{a}=0\), but the converse is not true. Therefore, in the continuous case, the matrices
\[\mathbf{P}(t)=\left(\begin{array}{cccc}\pi_{11}(t)&\pi_{12}(t)& \ldots&\pi_{1k}(t)\\ \pi_{21}(t)&\pi_{22}(t)&\ldots&\pi_{2k}(t)\\ \vdots&\vdots&\ldots&\vdots\\ \pi_{k1}(t)&\pi_{k2}(t)&\ldots&\pi_{kk}(t)\end{array}\right)\mbox{and }\mbox{ }\mathbf{Q}=\left(\begin{array}{cccc}-\theta_{11}&\theta_{12}& \ldots&\theta_{1k}\\ \theta_{21}&-\theta_{22}&\ldots&\theta_{2k}\\ \vdots&\vdots&\ldots&\vdots\\ \theta_{k1}&\theta_{k2}&\ldots&-\theta_{kk}\end{array}\right)\]
are jointly important in interpreting response category changes and movement time.
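As an illustration of how \(\mathbf{Q}\) and \(\mathbf{P}(t)\) interact, the sketch below builds a toy intensity matrix, obtains \(\mathbf{P}(t)=\exp(t\mathbf{Q})\) (the solution of the Kolmogorov equations for a time-homogeneous process), and simulates one sample path. The intensity values are invented for illustration, and `MatrixExp` comes from the msm package used for the analyses later in the paper.

```r
library(msm)  # provides MatrixExp(); the same package is used in Section 3.2

# A toy intensity matrix Q on S = {1, 2, 3}: off-diagonal entries are transition
# intensities, and each row sums to zero. The values are purely illustrative.
Q <- rbind(c(-0.6,  0.4,  0.2),
           c( 0.3, -0.5,  0.2),
           c( 0.5,  0.5, -1.0))

MatrixExp(Q, t = 5)  # transition probability matrix P(5) = exp(5 Q)

# One sample path on [0, t_max]: sojourn times are exponential with rate
# -Q[a, a], and jumps follow the embedded chain with weights Q[a, b], b != a.
simulate_path <- function(Q, t_max, state = 1) {
  times <- 0; states <- state; t <- 0
  while (TRUE) {
    t <- t + rexp(1, rate = -Q[state, state])
    if (t >= t_max) break
    w <- Q[state, ]; w[state] <- 0
    state <- sample(seq_along(w), 1, prob = w)  # sample() normalises weights
    times <- c(times, t); states <- c(states, state)
  }
  data.frame(time = times, state = states)
}
set.seed(2023)
simulate_path(Q, t_max = 10)
```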
**Definition 3.3**.: Let \(\mathbf{x}_{it}=(x_{it1},\ldots,x_{itp})^{\prime}\) be a vector of covariates associated to a random sample of individuals \((i=1,2,\ldots,N)\) with response variable inherent to the stochastic process \(Y_{t}\in S=\{1,2,\ldots,k\},t\in\tau\). Assuming that the intensities are the same for all \(i\), the _multi-state regression model_ is an extension of the generalised linear model [23]:
\[\theta_{ab}(\cdot)=f\big{[}\theta_{ab}^{(0)}(\cdot);\mathbf{\beta}_{a }^{\top}\mathbf{x}(t)\big{]},\]
where \(\theta_{ab}^{(0)}(\cdot)\) is the baseline intensity, and \(\boldsymbol{\beta}_{a}\) is the vector of parameters for each transition \(a\). However, the multi-state Cox model (proportional hazards), which assumes proportionality of the rates of the different transitions, is the most widely used regression model [14], and is given by
\[\theta_{ab}(\boldsymbol{x})=\theta_{ab}^{(0)}\exp[\boldsymbol{\beta}_{a}^{\top}\boldsymbol{x}(t)], \tag{3}\]
and estimated by maximum likelihood.
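To make the structure of equation (3) concrete, the sketch below scales a baseline intensity matrix by a covariate effect and restores the row-sum-zero constraint. It is a minimal illustration, assuming for simplicity a single scalar coefficient in place of the transition-specific vectors \(\boldsymbol{\beta}_{a}\); the baseline values are invented.

```r
# Cox-type multi-state model: theta_ab(x) = theta0_ab * exp(beta * x) off the
# diagonal; the diagonal is then reset so that each row of Q sums to zero.
# A scalar beta is used here for simplicity, instead of the transition-specific
# coefficient vectors beta_a of equation (3).
cox_intensities <- function(Q0, beta, x) {
  Q <- Q0 * exp(beta * x)
  diag(Q) <- 0
  diag(Q) <- -rowSums(Q)
  Q
}

Q0 <- rbind(c(-0.6, 0.4, 0.2),
            c( 0.3,-0.5, 0.2),
            c( 0.5, 0.5,-1.0))
cox_intensities(Q0, beta = 0.8, x = 1)  # intensities at covariate level x = 1
```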
### Specific Methods
For the analysis of the double transitions over time, we consider two non-independent stages. Also, in both stages, we assume the first-order Markov property (as defined in Section 3.1) with a finite number of jumps in the studied time interval.
The first stage refers to the egg type choice, for which we have the stochastic process indexed by the random variable \(\{Y_{1}(t)\in S_{1},t\in\tau\}\), \(\tau=[0,35)\), \(S_{1}=\{1,2,3,4\}\), where 1: unparasitised eggs; 2: eggs parasitised by _T. podisi_; 3: eggs parasitised by _T. basalis_; and 4: no choice. For the second stage of the process, i.e. behaviour given choice, we include an additional category to consider the different insect behaviours after the choice made in the first stage. We therefore have the conditional stochastic process \(\{[Y_{2}\mid Y_{1}](t)\in S_{2},t\in\tau\},S_{2}=\{1,2,3,4\}\), where the states represent 1: marking; 2: ovipositing; 3: drumming; and 4: returning to set \(S_{1}\), hereafter named "other". The transition scheme is represented in Figure 3, in which we can see that there are no absorbing states.
Assuming the processes are stationary, we may consider the transition intensities matrix
\[\boldsymbol{Q}=\left(\begin{array}{cccc}-(\theta_{1}+\theta_{2}+\theta_{3})&\theta_{1}&\theta_{2}&\theta_{3}\\ \theta_{4}&-(\theta_{4}+\theta_{5}+\theta_{6})&\theta_{5}&\theta_{6}\\ \theta_{7}&\theta_{8}&-(\theta_{7}+\theta_{8}+\theta_{9})&\theta_{9}\\ \theta_{10}&\theta_{11}&\theta_{12}&-(\theta_{10}+\theta_{11}+\theta_{12})\end{array}\right)\]
for both stages, assuming a stochastic double random walk for the parasitoids. Here, the parameters are functionally related to the effects of time and species (treatment factor) through the Markovian Cox-model (equation 3):
\[\theta_{ab}=\theta_{ab}^{(0)}\exp\{\boldsymbol{\beta}_{a}^{\top}\text{species} (t)\},\ \ \ \ \forall\ \ a,b\in S. \tag{4}\]
The null model for both processes considers only the time effect, while the full model incorporates the treatment effect as described by equation 4 above. Furthermore, the parameters are estimated by maximum likelihood via an iterative algorithm, whose initial values \(\theta_{ab}^{(0)}\) are calculated based on the observed transition frequencies and the time spent in each state:
\[\theta_{ab}^{(0)}=\left\{\begin{array}{cl}-\dfrac{n_{a.}}{T_{a.}},&\text{if }a=b\\ \\ \dfrac{n_{ab}}{T_{a.}},&\text{if }a\neq b\end{array}\right.\]
where \(n_{ab}\) are the observed transition frequencies from \(a\) to \(b\), \(n_{a.}=\sum_{b\neq a}n_{ab}\) is the total number of transitions out of state \(a\), and \(T_{a.}\) is the total time spent in state \(a\) (so that each row of the initial matrix sums to zero).
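This initialisation is straightforward to compute from the data; a minimal sketch with hypothetical counts follows (the msm package, used for the analyses below, offers `crudeinits.msm` for the same computation from raw data):

```r
# Crude initial intensities: theta0_ab = n_ab / T_a off the diagonal and
# theta0_aa = -n_a. / T_a on the diagonal, where n_a. = sum over b of n_ab.
crude_Q <- function(n, T_a) {
  Q <- sweep(n, 1, T_a, "/")  # n_ab / T_a, row by row
  diag(Q) <- 0
  diag(Q) <- -rowSums(Q)      # rows of an intensity matrix sum to zero
  Q
}

n   <- rbind(c(0, 5, 2),   # hypothetical transition counts n_ab
             c(4, 0, 3),
             c(1, 6, 0))
T_a <- c(12.5, 8.0, 6.2)   # hypothetical total times spent in each state
crude_Q(n, T_a)
```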
Figure 3: Double transitions scheme: choice in the first stage and behaviour given choice.
To differentiate the two stages, we denote the intensities matrices by \(\mathbf{Q_{Y_{1}}}\) and \(\mathbf{Q_{Y_{2}|Y_{1}}}\), and the transition probability matrices by \(\mathbf{P_{Y_{1}}(t)}\) and \(\mathbf{P_{Y_{2}|Y_{1}}(t)}\). The transition probabilities, and the time spent in each state, are estimated by the invariance principle of the maximum likelihood estimators. Finally, we employ likelihood-ratio (\(\Lambda\)) tests to assess the significance of the treatment effect. All analyses were carried out in R [26] using the packages survival [28] and msm [20].
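A sketch of this fitting workflow with the msm package is given below; the data frame `parasitism` and its columns `state`, `time`, `id` and `species` are hypothetical placeholders for the experimental data of Section 2, in the long format expected by msm.

```r
library(msm)

# All 12 off-diagonal transitions of the 4-state process are allowed.
Q0 <- matrix(1, 4, 4); diag(Q0) <- 0

# Crude initial values computed from the observed data, as described above.
Q_init <- crudeinits.msm(state ~ time, subject = id,
                         data = parasitism, qmatrix = Q0)

# Null model (time effect only) and full model (Markovian Cox model, eq. (4)).
fit0 <- msm(state ~ time, subject = id, data = parasitism,
            qmatrix = Q_init, exacttimes = TRUE)  # transitions observed exactly
fit1 <- msm(state ~ time, subject = id, data = parasitism,
            qmatrix = Q_init, exacttimes = TRUE, covariates = ~ species)

lrtest.msm(fit0, fit1)  # likelihood-ratio test for the treatment effect
pmatrix.msm(fit1, t = 35, covariates = list(species = "T. podisi"))  # P(35)
sojourn.msm(fit1, covariates = list(species = "T. podisi"))  # mean sojourn times
```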
## 4 Results
We begin by showing the contingency tables: treatment versus egg choice (Table 1) and treatment versus behaviour unconditional on egg choice (Table 2). Higher frequencies were observed for choosing unparasitised eggs and, in relation to behaviour, marking had the highest frequencies for both species. According to the \(\chi^{2}\) test, there is an association between treatment (parasitoid species) and egg choice (\(p<0.001\)), and no association between treatments and behaviours (\(p=0.9596\)).
Specifically regarding transitions over time, a total of 641 were observed, with a minimum of 12 and a maximum of 40 transitions (27.7 on average), both for egg choices and for behaviours, without taking the treatment structure into account. Of course, the frequencies observed per treatment and per transition are merely exploratory; whether the effects are significant depends on the transition times and intensities.
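The two \(\chi^{2}\) tests can be reproduced directly from the counts reported in Tables 1 and 2:

```r
# Counts from Table 1 (choices) and Table 2 (behaviours), by species.
choices <- matrix(c(150, 125,
                     97,   9,
                     35, 124,
                     65,  60), ncol = 2, byrow = TRUE,
                  dimnames = list(c("unparasitised", "par. by T. podisi",
                                    "par. by T. basalis", "no choice"),
                                  c("T. basalis", "T. podisi")))
behaviours <- matrix(c(126, 121,
                        64,  56,
                        92,  81,
                        65,  60), ncol = 2, byrow = TRUE,
                     dimnames = list(c("marking", "ovipositing",
                                       "drumming", "other"),
                                     c("T. basalis", "T. podisi")))

chisq.test(choices)     # association between species and egg choice (p < 0.001)
chisq.test(behaviours)  # no evidence of association (p = 0.9596)
```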
### First transition - choice
\begin{table}
\begin{tabular}{l r r} \hline \hline & \multicolumn{2}{c}{Treatments} \\ Choices & _Trissolcus basalis_ & _Telenomus podisi_ \\ \hline unparasitised eggs & 150 & 125 \\ eggs parasitised by _T. podisi_ & 97 & 9 \\ eggs parasitised by _T. basalis_ & 35 & 124 \\ no choice & 65 & 60 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Number of choices observed according to species, obtained in the parasitism data from the experiment developed by [3].

\begin{table}
\begin{tabular}{l r r} \hline \hline & \multicolumn{2}{c}{Treatments} \\ Behaviours & _Trissolcus basalis_ & _Telenomus podisi_ \\ \hline marking & 126 & 121 \\ ovipositing & 64 & 56 \\ drumming & 92 & 81 \\ returning to set \(S_{1}\) & 65 & 60 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Observed number of behaviours, unconditional on egg choice, according to species, obtained in the parasitism data from the experiment developed by [3].

Next, we model egg choice (i.e., the \(Y_{1}\) process) using the Cox model. The treatment effect was significant (\(\Lambda=68.62\); d.f. = 7; \(p<0.001\)). For this first stage, the estimated transition intensity matrices were:
\[\hat{\mathbf{Q}}_{Y_{1}}(T.\;basalis)=\left(\begin{array}{rrrr}-0.339&0.000&0.000&0.3 39\\ 0.000&-0.202&0.001&0.200\\ 0.000&0.000&-0.690&0.690\\ 1.607&0.616&0.650&-2.874\end{array}\right)\]
and
\[\hat{\mathbf{Q}}_{Y_{1}}(T.\;podisi)=\left(\begin{array}{rrrr}-0.182&0.000&0.000&0.1 82\\ 0.000&-8.403&0.003&8.401\\ 0.000&0.000&-0.308&0.308\\ 1.130&1.805&1.720&-4.655\end{array}\right),\]
showing differences between transition rates, especially in the diagonals: the mean sojourn time in state \(a\) is \(-1/\hat{q}_{aa}\), so diagonal entries of larger magnitude imply a shorter time to exit the state. Thus, consistent with the recognition of previously parasitised eggs, the exit rate from the "eggs previously parasitised by _T. podisi_" state is much higher for _T. podisi_ (diagonal entry \(-8.403\)) than for _T. basalis_ (\(-0.202\)).
From the intensity matrices, we may obtain the mean times and respective confidence intervals for the choices per treatment (Figure 4). We observe that _T. podisi_ does not spend time choosing the eggs already parasitised by conspecifics, but this is not the case when eggs have been parasitised by its competitor _T. basalis_. Moreover, we emphasise that null transition intensities do not imply null transition probabilities: a direct jump between two states may not occur, while the corresponding transition may still happen through intermediate states. Using the estimated intensities it is possible to obtain the transition probability matrices (see Figure 5). We see that _T. basalis_ is less selective when ovipositing, with higher transition probabilities to superparasitism and/or multiparasitism behaviours.
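As a numerical check on this reading, the mean sojourn times \(-1/\hat{q}_{aa}\) can be computed from the matrices \(\hat{\boldsymbol{Q}}_{Y_{1}}\) printed above:

```r
# Mean time spent in each choice state: -1/diagonal of the estimated Q matrices.
Qb <- rbind(c(-0.339, 0.000, 0.000, 0.339),
            c( 0.000,-0.202, 0.001, 0.200),
            c( 0.000, 0.000,-0.690, 0.690),
            c( 1.607, 0.616, 0.650,-2.874))
Qp <- rbind(c(-0.182, 0.000, 0.000, 0.182),
            c( 0.000,-8.403, 0.003, 8.401),
            c( 0.000, 0.000,-0.308, 0.308),
            c( 1.130, 1.805, 1.720,-4.655))

m <- rbind("T. basalis" = -1 / diag(Qb), "T. podisi" = -1 / diag(Qp))
colnames(m) <- c("unparasitised", "par. by T. podisi",
                 "par. by T. basalis", "no choice")
round(m, 2)
# T. podisi exits the "par. by T. podisi" state after ~0.12 min on average,
# against ~4.95 min for T. basalis.
```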
### Second transition - Behaviour given choice
Now considering the behaviours given a choice, the treatment effect was also significant for \(Y_{2}\mid Y_{1}:\) unparasitised eggs (\(\Lambda=33.20\); d.f. = 9; \(p<0.001\)), for \(Y_{2}\mid Y_{1}:\) eggs previously parasitised by _T. basalis_ (\(\Lambda=19.33\); d.f. = 9; \(p=0.022\)), and for \(Y_{2}\mid Y_{1}:\) eggs previously parasitised by _T. podisi_ (\(\Lambda=55.53\); d.f. = 8; \(p<0.001\)).
Regarding the estimated values, first, conditionally on the choice of unparasitised eggs, a greater difference was observed in the transition intensities associated with ovipositing (second row of the transition intensity matrices), which indicates that the transition intensity for _T. podisi_ is higher than for _T. basalis_, except when going back to stage 1 (unparasitised eggs):
\[\hat{\mathbf{Q}}_{Y_{2}\mid Y_{1}=1}(T.\;basalis)=\left(\begin{array}{rrrr}-0.96 1&0.107&0.748&0.107\\ 3.320&-3.566&0.123&0.1229\\ 0.000&0.197&-0.299&0.102\\ 0.000&0.000&1.425&-1.425\end{array}\right)\]
and
\[\hat{\mathbf{Q}}_{Y_{2}|Y_{1}=1}(T.\ podisi)=\left(\begin{array}{rrrr}-0.916&0.095&0.790&0.032\\ 0.755&-1.006&0.226&0.0252\\ 0.000&0.195&-0.273&0.077\\ 0.000&0.000&2.119&-2.119\end{array}\right).\]
The estimated transition probability matrices are presented in Figure 6.

Figure 4: Means and confidence intervals of the time spent in each choice, for both treatments (parasitoid species).

Figure 5: Estimated probabilities for parasitoid choices (first stage), considering 1: unparasitised eggs; 2: eggs parasitised by _T. podisi_; 3: eggs parasitised by _T. basalis_; and 4: no choice.

Despite the apparent homogeneity, it is noted that, in general, the probabilities of transitions to ovipositing (state 2) and drumming (state 3) are higher for _T. podisi_. In summary, when choosing unparasitised eggs, _T. podisi_ is faster than _T. basalis_ and presents a higher ovipositing rate.
Now, considering the behaviour conditional on the choice of eggs previously parasitised by _T. podisi_, and analysing first the estimates of the transition intensities, there is a considerable difference in the point estimates when comparing the matrices of the two treatments (species):
\[\boldsymbol{\hat{Q}}_{Y_{2}|Y_{1}=2}(T.\;basalis)=\left(\begin{array}{rrrr}- 1.405&0.000&1.003&0.401\\ 2.745&-3.137&0.392&0.000\\ 0.000&0.076&-0.170&0.095\\ 0.000&0.105&1.368&-1.474\end{array}\right)\]
and
\[\boldsymbol{\hat{Q}}_{Y_{2}|Y_{1}=2}(T.\;podisi)=\left(\begin{array}{rrrr}- 0.695&0.000&0.618&0.0772\\ 0.990&-1.247&0.257&0.000\\ 0.006&0.200&-0.316&0.111\\ 0.000&0.000&1.733&-1.733\end{array}\right).\]
For example, regarding the behaviour of marking eggs (first row of the matrices), _T. podisi_ presented a lower intensity, signalling that these parasitoids move through this state quickly, which indicates that they accepted the eggs for successful oviposition. The estimated transition probability matrices are presented in Figure 7.
Additionally, evaluating the estimates of the transition probabilities (Figure 7), _T. podisi_ shows a lower probability of oviposition than _T. basalis_ (note the zero probabilities of transition from the other states and from drumming to ovipositing), possibly due to the recognition of previous conspecific parasitism; on the other hand, it shows higher probabilities of transition to drumming, which is an intermediate behaviour for state changes.
Figure 6: Estimated transition probabilities conditional on choosing unparasitised eggs for parasitoid behaviours 1: marking, 2: ovipositing, 3: drumming, and 4: others.
Considering the behaviours given a choice of eggs previously parasitised by _T. basalis_, the transition intensities are similar for marking and ovipositing, but not for drumming and returning to stage 1 (others):
\[\hat{\boldsymbol{Q}}_{Y_{2}|Y_{1}=3}(T.\;basalis)=\left(\begin{array}{rrrr}-0.659&0.231&0.428&0.000\\ 0.991&-1.635&0.446&0.198\\ 0.000&0.370&-0.521&0.151\\ 0.000&0.000&1.180&-1.180\end{array}\right)\]
and
\[\hat{\boldsymbol{Q}}_{Y_{2}|Y_{1}=3}(T.\;podisi)=\left(\begin{array}{rrrr}-0. 659&0.231&0.428&0.000\\ 0.991&-1.635&0.446&0.198\\ 0.000&0.000&-0.0803&0.0803\\ 0.000&0.000&2.571&-2.571\end{array}\right).\]
The drumming rates are higher for _T. basalis_, while _T. podisi_ presented lower intensity rates associated with 'other' behaviours. Stochastically, the transition probabilities for these states also showed significant differences, as illustrated in Figure 8.
Finally, the mean times and respective 95% confidence intervals associated with each behaviour, given the choice of egg type, are presented for each species in Figure 9. These reiterate the rapid action associated with the marking and ovipositing behaviours of _T. podisi_ when compared to _T. basalis_ for eggs previously parasitised by conspecifics.
Figure 7: Estimated transition probabilities conditional on choosing eggs previously parasitised by _T. podisi_ for parasitoid behaviours 1: marking, 2: ovipositing, 3: drumming, and 4: others.
## 5 Discussion
The ability to recognise physical or chemical marks left on the host after oviposition is considered a natural tendency of parasitoid species to avoid superparasitism/multiparasitism, which can force the parasitoid offspring into lethal competition [18].

Figure 8: Estimated transition probabilities conditional on choosing eggs previously parasitised by _T. basalis_ for parasitoid behaviours 1: marking, 2: ovipositing, 3: drumming, and 4: others.

Figure 9: Means and 95% confidence intervals for the time spent in each behaviour given choice, related to transition intensities, for the treatments (species) _T. basalis_ and _T. podisi_.

In this study, both _T. basalis_ and _T. podisi_ females displayed a comparable sequence of host handling behaviours: drumming, oviposition, and host marking. A major difference, however, was observed in how long the oviposition behaviour was displayed on previously parasitised eggs. Females of _T. basalis_ spend more time ovipositing in eggs previously parasitised by _T. podisi_, exhibiting a higher tendency to multiparasitism. _Trissolcus basalis_ females also oviposit in host eggs previously parasitised by conspecifics, leading to superparasitism and increasing the chances of competition between siblings, since typically only one egg can successfully develop into adulthood.
On the other hand, _T. podisi_ females could avoid the eggs previously parasitised by conspecifics and, additionally, could find unparasitised eggs and oviposit on them faster than _T. basalis_. Because both _T. podisi_ and _T. basalis_ were exposed to the same parasitoid and host densities in our experiment, we can assume that _T. basalis_ has a stronger natural tendency to self-superparasitise and multiparasitise than _T. podisi_. In terms of host foraging, _T. podisi_ exhibited higher search efficiency, showing a high capability to find healthy hosts and avoid super- and multiparasitism.
The occurrence of host marking is a reliable indicator of successful oviposition in scelionids [31]. Considering the marking behaviour, our results showed that _T. basalis_ females spend more time ovipositing when groups of previously parasitised host eggs are available. In agreement with a previous study [4], our results indicate that _T. basalis_ can reduce the host population. However, under conditions of very high intra- and interspecific competition, the reduction of the host population cannot result in an increase of the parasitoid populations in subsequent generations.
## 6 Conclusions
Biological pest control is a sustainable practice that benefits food production and health. Despite all the biological and environmental appeal of these studies, there is also a need for adequate statistical methodologies to confirm the scientific hypotheses. Interactions between species, as well as changes in behaviour over time, require specific methods of analysis to estimate the biological control efficiency of a species, and models for categorical longitudinal data are very useful in this context.
In this work, we presented the problem of the soybean pest _Euschistus heros_ and two potential agents for its natural control in the field. As a statistical contribution, we developed an extension of multi-state models to compare two parasitoid species by evaluating their behaviours over time. These models allow one to describe not only behavioural actions but also the intensity with which they occur. In this context, the method validated the experimental assumption that the species _T. podisi_ avoids intraspecific competition by being more efficient in recognising and avoiding previous conspecific parasitism. In the applied sense, the results can contribute to improving parasitoid release strategies in the field and to optimising mass-rearing production. Moreover, the proposed statistical method can also be of use to researchers studying insect behaviour. Although the method proved effective in this work, future studies should consider sub-intervals of time, allowing different transition rates, since these may not be homogeneous over time.
## Acknowledgements
This work had financial support from the Brazilian foundation "Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior" (CAPES), process number \(88887.716582/2022-00\). This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6. The authors are grateful to John Hinde for valuable suggestions that helped improve the manuscript.
|
2309.11906 | On the kernel of $\mathrm{SO}(3)$-Witten-Reshetikhin-Turaev quantum
representations | In this paper, we study the kernels of the
$\mathrm{SO}(3)$-Witten-Reshetikhin-Turaev quantum representations $\rho_p$ of
mapping class groups of closed orientable surfaces $\Sigma_g$ of genus $g.$ We
investigate the question whether the kernel of $\rho_p$ for $p$ prime is
exactly the subgroup generated by $p$-th powers of Dehn twists. We show that if
$g\geq 3$ and $p\geq 5$ then $\mathrm{Ker} \, \rho_p$ is contained in the
subgroup generated by $p$-th powers of Dehn twists and separating twists, and
if $g\geq 6$ and $p$ is a large enough prime then $\mathrm{Ker} \, \rho_p$ is
contained in the subgroup generated by the commutator subgroup of the Johnson
subgroup and by $p$-th powers of Dehn twists. | Renaud Detcherry, Ramanujan Santharoubane | 2023-09-21T09:16:47Z | http://arxiv.org/abs/2309.11906v2 | # On the kernel of \(\operatorname{SO}(3)\)-Witten-Reshetikhin-Turaev quantum representations
###### Abstract.
In this paper, we study the kernels of the \(\operatorname{SO}(3)\)-Witten-Reshetikhin-Turaev quantum representations \(\rho_{p}\) of mapping class groups of closed orientable surfaces \(\Sigma_{g}\) of genus \(g.\) We investigate the question whether the kernel of \(\rho_{p}\) for \(p\) prime is exactly the subgroup generated by \(p\)-th powers of Dehn twists. We show that if \(g\geq 3\) and \(p\geq 5\) then \(\operatorname{Ker}\rho_{p}\) is contained in the subgroup generated by \(p\)-th powers of Dehn twists and separating twists, and if \(g\geq 6\) and \(p\) is a large enough prime then \(\operatorname{Ker}\rho_{p}\) is contained in the subgroup generated by the commutator subgroup of the Johnson subgroup and by \(p\)-th powers of Dehn twists.
## 1. Introduction
For \(\Sigma\) a compact connected oriented surface, let \(\operatorname{Mod}(\Sigma)\) be its mapping class group. Among the many constructions of finite dimensional linear representations of \(\operatorname{Mod}(\Sigma)\), the Witten-Reshetikhin-Turaev quantum representations, introduced by Witten [20] and rigorously defined by Reshetikhin and Turaev [19], have many striking properties. The Witten-Reshetikhin-Turaev theory associates a family of representations to any compact Lie group; however, in this article we will focus on the so-called \(\operatorname{SO}(3)\)-WRT representations. For any odd integer \(p\geq 3\), let \(K_{p}\) be the cyclotomic field \(\mathbb{Q}[e^{2i\pi/p}]\). For any compact connected oriented surface \(\Sigma\), the \(\operatorname{SO}(3)\)-quantum representation is a projective representation
\[\rho_{p}:\operatorname{Mod}(\Sigma)\longrightarrow\operatorname{PGL}_{d}(K_{p }),\]
If \(\Sigma\) has boundary, the representation depends on the specification of an integer between \(0\) and \(p-2\) on each boundary component; this data is referred to as boundary colors. The integer \(d\) is given by an explicit formula, the celebrated Verlinde formula. As the coefficients of the representation lie in a number field, the representation can be modified using different embeddings of \(K_{p}\) in \(\mathbb{C}\); this is equivalent to saying that the representation depends on a choice of a primitive \(p\)-th root of unity \(\zeta_{p}\). Moreover, for an appropriate choice of \(\zeta_{p}\in\mathbb{C}^{*}\), these are unitary representations.
The \(\mathrm{SO}(3)\)-WRT quantum representations give examples of finite dimensional (projective) unitary representations of \(\mathrm{Mod}(\Sigma)\) with infinite image [12][13], a phenomenon not observed for any other known representations of mapping class groups of surfaces of genus at least \(2.\) The \(\mathrm{SO}(3)\)-quantum representations seem to capture a lot of mysterious and deep information about the mapping class groups. First, they are asymptotically faithful by [12] or [1], which can be used to recover that the mapping class groups of surfaces are residually finite. Moreover, the \(\mathrm{SO}(3)\)-quantum representations have been used to show that every finite group is involved in the mapping class group \(\mathrm{Mod}(\Sigma_{g})\) of any closed surface of genus \(g\geq 2\) in [14], and to construct finite covers of surfaces whose homology is not spanned by lifts of simple closed curves [15].
A lot remains unknown about the kernels and images of the representations \(\rho_{p}.\) While they are asymptotically faithful, at fixed \(p,\) the representation \(\rho_{p}\) is never faithful. Indeed if \(t_{\alpha}\) is the Dehn twist along any simple closed curve \(\alpha\) on \(\Sigma,\) then \(t_{\alpha}^{p}\in\mathrm{Ker}\,\rho_{p}.\) However, at the time of this writing, the only known kernel elements are products of \(p\)-th powers of Dehn twists, when the surface has genus \(g\geq 3.\) Moreover, recent work of Deroin and Marche [11] shows that the subgroup of \(\mathrm{Mod}(\Sigma)\) generated by \(p\)-th powers of Dehn twists has finite index in the kernel of \(\rho_{p},\) when \(p=5\) and \(\Sigma\) is the surface \(\Sigma_{g,n}\) of genus \(g\) with \(n\) boundary components, with \((g,n)\in\{(0,4),(0,5),(1,2),(1,3),(2,1)\}.\) This raises the following question:
**Question 1.1**.: _Let \(p\geq 5\) be an odd prime, let \(\Sigma\) be a closed compact connected orientable surface of genus \(g\geq 3,\) let \(\rho_{p}\) be the \(\mathrm{SO}(3)\)-WRT quantum representation of \(\mathrm{Mod}(\Sigma).\) Is it true that \(\mathrm{Ker}\,\rho_{p}=T_{p},\) the subgroup generated by \(p\)-th powers of Dehn twists?_
We note that the restriction to \(p\) prime is motivated by the fact that \(\rho_{p}\) is irreducible when \(p\) is prime [10], while for \(p\) composite it is not necessarily irreducible [11].
Our approach to studying Question 1.1 is to use the \(h\)-adic expansion of quantum representations, introduced by Gilmer and Masbaum [1] and studied in [1][1] and [15]. First, we note that by a theorem of Gilmer and Masbaum, for \(p\) prime the representations \(\rho_{p}\) are (up to conjugation) valued in \(\mathrm{PGL}_{d}(\mathbb{Z}[\zeta_{p}]).\) Then the representation can be reduced modulo a power of \(h,\) where \(h=1-\zeta_{p}\) is an irreducible element in the cyclotomic ring \(\mathbb{Z}[\zeta_{p}]\) satisfying \(\mathbb{Z}[\zeta_{p}]/(h)\simeq\mathbb{F}_{p}.\)
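To spell out this last identification: reduction modulo \(h\) sends \(\zeta_{p}\) to \(1\), and since \(\mathbb{Z}[\zeta_{p}]\simeq\mathbb{Z}[x]/(\Phi_{p}(x))\) with \(\Phi_{p}(x)=1+x+\dots+x^{p-1},\) we get
\[\mathbb{Z}[\zeta_{p}]/(h)\simeq\mathbb{Z}[x]/(\Phi_{p}(x),1-x)\simeq\mathbb{Z}/\Phi_{p}(1)\mathbb{Z}=\mathbb{Z}/p\mathbb{Z}\simeq\mathbb{F}_{p}.\]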
It turns out that the representations \(\rho_{p,k}=\rho_{p}\) mod \(h^{k}\) have some nice compatibility with the Johnson filtration of the mapping class group. For \(k\geq 1,\) let \(J_{k}(\Sigma)\) be the \(k\)-th Johnson subgroup of \(\mathrm{Mod}(\Sigma),\) with the convention that \(J_{1}(\Sigma)\) is the Torelli
subgroup of \(\operatorname{Mod}(\Sigma)\), and \(J_{2}(\Sigma)\) is the subgroup generated by Dehn twists along separating curves (here \(g\geq 3\)). It is proved in [1] that \(\operatorname{Ker}\rho_{p,1}\) contains \(J_{2}(\Sigma)\).
Taking advantage of this result and the structure of the abelianization of the Torelli subgroup \(J_{1}(\Sigma)\) computed by Johnson [14], we can completely describe \(\operatorname{Ker}\rho_{p,1}\).
**Theorem 1.2**.: _Let \(p\geq 5\) be a prime, \(\Sigma\) a closed surface of genus \(g\geq 3,\) and \(\rho_{p}\) be the \(\operatorname{SO}(3)\)-WRT quantum representation at level \(p.\) Then_
\[\operatorname{Ker}(\rho_{p,1})=[J_{1}(\Sigma),J_{1}(\Sigma)]T_{p},\]
_where \(J_{1}(\Sigma)\) is the Torelli subgroup of \(\operatorname{Mod}(\Sigma)\) and \(T_{p}\) is the subgroup generated by \(p\)-th powers of Dehn twists. In particular the kernel of \(\rho_{p}\) is contained in \([J_{1}(\Sigma),J_{1}(\Sigma)]T_{p}.\)_
We can obtain a similar result for quantum representations of surface groups. Let \(\Sigma\) be a surface of genus at least two with one boundary component and let \(\hat{\Sigma}\) be the surface obtained by gluing a disc on the boundary component of \(\Sigma\). We can look at the boundary pushing subgroup of \(\operatorname{Mod}(\Sigma)\). This group is the kernel of the map \(\operatorname{Mod}(\Sigma)\to\operatorname{Mod}(\hat{\Sigma})\) and is naturally isomorphic to a central extension of the fundamental group of \(\hat{\Sigma}\). Following [13], the restriction of \(\rho_{p}\) to the boundary pushing subgroup of \(\operatorname{Mod}(\Sigma)\) gives a projective representation:
\[\rho_{p}^{s}:\pi_{1}(\hat{\Sigma})\longrightarrow\operatorname{PGL}_{d}( \mathbb{Z}[\zeta_{p}])\]
**Theorem 1.3**.: _Let \(\Sigma\) be a surface of genus at least two with one boundary component and \(p\geq 5\) be prime. Suppose that the boundary of \(\Sigma\) is colored by \(2\); then the kernel of \(\rho_{p,1}^{s}:\pi_{1}(\hat{\Sigma})\to\operatorname{PGL}_{d}(\mathbb{F}_{p})\) is_
\[\operatorname{Ker}(\pi_{1}(\hat{\Sigma})\to H^{1}(\hat{\Sigma},\mathbb{F}_{p}))\]
_which is the kernel of the mod \(p\) abelianization of \(\pi_{1}(\hat{\Sigma})\)._
Our last result concerns the kernel of \(\rho_{p,2}\). In [1], a decomposition of the representations \(\rho_{p,1}\) as the sum of two irreducible representations of \(\operatorname{Mod}(\Sigma)\) over \(\mathbb{F}_{p}\) is introduced (which is called the _odd-even decomposition_ by Gilmer and Masbaum). Using this decomposition and the structure of the abelianization of the Johnson subgroup \(J_{2}(\Sigma)\) for surfaces \(\Sigma\) of genus \(g\geq 6,\) we prove:
**Theorem 1.4**.: _Let \(\Sigma\) a closed surface of genus \(g\geq 6.\) Then for all large enough primes \(p\) we have:_
\[\operatorname{Ker}(\rho_{p,2})=[J_{2}(\Sigma),J_{2}(\Sigma)]T_{p},\]
_where \(J_{2}(\Sigma)\) is the Johnson subgroup of \(\operatorname{Mod}(\Sigma)\) and \(T_{p}\) is the subgroup generated by \(p\)-th powers of Dehn twists. In particular, \(\operatorname{Ker}(\rho_{p})\subset[J_{2}(\Sigma),J_{2}(\Sigma)]T_{p}.\)_
As another ingredient in the proofs of Theorems 1.2 and 1.4, we prove some irreducibility results for some modular representations of \(\operatorname{Sp}_{2g}(\mathbb{F}_{p}),\) in order to identify the images of the representations \(\rho_{p,1}|_{J_{1}(\Sigma)}\) and \(\rho_{p,2}|_{J_{2}(\Sigma)}\) with the mod \(p\) abelianizations of \(J_{1}(\Sigma)\) and \(J_{2}(\Sigma)\) respectively.
**Acknowledgements:** Over the course of this work, the first author was partially supported by the projects AlMaRe (ANR-19-CE40-0001-01) and by the project "CLICQ" of the Region Bourgogne Franche Comte. The authors thank Gwenael Massuyeau for helpful conversations.
## 2. Preliminaries
### Basic properties of \(\operatorname{SO}(3)\)-WRT representations
In this section, we will sketch a construction of the representations \(\rho_{p},\) and state some of their properties. Let \(p\) be an odd integer \(\geq 3,\) let \(M\) be a closed compact oriented \(3\)-manifold, and let \(L\) be a framed link in \(M.\) In [1], a topological invariant \(Z_{p}(M,L)\in\mathbb{Q}[\zeta_{p}]\) of the pair \((M,L)\) is defined. Roughly speaking, \(Z_{p}(M,L)\) is the evaluation at a primitive \(2p\)-th root of unity \(\zeta_{p}\) of a suitable linear combination of the colored Jones polynomials of the link \(L_{0}\cup L,\) where \(L_{0}\) is a surgery presentation of \(M.\) Here suitable means that the invariant thus defined is independent of the choice of surgery presentation \(L_{0}\) for \(M;\) this is achieved by coloring the surgery presentation \(L_{0}\) by the so-called _Kirby color_. We refer to [1] for all details on this construction; we will use only some of the properties of this invariant \(Z_{p}(M,L).\) First we note that as a consequence of the construction, the invariant \(Z_{p}(M,L)\) satisfies the Kauffman relations in terms of the link \(L.\) Moreover, by a theorem of Masbaum and Roberts [14], when \(p\) is prime then \(Z_{p}(M,L)\in\mathbb{Z}[\zeta_{p}].\)
Let now \(\Sigma\) be a closed compact oriented surface. For \(M\) and \(M^{\prime}\) two \(3\)-manifolds with a fixed identification of their boundaries \(\partial M,\partial M^{\prime}\) with \(\Sigma,\) we write \(\langle M,M^{\prime}\rangle=Z_{p}(M\underset{\Sigma}{\cup}\overline{M^{\prime}}).\) We can do the same construction for manifolds containing framed links \((M,L),(M^{\prime},L^{\prime}).\) Moreover, we can extend \(\langle,\rangle\) by bilinearity to get a sesquilinear form \(\langle,\rangle\) on the \(\mathbb{Q}[\zeta_{p}]\)-vector space \(\mathcal{V}_{p}(\Sigma)\) formally spanned by pairs \((M,L)\) of a \(3\)-manifold \(M\) with a fixed isomorphism \(\partial M=\Sigma\) and a framed link \(L\) contained in \(M.\)
As examples of elements in \(Z_{p}(\Sigma),\) consider \(H\) a handlebody with boundary \(\Sigma,\) and such that \(H\) is itself a tubular neighborhood of a banded trivalent graph \(\Gamma.\) A \(p\)-admissible coloring of the trivalent graph associates a non-negative even integer to each edge, with the additional conditions that colorings near a vertex satisfy the triangle inequalities \(c_{1}\leq c_{2}+c_{3}\) and also \(c_{1}+c_{2}+c_{3}\leq 2p-4.\) Given a \(p\)-admissible coloring \(c\) of the trivalent graph \(\Gamma,\) we get an element of \(Z_{p}(\Sigma)\) as \((H,\Gamma(c)),\) where \(\Gamma(c)\) is the
linear combination of links obtained by cabling each edge \(e\) of \(\Gamma\) by the Jones-Wenzl idempotent \(f_{c_{e}}.\)
**Theorem 2.1**.: _[_BHMV92_]__For any odd integer \(p\geq 3,\) The vector space_
\[Z_{p}(\Sigma)=\mathcal{V}_{p}(\Sigma)/\operatorname{Ker}\langle,\rangle\]
_is a \(\mathbb{Q}[\zeta_{p}]\)-vector space of finite dimension \(d(g,p),\) on which \(\operatorname{Mod}(\Sigma)\) has a natural action by change of identification \(\partial M=\Sigma.\)_
_Moreover, for any handlebody \(H\) with \(\partial H=\Sigma,\) and any pants decomposition of \(\Sigma\) by curves which bound disks in \(H,\) colored trivalent graphs in \(H\) with \(p\)-admissible colors form a basis of \(Z_{p}(\Sigma).\)_
An explicit formula for \(\dim(Z_{p}(\Sigma))\) may be found in [BHMV92]. Now, if we suppose that \(p\) is prime, we can instead define \(\mathcal{V}_{p}(\Sigma)\) to be the free \(\mathbb{Z}[\zeta_{p}]\)-module spanned by pairs \((M,L)\) where \(M\) is a \(3\)-manifold with a fixed identification \(\partial M=\Sigma,\) and we get that \(\mathcal{S}_{p}(\Sigma)=\mathcal{V}_{p}(\Sigma)/\operatorname{Ker}\langle,\rangle\) is a free \(\mathbb{Z}[\zeta_{p}]\)-module and a lattice in \(Z_{p}(\Sigma);\) moreover \(\operatorname{Mod}(\Sigma)\) still admits a natural action on it. By abuse of notation, we will use \(Z_{p}(\Sigma)\) to refer to either the \(\mathbb{Q}[\zeta_{p}]\)-vector space or the \(\mathbb{Z}[\zeta_{p}]\)-module we have just defined, depending on context.
The previous bases of \(Z_{p}(\Sigma)\) as a \(\mathbb{Q}[\zeta_{p}]\)-vector space do not provide bases of \(Z_{p}(\Sigma)\) as a \(\mathbb{Z}[\zeta_{p}]\)-module. In [GM07], Gilmer and Masbaum provide integral bases for \(Z_{p}(\Sigma).\) Unlike the previous bases, they can only be associated to some special pants decompositions of \(\Sigma,\) the so-called _lollipop tree_ decompositions. We will say that a pants decomposition of \(\Sigma\) is a lollipop tree decomposition if it contains \(g\) non separating curves and \(2g-3\) separating curves.
Given a lollipop tree decomposition of \(\Sigma,\) we associate elements \(v(a,b)\in Z_{p}(\Sigma)\) to colorings of the edges of the trivalent graph \(\Gamma\) associated to the decomposition as follows. We take \(H\) to be a handlebody with \(\partial H=\Sigma,\) and such that the curve of the pants decomposition bound disks in \(H.\) We let the internal edges of \(\Gamma\) be colored by \((2a_{i})_{1\leq i\leq 2g-3},\) and we assume that \(2a_{1},\ldots,2a_{g}\) are the colors of the edges that are adjacent to one-edge loops. The loop edges are colored by \(a_{i}+b_{i}.\) We assume again that if \(c_{1},c_{2},c_{3}\) are the colors around a trivalent vertex, then we have the \(p\)-admissibility conditions: \(c_{1}\leq c_{2}+c_{3},\) and \(c_{1}+c_{2}+c_{3}\) is even and \(\leq 2p-4.\) Moreover, we ask for the colors \(a_{i}+b_{i}\) to be at most \((p-3)/2,\) which Gilmer and Masbaum call a _small_ coloring. We construct an element \(\Gamma(a,b)\in Z_{p}(\Sigma)\) by embedding the trivalent graph \(\Gamma\) in \(H,\) replacing the internal edges with \(2a_{i}\) parallel copies with the Jones-Wenzl idempotent \(f_{2a_{i}}\) inserted. However, for the loop edge colored by \(a_{i}+b_{i},\) we first cable the loop edge by the Jones-Wenzl idempotent \(f_{a_{i}},\) then we add a copy
of the loop colored by \((\frac{2+z}{h})^{b_{i}},\) where \(h=1-\zeta_{p}.\) (Here we use the convention that a framed link colored by \(z^{n}\) stands for \(n\) parallel copies of the link, and we can make sense of framed links colored by a polynomial in \(z\) by extending linearly). Finally, we define \(v(a,b)=h^{-\lfloor\frac{1}{2}(a_{1}+\ldots+a_{g})\rfloor}\Gamma(a,b).\)
**Theorem 2.2**.: _[_6_]_ _Let \(\Sigma\) be a closed orientable surface and fix a lollipop tree decomposition of \(\Sigma.\) Then the vectors \(v(a,b)\) where \((a,b)\) runs over all small \(p\)-admissible colorings, form a \(\mathbb{Z}[\zeta_{p}]\)-basis of \(Z_{p}(\Sigma)\) as a \(\mathbb{Z}[\zeta_{p}]\)-module._
In [6], Gilmer and Masbaum introduce a decomposition of \(Z_{p}(\Sigma)\) as a sum of two \(\mathbb{Z}[\zeta_{p}]\)-submodules. Let \(v(a_{i},b_{i})\) be a lollipop tree basis of \(Z_{p}(\Sigma),\) and assume (up to reindexing) that \(2a_{1},\ldots 2a_{g}\) are the colors of the edges that are adjacent to a one-edge loop. They define \(Z_{p}^{odd}(\Sigma)\) (resp. \(Z_{p}^{ev}(\Sigma)\)) to be the \(\mathbb{Z}[\zeta_{p}]\)-submodule spanned by the basis vectors \(v(a_{i},b_{i})\) such that \(a_{1}+\ldots+a_{g}\) is odd (resp. even).
While this decomposition of \(Z_{p}(\Sigma)\) depends on the choice of a lollipop tree basis of \(Z_{p}(\Sigma),\) Gilmer and Masbaum show that the associated decomposition of the \(\mathbb{Z}[\zeta_{p}]/(h)\simeq\mathbb{F}_{p}\)-module
\[F_{p}(\Sigma):=Z_{p}(\Sigma)\underset{\mathbb{Z}[\zeta_{p}]}{\otimes}\mathbb{ Z}[\zeta_{p}]/(h)\]
does not depend on such a choice.
**Theorem 2.3**.: _[6] Let \(\Sigma\) be a closed compact oriented surface and \(p\geq 5\) be a prime._
_Then we have_
\[\mathbb{Z}[\zeta_{p}][\rho_{p}(\mathrm{Mod}(\Sigma))]=\begin{pmatrix}\mathrm{ End}(Z_{p}^{odd}(\Sigma))&\mathrm{End}(Z_{p}^{ev}(\Sigma),Z_{p}^{odd}(\Sigma)) \\ h\mathrm{End}(Z_{p}^{odd}(\Sigma),Z_{p}^{ev}(\Sigma))&\mathrm{End}(Z_{p}^{ev}( \Sigma))\end{pmatrix}\]
_where \(h=1-\zeta_{p}.\)_
The above theorem implies that the images of the Torelli and Johnson subgroups under \(\rho_{p}\) have the following structure:
**Corollary 2.4**.: _(i) If \(f\in J_{1}(\Sigma),\) then_
\[\rho_{p}(f)=\begin{pmatrix}id_{Z_{p}^{odd}(\Sigma)}&A\\ 0&id_{Z_{p}^{ev}(\Sigma)}\end{pmatrix}\ (\mathrm{mod}\ h)\]
_for some \(A\in\mathrm{End}(Z_{p}^{ev}(\Sigma),Z_{p}^{odd}(\Sigma)).\)_
_(ii) If \(f\in J_{2}(\Sigma),\) then_
\[\rho_{p}(f)=id_{Z_{p}(\Sigma)}+h\begin{pmatrix}A_{1}&A_{2}\\ 0&A_{3}\end{pmatrix}\ (\mathrm{mod}\ h^{2})\]
_for some \(A_{1}\in\operatorname{End}(Z^{odd}_{p}(\Sigma))\), \(A_{2}\in\operatorname{End}(Z^{ev}_{p}(\Sigma),Z^{odd}_{p}(\Sigma))\) and \(A_{3}\in\operatorname{End}(Z^{ev}_{p}(\Sigma))\)._
Proof.: The first part of the corollary is proved in [1, Lemma A1]. For the second part, notice that it is trivially true for \(f=t_{\alpha}\), a Dehn twist along a separating curve of the lollipop tree pants decomposition. Note that there is a Dehn twist of each possible genus among those. Conjugating with an element of \(\operatorname{Mod}(\Sigma)\) and by Theorem 2.3, it is then also true for any separating Dehn twist, and therefore for any \(f\in J_{2}(\Sigma)\) as \(J_{2}(\Sigma)\) is generated by separating twists.
### Computations of \(\rho_{p}\) on bounding pairs and separating twists
The goal of this subsection is to prove that certain bounding pairs act non-trivially via \(\rho_{p,1}\). These results are technical (based on skein-theoretic arguments) but are crucial for the proofs of Theorems 1.2 and 1.3.
Let \(p\geq 5\) be a prime and \(A\) be a primitive \(2p\)-th root of unity. Cutting the surface into smaller pieces is a usual trick for computations in the setting of quantum representations.
Let \(B\) be a \(3\)-ball, let \(\mathcal{P}\) be a set of four banded points on \(\partial B\), where two of them are colored by \(1\) and two are colored by \(2\). Let \(\gamma\subset\partial B\) be the following simple closed curve:
Let \(V\) be the skein module over \(\mathbb{Q}[A]\) of \(B\) relative to \((\partial B,\mathcal{P})\). This \(\mathbb{Q}[A]\) vector space is two dimensional with basis
Recall that for \(k\) a positive integer, \([k]=\dfrac{A^{2k}-A^{-2k}}{A^{2}-A^{-2}}\).
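For the reductions modulo \(h=1-\zeta_{p}\) appearing below, note that since \(A^{2}=\zeta_{p}\equiv 1\pmod{h},\) quantum integers reduce to ordinary integers:
\[ [k]=A^{-2(k-1)}\left(1+A^{4}+\dots+A^{4(k-1)}\right)\equiv k\pmod{h}.\]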
**Lemma 2.5**.: _The action on \(V\) of the Dehn twist along \(\gamma\) has the following matrix in the basis \((e_{1},e_{2})\) :_
\[\begin{pmatrix}T_{1,1}&T_{1,2}\\ T_{2,1}&T_{2,2}\end{pmatrix}=\begin{pmatrix}-\dfrac{A^{3}+A^{19}+A^{11}}{[3]}&- \dfrac{A^{9}(A^{4}+A^{-4})(A^{2}-A^{-2})}{A^{2}+A^{-2}}\\ A^{9}(A^{4}-A^{-4})&-\dfrac{A^{7}+A^{-1}+A^{15}}{[3]}\end{pmatrix}\]
Proof.: This is obtained by straightforward skein theoretical computations.
**Lemma 2.6**.: _Let \(\Sigma\) be a closed surface of genus \(g\geq 3,\) then the image of a bounding pair of genus \(1\) by \(\rho_{p,1}\) is not trivial._
Proof.: We note that since all bounding pairs of genus \(1\) are conjugate, it suffices to prove the proposition for one particular bounding pair. Let \(\Gamma\) be the following graph:
Here \(\Sigma\) is the boundary of a regular neighborhood of the graph, which is a genus \(g\) handlebody and the curves \(c_{1},c_{2}\) are on \(\Sigma\). We want to compute the action of \(t_{c_{1}}t_{c_{2}}^{-1},\) which is a bounding pair of genus \(1.\) We define the following two vectors associated to an admissible coloring of \(\Gamma:\)
Here the three dots mean that the remaining colors are \(0\). The triplet
\[(h^{-1}u,h^{-1}v,h^{-2}(2+z_{2})u)\]
is part of the integral basis. Here \(z_{2}\) is the following curve
Let \(W\) be the \(\mathbb{Z}[\zeta_{p}]\)-module generated by \(\{h^{-1}u,h^{-1}v,h^{-2}(2+z_{2})u\}\). It is easy to check that \(\rho_{p}(t_{c_{1}})\) and \(\rho_{p}(t_{c_{2}}^{-1})\) stabilize \(W\). On the basis \((h^{-1}u,h^{-1}v,h^{-2}(2+z_{2})u)\) of \(W\), we compute
\[\rho_{p}(t_{c_{1}})_{|_{W}}=\begin{pmatrix}A^{8}&-2T_{1,2}&2h^{-1}(A^{8}-T_{1, 1})\\ 0&T_{2,2}&h^{-1}T_{2,1}\\ 0&hT_{1,2}&T_{1,1}\end{pmatrix}\,,\,\rho_{p}(t_{c_{2}})_{|_{W}}=\begin{pmatrix} 1&0&2h^{-1}(1+A^{3})\\ 0&-A^{3}&0\\ 0&0&-A^{3}\end{pmatrix}\]
Recalling that \(A^{2}=\zeta_{p}\) and \(h=1-\zeta_{p}\), we have that
\[\rho_{p}(t_{c_{1}}t_{c_{2}}^{-1})_{|_{W}}\equiv\begin{pmatrix}1&0&0\\ 0&1&-4\\ 0&0&1\end{pmatrix}\quad(\text{mod }h),\]
which is not a scalar matrix, so \(\rho_{p,1}(t_{c_{1}}t_{c_{2}}^{-1})\) is non-trivial.
Recall that if \(\Sigma\) is a surface of genus at least two with one boundary component, we denote by \(\hat{\Sigma}\) the surface obtained by gluing a disk on the boundary of \(\Sigma\). We also denote by \(\rho_{p}^{s}\) the representation of \(\pi_{1}(\hat{\Sigma})\) obtained by restriction of \(\rho_{p}\), where the boundary color is \(2\).
**Lemma 2.7**.: _If \(\gamma\in\pi_{1}(\hat{\Sigma})\) is freely homotopic to a non separating simple closed curve then \(\rho_{p,1}^{s}(\gamma)\) is not trivial._
Proof.: If \(\gamma\) is a loop in \(\pi_{1}(\hat{\Sigma})\) and \(\varphi\in\text{Mod}(\Sigma)\) then
\[\rho_{p}(\varphi)\rho_{p}(\gamma)\rho_{p}(\varphi)^{-1}=\rho_{p}(\varphi( \gamma))\]
Moreover the action of \(\text{Mod}(\Sigma)\) is transitive on the set of non separating simple loops in \(\pi_{1}(\hat{\Sigma})\). Therefore it is enough to find a simple loop \(\gamma\in\pi_{1}(\hat{\Sigma})\) such that \(\rho_{p,1}^{s}(\gamma)\neq 1\).
Let \(\Gamma\) be the following graph
As before \(\hat{\Sigma}\) is the boundary of a regular neighborhood of the graph, the univalent vertex (marked by a dot) is attached to the banded point colored by \(2\) on \(\hat{\Sigma}\). Also the loop \(\gamma\) lies on \(\hat{\Sigma}\). Let \(c_{1}\) and \(c_{2}\) the two following curves on \(\Sigma\)
It is known that the loop \(\gamma\) is \(t_{c_{1}}t_{c_{2}}^{-1}\) when viewed as an element of \(\operatorname{Mod}(\Sigma)\). We will omit the computations, as they are almost identical to the ones done for the proof of Lemma 2.6, but one can show that \(\rho_{p,1}(t_{c_{1}}t_{c_{2}}^{-1})\) is not trivial.
**Lemma 2.8**.: _Let \(\Sigma\) be a surface of genus \(g\geq 3,\) and let \(\alpha\) be a separating curve on \(\Sigma\) of genus \(1.\) Then there exists \(f\in J_{1}(\Sigma)\) such that \(\rho_{p,2}(ft_{\alpha}f^{-1})\neq\rho_{p,2}(t_{\alpha}),\) where \(t_{\alpha}\) is the Dehn twist along \(\alpha.\)_
Proof.: We will use the same notation as in the proof of Lemma 2.6. Let \(\alpha\) be the meridian curve of the base edge of the second lollipop of the graph \(\Gamma.\) Let also \(f=t_{c_{1}}t_{c_{2}}^{-1},\) the bounding pair considered in the proof of Lemma 2.6. We have that the \(\mathbb{Z}[\zeta_{p}]\)-submodule \(W\) of \(Z_{p}(\Sigma)\) spanned by \(h^{-1}u,h^{-1}v,h^{-2}(2+z_{2})u\) is invariant not only by \(f=t_{c_{1}}t_{c_{2}}^{-1}\) but also by \(t_{\alpha}.\) Indeed, in the basis \(\{h^{-1}u,h^{-1}v,h^{-2}(2+z_{2})u\},\) we have
\[\rho_{p}(t_{\alpha})|_{W}=\begin{pmatrix}1&0&0\\ 0&A^{8}&0\\ 0&0&1\end{pmatrix},\text{ thus }\rho_{p,2}(t_{\alpha})|_{W}=\begin{pmatrix}1&0&0 \\ 0&1-4h&0\\ 0&0&1\end{pmatrix}\text{ }(\text{mod }h^{2}).\]
Note that since \(\rho_{p}(t_{\alpha})=Id\mod\ h,\) it suffices to know \(\rho_{p}(f)\) mod \(h\) to compute \(\rho_{p}(ft_{\alpha}f^{-1})\) mod \(h^{2}.\) We get
\[\rho_{p,2}(ft_{\alpha}f^{-1})=\begin{pmatrix}1&0&0\\ 0&1&-4\\ 0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&1-4h&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&1&4\\ 0&0&1\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&1-4h&-16h\\ 0&0&1\end{pmatrix}\neq\rho_{p,2}(t_{\alpha}).\]
For \(f\in\operatorname{Mod}(\Sigma)\) and \(S\) a subsurface of \(\Sigma,\) we say that the _support_ of \(f\) is included in \(S\) if there is a representative of \(f\) which is the identity on \(\Sigma\setminus S.\) The mapping class \(f\) can then be seen as a mapping class of the surface \(S^{\prime}\) obtained from \(S\) by filling boundary components with disks. The surface \(S^{\prime}\) may have smaller genus than \(\Sigma.\) We will write \(\rho_{p}\) for the \(\operatorname{SO}(3)\)-WRT quantum representation of \(\operatorname{Mod}(\Sigma)\) or of \(\operatorname{Mod}(S^{\prime}),\) indifferently.
**Lemma 2.9**.: _Let \(\Sigma\) be a closed compact oriented surface and let \(f\in\operatorname{Mod}(\Sigma),\) such that the support of \(f\) is contained in a essential subsurface \(S\subset\Sigma.\) Let \(S^{\prime}\) be the closed surface obtained from \(S\) by filling each boundary component with a disk, and let \(f^{\prime}\in\operatorname{Mod}(S^{\prime})\) be the mapping class induced by \(f.\) Then, for any odd \(p\geq 5,\) we have_
1. \(\rho_{p}(f^{\prime})\notin\operatorname{Ker}\rho_{p}\Longrightarrow\rho_{p}(f) \notin\operatorname{Ker}\rho_{p}.\)__
2. _Let_ \(J\) _be an ideal of_ \(\mathbb{Z}[\zeta_{p}].\) _If furthermore_ \(\partial S\) _consists only of separating curves in_ \(\Sigma,\) _then_ \[\rho_{p}(f^{\prime})\neq Id\ \text{mod}\ J\Longrightarrow\rho_{p}(f)\neq Id\ \text{mod}\ J.\]
Proof.: The lemma is a direct consequence of the description of the basis of \(Z_{p}(\Sigma)\) as a \(\mathbb{Q}[\zeta_{p}]\)-vector space in Theorem 2.1 and of the integral basis of \(Z_{p}(\Sigma)\) as a \(\mathbb{Z}[\zeta_{p}]\)-module in Theorem 2.2. In both cases, we get that \(\rho_{p}(f)\) has a block conjugate to \(\rho_{p}(f^{\prime}),\) and thus is non-trivial. Indeed, let us choose a pants decomposition of \(\Sigma\) (or lollipop tree pants decomposition of \(\Sigma\)) containing the boundary components of \(S\) as pants decomposition curves. Let \(\{\Gamma(a)\}\) or \(\{v(a,b)\}\) be the associated \(\mathbb{Q}[\zeta_{p}]\)- or \(\mathbb{Z}[\zeta_{p}]\)-basis of \(Z_{p}(\Sigma).\) The subspace of \(Z_{p}(\Sigma)\) spanned by vectors \(\Gamma(a)\) or \(v(a,b)\) such that the colors of edges not belonging to \(S\) are identically zero is isomorphic to \(Z_{p}(S^{\prime}),\) stable by \(\rho_{p}(f),\) and the corresponding block is conjugate to \(\rho_{p}(f^{\prime}).\)
### Proof of Theorem 1.3
In this short subsection we give a proof of Theorem 1.3. Let \(\Sigma\) be a surface of genus at least two with one boundary and \(p\geq 5\) be prime. Suppose that the boundary of \(\Sigma\) is colored by \(2,\) we want to understand the kernel of \(\rho_{p,1}^{s}:\pi_{1}(\hat{\Sigma})\to\operatorname{PGL}_{d}(\mathbb{F}_{p}).\)
Recall that any Dehn twist along a separating curve acts trivially via \(\rho_{p,1}\). Now any separating simple loop \(\delta\in\pi_{1}(\hat{\Sigma})\), when viewed in \(\operatorname{Mod}(\Sigma)\), can be written as \(t_{c_{1}}t_{c_{2}}^{-1}\) where \(c_{1},c_{2}\) are separating curves. Therefore the image by \(\rho_{p,1}^{s}\) of any separating simple loop is trivial.
By [10, Lemma A.1], the group generated by separating simple loops on \(\hat{\Sigma}\) is \([\pi_{1}(\hat{\Sigma}),\pi_{1}(\hat{\Sigma})]\), so \(\rho_{p,1}^{s}\) factors through \(\pi_{1}(\hat{\Sigma})/[\pi_{1}(\hat{\Sigma}),\pi_{1}(\hat{\Sigma})]\). Moreover \(\rho_{p,1}^{s}\) kills the \(p\)-th powers of each generator of \(\pi_{1}(\hat{\Sigma})\), so \(\rho_{p,1}^{s}\) induces a map :
\[\bar{\rho}_{p,1}^{s}:H^{1}(\hat{\Sigma},\mathbb{F}_{p})\to\operatorname{PGL}_{ d}(\mathbb{F}_{p})\]
Let \(I_{p}\) be the kernel of \(\bar{\rho}_{p,1}^{s}\). To conclude, we need to prove that \(I_{p}=0\). The map \(\bar{\rho}_{p,1}^{s}\) is equivariant with respect to the mapping class group in the sense that if \([\gamma]\) is the cohomology class of the loop \(\gamma\) then
\[\bar{\rho}_{p,1}^{s}(\varphi_{*}[\gamma])=\rho_{p,1}^{s}(\varphi)\bar{\rho}_{ p,1}^{s}([\gamma])\rho_{p,1}^{s}(\varphi)^{-1}\]
for any \(\varphi\in\operatorname{Mod}(\Sigma)\). This property implies that \(I_{p}\) is a \(\mathbb{F}_{p}\)-subspace of \(H^{1}(\hat{\Sigma},\mathbb{F}_{p})\) invariant under \(\operatorname{Mod}(\Sigma)\). The action of \(\operatorname{Mod}(\Sigma)\) on \(H^{1}(\hat{\Sigma},\mathbb{F}_{p})\) is irreducible and by Lemma 2.7 the map \(\rho_{p,1}^{s}\) is not trivial, therefore \(I_{p}=0\).
### Structure of the abelianization of the Torelli and Johnson subgroups
For \(G\) a group, let \(\operatorname{Ab}(G)=G/[G,G]\) be its abelianization, which we can consider as a \(\mathbb{Z}\)-module, and let \(\operatorname{Ab}_{\mathbb{Q}}(G)=\operatorname{Ab}(G)\underset{\mathbb{Z}}{ \otimes}\mathbb{Q}\).
We will first describe the abelianization of the Torelli group \(J_{1}(\Sigma)\) of a closed surface of genus \(g\geq 3\), as described in the following theorem of Johnson [11]. Note that \(\operatorname{Mod}(\Sigma)\) acts on \(J_{1}(\Sigma)\) by conjugation. This action induces a \(\operatorname{Mod}(\Sigma)\) action on \(\operatorname{Ab}(J_{1}(\Sigma))\), which factors as a \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-action.
**Theorem 2.10**.: _[_11_]_ _For any closed surface of genus \(g\geq 3,\) we have an isomorphism of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-modules:_
\[\operatorname{Ab}(J_{1}(\Sigma))\simeq\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})/ \left(\omega\wedge H_{1}(\Sigma,\mathbb{Z})\right)\bigoplus T,\]
_where \(T\) is a \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-module which as an abelian group is a finite rank \(2\)-torsion group._
The result of Johnson actually explicitly describes the \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-module structure of the \(2\)-torsion group, but we will not need it here; we will only use that it is a \(2\)-torsion group.
Next we want to describe the abelianization of the Johnson subgroup \(J_{2}(\Sigma).\) As previously, the action of \(\operatorname{Mod}(\Sigma)\) by conjugation on \(J_{2}(\Sigma)\) induces an \(\mathcal{M}=\operatorname{Mod}(\Sigma)/J_{2}(\Sigma)\)-module structure on \(\operatorname{Ab}(J_{2}(\Sigma)).\) Since \(\mathcal{M}\simeq\operatorname{Im}\tau_{1}\rtimes\operatorname{Sp}_{2g}(\mathbb{Z}),\) we also get a \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-module structure on \(\operatorname{Ab}(J_{2}(\Sigma)),\) albeit a non-canonical one.
At the time of this writing, only the rational abelianization of \(J_{2}(\Sigma)\) is known. It was first computed by Dimca, Hain and Papadima [1] after Dimca and Papadima showed that it was of finite rank [1]. The description that we will use here comes from the work of Morita, Sakasai and Suzuki [20].
**Theorem 2.11**.: _[_1_]__[_13_]_ _For any closed surface of genus \(g\geq 6,\) we have an isomorphism of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-modules:_
\[\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma))\simeq\mathbb{Q}\oplus[2^{2}] \oplus[31^{2}],\]
_where \([2^{2}]\) and \([31^{2}]\) stand for the \(\operatorname{Sp}_{2g}(\mathbb{Q})\) representations associated to the Young diagrams \([2^{2}]\) and \([31^{2}].\) Moreover, those representations are absolutely irreducible representations of \(\operatorname{Sp}_{2g}(\mathbb{Z}).\)_
We note that, just as with the \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-module structure, this splitting is not canonical. We will also be interested in the \(\mathcal{M}\)-module structure of \(\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma)).\) We first remark that the Torelli group \(J_{1}(\Sigma)\) acts trivially on the factor \(\mathbb{Q}\oplus[2^{2}]\) of the above decomposition and non-trivially on the factor \([31^{2}].\) Indeed, [20, Theorem 1.4] actually shows that the isomorphism between \(\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma))\) and \(\mathbb{Q}\oplus[2^{2}]\oplus[31^{2}]\) comes from the map \((d,\overline{\tau}_{2}),\) where \(d:J_{2}(\Sigma)\longrightarrow\mathbb{Z}\) is the core of the Casson invariant, and \(\overline{\tau}_{2}\) is the refined second Johnson homomorphism. It follows from the description of this map in Section 7 of [20] that the projection on the first two factors vanishes on \([J_{1}(\Sigma),J_{2}(\Sigma)],\) but that the projection on the last factor does not.
**Lemma 2.12**.: _The factor \([31^{2}]\) of the decomposition has no \(J_{1}(\Sigma)\)-invariant vector._
Proof.: Indeed, we claim that for any \(\mathcal{M}\)-representation, the space of \(J_{1}(\Sigma)\)-invariant vectors is an \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-subrepresentation (here we have fixed a splitting \(\operatorname{Sp}_{2g}(\mathbb{Z})\to\mathcal{M}\)). This is a direct consequence of the fact that the image of \(J_{1}(\Sigma)\) in \(\mathcal{M}\) is fixed under conjugation by \(\operatorname{Sp}_{2g}(\mathbb{Z}).\)
Since \([31^{2}]\) is an irreducible \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-subrepresentation, its intersection with the subspace of \(J_{1}(\Sigma)\)-invariant vectors is either trivial or all of \([31^{2}];\) hence it is trivial, since \(J_{1}(\Sigma)\) acts non-trivially on \([31^{2}].\)
We recall the following fact from [20, Remark 7.4].
**Lemma 2.13**.: _The image of \(J_{4}(\Sigma)\) under the rational abelianization of \(J_{2}(\Sigma)\) is equal to the trivial \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-subrepresentation of \(\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma))\)._
### Irreducibility of some modular representations of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)
In this section, we gather some results about representations of \(\operatorname{Sp}_{2g}(\mathbb{Z})\) that are necessary for the proofs of our main theorems. We recall that the group \(\operatorname{Sp}_{2g}(\mathbb{Z})\) can be viewed as the image of the mapping class group of a genus \(g\) surface under the homology representation. To stay coherent with the other sections of the paper, we will write \(H_{1}(\Sigma_{g},\mathbb{Z})\) for the fundamental representation of \(\operatorname{Sp}_{2g}(\mathbb{Z}).\) We will write \(\omega\in\Lambda^{2}H_{1}(\Sigma,\mathbb{Z})\) for the intersection form on \(\Sigma,\) which is an \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-invariant vector in \(\Lambda^{2}H_{1}(\Sigma,\mathbb{Z}).\) An explicit formula for \(\omega\) is \(\omega=a_{1}\wedge b_{1}+\ldots+a_{g}\wedge b_{g},\) where \(a_{1},b_{1},\ldots,a_{g},b_{g}\) is any symplectic basis of \(H_{1}(\Sigma,\mathbb{Z}).\) Similarly, \(\omega\wedge H_{1}(\Sigma,\mathbb{Z})\) is a subrepresentation of \(\Lambda^{3}H_{1}(\Sigma,\mathbb{Z}).\) The _contraction map_ \(\kappa\) on \(\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})\) is also a surjective morphism of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-representations:
\[\begin{array}{rcl}\kappa:&\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})&\longrightarrow &H_{1}(\Sigma,\mathbb{Z})\\ &a\wedge b\wedge c&\longmapsto&\omega(a,b)c+\omega(b,c)a+\omega(c,a)b\end{array}.\]
Let also \(\overline{\kappa}\) be the reduction mod \(p\) of \(\kappa,\) and let \(K=\operatorname{Ker}\overline{\kappa}/(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{ p})).\)
**Proposition 2.14**.: _For any \(g\geq 3\) and for any odd prime \(p\geq 5,\) the representation \(V=\Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p})/\left(\omega\wedge H_{1}(\Sigma, \mathbb{F}_{p})\right)\) of \(\operatorname{Sp}_{2g}(\mathbb{Z})\) is irreducible if and only if \(p\) does not divide \(g-1.\)_
_Moreover, if \(p\) divides \(g-1,\) then the only subrepresentations of \(V\) are \(\{0\},\)\(V\) and \(K.\)_
We note that the above proposition is part of the results of [20] on the composition factors of Weyl modules for \(\operatorname{Sp}_{2g}(\mathbb{Z})\) (see also [19, Theorem 1.1]). However, for the convenience of the reader, we include an elementary proof.
Proof.: A direct computation shows that \(\kappa(\omega\wedge h)=(g-1)h\) for any \(h\in H_{1}(\Sigma,\mathbb{Z}).\) Hence if \(p\) divides \(g-1\) then \(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p})\subset\operatorname{Ker}\overline{\kappa},\) and \(K=\operatorname{Ker}\overline{\kappa}/(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p}))\) is a non-trivial subrepresentation of \(V.\)
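For concreteness, this computation can be carried out on the basis vector \(h=a_{1}\) (the general case follows by linearity and \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-equivariance): the \(i=1\) term of \(\omega\wedge a_{1}\) vanishes since \(a_{1}\wedge b_{1}\wedge a_{1}=0,\) and for the remaining terms
\[\kappa(\omega\wedge a_{1})=\sum_{i=2}^{g}\big{(}\omega(a_{i},b_{i})a_{1}+\omega(b_{i},a_{1})a_{i}+\omega(a_{1},a_{i})b_{i}\big{)}=\sum_{i=2}^{g}a_{1}=(g-1)a_{1}.\]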
For the other direction, let \(W\) be a subrepresentation of \(V.\) Let \(c_{1},\ldots,c_{2g}=a_{1},b_{1},\ldots,a_{g},b_{g}\) be a symplectic basis of \(H_{1}(\Sigma,\mathbb{F}_{p}).\) We also write \(c_{i}^{\prime}=c_{i-1}\) if \(i\) is even, and \(c_{i}^{\prime}=c_{i+1}\) if \(i\) is odd, so that \(\{c_{i},c_{i}^{\prime}\}\) is always a pair \(\{a_{k},b_{k}\}.\) We note that the vectors \(c_{i}\wedge c_{j}\wedge c_{k}\) for \(i<j<k\) and \((i,j,k)\neq(i,g-1,g)\) or \((g-3,g-2,k)\) form a basis of \(V.\)
Claim 1: If \(W\) contains a vector \(c_{i}\wedge c_{j}\wedge c_{k}\) where \(\omega(c_{i},c_{j})=\omega(c_{j},c_{k})=\omega(c_{k},c_{i})=0,\) then \(W\) contains \(K.\)
Indeed, since \(\operatorname{Sp}_{2g}(\mathbb{Z})\) acts transitively on basis vectors \(c_{i}\wedge c_{j}\wedge c_{k}\) with \(c_{i},c_{j},c_{k}\) generating an isotropic subspace of \(H_{1}(\Sigma,\mathbb{F}_{p}),\) the subspace \(W\) would contain all such vectors. Since a generating set of \(K\) consists of those vectors and the vectors \((a_{i}\wedge b_{i})\wedge c_{l}-(a_{j}\wedge b_{j})\wedge c_{l}\) where \(c_{l}\notin\{a_{i},b_{i},a_{j},b_{j}\},\) it suffices to show that the latter vectors are also in \(W.\) However,
\[t_{a_{i}+a_{j}}(b_{i}\wedge b_{j}\wedge c_{l})=(b_{i}+a_{i}+a_{j})\wedge(b_{j} +a_{i}+a_{j})\wedge c_{l}\\ =b_{i}\wedge b_{j}\wedge c_{l}+b_{i}\wedge a_{j}\wedge c_{l}+a_{i }\wedge b_{j}\wedge c_{l}+(a_{j}\wedge b_{j}\wedge c_{l}-a_{i}\wedge b_{i} \wedge c_{l})\,.\]
Hence \(W\) should also contain the vectors \(a_{j}\wedge b_{j}\wedge c_{l}-a_{i}\wedge b_{i}\wedge c_{l}\), and thus all of \(K.\)
Claim 2: Assume now that \(W\) contains a vector \(w\) which has a non-zero coefficient along a basis vector \(c_{i}\wedge c_{j}\wedge c_{k},\) where \(\omega(c_{i},c_{j})=\omega(c_{j},c_{k})=\omega(c_{k},c_{i})=0.\) Then \(W\) contains \(K.\)
Indeed, Claim 2 will follow from Claim 1 if we show that we can realize the projection \(\pi\) on \(\operatorname{Span}(c_{i}\wedge c_{j}\wedge c_{k})\) as an element of \(\mathbb{F}_{p}[\operatorname{Sp}_{2g}(\mathbb{F}_{p})].\) For \(\lambda\in\mathbb{F}_{p}^{*},\) let \(\phi_{\lambda}^{l}\in\operatorname{Sp}_{2g}(\mathbb{F}_{p})\) be such that \(\phi_{\lambda}^{l}(c_{l})=\lambda c_{l},\) \(\phi_{\lambda}^{l}(c_{l}^{\prime})=\lambda^{-1}c_{l}^{\prime},\) and \(\phi_{\lambda}^{l}(c_{m})=c_{m}\) for the other basis vectors of \(H_{1}(\Sigma,\mathbb{F}_{p}).\) Then
\[\pi^{l}=-\sum\limits_{\lambda\in\mathbb{F}_{p}^{*}}\lambda^{-1}\phi_{\lambda}^ {l}\]
is the projection on the subspace spanned by basis vectors with one component equal to \(c_{l}\) and no \(c_{l}^{\prime}\) component. The above follows from the classical identity that \(\sum\limits_{\lambda\in\mathbb{F}_{p}^{*}}\lambda^{k}=0\) if \(p-1\) does not divide \(k,\) and \(=-1\) otherwise.
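To spell this out, if \(w\) is a basis vector on which \(\phi_{\lambda}^{l}\) acts by \(\lambda^{m}\) (here \(m\in\{-1,0,1\}\) is the number of \(c_{l}\)-components of \(w\) minus its number of \(c_{l}^{\prime}\)-components), then
\[\pi^{l}(w)=-\Big{(}\sum_{\lambda\in\mathbb{F}_{p}^{*}}\lambda^{m-1}\Big{)}w=\begin{cases}w&\text{if }m=1,\\ 0&\text{otherwise,}\end{cases}\]
since for \(p\geq 5\) and \(|m|\leq 1,\) the exponent \(m-1\) is divisible by \(p-1\) only when \(m=1.\)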
Moreover, \(\pi=\pi^{i}\pi^{j}\pi^{k}\) is the projection on \(\operatorname{Span}(c_{i}\wedge c_{j}\wedge c_{k}).\)
Claim 3: \(K\) is an irreducible representation.
Let \(0\neq v\in K,\) and let us assume that \(v\) has no non-zero coefficient along any \(c_{i}\wedge c_{j}\wedge c_{k}\) with \(\omega(c_{i},c_{j})=\omega(c_{j},c_{k})=\omega(c_{k},c_{i})=0.\) Up to applying a projection \(\pi^{i}\) as in the proof of Claim 2, and an element of \(\operatorname{Sp}_{2g}(\mathbb{Z})\) that permutes the basis \(c_{i}\) of \(H_{1}(\Sigma,\mathbb{F}_{p}),\) we may assume that
\[v=\lambda_{1}a_{1}\wedge b_{1}\wedge a_{g}+\lambda_{2}a_{2}\wedge b_{2}\wedge a_{g}+\ldots+\lambda_{g-1}a_{g-1}\wedge b_{g-1}\wedge a_{g},\]
for some coefficients \(\lambda_{i}\in\mathbb{F}_{p}.\) Since \(v\) is not in \(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p}),\) we can also assume that the \(\lambda_{i}\) are not all equal. Without loss of generality, we assume that \(\lambda_{1}\neq\lambda_{2}.\)
Then we have
\[t_{a_{1}+a_{2}}(v)-v=(\lambda_{1}-\lambda_{2})a_{1}\wedge a_{2}\wedge a_{g},\]
and therefore we can conclude by Claim 2.
Claim 4: Assume that \(p\) divides \(g-1.\) Let \(W\) be a subrepresentation of \(V\) not included in \(\operatorname{Ker}\overline{\kappa}/(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p})).\) Then \(W=V.\)
Since \(H_{1}(\Sigma,\mathbb{F}_{p})\) is an irreducible representation, without loss of generality, we can assume that \(W\) contains a vector \(v\) that maps to \(a_{g}.\) We claim that we can assume that \(v\) is of the form
\[v=\lambda_{1}a_{1}\wedge b_{1}\wedge a_{g}+\ldots+\lambda_{g-1}a_{g-1}\wedge b_{g -1}\wedge a_{g}.\]
Let us introduce maps
\[\tilde{\pi}^{l}=-\sum_{\lambda\in\mathbb{F}_{p}^{*}}\phi_{\lambda}^{l}\]
where the maps \(\phi_{\lambda}^{l}\) were introduced in the proof of Claim 2. Then \(\tilde{\pi}^{l}\) is the projection on the subspace spanned by basis vectors \(c_{i}\wedge c_{j}\wedge c_{k}\) where either none of \(c_{i},c_{j},c_{k}\) is \(a_{l}\) or \(b_{l},\) or one is \(a_{l}\) and another one is \(b_{l}.\) Therefore \(\tilde{\pi}^{1}\ldots\tilde{\pi}^{g-1}\pi^{g}(v)\) is of the required form and lies in \(W.\) We conclude similarly to the proof of Claim 3: since \(\overline{\kappa}(v)=a_{g}\) we have \(\lambda_{1}+\ldots+\lambda_{g-1}=1,\) so the \(\lambda_{i}\) are not all equal (otherwise \(1=(g-1)\lambda_{1}=0\) in \(\mathbb{F}_{p},\) a contradiction, since \(p\) divides \(g-1\)). Without loss of generality assume that \(\lambda_{1}\neq\lambda_{2};\) then \(t_{a_{1}+a_{2}}(v)-v=(\lambda_{1}-\lambda_{2})a_{1}\wedge a_{2}\wedge a_{g},\) so \(W\) contains \(K\) by Claim 1. Moreover the image of \(W\) in \(V/K\simeq\operatorname{Im}\overline{\kappa}\subset H_{1}(\Sigma,\mathbb{F}_{p})\) is non-zero and \(H_{1}(\Sigma,\mathbb{F}_{p})\) is irreducible, so \(W\) surjects onto \(V/K\) and thus contains all of \(V.\)
**Lemma 2.15**.: _For any \(g\geq 3\) and for any odd prime \(p,\) the representation_
\[\operatorname{Sp}_{2g}(\mathbb{F}_{p})\longrightarrow\operatorname{GL}\left( \Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p})/\left(\omega\wedge H_{1}(\Sigma, \mathbb{F}_{p})\right)\right)\]
_is faithful._
Proof.: Note that for \(\lambda\in\mathbb{F}_{p}^{*},\) the map \(\lambda\cdot\operatorname{id}\) acts as multiplication by \(\lambda^{2}\) on \(\omega,\) hence is in \(\operatorname{Sp}_{2g}(\mathbb{F}_{p})\) if and only if \(\lambda=\pm 1.\) Moreover, \(-\)id acts by multiplication by \(-1\) on \(\Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p}),\) and therefore \(-\)id is not in the kernel of this representation.
Now, it is well known that \(\operatorname{PSp}_{2g}(\mathbb{F}_{p})\) is a simple group for any \(g\geq 1\) and any prime \(p\geq 5,\) and that the only normal subgroups of \(\operatorname{Sp}_{2g}(\mathbb{F}_{p})\) are the trivial subgroup, \(\operatorname{Sp}_{2g}(\mathbb{F}_{p})\) and \(Z(\operatorname{Sp}_{2g}(\mathbb{F}_{p}))=\{\pm\mathrm{id}\},\) see for example [1]. Since the kernel does not contain \(-\)id, it must be trivial.
**Lemma 2.16**.: _Let \(G\) be a group and \(V\) a free \(\mathbb{Z}\)-module of finite rank, and let \(\rho:G\longrightarrow\operatorname{Aut}(V)\) be a representation. Assume that the induced representation \(\overline{\rho}:G\longrightarrow\operatorname{Aut}(V\underset{\mathbb{Z}}{ \otimes}\overline{\mathbb{Q}})\) is irreducible. Then for all large enough \(p,\) the representation \(\rho_{p}:G\longrightarrow\operatorname{Aut}(V\underset{\mathbb{Z}}{\otimes} \mathbb{F}_{p})\) is irreducible._
Proof.: Since \(\overline{\mathbb{Q}}\) is algebraically closed, \(\overline{\rho}\) is absolutely irreducible if and only if \(\overline{\mathbb{Q}}[\rho(G)]=\operatorname{End}_{\overline{\mathbb{Q}}}(V\underset{\mathbb{Z}}{\otimes}\overline{\mathbb{Q}}).\) However, since \(\rho(G)\subset\operatorname{End}_{\mathbb{Q}}(V\underset{\mathbb{Z}}{\otimes}\mathbb{Q}),\) this is equivalent to \(\mathbb{Q}[\rho(G)]=\operatorname{End}_{\mathbb{Q}}(V\underset{\mathbb{Z}}{\otimes}\mathbb{Q}).\) Since \(V\) has finite rank, we can conclude that there is an integer \(D\) such that \(D\cdot\operatorname{End}(V)\subset\mathbb{Z}[\rho(G)].\) Now take \(p\) to be any prime number not dividing \(D;\) reducing mod \(p\) we get that \(\operatorname{End}(V\underset{\mathbb{Z}}{\otimes}\mathbb{F}_{p})\subset\mathbb{F}_{p}[\rho_{p}(G)],\) which implies that \(\rho_{p}\) is irreducible.
### Simply intersecting pairs and the contraction map
Proposition 2.14 shows the importance of understanding the kernel of the contraction map
\[\kappa:\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})/(\omega\wedge H_{1}(\Sigma,\mathbb{Z}))\to H_{1}(\Sigma,\mathbb{Z}/(g-1)\mathbb{Z}).\]
In this section, we will introduce some elements of the Torelli subgroup \(J_{1}(\Sigma)\) whose images are in this kernel.
**Definition 2.17**.: Let \(\alpha\) and \(\beta\) be two non-separating curves on \(\Sigma\) such that \(\alpha\) and \(\beta\) intersect geometrically twice and have algebraic intersection number zero. Then we call the pair of curves \((\alpha,\beta)\) a simply intersecting pair (or SIP) and the element \([t_{\alpha},t_{\beta}]\) a SIP-map.
The work of Childers [10] shows that SIP-maps are in the kernel of the contraction map. More precisely, if \([t_{\alpha},t_{\beta}]\) is a SIP-map then a regular neighborhood of \(\alpha\cup\beta\) is a four-holed sphere with some curves \(x,y,z,w\) as boundary components. Then [10, Main Result 2] states that
\[\tau_{1}([t_{\alpha},t_{\beta}])=[x]\wedge[y]\wedge[z],\]
where \(\tau_{1}:J_{1}(\Sigma)\rightarrow\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})/(\omega \wedge H_{1}(\Sigma,\mathbb{Z}))\) is the first Johnson homomorphism.
From this we get the following:
**Lemma 2.18**.: _There exists a SIP-map \([t_{\alpha},t_{\beta}]\) in a one-holed genus \(3\) surface such that \(\tau_{1}([t_{\alpha},t_{\beta}])\) is a primitive element of \(\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})/\left(\omega\wedge H_{1}(\Sigma,\mathbb{Z} )\right).\)_
Proof.: We will construct a simply intersecting pair \((\alpha,\beta)\) in \(\Sigma\) such that \(x,y,z,w\) are the boundary components of \(N(\alpha\cup\beta).\) Thanks to Childers' formula, we see that the conclusion of the lemma will hold if the curves \(x,y,z\) are non-separating in \(\Sigma\) and if their homology classes can be completed into a basis of \(H_{1}(\Sigma,\mathbb{Z}).\) The latter part is true if the union \(x\cup y\cup z\) is also non-separating. Start with a \(4\)-holed sphere with boundary components \(x,y,z,w.\) Gluing tubes or pants to connect the curves \(x,y,z,w,\) it is easy to see that one can construct a one-holed genus \(3\) surface where this holds.
### Casson invariant and quantum invariant
For \(M\) an integral homology \(3\)-sphere, let \(\lambda(M)\in\mathbb{Z}\) be its Casson invariant. An important theorem by Murakami relates quantum invariants to Casson invariants. Recall that for \(p\geq 5\) a prime, \(\zeta_{p}\) denotes a \(p\)-th primitive root of unity and \(h=1-\zeta_{p}.\)
**Theorem 2.19** ([10]).: _If \(M\) is an integral homology \(3\)-sphere, then_
\[Z_{p}(M)=Z_{p}(S^{3})\big{(}1+6h\lambda(M)\big{)}\mod h^{2}\]
Now let \(\Sigma\) be a surface of genus at least \(6\) and choose two handlebodies \(H_{1}\) and \(H_{2}\) with boundary \(\Sigma\) such that \(H_{1}\underset{\mathrm{Id}}{\cup}\bar{H_{2}}=S^{3}\). If \(\varphi\in J_{1}(\Sigma)\), the \(3\)-manifold \(H_{1}\underset{\varphi}{\cup}\bar{H_{2}}\) is an integral homology \(3\)-sphere. In [10], Morita showed that the map
\[\lambda_{H_{1},H_{2}}:\varphi\in J_{2}(\Sigma)\mapsto\lambda\big{(}H_{1} \underset{\varphi}{\cup}\bar{H_{2}}\big{)}\in\mathbb{Z}\]
is a group homomorphism. Moreover he proved that when restricted to \(J_{3}(\Sigma)\), the map \(\lambda_{H_{1},H_{2}}\) is independent of the handlebodies \(H_{1}\) and \(H_{2}\). For \(\varphi\in J_{3}(\Sigma)\), we will simply write \(\lambda(\varphi)\) for \(\lambda_{H_{1},H_{2}}(\varphi)\). The map \(\lambda:\varphi\in J_{3}(\Sigma)\mapsto\lambda(\varphi)\in\mathbb{Z}\) is invariant under conjugation by \(\mathrm{Mod}(\Sigma)\) and is proportional to the so-called core of the Casson invariant. We will also need the following theorem by Hain.
**Theorem 2.20** ([1]).: _For \(k\geq 3\), \(\lambda(J_{k})\neq\{0\}\)._
Although we will not need it here, we note that for the case \(k=4\), Faes in [11] showed that \(\lambda(J_{4})=\mathbb{Z}\).
## 3. \(h\)-adic expansion of quantum representations and Johnson filtration
### \(h\)-adic expansion of quantum representations
In all of this section, we fix a prime \(p\geq 5.\) As in the previous section, let \(h=1-\zeta_{p}\in\mathbb{Z}[\zeta_{p}].\) We recall that \(h\) is an irreducible element of \(\mathbb{Z}[\zeta_{p}],\) and that \(p\) is equal to \(xh^{p-1}\) for some unit \(x\in\mathbb{Z}[\zeta_{p}].\) We denote by \(\rho_{p,k}\) the representation \(\rho_{p}\) reduced modulo \(h^{k},\) whose coefficients then belong to \(\mathbb{Z}[\zeta_{p}]/(h^{k}).\)
**Lemma 3.1**.: _Let \(N_{k}\triangleleft\mathrm{Mod}(\Sigma)\) be the kernel of \(\rho_{p,k}.\) Then:_
* \(\forall k,l\geq 1,\) _one has_ \([N_{k},N_{l}]\subset N_{k+l}.\)__
* \(\forall k\geq 1,\) _one has_ \(N_{k}^{p}\subset N_{k+p-1},\) _where_ \(N_{k}^{p}\) _is the subgroup of_ \(\mathrm{Mod}(\Sigma)\) _generated by_ \(p\)_-th powers of elements of_ \(N_{k}.\)__
Proof.: An element \(f\in\mathrm{Mod}(\Sigma)\) is in \(N_{k}\) if and only if \(\rho_{p}(f)=\mathrm{id}_{Z_{p}(\Sigma)}+h^{k}u,\) where \(u\in\mathrm{End}_{\mathbb{Z}[\zeta_{p}]}(Z_{p}(\Sigma)).\) Let \(f\in N_{k}\) and \(g\in N_{l}\) and write \(\rho_{p}(f)=\mathrm{id}_{Z_{p}(\Sigma)}+h^{k}u\) and \(\rho_{p}(g)=\mathrm{id}+h^{l}v.\) Then
\[\rho_{p}(f)^{-1}=\mathrm{id}_{Z_{p}(\Sigma)}-h^{k}u+h^{2k}u^{2}-\ldots\mod h^{k+l}.\]
One can write the same formula for \(\rho_{p}(g)^{-1}\) modulo \(h^{k+l}.\) Then a direct computation shows that \(\rho_{p}([f,g])=\mathrm{id}_{Z_{p}(\Sigma)}\) modulo \(h^{k+l}.\)
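One way to organize this computation: writing \(A=\rho_{p}(f)=\mathrm{id}+a\) with \(a=h^{k}u\) and \(B=\rho_{p}(g)=\mathrm{id}+b\) with \(b=h^{l}v,\) we have
\[ABA^{-1}B^{-1}-\mathrm{id}=(AB-BA)A^{-1}B^{-1}=(ab-ba)A^{-1}B^{-1}=h^{k+l}(uv-vu)A^{-1}B^{-1},\]
and since \(A^{-1}B^{-1}\) has coefficients in \(\mathbb{Z}[\zeta_{p}],\) the right-hand side vanishes modulo \(h^{k+l}.\)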
As for point (ii), we recall that \(p\) is equal to \(h^{p-1}\) up to a unit in \(\mathbb{Z}[\zeta_{p}].\) Let us assume again \(f\in N_{k}\) and \(\rho_{p}(f)=\mathrm{id}+h^{k}u;\) then we have
\[\rho_{p}(f)^{p}=id+\binom{p}{1}h^{k}u+\binom{p}{2}h^{2k}u^{2}+\ldots+h^{pk}u^{p}\]
Note that \(p\) divides all binomial coefficients \(\binom{p}{j}\) with \(1\leq j\leq p-1,\) so all terms \(h^{jk}\binom{p}{j}u^{j}\) with \(1\leq j\leq p-1\) are zero modulo \(h^{k+p-1}.\) Since \(pk\geq k+p-1,\) the last term is also zero modulo \(h^{k+p-1}\) and we get the second claim.
### Analysis of \(\operatorname{Ker}(\rho_{p,1})\)
In this section, we will study the representation \(\rho_{p,1}\) restricted to the Torelli group \(J_{1}(\Sigma)\) and give a proof of Theorem 1.2. First, note that since we restrict \(\rho_{p,1}\) to \(J_{1}(\Sigma),\) we can consider it as a linear representation instead of just a projective representation. For \(G\) a group, we write \(\operatorname{Ab}(G)\) for its abelianization, and \(\operatorname{Ab}_{p}(G)\) for its mod \(p\) abelianization: \(\operatorname{Ab}_{p}(G)=G/\langle[G,G],G^{p}\rangle.\)
**Lemma 3.2**.: _The morphism_
\[\rho_{p,1}:J_{1}(\Sigma)\longrightarrow\operatorname{Aut}(Z_{p}(\Sigma))\text{ mod }h\]
_factors through \(\operatorname{Ab}_{p}(J_{1}(\Sigma)).\)_
Proof.: It is a consequence of Corollary 2.4 that \(J_{2}(\Sigma)\subset\operatorname{Ker}(\rho_{p,1}).\) Since \(J_{1}(\Sigma)/J_{2}(\Sigma)\) is abelian, \(\rho_{p,1}\) factors through \(\operatorname{Ab}(J_{1}(\Sigma)).\) Moreover, \(J_{1}(\Sigma)\) is generated by bounding pairs by Johnson's theorem [11]. Therefore it suffices to show that \(p\)-th powers of bounding pairs are in \(\operatorname{Ker}(\rho_{p,1}).\) However, the \(p\)-th power of a bounding pair \(\tau_{c}\tau_{c^{\prime}}^{-1}\) is \(\tau_{c}^{p}\tau_{c^{\prime}}^{-p},\) a product of \(p\)-powers of Dehn twists, which is in \(\operatorname{Ker}(\rho_{p}).\)
We can put a \(\operatorname{Mod}(\Sigma)\)-module structure on \(\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}\) by defining for \(f\in\operatorname{Mod}(\Sigma),g\in J_{1}(\Sigma):\)
\[f\cdot\rho_{p,1}(g)=\rho_{p,1}(fgf^{-1}).\]
However, since \(\rho_{p,1}\) is abelian on \(J_{1}(\Sigma),\) this induces a \(\operatorname{Mod}(\Sigma)/J_{1}(\Sigma)\simeq\operatorname{Sp}_{2g}(\mathbb{ Z})\)-representation structure on \(\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}.\)
**Proposition 3.3**.: _For any \(g\geq 3,\) and any prime \(p\geq 5\) the representation \(\rho_{p,1}\) induces an isomorphism of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-representations \(Ab_{p}(J_{1}(\Sigma))\simeq\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}.\)_
Proof.: We will first consider the case where \(p\) does not divide \(g-1.\) Thanks to Lemma 3.2, the map \(\rho_{p,1}:J_{1}(\Sigma)\longrightarrow\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}\) induces a surjective morphism of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-representations from \(\operatorname{Ab}_{p}(J_{1}(\Sigma))\) to \(\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}.\) However, by Proposition 2.14 and Theorem 2.10, when \(p\geq 5\) is not a divisor of \(g-1,\) we have that \(\Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p})/\left(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p})\right)\) is an irreducible representation of \(\operatorname{Sp}_{2g}(\mathbb{Z}).\) This means that the morphism induced by \(\rho_{p,1}\) is either an isomorphism or the trivial morphism. However, since by Lemma 2.6 the image of a bounding pair of genus \(1\) under \(\rho_{p,1}\) is not trivial, \(\rho_{p,1}\) is not trivial, and therefore induces an isomorphism \(Ab_{p}(J_{1}(\Sigma))\simeq\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}.\)
Now, let us treat the case where \(p\) divides \(g-1.\) We still have that \(\rho_{p,1}\) induces a surjective morphism of \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-representations \(Ab_{p}(J_{1}(\Sigma))\longrightarrow\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)},\) however since \(Ab_{p}(J_{1}(\Sigma))\simeq\Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p})/(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p})),\) the kernel might not be trivial. Since the kernel of this map is an \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-subrepresentation, by Proposition 2.14, we only have to exclude the possibility that \(\operatorname{Ker}\rho_{p,1}=\operatorname{Ker}\overline{\kappa}.\)
For this, let us introduce \(S,\) a subsurface of \(\Sigma\) of genus \(g^{\prime}=3\) with one boundary component, which is a separating curve in \(\Sigma.\) By Lemma 2.18, there is a SIP-map, \([t_{a},t_{b}],\) with support on \(S,\) and such that \(\tau_{1}([t_{a},t_{b}])\) is a primitive element of \(\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})/(\omega\wedge H_{1}(\Sigma,\mathbb{Z})).\) In particular, \([t_{a},t_{b}]\) is mapped to a non-zero element of \(\operatorname{Ker}\overline{\kappa}\) for any prime \(p\geq 5.\) However, since \(p\) does not divide \(g^{\prime}-1=2,\) we have that the mapping class induced by \([t_{a},t_{b}]\) on \(\hat{S}\) is not in \(\operatorname{Ker}\rho_{p,1}.\) By Lemma 2.9, since the boundary of \(S\) is separating in \(\Sigma,\) the mapping class \([t_{a},t_{b}],\) as an element of \(\operatorname{Mod}(\Sigma),\) is also not in \(\operatorname{Ker}\rho_{p,1}.\) Hence \(\operatorname{Ker}\rho_{p,1}\) does not contain \(\operatorname{Ker}\overline{\kappa},\) and \(\rho_{p,1}\) induces an isomorphism \(Ab_{p}(J_{1}(\Sigma))\simeq\operatorname{Im}\rho_{p,1}|_{J_{1}(\Sigma)}.\)
Proof of Theorem 1.2.: Let \(f\in\operatorname{Ker}\rho_{p}.\) Then \(f\) acts trivially on \(\operatorname{Im}\rho_{p,1}\) by conjugation. By Proposition 3.3, we have that \(\operatorname{Im}\rho_{p,1}\simeq\Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p})/\left(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p})\right),\) and by Lemma 2.15, the group \(\operatorname{Sp}_{2g}(\mathbb{F}_{p})\) acts faithfully on \(\Lambda^{3}H_{1}(\Sigma,\mathbb{F}_{p})/\left(\omega\wedge H_{1}(\Sigma,\mathbb{F}_{p})\right).\) Therefore the image of \(f\) in \(\operatorname{Sp}_{2g}(\mathbb{F}_{p})\) is trivial, that is, \(f\) is in the Torelli mod \(p\) subgroup. Since \(g\geq 2,\) the Torelli mod \(p\) subgroup is the subgroup \(J_{1}(\Sigma)T_{p}\) (this is a consequence of the classical fact that the \(p\)-congruence subgroup of \(\operatorname{Sp}_{2g}(\mathbb{Z})\) is generated by \(p\)-th powers of transvections). Now since the subgroup \(T_{p}\) generated by \(p\)-th powers of Dehn twists is contained in \(\operatorname{Ker}\rho_{p},\) we must have \(f=f_{1}f_{2}\) with \(f_{2}\in T_{p}\) and \(f_{1}\in J_{1}(\Sigma)\cap\operatorname{Ker}\rho_{p}.\) However, \(J_{1}(\Sigma)\cap\operatorname{Ker}\rho_{p}\subset\operatorname{Ker}\rho_{p,1}|_{J_{1}(\Sigma)},\) and by Proposition 3.3, this kernel is the same as the kernel of the mod \(p\) abelianization of the Torelli group. Since the Torelli group is generated by bounding pairs, whose \(p\)-th powers are in \(T_{p},\) this is the same as the subgroup generated by \([J_{1}(\Sigma),J_{1}(\Sigma)]\) and \(p\)-th powers of bounding pairs, and therefore \(f\in[J_{1}(\Sigma),J_{1}(\Sigma)]T_{p}.\)
### Analysis of \(\operatorname{Ker}(\rho_{p,2})\)
In this section, we will analyze the kernel of \(\rho_{p,2}\) restricted to \(J_{2}(\Sigma),\) and deduce Theorem 1.4.
**Lemma 3.4**.: _The morphism \(\rho_{p,2}:J_{2}(\Sigma)\longrightarrow\operatorname{Im}\rho_{p,2}|_{J_{2}( \Sigma)}\) factors through \(Ab_{p}(J_{2}(\Sigma))\)._
Proof.: We know that \(J_{2}(\Sigma)\subset N_{1}=\operatorname{Ker}\rho_{p,1}\) by Corollary 2.4. By Lemma 3.1, we have that \([N_{1},N_{1}]\subset N_{2}=\operatorname{Ker}\rho_{p,2}\), hence \(\rho_{p,2}|_{J_{2}(\Sigma)}\) is abelian. Moreover, since the Johnson subgroup is generated by separating twists, whose \(p\)-th powers are in \(\operatorname{Ker}\rho_{p}\), the morphism \(\rho_{p,2}|_{J_{2}(\Sigma)}\) actually factors through \(Ab_{p}(J_{2}(\Sigma))\).
Now, as in the previous section, we put a \(\operatorname{Mod}(\Sigma)\)-module structure on \(\operatorname{Im}\rho_{p,2}|_{J_{2}(\Sigma)}\) by defining for \(f\in\operatorname{Mod}(\Sigma),g\in J_{2}(\Sigma):\)
\[f\cdot\rho_{p,2}(g)=\rho_{p,2}(fgf^{-1}).\]
Again, since \(\rho_{p,2}|_{J_{2}(\Sigma)}\) is abelian, this actually induces a \(\mathcal{M}=\operatorname{Mod}(\Sigma)/J_{2}(\Sigma)\)-module structure. Note that \(\mathcal{M}\simeq(\Lambda^{3}H_{1}(\Sigma,\mathbb{Z})/\left(\omega\wedge H_{1 }(\Sigma,\mathbb{Z})\right))\rtimes\operatorname{Sp}_{2g}(\mathbb{Z})\), hence we get also a \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-module structure on \(\operatorname{Im}\rho_{p,2}\).
**Proposition 3.5**.: _Let \(\Sigma\) be a closed surface of genus \(g\geq 6.\) For all large enough prime \(p,\) the map_
\[\rho_{p,2}:Ab_{p}(J_{2}(\Sigma))\longrightarrow\operatorname{Im}\rho_{p,2}|_{ J_{2}(\Sigma)}\]
_is an isomorphism of \(\operatorname{Mod}(\Sigma)/J_{2}(\Sigma)\) modules._
Proof.: By a theorem of Church, Ershov and Putman [1], if \(g\geq 6\) then \(J_{2}(\Sigma)\) is finitely generated. This implies that \(Ab(J_{2}(\Sigma))\) is an abelian group of finite rank, and therefore that \(Ab(J_{2}(\Sigma))\) has no \(p\)-torsion for any large enough \(p.\) If that is the case then \(Ab_{p}(J_{2}(\Sigma))\) is isomorphic to \(Ab^{f}(J_{2}(\Sigma))\underset{\mathbb{Z}}{\otimes}\mathbb{F}_{p}\), where \(Ab^{f}(J_{2}(\Sigma))\) denotes the free part of the abelianization of \(J_{2}(\Sigma)\).
By Theorem 2.11, the rational abelianization \(\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma))\) of \(J_{2}(\Sigma)\) is a sum of \(3\) absolutely irreducible \(\operatorname{Sp}_{2g}(\mathbb{Z})\) representations. Moreover \(\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma))\simeq\operatorname{Ab}^{f}(J_{2}(\Sigma))\underset{\mathbb{Z}}{\otimes}\mathbb{Q}\). By Lemma 2.16, we get that \(\operatorname{Ab}_{p}(J_{2}(\Sigma))\) is also a sum of \(3\) irreducible representations of \(\operatorname{Sp}_{2g}(\mathbb{Z})\) over \(\mathbb{F}_{p}\), whenever \(p\) is large enough. Then \(\operatorname{Im}\rho_{p,2}\) is a quotient \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-representation of \(\operatorname{Ab}_{p}(J_{2}(\Sigma)).\) Note that \(\operatorname{Im}\rho_{p,2}\) is also a quotient \(\mathcal{M}\)-representation of \(\operatorname{Ab}_{p}(J_{2}(\Sigma))\), where \(\mathcal{M}=\operatorname{Mod}(\Sigma)/J_{2}(\Sigma)\).
We have to show that the \(3\) irreducible \(\operatorname{Sp}_{2g}(\mathbb{Z})\)-summands in \(\operatorname{Ab}_{p}(J_{2}(\Sigma))\) survive in \(\operatorname{Im}\rho_{p,2}\).
First, by Theorem 2.20, we can find \(\varphi\in J_{4}(\Sigma)\) with \(\lambda(\varphi)\neq 0\). In particular by Lemma 2.13, \(\varphi\) projects to a non-zero element in the \(\mathbb{Q}\)-summand of \(\operatorname{Ab}_{\mathbb{Q}}(J_{2}(\Sigma))\). Hence for \(p\) big enough \(\lambda(\varphi)\neq 0\) in \(\mathbb{F}_{p}\) and \(\varphi\) projects to a non-zero element of the \(\mathbb{F}_{p}\)-summand of \(\operatorname{Ab}_{p}(J_{2}(\Sigma))\). Now let \(H_{1}\) and \(H_{2}\) be two handlebodies with boundary \(\Sigma\) such that \(H_{1}\underset{\mathrm{Id}}{\cup}\bar{H}_{2}=S^{3}\). By Theorem 2.19,
\[Z_{p}(H_{1}\underset{\varphi}{\cup}\bar{H}_{2})=Z_{p}(S^{3})\big{(}1+6h\lambda( \varphi)\big{)}\mod h^{2},\]
which implies that
\[Z_{p}(H_{1}\underset{\varphi}{\cup}\bar{H}_{2})\neq Z_{p}(S^{3})\mod h^{2} \tag{1}\]
On the other hand by the TQFT axioms \(Z_{p}(H_{1}\underset{\varphi}{\cup}\bar{H}_{2})=\langle Z_{p}(H_{1}),\rho_{p} (\varphi)Z_{p}(H_{2})\rangle\) and \(Z_{p}(S^{3})=\langle Z_{p}(H_{1}),Z_{p}(H_{2})\rangle\). As the vectors \(Z_{p}(H_{1})\) and \(Z_{p}(H_{2})\) belong to the lattice \(\mathcal{S}_{p}(\Sigma)\), Equation (1) implies that \(\rho_{p,2}(\varphi)\) is not trivial. Hence the \(\mathbb{F}_{p}\)-summand of \(\mathrm{Ab}_{p}(J_{2}(\Sigma))\) survives in \(\mathrm{Im}\,\rho_{p,2}\).
Next, by Lemma 2.8, the \(J_{1}(\Sigma)\)-action on \(\mathrm{Im}\,\rho_{p,2}\) is not trivial. However, \(J_{1}(\Sigma)\) acts trivially on the first and second summands of \(\mathrm{Ab}_{p}(J_{2}(\Sigma))\), hence the last summand has to survive in \(\mathrm{Im}\,\rho_{p,2}\).
Finally, for \(f\in J_{1}(\Sigma)\) and \(g\in J_{2}(\Sigma)\), by Corollary 2.4 we have that \(\rho_{p,2}([f,g])\) is of the form \(id_{Z_{p}(\Sigma)}+h\begin{pmatrix}0&B\\ 0&0\end{pmatrix}\) for some matrix \(B.\) This implies that \(\rho_{p,2}([f,g])\) is in the kernel of the map \(\lambda\) and that \(\rho_{p,2}([f,g])\) is \(J_{1}(\Sigma)\)-invariant. Taking \(f\in J_{1}(\Sigma)\) and \(g=t_{\alpha}\in J_{2}(\Sigma)\) provided by Lemma 2.8, we get that \([f,g]\) is not killed by \(\rho_{p,2}\), which implies that the second summand survives in \(\mathrm{Im}\,\rho_{p,2}\).
Proof of Theorem 1.4.: By Theorem 1.2, we have that \(\mathrm{Ker}\,\rho_{p}\subset J_{2}(\Sigma)T_{p}.\) Since \(T_{p}\subset\mathrm{Ker}\,\rho_{p}\), we only need to describe which elements of \(J_{2}(\Sigma)\) are in \(\mathrm{Ker}\,\rho_{p}\). However, by Proposition 3.5, for \(p\) large enough, we have that \(f\in J_{2}(\Sigma)\) is in \(\mathrm{Ker}\,\rho_{p,2}\) if and only if \(f\) is in the kernel of the mod \(p\) abelianization morphism. Since \(J_{2}(\Sigma)\) is generated by separating twists, the kernel of the mod \(p\) abelianization is generated by commutators in \(J_{2}(\Sigma)\) and \(p\)-th powers of separating Dehn twists, both of which are in \([J_{2}(\Sigma),J_{2}(\Sigma)]T_{p}\).
## 4. Further comments
A consequence of Corollary 2.4 is that we can define a morphism from \(\mathrm{Im}\,\rho_{p,2}|_{J_{2}(\Sigma)}\) to \(\mathbb{F}_{p}:\)
**Lemma 4.1**.: _For \(f\in J_{2}(\Sigma),\) let us write \(\rho_{p,2}(f)=id_{Z_{p}(\Sigma)}+h\begin{pmatrix}A_{1}(f)&A_{3}(f)\\ 0&A_{2}(f)\end{pmatrix}\) as in Corollary 2.4. Then the map_
\[\begin{array}{rcc}d:&J_{2}(\Sigma)&\longrightarrow&\mathbb{F}_{p}\\ &f&\longmapsto&\operatorname{Tr}(A_{1}(f))\ \text{\rm mod}\ h\end{array}\]
_is a \(\operatorname{Mod}(\Sigma)\)-invariant morphism._
Proof.: The fact that \(d\) is a morphism is a direct consequence of Corollary 2.4-(ii), while the fact that it is invariant under \(\operatorname{Mod}(\Sigma)\)-action is a direct consequence of Theorem 2.3.
Lemma 4.1 makes it tempting to think of the morphism \(d\) as (a scalar multiple of) the reduction mod \(p\) of the core of the Casson invariant, which is the unique (up to scalar) \(\operatorname{Mod}(\Sigma)\)-invariant morphism \(J_{2}(\Sigma)\to\mathbb{Z}_{p}\) by the work of Morita [10]. Unfortunately, nothing guarantees that the morphism \(d\) is not identically zero.
To check whether this is the case, it is possible to compute \(d\) on a separating Dehn twist of genus \(1,\) thanks to Gilmer and Masbaum's formulae for the dimensions of the odd-even decomposition of \(Z_{p}(\Sigma)\) (see [1, Proposition 7.2]). By performing this analysis, it is possible to check that for each genus \(g,\) the morphism \(d\) is non-trivial for small values of \(p,\) and trivial for all large enough primes \(p.\) Since the proof of this negative result is a bit involved, we will not include it here.
As another remark, a naive approach would be to define another \(\operatorname{Mod}(\Sigma)\)-invariant morphism \(d^{\prime}\) by setting \(d^{\prime}(f)=\operatorname{Tr}(A_{1}(f)+A_{2}(f))\) mod \(h.\) Then as a consequence of Verlinde's formula, it would be possible to show that \(d^{\prime}\) is always the trivial morphism, for any genus \(g\) and any prime \(p\geq 5.\)
|
2309.05973 | Circuit Breaking: Removing Model Behaviors with Targeted Ablation | Language models often exhibit behaviors that improve performance on a
pre-training objective but harm performance on downstream tasks. We propose a
novel approach to removing undesirable behaviors by ablating a small number of
causal pathways between model components, with the intention of disabling the
computational circuit responsible for the bad behavior. Given a small dataset
of inputs where the model behaves poorly, we learn to ablate a small number of
important causal pathways. In the setting of reducing GPT-2 toxic language
generation, we find ablating just 12 of the 11.6K causal edges mitigates toxic
generation with minimal degradation of performance on other inputs. | Maximilian Li, Xander Davies, Max Nadeau | 2023-09-12T05:51:56Z | http://arxiv.org/abs/2309.05973v2 | # Circuit Breaking: Removing Model Behaviors with Targeted Ablation
###### Abstract
Language models often exhibit behaviors that improve performance on a pre-training objective but harm performance on downstream tasks. We propose a novel approach to removing undesirable behaviors by ablating a small number of causal pathways between model components, with the intention of disabling the computational circuit responsible for the bad behavior. Given a small dataset of inputs where the model behaves poorly, we learn to ablate a small number of important causal pathways. In the setting of reducing GPT-2 toxic language generation, we find ablating just 12 of the 11.6K causal edges mitigates toxic generation with minimal degradation of performance on other inputs.
## 1 Introduction
Language models (LMs) often exhibit undesirable behaviors useful during pre-training that prove hard to remove during fine-tuning. This has resulted in capable LMs which competently hallucinate, lie, manipulate, and exhibit undesirable biases (OpenAI, 2023; Brown et al., 2020).
In this work, we propose a new method for removing undesirable behaviors: _targeted edge ablation_. In targeted edge ablation, we target a bad behavior by removing a small number of causal pathways through the model at inference time (Figure 1). Targeted edge ablation follows recent work in using causal mediation to discover computational _circuits_ responsible for particular model behaviors (Wang et al., 2022; Goldowsky-Dill et al., 2023; Geiger et al., 2023a). Rather than discovering circuits, targeted edge ablation discovers causal cuts through circuits, disabling circuits responsible for bad behaviors.
**Main Contributions.** We formulate the problem of behavior removal and propose targeted edge ablation as a possible solution (Section 3). We then present preliminary results in performing targeted edge ablation to harm performance in toxic language generation (Section 4).
Figure 1: In targeted ablation, we (1) rewrite our model as a computation graph of a desired granularity, (2) learn a binary mask over edges while regularizing to penalize ablations, and (3) ablate edges at inference time to avoid the target bad behavior.
## 2 Background
**Circuit analysis.** We can write any model as a connected directed acyclic graph (DAG) with source nodes representing the model's (typically vector-valued) input, sink nodes representing the model's output, and intermediate nodes representing units of computation (e.g. Figure 1, left; see Appendix B). Circuit analysis attempts to mechanistically understand model computation by identifying a subgraph of this DAG that is responsible for a given behavior, and assigning semantic meaning to (groups of) nodes (Wang et al., 2022; Raukur et al., 2022; Chan et al., 2022). Circuits have also been discussed in the context of treating nodes as "features," usually defined as directions in the latent space (Olah, 2022; Cammarata et al., 2020).
**Ablating edges in a computational graph.** Since edges in the model's computational graph represent dependencies between nodes, we can simulate what the model would have computed without a certain node-to-node dependency by performing ablation on an edge in the graph. While previous work has largely focused on ablation of _nodes_ (Ghorbani and Zou, 2020), an advantage of our strategy of ablating edges rather than nodes is the mitigation of polysemantic behavior of model components (Olah et al., 2020), since we investigate the causal importance of each causal path into and out of the component. In our experiments, we use _zero ablation_, in which we compute the destination node as if the source node's value were zero, and _mean ablation_ (Wang et al., 2022), in which we compute the destination node as if the source node's value were set to its mean value over the training set. See Appendix C for more.
## 3 Targeted Ablation for Behavior Removal
Let \(\mathcal{L}(M,\mathcal{D})\) indicate the loss of model \(M\) on a distribution \(\mathcal{D}\) over input-label pairs. We specify a _behavior_ as some distribution \(\mathcal{D}\) on which the model achieves low loss \(\mathcal{L}(M,\mathcal{D})<K\) for some appropriate hyperparameter \(K\). We can define the _disjointness_ \(\delta(\mathcal{D},\mathcal{D}^{\prime})\) for behaviors \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) to be the total variation distance between \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). In particular, the total variation distance is 1 if \(\mathcal{D}\) assigns probability 0 to all regions to which \(\mathcal{D}^{\prime}\) assigns positive probability, and vice versa.
**Definition 3.1** (Behavior Removal).: Given a model \(\mathcal{M}\) and unlimited access to training samples, produce a model \(\mathcal{M}^{*}\) which achieves high loss \(\mathcal{L}(\mathcal{M}^{*},\mathcal{D})>K\), without harming distinct behaviors. In particular, for all behaviors \(\mathcal{D}^{\prime}\) completely disjoint from \(\mathcal{D}\), i.e. \(\delta(\mathcal{D},\mathcal{D}^{\prime})=1\), we wish to preserve \(\mathcal{L}(\mathcal{M}^{*},\mathcal{D}^{\prime})\leq\mathcal{L}(\mathcal{M},\mathcal{D}^{\prime})\).
Thus, behavior removal has two goals: _efficacy_: the edited model should achieve high loss on \(\mathcal{D}\); and _specificity_: the edited model should achieve low loss on all disjoint behaviors \(\mathcal{D}^{\prime}\) for which the original model achieves low loss.
Let \(D_{\text{train}}\) be our train set, and \(D_{\text{behavior}}\) be samples from \(\mathcal{D}\). One reason the model might exhibit a behavior is if \(\mathcal{D}\) overlaps with the training distribution, which would incentivize the model to produce low loss on \(\mathcal{D}\). Thus, it is reasonable to assume \(D_{\text{train}}\) and \(\mathcal{D}\) may not be completely disjoint.
### Baseline: Finetuning
We form an approximate objective function that encourages preserving performance on the training set while increasing loss on the bad behavior set:
\[\mathcal{L}(\mathcal{M},D_{\text{train}})-\alpha\cdot\mathcal{L}(\mathcal{M},D_{\text{behavior}}) \tag{1}\]
where \(\alpha\) is a hyperparameter. We can now finetune using Equation 1. Since \(D_{\text{behavior}}\) is often small, we use early stopping to avoid overfitting.
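As a minimal sketch, one training step on this objective might look as follows (we assume a HuggingFace-style model whose forward pass returns a `.loss`; the function and argument names are illustrative, not from the paper's codebase):

```python
def joint_finetune_step(model, optimizer, train_batch, behavior_batch, alpha=0.2):
    """One gradient step on Eq. 1: L(M, D_train) - alpha * L(M, D_behavior)."""
    optimizer.zero_grad()
    # Usual language-modeling loss on ordinary training data (to be kept low).
    train_loss = model(**train_batch).loss
    # Loss on bad-behavior examples (to be pushed up, hence the minus sign).
    behavior_loss = model(**behavior_batch).loss
    (train_loss - alpha * behavior_loss).backward()
    optimizer.step()
    return train_loss.item(), behavior_loss.item()
```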
### Baseline: Task Arithmetic
In task arithmetic (Ilharco et al., 2023), we finetune \(\mathcal{M}\) on \(\mathcal{L}(\mathcal{M},D_{\text{behavior}})\)_towards_ the bad behaviors, and find the "task vector", or difference in weights between the finetuned model and \(\mathcal{M}\). We then form \(\mathcal{M}^{*}\) by adding the negated task vector to \(\mathcal{M}\).
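Concretely, the negation step is a weight-space subtraction; a sketch, under the assumption that `model` and `finetuned` are two instances of the same architecture:

```python
import copy
import torch

def subtract_task_vector(model, finetuned, scale=1.0):
    """Return a copy of `model` with the task vector (finetuned - model) negated."""
    edited = copy.deepcopy(model)
    with torch.no_grad():
        for p_orig, p_ft, p_edit in zip(model.parameters(),
                                        finetuned.parameters(),
                                        edited.parameters()):
            # The task vector points toward the bad behavior; move away from it.
            p_edit.copy_(p_orig - scale * (p_ft - p_orig))
    return edited
```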
### Targeted Edge Ablation
Following Figure 1, we describe targeted edge ablation as three steps.
**1. Rewrite the model.** We first choose at what level of granularity to represent the model's computation. Since we learn a mask over edges in the resulting graph, increasing the granularity results in a more expressive ablation process. We call the specified graph \(G\), and call its set of edges \(E_{G}\).
**2. Learn an ablation mask.** Let \(G_{-E}\) be our graph \(G\) with the edges in \(E\) ablated. Then we wish to select \(E\subset E_{G}\) that minimizes
\[\mathcal{L}(G_{-E},D_{\text{train}})-\alpha\cdot\mathcal{L}(G_{-E},D_{\text {behavior}})+\lambda\cdot R(E) \tag{2}\]
for hyperparameters \(\alpha,\lambda\) and some regularization function \(R\).1 To compute an optimal edge subset \(E\), we optimize an edge mask \(W_{\text{mask}}\) on a continuous relaxation of Equation 2. Every edge \(e=(A,B)\) is given a learnable weight \(w_{e}\in[0,1]\), where \(w_{e}=0\) corresponds to ablating \(e\), \(w_{e}=1\) corresponds to preserving \(e\), and \(0<w_{e}<1\) corresponds to node \(B\) observing the following convex combination of the preserved value (\(v_{A}\)) and the ablated value (\(\mu_{A}\)) for node \(A\):
Footnote 1: The regularization term penalizes large sizes of \(E\) to apply pressure to find a minimal subset of edges that disables the behavior.
\[w_{e}\cdot v_{A}+(1-w_{e})\cdot\mu_{A} \tag{3}\]
When \(w_{e}=0\), node \(B\)'s observation of node \(A\) is replaced by its ablated value, and when \(w_{e}=1\), node \(B\) fully observes the value of node \(A\). We initialize the mask parameters \(W_{\text{mask}}\) to a vector of \(1\)s (indicating fully faithful model computation) and train \(W_{\text{mask}}\) on the loss function
\[\mathcal{L}(W_{\text{mask}};\alpha,\lambda,R) =\mathcal{L}(W_{\text{mask}},D_{\text{train}})\] \[\quad-\alpha\cdot\mathcal{L}(W_{\text{mask}},D_{\text{bad behavior}})\] \[\quad+\lambda(t)\cdot R(W_{\text{mask}}) \tag{4}\]
We train with a regularization weight \(\lambda(t)\) that increases over time, since we find that this training dynamic encourages the edge mask to find a set of ablations that removes the bad behavior and then revise it to minimize the number of ablations. When training is finished, we then round all the mask weights to either 0 or 1 by selecting the set of ablated edges to be \(\hat{E}^{*}=\{e\mid w_{e}\leq\tau\}\) for some threshold \(\tau\in(0,1)\).
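The continuous relaxation above can be implemented as a learnable per-edge gate; the following sketch is illustrative (the class and method names are ours, not those of the released implementation):

```python
import torch
import torch.nn as nn

class EdgeMask(nn.Module):
    """One learnable weight w_e in [0, 1] per edge of the computation graph."""

    def __init__(self, num_edges):
        super().__init__()
        # Initialize at 1: the unmasked model computes fully faithfully.
        self.w = nn.Parameter(torch.ones(num_edges))

    def gate(self, edge_idx, value, ablated_value):
        # Eq. 3: the destination observes w_e * v_A + (1 - w_e) * mu_A.
        w = self.w[edge_idx].clamp(0.0, 1.0)
        return w * value + (1.0 - w) * ablated_value

    def regularizer(self):
        # R(W_mask) = sum over edges of w_e (the choice used in Section 4.1).
        return self.w.clamp(0.0, 1.0).sum()

    def ablated_edges(self, tau=0.5):
        # Round the trained mask: E* = {e : w_e <= tau}.
        return (self.w <= tau).nonzero(as_tuple=True)[0].tolist()
```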
**3. Ablate during inference.** We form \(\mathcal{M}^{*}\) by ablating the edges learned in step (2) at inference time.
### Conceptual Advantages over Fine-Tuning
**Limited Expressivity.** LMs and other large models may have millions or billions of parameters and thus may be vastly overparameterized for the task of performing poorly on the bad-behavior examples, especially if generating bad-behavior examples is expensive and the set of examples is small.2
Footnote 2: For example, collecting jailbreaks to remove jailbreaking behavior is challenging and expensive.
A particular advantage of limiting the expressivity of our solution class is avoiding the negative effects of training on a mis-specified objective function like Equation 1, which encourages low loss on samples in \(D_{\text{train}}\) which exhibit the behavior but are not included in \(D_{\text{behavior}}\). Allowing the model to overfit to this loss function may result in memorization of the points in \(D_{\text{behavior}}\) to maintain low loss on _all_ of \(D_{\text{train}}\), including those points which have high likelihood in \(\mathcal{D}\). On the other hand, edge ablation limits the expressivity of the solution space and relies on the model's previously learned specialization of causal pathways.
**Preserving Structure.** Since edge ablation edits the model at a high level, it preserves most of the model's mechanistic calculus. Even subtle fine-tuning has the potential to entirely reorganize the model's reasoning process, disrupting any mechanistic interpretability work that has already been performed. Targeted edge ablation is unlikely to induce the model to change its reasoning structure or increase its knowledge because it strictly decreases the amount of information available to the model's computation.
## 4 Removing Toxicity in GPT-2
We apply our model editing methodology to preventing the generation of toxic (e.g. offensive, swear-filled) sequences in a pre-trained GPT-2 Small (Radford et al., 2019). Our goal is to edit GPT-2 so that it achieves high loss on toxic sequences, so our \(\mathcal{D}\) is a distribution over toxic sequences for which the model achieves low loss.3
Footnote 3: All code is available at [https://anonymous.4open.science/r/circuit-breaking-5DE5/](https://anonymous.4open.science/r/circuit-breaking-5DE5/).
As an approximation of our train set \(D_{\text{train}}\), we use 10,000 samples from OpenWebText (OWT) (Gokaslan and Cohen, 2019). See Appendix E for results in removing a sub-class in an image classification model.
**Constructing a bad behavior dataset.** We sample excerpts from highly toxic comments posted to the Politically Incorrect board of the 4chan imageboard forum (Papasavva et al., 2020). We sample from posts assigned a toxicity score of greater than 0.9, as calculated by Google's Perspective API Toxicity V6 (Google, 2023).
### Learning Edge Mask Details
Similar to (Goldowsky-Dill et al., 2023; Wang et al., 2022), we write GPT-2 as a graph consisting of the input, the output, attention heads, and MLPs (158 nodes total) by considering a "residual rewrite" of the model's computational structure. The canonical description of a transformer model expresses the attention head \(A_{i,j}\) (the \(j\)th attention head in layer _i_) as taking an argument \(R_{i-1}\), the residual from the previous layer. However, since \(R_{0}=I\) (where \(I\) represents the input embeddings) and \(R_{i}=R_{i-1}+\sum_{j}A_{i,j}+M_{i}\) (where \(M_{i}\) is the output of the MLP node in layer _i_), we can instead consider attention head \(A_{i,j}\) as operating on the sum \(S_{i}^{A}=I+\sum_{i^{\prime}<i}\left(M_{i^{\prime}}+\sum_{j^{\prime}}A_{i^{ \prime},j^{\prime}}\right)\), and taking all nodes in previous layers as separate input arguments. Similarly, we can consider MLP node \(M_{i}\) as operating on the sum \(S_{i}^{M}=I+\sum_{i^{\prime}<i}M_{i^{\prime}}+\sum_{i^{\prime}\leq i}\sum_{j^{ \prime}}A_{i^{\prime},j^{\prime}}\), and the output node as operating on the sum of the input embeddings and all attention head and MLP outputs. In total, this residual rewrite gives us a nearly-dense graph containing 11,611 edges: one between every pair of (attention head, MLP, input, and output) nodes, except for attention heads in the same layer, which do not communicate with each other. Concretely, ablating an edge from \(A_{i^{\prime},j^{\prime}}\) to \(A_{i,j}\) entails replacing the \(A_{i^{\prime},j^{\prime}}\) term in \(S_{i}^{A}\) for the input to attention head \(A_{i,j}\) with zero (for zero ablation) or the mean value of head \(A_{i^{\prime},j^{\prime}}\) (for mean ablation).
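To illustrate the residual rewrite, here is a sketch of assembling a downstream node's input from per-source contributions so that individual edges can be ablated (the bookkeeping is simplified and the names are ours):

```python
import torch

def node_input(contributions, ablated_sources, means=None):
    """Sum the residual-stream outputs of all upstream nodes, ablating cut edges.

    contributions: dict mapping source-node name -> its output tensor
    ablated_sources: set of source names whose edge into this node is cut
    means: dict of per-source mean activations (mean ablation); None = zero ablation
    """
    total = None
    for name, value in contributions.items():
        if name in ablated_sources:
            # Replace this source's term in the input sum by its ablated value.
            value = torch.zeros_like(value) if means is None else means[name]
        total = value if total is None else total + value
    return total
```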
We train two ablated models using a continuous edge mask. First, we train a zero-ablation mask against \(\mathcal{L}(W_{\text{mask}};\alpha,\lambda,R)\) described in Equation 4, with \(\alpha=0.2\), \(\lambda(t)=(t-20)/10000\), and \(R(W_{\text{mask}})=\sum_{e\in E_{G}}w_{e}\). This search process finds a mask that ablates 12 edges (Figure 2) and mitigates toxicity while preserving coherence. Second, we train a mean-ablation mask with \(\alpha=0.15\) and using the same hyperparameters otherwise, which finds a mask that ablates 84 edges and produces a similar effect.
As a baseline, we fine-tune on the loss given by Equation 1 directly, with \(\alpha=0.2\). We use early stopping with a validation set to prevent overfitting.4 We also compare to task arithmetic (Ilharco et al., 2023) (Section 3.2).
Footnote 4: We note this is a stronger baseline than naively training for high loss on our bad behavior set as done in (Ilharco et al., 2023), which we call “gradient ascent” in Table 1.
### Evaluation Metrics
Following Definition 3.1, we evaluate both the model's avoidance of toxic generation (_efficacy_) and the detriment to other behaviors (_specificity_). Since our goal is for the ablated model to achieve high loss on all toxic sequences (i.e. minimizing its probability of predicting subsequent tokens that would cause the sequence to be toxic), we evaluate efficacy in a few ways. First, we consider the ablated model's loss on withheld toxic text and in particular its loss on sequences for which the original model achieves low (\(<5\)) loss. Second, we consider the toxicity of the model's completions when prompted with toxic text, as measured by the score in \([0,1]\), 0 being the least toxic, given by the toxic-comment classifier Detoxify. We emphasize the toxicity of model completions on the specific prompts for which the original model produces highly toxic (\(>0.9\)) output.
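For reference, scores of this kind can be obtained from the open-source Detoxify package; a sketch assuming its standard interface:

```python
from detoxify import Detoxify

scorer = Detoxify("original")  # load the pre-trained toxic-comment classifier

def toxicity(text):
    # Returns a score in [0, 1]; higher means more toxic.
    return scorer.predict(text)["toxicity"]
```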
We evaluate specificity by using the perplexity on withheld sequences from OWT, along with the perplexity on withheld OWT sequences prepended with toxic content. The original model produces low loss (4.617) on these sequences, and we choose to highlight the behavior of retaining coherence when prompted with toxic text as one that is particularly likely to be inadvertently removed when editing the model to produce high loss on toxic text.
### Results
Results are shown in Table 1. We train a model with 12 edges zero-ablated that substantially mitigates toxic generation, decreasing the average toxicity score on model generations for toxic prompts from 0.458 to 0.328 and in particular for the most toxic-inducing prompts from 0.944 to 0.567. This minimal edge ablation outperforms task arithmetic on every efficacy and specificity metric, and causes a lower increase in incoherence following toxic prompts than joint fine-tuning, though it does not eradicate the model's toxicity. Our mean-ablation mask with 84 edges achieves a similar result, greatly mitigating toxic generations without detracting from the model's other behaviors.
## 5 Related Work
**Causal mediation for circuit analysis.** Causal mediation (Pearl, 2009; Iwasaki and Simon, 1994) has been proposed as a framework for evaluating mechanistic causal explanations for model outputs (Goldowsky-Dill et al., 2023; Geiger et al., 2023a; Vig et al., 2020).
Figure 2: **Ablating GPT-2 Small to remove toxicity.**_Left:_ Grey nodes are attention heads, and purple nodes are MLPs. Computation proceeds upwards, with horizontal alignment corresponding to layers. The computational graph has 11,611 edges; red edges are the 12 ablations learned to remove toxicity. _Right:_ Examples of improved non-toxic generation.
Experimental evaluation for causal explanations involves performing a set of ablation experiments to check whether they match hypothesized effects. For example, ablating allegedly unimportant paths should have little impact on the target behavior. Previous work has used the causal mediation framework to discover circuits, including in transformers (Chan et al., 2022; Wang et al., 2022; Nanda et al., 2023).
Existing causal mediation tests and circuit discovery methods built upon these tests evaluate whether a given set of edges are _sufficient_ for a given model behavior (i.e. if they contain a vertical path along the circuit), while our circuit breaking technique finds a set of edges that are _necessary_ for the behavior (i.e. a horizontal "cut" through the circuit).
**Automated circuit discovery.** Recent work has explored automated approaches to discovering circuits, including greedy algorithms which crawl the computational graph and remove edges which preserve behavior above a fixed threshold (Conmy et al., 2023), and gradient descent-based methods which use interchange intervention training (Geiger et al., 2022) to learn alignments between a source model and a proposed high-level causal model (Geiger et al., 2023b). Our work differs in attempting to find neither single features (Vig et al., 2020; Gurnee et al., 2023) nor full computational circuits (Geiger et al., 2023b; Goldowsky-Dill et al., 2023; Wang et al., 2022); instead we discover edges where removing their causal effect _breaks_ a given behavior.
**Weight-masking and model pruning.** Much prior work has sought to compress models by masking parameters (LeCun et al., 1989; Hassibi and Stork, 1992). Most relevant to our work are approaches which learn masks from data by encouraging sparsity and preserving performance (Louizos et al., 2017; Wang et al., 2019; Cao et al., 2021). In our work, we _disincentivize_ sparsity (since we want _fewer_ ablations), and use an objective function tailored to removing a specific behavior instead of preserving general performance. Additionally, our edge-masking technique is more general than weight-masking, since we can ablate internal connections between high-level model components that do not correspond directly to particular weights, such as communication channels between pairs of attention heads. Finally, we prune using mean ablation instead of zero ablation.
Model editing to change or remove behaviors. Recent work has made changes to model behavior by making targeted edits to model weights (Meng et al., 2022) or activations (Hernandez et al., 2023), which differ from our goal of removing behaviors. Gandikota et al. (2023) propose a fine-tuning approach to erasing concepts from diffusion models. Elazar et al. (2021) remove information from a language model's representation by iteratively learning linear probes to extract the information and projecting onto the null space. Compared to such work, we consider coarser ablations, allow editing around multiple components, and seek to break behaviors as opposed to erasing information. Like us, Ilharco et al. (2023) attempt to remove the toxic generation behavior in GPT-2, but do so by fine-tuning on bad behavior and subtracting the weight-difference from the original model.
## 6 Conclusion
Using a small dataset of examples of inputs on which a neural network exhibits a "bad behavior," we find that our method can make high-level modifications to the network that mitigate the bad behavior on the provided examples, generalize to removing the bad behavior across other inputs that trigger it, and cause only small amounts of damage to the model's performance on all other inputs (see Appendix D for limitations). We conjecture that model editing may be an alternative to fine-tuning for targeted behavioral modification, and encourage future work further investigating our approach.
| Method | Toxic-loss | Toxic-loss (filtered) | Toxic generation | Toxic generation (filtered) | Incoherence | TPP Incoherence |
| --- | --- | --- | --- | --- | --- | --- |
| Original | 4.954 | 4.435 | 0.453 | 0.944 | 4.264 | 4.617 |
| Gradient Ascent | **21.339** | **20.980** | 0.015 | 0.013 | 15.287 | 18.415 |
| Task Arithmetic | 5.357 | 4.827 | 0.351 | 0.631 | 4.427 | 4.731 |
| Joint Fine-Tuned | 11.817 | 13.020 | **0.009** | **0.008** | 4.240 | 7.402 |
| Ablated (12 edges) | 5.027 | 4.486 | 0.328 | 0.567 | 4.280 | 4.623 |
| Ablated (84 edges) | 4.895 | 4.470 | 0.280 | 0.441 | **4.180** | **4.515** |
Table 1: Toxic-loss measures the model’s loss on toxic prompts. Toxic generation measures the average toxicity score of model generations on toxic prompts, according to the Detoxify classifier. The filtered columns denote the loss or generation toxicity on test samples filtered by the original model achieving low loss (\(<5\)) or highly toxic generation (\(>0.9\)). Incoherence measures the model’s loss on OWT. Toxic Pre-Pended (TPP) incoherence measures the model’s loss on OWT sequences that have been preceded by toxic text. |
2309.14329 | Innovative Digital Storytelling with AIGC: Exploration and Discussion of
Recent Advances | Digital storytelling, as an art form, has struggled with cost-quality
balance. The emergence of AI-generated Content (AIGC) is considered as a
potential solution for efficient digital storytelling production. However, the
specific form, effects, and impacts of this fusion remain unclear, leaving the
boundaries of AIGC combined with storytelling undefined. This work explores the
current integration state of AIGC and digital storytelling, investigates the
artistic value of their fusion in a sample project, and addresses common issues
through interviews. Through our study, we conclude that AIGC, while proficient
in image creation, voiceover production, and music composition, falls short of
replacing humans due to the irreplaceable elements of human creativity and
aesthetic sensibilities at present, especially in complex character animations,
facial expressions, and sound effects. The research objective is to increase
public awareness of the current state, limitations, and challenges arising from
combining AIGC and digital storytelling. | Rongzhang Gu, Hui Li, Changyue Su, Wayne Wu | 2023-09-25T17:54:29Z | http://arxiv.org/abs/2309.14329v2 | # Innovative Digital Storytelling with AIGC: Exploration and Discussion of Recent Advances
###### Abstract.
Digital storytelling, as an art form, has struggled with cost-quality balance. The emergence of AI-generated Content (AIGC) is considered as a potential solution for efficient digital storytelling production. However, the specific form, effects, and impacts of this fusion remain unclear, leaving the boundaries of AIGC combined with storytelling undefined. This work explores the current integration state of AIGC and digital storytelling, investigates the artistic value of their fusion in a sample project, and addresses common issues through interviews. Through our study, we conclude that AIGC, while proficient in image creation, voiceover production, and music composition, falls short of replacing humans due to the irreplaceable elements of human creativity and aesthetic sensibilities at present, especially in complex character animations, facial expressions, and sound effects1. The research objective is to increase public awareness of the current state, limitations, and challenges arising from combining AIGC and digital storytelling.
Footnote 1: Project page: [https://lggm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/](https://lggm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/)
## 1. Introduction
Digital storytelling plays a vital role in the contemporary multimedia society, permeating various facets of today's Internet, offering substantial value across different objectives, including concept explanation, personal experience reflection, and political argument. As articulated by Joe Lambert (Joe, 1999), the core of digital stories is "bringing narratives to life", which aptly captures the fundamental elements of its creation process. Digital storytelling conventionally leverages a sophisticated amalgamation of multimedia content to craft attractive, immersive, and interactive experiences, involving several pivotal components such as narratives, storyboards, animation, video, and audio (Sundhi et al., 2017).
The production of digital storytelling has been revolutionized by the rapid advancements in computer techniques, including graphics, visualization, and internet platforms. This surge has yielded the development of digital tools, empowering professionals to create increasingly high-quality content. Nevertheless, digital storytelling still requires expertise in various digital art working processes. Recently, the great explosion of AI-generated Content (AIGC) has opened up a compelling avenue for novices to create digital content that satisfies their own ideas with greater ease and efficiency from prompts and sketches, as depicted in Figure 1. This development augments the capacity of artists in ideation, concept refinement, and content production on an unprecedented scale.
Although AIGC has a range of advantages, the exploration of building comprehensive and efficient AIGC technology chains, while simultaneously ensuring the accurate and meaningful expression of artistic ideas, remains an ongoing endeavor that requires further refinement and investigation. In this study, we conduct a pioneering experiment called Naked Monkey's Happy Discovery (N.M.H.D) that involves minimal human intervention, using AIGC as much as possible. Our focus is to preserve nothing but the core idea of the narrative while applying AIGC tools to generate various multimedia elements, such as detailed scripts, character appearances, scene images, animations, audio, and video content. Through this work, we demonstrate a promising technical digital story creation pipeline that heavily relies on AIGC. Additionally, we provide a website displaying further details to the public: [https://lggm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/](https://lggm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/)
Although we have shown a promising path with our AIGC-driven solution, several lingering issues persist, extending from practical to ethical aspects, raising society-wide concerns about the relationships between art and technology. Consequently, we conducted thorough interviews within the creative industries and collected constructive insights regarding the preceding concerns. We summarize the opinions and insights, offering advice about the technological development of AIGC and the evolving landscape of digital storytelling.
## 2. Related Work
### Digital Narrative
Digital storytelling consists of four parts: story script, pictures, audio, and animation (Sundhi et al., 2017). Its flexibility and dynamics incorporate multisensory experiences and utilize cognitive processes for learning (Sundhi et al., 2017). Animation and Film provide unique paths for narrative expression by creating the sense of a virtual world and incorporating dynamic and visual effects (Bradner, 2017).
### Technical Development
Early animation production used manual drawing to create motion trajectories (Sundhi et al., 2017). In the 1970s, computer technologies gradually started
to assist animation (Kumar et al., 2017). Later, motion capture used trajectory data to create character animation (Kumar et al., 2017). Recently, Generative AI models, initially limited to academia, have led to a surge in AI art for the upgrades of some prominent frameworks (Bahdan et al., 2017). The first is the Generative Adversarial Networks, which attracted great attention in the art industry (Kumar et al., 2017). Then Diffusion Models came out, giving better results and providing multi-modality generation (Dosov et al., 2017). Later, more convenient and efficient AI tools emerged, breaching the boundaries between AI and art creation thanks to the development of Large Language Models like GPT (Kumar et al., 2017).
## 3. AIGC Pipeline
In this section, we introduce a novel AIGC-based pipeline for creating narrative-centric digital stories using the world-famous fairy tale "The Emperor's New Clothes" as the adapted example. This fairy tale represents a compelling foundation for AI-driven digital storytelling, owing to a multitude of salient factors. Firstly, its enduring classic status ensures its resonance across diverse cultural and temporal contexts. Secondly, the universally comprehensible themes it encapsulates--namely, hypocrisy, authority, and societal conformity--render it an ideal narrative for engaging with a broad audience. Furthermore, the tale imparts profound moral lessons
Figure 1. The comparison of the conventional process of computer-assisted digital storytelling creation and the process integrating AIGC. The AIGC approach surpasses the traditional method in terms of output efficiency, time, and resource utilization across four production stages (©The author of the paper.)
that have permeated through generations, contributing to its enduring relevance. Its inherent dramatic elements and unexpected plot twists captivate and sustain the interest of viewers, enhancing its suitability for digital storytelling. Importantly, its adaptability for customization to cater to various purposes and target audiences underscores its versatility as a narrative substrate.
As illustrated in Figure 2, we employ multiple state-of-the-art AIGC models for generating various kinds of digital content. Our process commences with the adapted version of textual narrative generation through the large language model. Then, we use a series of image generation models to create original characters and scenes from text prompts, and we employ diverse AI tools and algorithms to composite scene and character animations based on the generated images. Subsequently, various AI-generated audio elements, such as dubbing and music, are created. Lastly, all the materials are integrated to synthesize the ultimate experimental outcomes.
### Text Narrative
Figure 3 presents the results of our three conversations with ChatGPT. Notably, we deliberately avoided specifying a predetermined format for the output script or the classification of scenes, allowing ChatGPT to autonomously craft the character of the "Old Magic Tree." This character, which was not originally conceived by humans, plays a crucial role in enhancing the coherence and plausibility of the narrative, adding a distinctive element to the story.
On the one hand, it is important to highlight that the introduction of "Old Magic Tree" exhibits a certain level of randomness (underlined in Figure 3), which affirms AI's ability to imagine and
Figure 2. The designed AIGC-driven pipeline for digital storytelling production. We synthesize multimedia intermediate contents with different AIGC tools (green icons) and compose all the results with professional Adobe software (gray icons) for the final demo: [https://lsgm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/](https://lsgm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/) (©The author of the paper.)
generate creative narratives, offering a valuable example to demonstrate AI's current comprehension and creativity. On the other hand, while ChatGPT excels at summarizing the main points in a text and condensing information (Huang et al., 2017), the construction of compelling, profound implications or moral lessons within a fable heavily relies on human interpretation, including contextual understanding and the skillful interweaving of storytelling elements.
Thus, although we firmly believe that AI's creative capabilities are well-suited to assist artists in creating more comprehensive and captivating content, human ideas continue to play an indispensable role in the process of text generation in the realm of digital storytelling.
### Images
The image generation node demonstrated exceptional comprehension and generation skills in capturing the intended content and visual atmosphere. However, human intervention is indispensable to refine the image quality to align with aesthetic preferences. As displayed in Figure 4, we used Midjourney for raw image generation and Runway for initial adjustments like erasing or replacing suboptimal elements. For a small number of unsatisfactory images, we manually rectified their flaws with Photoshop.
We utilized two Midjourney versions: V4 (pre-V5 release) and V5. While V5 accurately interprets natural language prompts (Huang et al., 2017), we applied an AI-assisted conversion step to translate scripts into the specific prompt format required for V4 image generation. The general idea is that we informed ChatGPT about Midjourney's parameters and prompt guidelines first, and then we supplied sample prompts with descriptive phrases from the script, image style, frame ratio, and camera settings as the preconditions for further prompt generation.
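As an illustration of this kind of prompt assembly, one could script the packing of script-derived phrases and camera settings into a V4-style prompt as below; the field names and defaults are our own inventions, while `--ar`, `--seed`, and `--v` are standard Midjourney parameter flags.

```python
# Illustrative helper for packing script elements into a Midjourney V4-style
# prompt; field names and defaults are our own, not the project's exact format.
def build_mj_prompt(subject, style="storybook illustration, soft watercolor",
                    camera="wide shot, eye level", aspect_ratio="16:9",
                    seed=None, version=4):
    prompt = ", ".join(p for p in (subject, style, camera) if p)
    prompt += f" --ar {aspect_ratio} --v {version}"
    if seed is not None:
        prompt += f" --seed {seed}"
    return prompt

print(build_mj_prompt("the vain emperor inspecting the invisible cloth", seed=42))
```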
Achieving character consistency in diverse scenes and angles was a formidable task in this experimental phase. Initially, we planned to train the character model by establishing the database with stable images that possess the same Midjourney seed value. However, this method proved difficult due to the open-ended nature of our prompt. Therefore, we manually refined the collection of the most prompt-related images in Runway and Photoshop, addressing concerns like extra fingers and missing elements, while ensuring the original character's features remained intact.
During this node, we observed significant improvements in prompt understanding and image quality when using AIGC for image generation, surpassing its previous performance. However, there is still considerable untapped potential for its further development in generation consistency and capturing intricate details.
### Scene and Character Animation
To enhance the visual appeal and immerse the audience in the narrative, dynamic scenes were created by VoluMax AI and VoluMax Landscape in conjunction with Photoshop's Neural Filters and their depth map engine, as shown in Figure 5. Interestingly, ControlNet's depth detection algorithm (Krizhevsky et al., 2015) was integrated into the scene animation production, enabling seamless weather and environmental transformations within the same scene.
Figure 3. The inquiries and feedback of our three conversations with ChatGPT. We intentionally avoid specifying any predetermined format for the output script or the classification of scenes when inputting text. Instead, we focus on incorporating more restrictions and guidelines to improve the quality of ChatGPT’s feedback (©The author of the paper.)
Furthermore, character animations were produced through various technical combinations. We utilized Crazytalk's facial animation function, which involved facial landmark detection, tracking, and expression mapping, and the First Order Motion Model (FOMM) to create lifelike facial expressions when the characters were not speaking, which added depth and emotional realism, making the characters relatable. To ensure seamless transitions between natural facial expressions in close-up shots, we integrated temporal smoothing and curve fitting features found in RE: Flex, further enhancing the overall visual experience.
However, the facial expression changes in the demo are insufficient and lack real emotional expression. These limitations primarily stem from challenges in technology, data, and computing resources. Emotional expression is intricate and demands precise capturing of subtle facial expressions and body language. Current deep learning models are not yet sensitive enough to these minor
Figure 4. The image generation node is exemplified by creating different perspective images for each character. Initially, the character’s original image is generated by the prompts derived from the story script. Subsequently, the initial multi-view images are generated according to corresponding prompts based on the original image. After modification and refinement processes using AI tools, the final results are completed (©The author of the paper.)
Figure 5. The overview of the animation production pipeline. We divide the animation section into four categories and apply different AIGC technologies integrations to achieve the animated effects. Then the relevant dynamic results are assembled using Adobe software to synthesize the whole clip. We strongly suggest visiting our website for better visualization: [https://lsgm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/](https://lsgm-demo.github.io/Leveraging-recent-advances-of-foundation-models-for-story-telling/) (©The author of the paper.)
emotional nuances. Furthermore, AIGC lacks the ability to recognize individual differences, making it difficult to generate unique vivid facial animations based on the characteristics and personalities of different characters.
Simultaneously, we used an automatic lip-syncing algorithm to synchronize the characters' mouth movements with the lines generated by AI for realistic speech portrayal. Also, to efficiently animate characters in full body motion, texture synthesis and style transfer algorithms from Ebsynth were employed to build the animation out of certain static frames. Using this technique, the characters' movements can be visually appealing and stylistically consistent throughout the production.
When comparing the animation effects achieved by current AIGC technologies with those produced through traditional processes, it becomes evident that AIGC is excellent at coordinating the dynamics of the characters when they are speaking, while the animation effects derived from AIGC-generated images are not competitive, particularly in terms of dynamic expression and three-dimensional motion. This observation highlights the considerable scope for improvement in authenticity, coherence, and personalization in the future development of AIGC.
### Audio
TTS tools like Azure, illustrated in Figure 6, enable effective voice-over generation and voice acting for digital storytelling (Santos et al., 2017). Their extensive customization options, such as adjusting speaking rate, pitch, and volume (Santos et al., 2017), create immersive voice narration for allegory scripts and unique character voices, enhancing the emotional impact and audience engagement.
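As a concrete, simplified illustration, such customization can be expressed through SSML prosody controls and synthesized with the Azure Speech SDK; the subscription key, region, voice name, and prosody values below are placeholders rather than the project's actual settings.

```python
# Hedged sketch of voice customization with the Azure Speech SDK via SSML;
# key, region, voice, and prosody values are placeholders.
import azure.cognitiveservices.speech as speechsdk

ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-GuyNeural">
    <prosody rate="-10%" pitch="+5%" volume="+20%">
      But the emperor wore nothing at all!
    </prosody>
  </voice>
</speak>
"""

config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_out = speechsdk.audio.AudioOutputConfig(filename="line_01.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=config, audio_config=audio_out)
synthesizer.speak_ssml_async(ssml).get()
```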
We also utilized Tianyin and SOUNDRAW to generate two AI-produced background music pieces that contain customized music length, tempo, composition, and instruments. These customized factors helped create the atmosphere and mood, build tension, create a sense of wonder, and amplify the storytelling experience.
Moreover, sound effects play a crucial role in digital storytelling by adding depth and realism to the narrative. Thus, we selected copyright-free sound effects from Freesound and processed all audio materials, such as reducing noise and adding reverb with Adobe Audition, to ensure the high quality of our audio.
Meanwhile, we endeavored to employ AI platforms such as MyEdit, Plugger.AI, and jsfxr for the generation of sound effects; however, these attempts proved unsuccessful. This failure stemmed from the inherent requirement for sound effects in practical applications to exhibit a heightened level of contextual accuracy and emotional expression. Consequently, the efficacy of these sound effects necessitated a greater degree of verisimilitude to ensure that the audience could immerse themselves more completely in the auditory experience.
Furthermore, sound effects are tasked with capturing a diverse range of auditory elements that must be synchronized with the corresponding visual actions to achieve coherence and authenticity in the audio-visual narrative. In contrast, music production typically operates within a more abstract and emotionally driven framework, and it does not require precise synchronization with every visual action.
Consequently, AI-generated music and dubbing have undoubtedly achieved impressive results in practice, whereas sound effect production demands a higher degree of manual intervention and meticulous adjustment to satisfy the exacting requirements of the audio-visual experience. While the technology offers convenience and efficiency, it raises questions about authenticity and human creativity. It is essential to critically evaluate the balance between automation and artistic expression, ensuring that AI remains a tool rather than a replacement for human ingenuity.
## 4. Voices
We conducted insightful discussions with experts and artists from diverse creative industries to collect their feedback about the influences of applying AIGC within the digital storytelling arena. On the premise of introducing our experimental concept and prototype to the interviewees and acquainting them with AIGC's current capabilities and performance, our discussions encompassed the interviewees' perspectives, such as aesthetic connotations, substitutability, transformational development, and original creativity, on the current state of AIGC and its impact within their respective industries.
### Willingness
After explaining our research concept and displaying the prototype, a critical examination of the interviewees' responses reveals an overwhelmingly positive reception towards incorporating AIGC into art creation. A game industry expert highlighted the rapid development of AIGC and its impact on the art industry, he stated,
_AIGC benefits artistic creation and communication, and its rapid growth empowers emerging industry sectors._
They encouraged artists to embrace this trend for enhanced artistic expression. Henriikka and Matti (Henriikka and Matti, 2018) mentioned that although AI's high automation can lead to conflicts in the actualization of creativity due to the lack of complementary resources, it is precisely these conflicts that serve as catalysts for innovation. However, it is important to critically analyze the extent to which AI-driven creations can capture the essence of human creativity and emotional depth.
An interviewee from the Human-Computer Interaction (HCI) domain saw AIGC as an opportunity to generate perspectives beyond human imagination:
_During human evolution, thinking tends to become fixed, but AI might inspire humans in the form of machine language._
This raises questions about the nature of creativity and whether AI can offer unique aesthetic values. It also prompts us to evaluate the role of AI in rediscovering neglected artistic elements and the risk of homogenization or dilution of artistic diversity.
An AI educator emphasized the value of current AI technology as a tool for enhancing artists' efficiency in the creative process:
_From an artistic design standpoint, AIGC can produce various concept diagrams based on artists' input prompts, making early-stage concept creation more
efficient and helping artists find inspiration or make selections._
The above viewpoints are consistent with Hong et al. (2018), which found that integrating AI into art creation does not negatively affect the artistic process or alter how viewers perceive the artistic value of the resulting works. However, it is crucial to assess the potential consequences of excessive reliance on AIGC. Over-reliance might hinder the development of essential artistic skills and discourage artists from exploring their own creativity, leading to dependence on preprogrammed algorithms for artistic expression.
### Replacement
The interviewees were all questioned about their concerns regarding the potential replacement of human artists by AI. Without exception, they firmly believe that the current forms of AIGC are incapable of completely supplanting humans in the art industry. Marian et al. (2018) substantiated this viewpoint, asserting that AI and artists don't compete but collaborate in artistic creation, dispelling the notion that AI could replace artists. One interviewee who specializes in HCI pointed out the fundamental flaw in AI's capacity to comprehend human values and emotions:
_AI cannot replace essential work in the field of art. The most valuable aspect for artists is their creativity, which relies on human values and emotions. AI can only supplant rudimentary labor-intensive tasks and is inherently deficient in comprehending human values and emotions._
Another respondent working in the animation field argues that current AI technology falls short compared to humans when it comes to crucial creative tasks such as designing camera language as she stated,
_While using AIGC can save resources and time, replacing directors and producers is currently impossible because AI cannot replicate the aesthetic sensibilities and creative insights derived from human filmmakers._
An educator who teaches AI underscores the indispensable role played by human artists in selecting and curating art elements with their "unutterable power", despite the remarkable visual generation by AIGC. In the context of the production process, Hong W also acknowledges this perspective by positing that AI is a product of data training, and its innovation is largely contingent upon the updating and iteration of algorithmic models. In terms of creativity, there remains a notable disparity between AI and human artists.
These perspectives are mirrored in our own work, as the artwork produced through these AI-driven processes often lacks human reasoning and spirit. Consequently, it becomes evident that the art industry still heavily relies on the unique contributions of human artists.
Artists typically express themselves through their unique perspectives, emotions, and creativity. While AIGC can generate content, they lack the depth of emotion, experience, and human thought, making it challenging to replace the originality of artists. The essence of artists lies in their creativity and uniqueness, whereas
Figure 6. The schematic demonstration of customization function in TTS tool. The features of the TTS tool allow users to customize their AI-dubbed texts, including choosing voice actors and talking style, setting the pitch curve to match the ideal voice tone, and adjusting the rate and volume of each dubbing sentence (©The author of the paper.)
AIGC serves more as a tool to enhance or expand upon an artist's creativity.
### New Industry Format
Experts in the game, animation, and HCI industries agree that AI has shown potential in optimizing art production and simplifying digital storytelling. They also believe that AI's rapid development will lead to industry shifts, potentially creating more convenient art workflows and a new industry form during society's digital transformation, even redistributing social values. This general sentiment is echoed by many theorists and practitioners, such as Chen et al. (Chen et al., 2018), Gao et al. (Gao et al., 2019), and Li et al. (Li et al., 2019). However, AI will not completely replace any of the current creative industries because it is in a stage of rapid development, during which the new format is still unclear, and the industry regulations have not yet been standardized and improved to address AI specifically.
### The controls of AI
After we presented our experiment result to our interviewees, over half of the interviewees expressed apprehension about the potential consequences of AIGC's increasing dominance, raising valid concerns about the urgency to establish appropriate standards. A professor in the AI industry took photo contests as an example:
_Should AI-generated photos be considered on par with real-life scenario shots in a photo contest in terms of artistic merit? Is it necessary to assess these two types of creations separately? Furthermore, who holds the copyright for AI-generated photos?_
It is evident that artists harbor a genuine fear of relinquishing control over their creative process and becoming mere conduits for AI-generated productions. This anxiety may stem from the realization that AIGC's influence on art can be counterproductive. Furthermore, evaluating AI-assisted artwork presents a complex challenge due to inconsistent quality and the necessity for guidance on aesthetic enhancement.
Meanwhile, AIGC applications are increasingly impacting various industries through text, images, and audio generation, raising concerns about privacy, security, and copyright issues. To address these challenges, Chen et al. (Chen et al., 2018) has previously proposed solutions such as integrating advanced technologies like blockchain and privacy computing to enhance user privacy and security and strengthening and enforcing relevant laws to tackle AIGC-related copyright problems.
Experts believe that the development of comprehensive standards and the prioritization of human decision-making are crucial steps to effectively integrate AIGC into the artistic realm. These measures are imperative to mitigate the potential risks and ensure that AI serves as a valuable tool rather than overshadowing human creativity.
### Summary
In an era of inevitable technological revolution, the development of traditional art and AIGC occurs in parallel, a noncompetitive relationship that perpetually adheres to the aesthetic consciousness of creators. Technical professionals, including AIGC practitioners and other engineers, are responsible for developing and maintaining artificial intelligence systems. Their role is to ensure the smooth application of technology in the realm of art while also understanding the essence and needs of art to better support artists and creators. Technology serves as the foundation of AIGC, and its data originates from the market, so its developmental principles should be centered around creativity, rather than utilizing relatively singular visual representations that overshadow the diversity of artistic expressions.
Looking ahead, a human-level intelligent tool like AIGC helps drive the fusion of art creation and technological innovation, promotes the democratization of art, and makes artistic expression freer and fairer, while urging the public to reflect more deeply on the nature of art. Thus, when using this inevitably derived new industry form, AIGC practitioners should continuously enhance their aesthetic sense and refine creative ideas from their very roots.
Under the circumstance that this dynamic field constantly introduces new industry forms and technologies, requiring practitioners to remain flexible and incorporate novel tools and methods into their work, cultivating a strong aesthetic sense is significant. Practitioners should start the creative process with meticulous ideation and concept development, rather than rushing into production, ensuring that creative ideas are nurtured and refined from their inception, thus setting a robust foundation for the generation of high-quality AIGC content.
To achieve this, AIGC practitioners should embrace a multifaceted approach. The specific suggestions involve staying updated with design trends and studying design history to draw inspiration from the masters. They should actively seek out constructive criticism, collaborate with diverse teams, and understand user psychology to create designs that resonate deeply. This path is the rational development essence for future AIGC professionals instead of blindly following the highly efficient expressive capabilities bestowed by technological advancements.
## 5. Conclusion
This study represents a guiding attempt in the transition of the traditional art industry toward incorporating AI as part of the creative process. The focal project of this study allows extensive investigation of the boundaries of mainly text-driven AIGC creative tools, and effective possibilities contained in various AIGC software and platforms, affirming their generation quality while also highlighting their limitations.
Furthermore, this research serves as a guide, refinement, and catalyst for the current stage of incomplete AIGC creative standards and industry norms. The experimental portion contributes to a more comprehensive understanding of AIGC's current capabilities, while the insights gathered through communications with numerous creative-industry insiders play a beneficial role in directing the future growth of AIGC in a positive manner.
**Acknowledgement.** We would like to express our gratitude to Shikai Li, Kwan-Yee Lin, and Huiwen Luo for their constructive suggestions and support throughout the research process. Additionally, we would like to acknowledge the Shanghai AI Laboratory and OpenXDLab that provided resources and facilities that were crucial to the completion of this work. |
2309.12980 | Quantum enhanced SU(1,1) matter wave interferometry in a ring cavity | Quantum squeezed states offer metrological enhancement as compared to their
classical counterparts. Here, we devise and numerically explore a novel method
for performing SU(1,1) interferometry beyond the standard quantum limit, using
quasi-cyclic nonlinear wave mixing dynamics of ultracold atoms in a ring
cavity. The method is based on generating quantum correlations between many
atoms via photon mediated optomechanical interaction. Timescales of the
interferometer operation are here given by the inverse of photonic recoil
frequency, and are orders of magnitude shorter than the timescales of
collisional spin-mixing based interferometers. Such shorter timescales should
enable not only faster measurement cycles, but also lower atomic losses from
the trap during measurement, which may lead to significant quantum metrological
gain of matter wave interferometry in state of the art cavity setups. | Ivor Krešić, Thorsten Ackemann | 2023-09-22T16:23:19Z | http://arxiv.org/abs/2309.12980v1 | # Quantum enhanced SU(1,1) matter wave interferometry in a ring cavity
###### Abstract
Quantum squeezed states offer metrological enhancement as compared to their classical counterparts. Here, we devise and numerically explore a novel method for performing SU(1,1) interferometry beyond the standard quantum limit, using quasi-cyclic nonlinear wave mixing dynamics of ultracold atoms in a ring cavity. The method is based on generating quantum correlations between many atoms via photon mediated optomechanical interaction. Timescales of the interferometer operation are here given by the inverse of photonic recoil frequency, and are orders of magnitude shorter than the timescales of collisional spin-mixing based interferometers. Such shorter timescales should enable not only faster measurement cycles, but also lower atomic losses from the trap during measurement, which may lead to significant quantum metrological gain of matter wave interferometry in state of the art cavity setups.
The study of light mediated atomic self-organization has advanced greatly since the pioneering experiments in hot alkali vapours [1; 2; 3; 4]. With the maturation of laser cooling and trapping techniques, self-organizing instabilities in laser driven ultracold atoms have subsequently been researched in a wide variety of feedback schemes, establishing a rich subfield of atomic physics [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27].
The earlier works on the quantum aspects of ultracold atom-cavity interaction have concentrated on studying steady state quantum correlations between light and atoms [28; 29; 30; 31; 10; 32; 33]. Recently, the generation of correlated atomic pairs via cavity light-mediated interaction and self-organization, has also come into focus [34; 35; 36], inspired by the earlier work on photon quantum correlations in optical parametric amplifiers and self-organized optical structures in nonlinear crystals [37; 38; 39; 40; 41; 42]. These recent works shift the attention from light-atom entanglement, which was studied in [28; 29; 30; 10; 31; 32; 33], towards light-mediated atom-atom entanglement generation in a cavity.
The importance of quantum entangled states in quantum technologies lies in their ability to speed up a number of computational [43] and metrological tasks [44; 45]. Regarding the latter, quantum enhanced measurement schemes with internal atomic degrees of freedom [34; 46; 47; 56], and also the motional ones [35; 57; 58; 59; 60; 61], have been explored recently.
In this Article, we start with a U(1) symmetric Hamiltonian describing optomechanical stripe ordering in a Bose-Einstein condensate (BEC) placed inside a transversely pumped ring cavity, and show that its transient dynamics near pump threshold can be described by a SU(1,1) Hamiltonian [62; 63; 64; 65; 66]. By applying the insight from [67] that cyclic dynamics can lead to effective time reversal in such a quantum system, we numerically demonstrate quantum enhanced SU(1,1) matter wave interferometry with the ring cavity scheme. Interferometric estimation of the phase shift using measurements of mean value and variance of the atomic on-axis momentum mode number operator [67] allows for precision measurements of the optical transition recoil frequency. Combining this quantity with the result of a corresponding transition wavelength measurement can be used to determine the fine structure constant [68; 69; 70], and inertial mass at microscopic scales [71].
In contrast to the previously studied schemes for nonlinear SU(1,1) spin state interferometry with Bose-Einstein condensates [67; 72; 73], our proposal employs atoms with a single ground state (spin-0), for matter wave (motional state) interferometry. Due to relative simplicity of the setup, these results highlight the potential of employing ultracold atomic self-organization for quantum technologies.
The setup is shown in Fig. 1a). It consists of a prolate shaped Bose-Einstein condensate (BEC) held inside a ring cavity, and pumped along the \(-z\) direction by a coherent field with pump rate \(\eta\), frequency \(\omega\) and wavenumber \(k\). As in earlier work on transversely pumped cavities [7; 10], we study the 1D situation, where the recoil along the \(z\) axis is neglected due to a trap confining the atoms along \(y\)- and \(z\)-axes [74]. A similar setup has been experimentally implemented in [24]. Contrary to the similar recently utilized mechanisms for entanglement generation using atoms with multilevel transitions [34; 35], the situation studied here relies on atoms and light interacting via a two-level optical transition.
The free space photon scattering can be greatly suppressed in atom-cavity systems with collective strong coupling [5; 35; 75], such that the atom-light interaction is well described by taking into account only the intracavity photon modes. For light far-detuned from the atomic transition, the excited state can be adiabatically eliminated, leading to a Hamiltonian describing optomechanical interaction. Using the three optomechanical mode approximation, which is a good description at \(\eta\) values near threshold [74], the atomic motion can be described by a zero-order mode with \(p_{x}=0\), and left- and right-moving modes with \(p_{x}=\mp\hbar k_{c}\), with annihilation operators \(b_{j}\) where \(j=0,+,-\), and the field operator given by:
\[\psi(x)=\frac{1}{\sqrt{V}}\left(b_{0}+b_{+}e^{ik_{c}x}+b_{-}e^{-ik_{c}x} \right), \tag{1}\]
with \(V\) being the volume of the system and \(k_{c}\) the wavenumber of the ring cavity modes. As the pump-cavity detunings we use are many orders of magnitude smaller than the cavity frequency, in the above we have taken \(k=k_{c}\). We here assume
that relevant system dynamics is significantly faster than the cloud expansion in the harmonic trap, such that the description of the cloud as a quantum degenerate gas with three modes is valid throughout [35].
Adiabatically eliminating the photonic fields, the unitary evolution of the atomic degrees of freedom is determined by the effective Hamiltonian (see Appendix A for a detailed derivation):
\[H_{c}=\frac{g_{c}}{2N}\left[2b_{+}^{\dagger}b_{-}^{\dagger}b_{0}b_{0}+2b_{0}^{\dagger}b_{0}^{\dagger}b_{+}b_{-}+(2N_{0}-1)(N_{+}+N_{-})\right]-qN_{0}, \tag{2}\]
where \(g_{c}=2N\tilde{\Delta}_{c}\eta^{2}/(\tilde{\Delta}_{c}^{2}+\kappa^{2})=-\omega_{\rm R}\eta^{2}/(2\eta_{c}^{2})\), \(q=\omega_{\rm R}+g_{c}/N\), with \(\eta_{c}=\sqrt{-\omega_{\rm R}(\tilde{\Delta}_{c}^{2}+\kappa^{2})/(4\tilde{\Delta}_{c}N)}\), and \(N_{j}=b_{j}^{\dagger}b_{j}\). Here \(N=N_{0}+N_{+}+N_{-}\) is the total number of atoms, kept constant in the simulations presented, \(\tilde{\Delta}_{c}\) is the detuning of the pump laser from the cavity mode, \(\kappa\) is the cavity photon decay rate, \(\omega_{\rm R}=\hbar k_{c}^{2}/(2m)\) is the photon recoil frequency, and \(\eta_{c}\) is the threshold pump rate for self-organization.
The first two terms in Eq. (2) describe the creation and destruction (mixing) of correlated atom pairs with opposite momenta \(\pm\hbar k_{c}\), from the initial polar state \(|N\rangle_{0}|0\rangle_{+}|0\rangle_{-}\). The third term describes the energy shift caused by the photon-mediated interatomic elastic collision processes that do not produce correlated atom pairs. The fourth term describes the energy shift of the momentum ordered modes with \(p_{x}=\pm\hbar k_{c}\) with respect to the homogeneous mode \(p_{x}=0\). For \(g_{c}<0\), the system undergoes self-organization above a quantum critical point at \(q=2|g_{c}|\)[67; 54; 76], where it becomes more energetically favorable to populate the \(p_{x}=\pm\hbar k_{c}\) states via the mixing terms, as illustrated in Fig. 1c). For large \(N\), \(q=2|g_{c}|\) corresponds to the semiclassical threshold condition \(\eta=\eta_{c}\). In the semiclassical picture, self-organization in atomic density occurs above threshold due to an optical lattice arising from interference of superradiantly scattered light in the co- and counter-propagating cavity modes [74; 24].
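To see this correspondence explicitly, note that for large \(N\) one has \(q=\omega_{\rm R}+g_{c}/N\approx\omega_{\rm R}\), while \(2|g_{c}|=\omega_{\rm R}\eta^{2}/\eta_{c}^{2}\), so the critical condition \(q=2|g_{c}|\) reduces to \(\eta=\eta_{c}\). As a minimal numerical sketch, Eq. (2) can be assembled for a small atom number, e.g. with QuTiP; the Fock cutoffs and parameter values below are illustrative, not those used in our simulations.

```python
# Minimal sketch of Eq. (2) for a small atom number using QuTiP; cutoffs,
# atom number, and pump strength are illustrative choices.
import numpy as np
from qutip import destroy, qeye, tensor

N = 4                                # total atom number (small for tractability)
wR = 2 * np.pi * 14.5e3              # recoil frequency omega_R (rad/s)
eta_ratio = 1.4                      # pump rate eta in units of eta_c

gc = -wR * eta_ratio**2 / 2.0        # g_c = -omega_R eta^2 / (2 eta_c^2)
q = wR + gc / N

b0 = tensor(destroy(N + 1), qeye(N + 1), qeye(N + 1))
bp = tensor(qeye(N + 1), destroy(N + 1), qeye(N + 1))
bm = tensor(qeye(N + 1), qeye(N + 1), destroy(N + 1))
N0, Np, Nm = b0.dag() * b0, bp.dag() * bp, bm.dag() * bm

Hc = (gc / (2 * N)) * (2 * bp.dag() * bm.dag() * b0 * b0
                       + 2 * b0.dag() * b0.dag() * bp * bm
                       + (2 * N0 - 1) * (Np + Nm)) - q * N0
```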
The generation of momentum correlated pairs of atoms via \(H_{c}\) can be explained by the process illustrated in Fig. 1b). Above the self-organization threshold, the scattering of photons into the counterclockwise (clockwise) [77], initially unpopulated, ring cavity mode, leads to an atom receiving a momentum kick of \(-\hbar k_{c}\) (\(\hbar k_{c}\)) along the \(x\)-axis. When this photon is scattered back into the driving field \(\eta\), provided that it has not decayed out of the cavity, another atom receives a momentum kick of \(\hbar k_{c}\) (\(-\hbar k_{c}\)) along the \(x\)-axis. As the same photon scatters off this atomic pair, the atoms become quantum correlated.
Figure 2: SU(1,1) matter wave metrology using quasi-cyclic dynamics, by unitary evolution via \(H_{c}\). (a) Principle of operation. The initial state \(|N\rangle_{0}|0\rangle_{+}|0\rangle_{-}\) is split into the three momentum modes via unitary evolution with \(H_{c}\), after which a phase shift \(\phi=-2\phi_{0}=2\omega_{\rm R}\tau\) is imprinted on the zero-order mode at \(t=t_{1}\), where \(\tau\) is the short time for which \(\eta=0\) (see text). The quasi-cyclic evolution leads to near return to the initial state for \(\phi=0\), and a phase-dependent state for \(\phi>0\), at \(t=t_{2}\). At the end of the cycle, the population in the zero-order mode can be measured via absorption imaging in the momentum space. (b) Unitary evolution of \(\langle p_{0}\rangle\) for \(\phi=0\) (blue, solid) and \(\phi=0.012\) (red, dashed). Vertical dashed lines indicate the pump time \(t_{1}\) and measurement time \(t_{2}\) (see text). Parameters: \(N=10000\), \(\eta=1.4\eta_{c}\), \(\omega_{\rm R}=2\pi\times 14.5\) kHz, \(\tilde{\Delta}_{c}=-1\) GHz.
Figure 1: Principle of entanglement generation in a Bose-Einstein condensate (BEC) placed inside a transversely pumped ring cavity. (a) Self-organization of laser pumped atoms in a ring cavity (\(\eta\) - pump rate) with photon leakage rate \(\kappa\). The two-level atomic optical transition with frequency \(\omega_{a}\) is driven by a far off-resonant laser beam of frequency \(\omega\). (b) An atom in the condensate gets a momentum kick of \(-\hbar k_{c}\) (\(\hbar k_{c}\)) by scattering a drive photon with wavenumber \(k=k_{c}\) into the initially empty counterclockwise (clockwise) cavity mode with wavenumber \(k_{c}\). A correlated atom with \(\hbar k_{c}\) (\(-\hbar k_{c}\)) can then be created if this cavity photon does not decay out of the cavity but scatters back into the driving field. (c) Scan of largest \(\langle\rho_{+}+\rho_{-}\rangle=\langle N_{+}+N_{-}\rangle/N\) attained during unitary evolution (see text), against pump rate \(\eta\). For \(\eta<\eta_{c}\), the energy cost prohibits the excitation of atoms into \(p_{x}=\pm\hbar k_{c}\) states, leading to a homogeneous BEC. In contrast, macroscopic populations in the \(p_{x}=\pm\hbar k_{c}\) states occur when \(\eta>\eta_{c}\), leading to striped order. Parameters: \(N=1000\), \(\tilde{\Delta}_{c}=-1\) GHz, \(\omega_{\rm R}=2\pi\times 14.5\) kHz.
This correlation can lead to the appearance of momentum entangled Dicke squeezed states with reduced variance of \(N_{+}-N_{-}\)[55; 78; 79; 80; 81; 82; 83], described in the SU(2) algebra of two modes with \(p_{x}=\pm\hbar k_{c}\). Note that such states are used in linear interferometry, whereas for SU(1,1) interferometry the squeezing is best described in the three mode SU(3) algebra [84]. In this case the squeezing of the polar state, achieved via nonlinear pendulum-like quantum dynamics, leads to sensitivity to external perturbations [66; 67].
Fig. 1c) depicts the maximal \(\langle\rho_{+}+\rho_{-}\rangle=\langle N_{+}+N_{-}\rangle/N\) reached during unitary evolution for a duration of \(20/\omega_{R}\). Due to the vanishing commutator \([N_{+}-N_{-},H_{c}]=0\), the unitary evolution (\(\kappa=0\)) is numerically tractable by exact diagonalization even for large \(N\) values [62]. Note that we here take \(N\) to be conserved due to the relatively short timescales of system evolution as compared to [67]. Below threshold, the system stays in the zero-order mode. At \(\eta>\eta_{c}\), a macroscopic population starts appearing in the \(p_{x}=\pm\hbar k_{c}\) states, which is a signature of atomic momentum ordering.
The typical unitary evolution of \(\rho_{0}=N_{0}/N\) expectation values is given by the solid line in Fig. 2b). The \(\langle\rho_{0}\rangle\) performs quasi-cyclic oscillations. Such behavior is a signature of many-body nonlinear wave mixing, and has been studied using spin models similar to \(H_{c}\) in Refs. [62; 63; 64; 65; 66; 67; 68; 84]. The problem can be viewed as a nonlinear pendulum in the semiclassical treatment [66]. In the context of optomechanical pattern formation, the quasi-oscillations of \(\langle\rho_{0}\rangle\) indicate sloshing dynamics, stemming from the atoms falling into and out of the optical potential wells of the self-organized lattice. For thermal atoms in the semiclassical limit, this behavior was for short timescales modeled by the Kuramoto model of coupled oscillators [85].
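Continuing the sketch above, the quasi-cyclic oscillations of \(\langle\rho_{0}\rangle\) can be reproduced by evolving the polar state under \(H_{c}\):

```python
# Continuing the sketch above: unitary evolution of the polar state
# |N>_0 |0>_+ |0>_-, recording the quasi-cyclic oscillation of <rho_0>.
from qutip import basis, sesolve

psi0 = tensor(basis(N + 1, N), basis(N + 1, 0), basis(N + 1, 0))
times = np.linspace(0.0, 20.0 / wR, 400)
result = sesolve(Hc, psi0, times, e_ops=[N0 / N])
rho0_t = result.expect[0]            # cf. the oscillations in Fig. 2b)
```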
Within the quantum description, the system starting in the polar state \(|N\rangle_{0}|0\rangle_{+}|0\rangle_{-}\) transiently evolves to a highly squeezed state during dynamics via \(H_{c}\), in which the system is highly sensitive to perturbations from the environment [66; 67; 84]. Applying a small phase shift to such a state can lead to a significant change in the final state reached at \(t=t_{2}\), see Fig. 2b). In contrast to the rather slow evolution on the timescale of 100 ms, observed in spin-1 condensates interacting by direct interatomic collisions, here the evolution takes place on much shorter timescales of \(2\pi/\omega_{R}\sim 100\)\(\mu\)s (for \(\omega_{R}=2\pi\times 14.5\) kHz).
Self-organization via \(H_{c}\) can also be viewed as an atomic momentum parametric amplifier, see Fig. 2a). After evolution under \(H_{c}\) for a variable time \(t_{1}\), a relative phase shift of \(\phi=\phi_{+}+\phi_{-}-2\phi_{0}\) can be imprinted on the three momentum states [67]. In our case a phase shift of \(\phi=-2\phi_{0}=2\omega_{R}\tau\) is imprinted onto the atoms by rapidly switching off the pump laser to suppress the wave mixing dynamics, and letting the system evolve via \(H_{c}\) with \(g_{c}=0\) for a short time \(\tau\), see Fig. 2b). The switch-off time for the laser is on the order of a few nanoseconds, and the intracavity photons take a time \(\sim 1/\kappa\) to decay out of the cavity. For \(\kappa\lesssim 5\,\omega_{R}\), the decay may lead to noticeable effects on the atom dynamics. However, it was shown that switching off the drive field at an appropriate time can lead to atoms reaching the desired motional state even for such small \(\kappa\) values [36]. The laser switch-off dynamics is here approximated as an instantaneous quench of the Hamiltonian, and the optimal switch-off sequences for populating the desired atomic momentum states at \(t=t_{1}\) will be studied in future work.
Due to quasi-cyclic dynamics, the system for \(\phi=0\) returns to approximately the initial state \(|N\rangle_{0}|0\rangle_{+}|0\rangle_{-}\) at some time \(t=t_{2}\). Measuring the proportion of atoms in the zero-order mode \(\langle\rho_{0}\rangle\) and the variance thereof, via absorption imaging in momentum space, allows one to determine the value of the phase shift \(\phi\). Using the value of \(\tau\), which is in typical experiments known to a high degree of precision, the value of \(\omega_{R}\) can be determined from \(\phi\).
For atom numbers up to \(N=500\), we here use the Schrödinger equation with a time-dependent Hamiltonian to simulate the system evolution, whereas for higher atom numbers exact diagonalization is used. In the latter case, the phase shift is imprinted by acting on the system with an operator \(U_{p}=e^{i\phi N_{0}/2}\) at \(t=t_{1}\).
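A compact sketch of the full interferometric sequence, reusing \(H_{c}\), \(N_{0}\), and \(\omega_{\rm R}\) from the snippets above, is given below; the values of \(\phi\), \(t_{1}\), and \(t_{2}\) are illustrative and not optimized.

```python
# Sketch of the full SU(1,1) sequence, reusing Hc, psi0, N0, N, and wR from
# above; phi, t1, and t2 are illustrative values.
import numpy as np
from qutip import expect, variance, sesolve

phi = 0.012
t1, t2 = 4.0 / wR, 9.0 / wR

psi_t1 = sesolve(Hc, psi0, [0.0, t1]).states[-1]
Up = (1j * phi * N0 / 2).expm()      # phase imprint on the p_x = 0 mode
psi_t2 = sesolve(Hc, Up * psi_t1, [0.0, t2 - t1]).states[-1]

rho0_mean = expect(N0 / N, psi_t2)
rho0_std = np.sqrt(variance(N0 / N, psi_t2))   # Delta rho_0
```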
The phase sensitivity of the SU(1,1) interferometer is given by the error propagation formula [67]:
\[\Delta\phi=\frac{\Delta\rho_{0}}{\left|\frac{d\langle\rho_{0}\rangle}{d\phi} \right|}. \tag{4}\]
The quantum metrological gain is given by:
\[\text{Gain}=-20\log\left(\frac{\Delta\phi}{\Delta\phi_{SQL}}\right), \tag{5}\]
Figure 3: Quantum enhancement of phase \(\phi\) measurements using the quasi-cyclic evolution method illustrated in Fig. 2a). (a) \(\langle\rho_{0}\rangle\) and (b) \(\Delta\rho_{0}\) dependence on the imprinted phase \(\phi\), for \(N=250\) (blue), \(N=500\) (orange), \(N=1000\) (green) and \(N=10000\) (red). (c) Quantum metrological gain for the same simulations, given by Eq. (5). The horizontal dashed line indicates the standard quantum limit. Parameters: \(\eta=1.4\eta_{c}\), \(\omega_{R}=2\pi\times 14.5\) kHz, \(\tilde{\Delta}_{c}=-1\) GHz.
where \(\Delta\phi_{SQL}=2/\sqrt{N}\) is the phase sensitivity in the standard quantum limit, derived e.g. in [67].
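Given such a scan of \(\langle\rho_{0}\rangle\) and \(\Delta\rho_{0}\) over imprinted phases, Eqs. (4) and (5) can be evaluated numerically as below; `rho0_of` is a hypothetical wrapper that runs the sequence sketched above for a given phase and returns \((\langle\rho_{0}\rangle,\Delta\rho_{0})\) at \(t_{2}\).

```python
# Sketch of Eqs. (4)-(5); `rho0_of` is a hypothetical wrapper returning
# (<rho_0>, Delta rho_0) at t2 for a given imprinted phase.
import numpy as np

def sensitivity_and_gain(rho0_of, phis, N):
    means = np.array([rho0_of(p)[0] for p in phis])
    stds = np.array([rho0_of(p)[1] for p in phis])
    slope = np.gradient(means, phis)                   # d<rho_0>/dphi
    dphi = stds / np.abs(slope)                        # Eq. (4)
    gain_db = -20 * np.log10(dphi / (2 / np.sqrt(N)))  # Eq. (5)
    return dphi, gain_db
```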
The comparison of measurement sensitivities for \(N=250,\ 500,\ 1000,\ 10000\) is shown in Fig. 3. For each \(N\) and \(\eta\) (see Fig. 4), \(t_{1}\) is chosen at the time with the largest derivative \(d\langle\rho_{0}\rangle/dt\), while \(t_{2}\) is taken at the second peak of the \(\langle\rho_{0}\rangle\) quasi-oscillation cycle, see Fig. 2b). Increasing the atom number leads to an increase in maximum quantum metrological gain, due to an increase in the slope of \(d\langle\rho_{0}\rangle/d\phi\), see Fig. 3a,b). The \(\phi\) value with maximum gain gets smaller for increasing \(N\). Note that for increasing \(N\), the system for \(\phi=0\) returns more closely to the initial state at \(t=t_{2}\), as \(\Delta\rho_{0}\) gets closer to \(0\) and \(\langle\rho_{0}\rangle\) gets closer to unity, see Fig. 3a,b).
The scans of maximal achieved gain with respect to \(\eta\) and \(N\) are shown in Fig. 4a,b). Increasing \(\eta\) near and above the threshold value, the maximal achieved gain initially grows. However, the growth quickly saturates, reaching the highest value of \(24.6\) dB for \(N=10000\) at \(\eta=1.7\eta_{c}\). Comparing the \(N\) scaling of the values at \(\eta=1.7\eta_{c}\) with the quantum metrological gain at the Heisenberg limit of \(\Delta\phi_{Heis}/\Delta\phi_{SQL}=1/\sqrt{N}\)[44], the growth is approximately parallel. The largest gain shown in Fig. 4a) is comparable to the values reported in state of the art spin squeezing experiments based on photon-mediated interaction [53; 55; 86].
The main source of noise in the setup stems from the decay of quantum correlations arising due to photons decaying out of the cavity with a rate \(\kappa\). In the regime of \(|\bar{\Delta}_{c}|\gg\kappa\), the transient dynamics is determined more by the coherent light-matter interaction than the photonic decay [35]. In Appendix B, we use the Lindblad master equation and Monte Carlo wave function simulations [87] to demonstrate that, for the experimentally available values \(\bar{\Delta}_{c}=-1\) GHz and \(\kappa=2\pi\times 14.5\) kHz [24; 88], irreversible dynamics at relevant timescales is nearly indistinguishable from unitary dynamics. Increasing the \(\kappa\) values further, the irreversible dynamics leads to larger deviations of \(\langle\rho_{0}\rangle\) and \(\Delta\rho_{0}\) from the values for the unitary case. Namely, the \(\langle\rho_{0}\rangle\) oscillations dephase more rapidly, while \(\Delta\rho_{0}\) does not dramatically increase but stays approximately constant. Although a detailed study of the influence of \(\kappa\) on interferometer sensitivity is beyond the scope of this Article, the simulations of irreversible dynamics give an indication that quantum enhanced SU(1,1) interferometry may be achievable for large cavity detunings even in moderate to low finesse cavities.
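As a rough indication of how such trajectory simulations could be set up with the cavity modes kept explicitly (i.e. before adiabatic elimination, cf. Eqs. (9) and (11) of Appendix A), one might write the following; the tiny atom number, photon cutoffs, and the \(\sqrt{2\kappa}\,a_{\pm}\) collapse operators (matching the \(-\kappa a_{\pm}\) damping in Eq. (12)) are our assumptions.

```python
# Rough sketch of a quantum-trajectory simulation with explicit cavity modes;
# atom number, photon cutoffs, and collapse-operator convention are our
# assumptions. With GHz-scale detunings the integration is stiff, so smaller
# illustrative detunings may be preferable for a quick test.
import numpy as np
from qutip import destroy, qeye, tensor, basis, mcsolve

Na, Nph = 3, 2
dims = [Na + 1, Na + 1, Na + 1, Nph + 1, Nph + 1]

def mode(i):
    return tensor(*[destroy(d) if j == i else qeye(d) for j, d in enumerate(dims)])

b0, bp, bm, ap, am = (mode(i) for i in range(5))

wR = 2 * np.pi * 14.5e3
kappa = 2 * np.pi * 14.5e3
Delta = -2 * np.pi * 1e9                  # dressed detuning bar{Delta}_c
eta_c = np.sqrt(-wR * (Delta**2 + kappa**2) / (4 * Delta * Na))
eta = 1.4 * eta_c

Hint = eta * (ap.dag() * bm.dag() * b0 + ap * bp.dag() * b0
              + am.dag() * bp.dag() * b0 + am * bm.dag() * b0)
H = (-Delta * (ap.dag() * ap + am.dag() * am)
     + wR * (bp.dag() * bp + bm.dag() * bm) + Hint + Hint.dag())

psi0 = tensor(basis(Na + 1, Na), basis(Na + 1, 0), basis(Na + 1, 0),
              basis(Nph + 1, 0), basis(Nph + 1, 0))
c_ops = [np.sqrt(2 * kappa) * ap, np.sqrt(2 * kappa) * am]
times = np.linspace(0.0, 20.0 / wR, 200)
result = mcsolve(H, psi0, times, c_ops, e_ops=[b0.dag() * b0 / Na], ntraj=100)
```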
To conclude, we have devised and numerically explored a procedure for performing SU(1,1) matter wave interferometry beyond the standard quantum limit, with self-organized atomic momentum states in a transversely pumped ring cavity. The advantage of this light-induced SU(1,1) interferometer with respect to the procedures utilizing spin-mixing interaction, see e.g. [67], is the orders of magnitude speed enhancement, which allows one to neglect the atom loss out of the condensate during the relevant temporal evolution. Including the excitation of higher order momentum modes and the quantum noise arising from photon decay into the picture, will lead to complex quantum dynamics, to be explored in subsequent work. Optimization of the interferometer sensitivity in various experimental conditions is a significant future challenge, which may be researched using optimal control theory [89] or machine learning techniques [90]. Finally, we note that our results also have implications for the recently studied situations of [35; 36]. The proposal considered in this Article has potential for realizing quantum enhanced ultracold atom SU(1,1) matter wave interferometry in state of the art ring cavity experimental setups [24].
_Acknowledgements._ We thank Paul Griffin, Helmut Ritsch and Karol Gietka for helpful discussions. The work of I. K. was funded by the Austrian Science Fund (FWF) Lise Meitner Postdoctoral Fellowship M3011 and an ESQ Discovery grant from the Austrian Academy of Sciences (OAW). The dynamical evolution equations were solved numerically by using the open-source framework QuantumOptics.jl [91]. The computational results presented here have been achieved using the Vienna Scientific Cluster (VSC).
## Appendix A
We start by writing the Hamiltonian for a transversely pumped ring cavity, studied in [74], given by:
\[H= -\hbar\Delta_{c}(n_{+}+n_{-})+\int_{V}d^{3}r\psi^{\dagger}(\mathbf{r})H_{ eff}^{(1)}\psi(\mathbf{r}), \tag{6}\]
where \(\Delta_{c}=\omega-\omega_{c}\) is the laser-cavity detuning, \(n_{\pm}=a_{\pm}^{\dagger}a_{\pm}\), and the effective single-particle Hamiltonian is given by:
\[\begin{split} H_{eff}^{(1)}=&\frac{\mathbf{p}^{2} }{2m}+\hbar U_{0}(n_{+}+n_{-}+a_{+}^{\dagger}a_{-}e^{-2ik_{x}x}+a_{-}^{\dagger }a_{+}e^{2ik_{x}x})\\ +&\hbar\eta(a_{+}e^{ik_{x}x}+a_{-}e^{-ik_{x}x}+ \mathrm{H.c.}),\end{split} \tag{7}\]
where \(\eta=G_{0}\Omega/\Delta_{a}\) is the maximum depth of the optical potential per photon due to the scattering between pump and cavity modes (i.e., \(\eta\) is the cavity pump rate) and \(U_{0}=G_{0}^{2}/\Delta_{a}\) is the maximum depth of the optical potential per photon due to the scattering between cavity modes, with \(G_{0}\) being the cavity mode coupling strength, \(\Omega\) the Rabi frequency and \(\Delta_{a}=\omega-\omega_{a}\) the laser detuning from the atomic optical transition.

Figure 4: Scaling of the maximal achieved gain with (a) pump rate \(\eta\) and (b) total atom number \(N\). (a) \(N=100\) (blue triangles), \(N=1000\) (orange squares) and \(N=10000\) (green dots). (b) Numerical data (blue triangles) and the Heisenberg limit (orange dashed line), given by \(20\log\sqrt{N}\). Solid lines are a guide to the eye. Parameters: \(\omega_{\mathrm{R}}=2\pi\times 14.5\) kHz, \(\bar{\Delta}_{c}=-1\) GHz.
Taking only the zeroth and first order momentum modes into account, the atomic field operator is given by:
\[\psi(\mathbf{r})=\frac{1}{\sqrt{V}}\left(b_{0}+b_{+}e^{ik_{c}x}+b_{-}e^{-ik_{c}x }\right), \tag{8}\]
where \(b_{j}\) is the bosonic annihilation operator of the \(j\)-th transverse atomic momentum mode.
We insert Eq. (8) into Eq. (6) for a real-valued pump rate \(\eta\) and perform the integration over the BEC cloud volume \(V\) to get the effective total Hamiltonian \(H=H_{0}+H_{int}\), where the noninteracting part \(H_{0}\) now has the form (\(\hbar=1\)):
\[H_{0}=-\bar{\Delta}_{c}(n_{+}+n_{-})+\omega_{R}(N_{+}+N_{-}), \tag{9}\]
where \(\bar{\Delta}_{c}=\Delta_{c}-NU_{0}\), \(N_{\pm}=b_{\pm}^{\dagger}b_{\pm}\), and the light-matter interaction terms are:
\[H_{int}=U_{0}a_{+}^{\dagger}a_{-}b_{-}^{\dagger}b_{+}+\eta\,(a_ {+}+a_{-}^{\dagger})(b_{+}^{\dagger}b_{0}+b_{0}^{\dagger}b_{-})+\text{H.c.}. \tag{10}\]
Near threshold and/or for \(\Omega\gg G_{0}\), the term \(U_{0}a_{+}^{\dagger}a_{-}b_{-}^{\dagger}b_{+}\) is small compared to the terms proportional to \(\eta\). Using now the Hamiltonian \(H_{int}^{\prime}\):
\[H_{int}^{\prime}=\eta\,(a_{+}^{\dagger}b_{-}^{\dagger}b_{0}+a_{+ }b_{+}^{\dagger}b_{0}+a_{-}^{\dagger}b_{+}^{\dagger}b_{0}+a_{-}b_{-}^{\dagger} b_{0})+\text{H.c.}, \tag{11}\]
one gets for the input-output equations of the intracavity field operators \(a_{\pm}\)[92; 93]:
\[\frac{da_{\pm}}{dt}=(i\bar{\Delta}_{c}-\kappa)a_{\pm}-i\eta(b_{ \mp}^{\dagger}b_{0}+b_{0}^{\dagger}b_{\pm})+\xi_{\pm}(t), \tag{12}\]
where \(\xi_{\pm}(t)\) are the quantum noise operators of the cavity modes.
We now adiabatically eliminate the photonic degrees of freedom \(a_{\pm}\) by neglecting the \(\xi_{\pm}(t)\) terms in the above equations and setting \(\dot{a}_{\pm}=0\). Inserting this \(a_{\pm}\) into the Hamiltonian \(H^{\prime}=H_{0}+H_{int}^{\prime}\), we get the Hamiltonian for the atomic momentum subsystem:
\[H_{c}=g_{c}^{\prime}[2b_{+}^{\dagger}b_{-}^{\dagger}b_{0}b_{0}+2 b_{0}^{\dagger}b_{0}^{\dagger}b_{+}b_{-} \tag{13}\] \[+(2N_{0}-1)(N_{+}+N_{-})-2N_{0}]-\omega_{R}N_{0}, \tag{14}\]
where \(g_{c}^{\prime}=\bar{\Delta}_{c}\eta^{2}/(\bar{\Delta}_{c}^{2}+\kappa^{2})= \omega_{R}\eta^{2}/(4N\eta_{c}^{2})\), \(\eta_{c}=\sqrt{-\omega_{R}(\bar{\Delta}_{c}^{2}+\kappa^{2})/(4\bar{\Delta}_{c} N)}\), and we have used \(N=N_{0}+N_{+}+N_{-}\).
Note that in deriving \(H_{c}\) we have neglected the photonic quantum noise terms in the input-output formalism. The reasoning for this is that the photonic modes are initially in a vacuum state, and we work in the limit \(|\bar{\Delta}_{c}|\gg\kappa\)[35; 94], where the photon decay is expected to only weakly influence the atomic motion.
The cavity dissipation for the adiabatically eliminated photonic modes is included below at the level of the Lindblad master equation, which describes the influence of cavity photon decay on the creation of atomic momentum pairs. For transverse patterns in a longitudinally pumped ring cavity setup, this treatment was corroborated by numerical results, and excellent agreement with experimental results was also reported for self-organization in a single mode Fabry-Perot resonator with two-level ground state atoms exhibiting similar physics [35; 36].
Note also that for a single mode cavity driven longitudinally near resonance [95], an atomic diffusion term was shown to arise due to photonic quantum noise [29]. This was interpreted as a consequence of backaction on the atomic momentum, arising due to photodetection measurement of the photons leaking out of the cavity. This backaction is related to the fact that, for single mode cavities, the measurement of the number of photons leaking out of the cavity can provide information about the collective atomic position (i.e. density distribution), which is an operator conjugate to collective momentum. The analysis of the magnitude of the backaction term, and its influence on the quantum dynamics of the system, for the continuously translationally symmetric Hamiltonian \(H\), is an intriguing topic for future research.
## Appendix B
The influence of cavity photon dissipation on the evolution of atomic degrees of freedom can be described by the Lindblad equation [36]:
\[\frac{d\rho}{dt}=-\frac{i}{\hbar}[H_{c},\rho] \tag{15}\] \[+\gamma\sum_{j=\pm}(2K_{j}\rho K_{j}^{\dagger}-K_{j}^{\dagger}K_ {j}\rho-\rho K_{j}^{\dagger}K_{j}), \tag{16}\]
with:
\[\gamma=\frac{\kappa\eta^{2}}{(\bar{\Delta}_{c}^{2}+\kappa^{2})},\ K_{\pm}=(b_{ \mp}^{\dagger}b_{0}+b_{0}^{\dagger}b_{\pm}), \tag{17}\]
describing the influence of cavity photon decay on the atomic momentum pair creation. The typical cavity dissipation rates \(\kappa/(2\pi)\) in ultracold atom experiments range from values on the order of a few MHz [96; 97], down to values of a few kHz [88; 24]. Note also that free spectral ranges for commonly used cavities are on the order of a few GHz, and in our simulations we fix the detuning at \(\bar{\Delta}_{c}=-1\) GHz.
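As an illustration of how such simulations can be set up, the following sketch (our own, not the authors' code, which used the Julia framework QuantumOptics.jl) integrates the three-mode model of Eqs. (13)-(17) with QuTiP; the Fock truncation, the small atom number and the trajectory count are illustrative assumptions chosen to keep the run cheap:

```python
# Minimal sketch (assumed parameters, not the authors' code): unitary evolution
# under H_c of Eq. (13) versus Monte Carlo trajectories with the jump
# operators sqrt(2*gamma)*K_pm of Eq. (17).
import numpy as np
from qutip import destroy, qeye, tensor, basis, sesolve, mcsolve

n_cut = 8                          # Fock truncation per momentum mode (assumption)
b0 = tensor(destroy(n_cut), qeye(n_cut), qeye(n_cut))
bp = tensor(qeye(n_cut), destroy(n_cut), qeye(n_cut))
bm = tensor(qeye(n_cut), qeye(n_cut), destroy(n_cut))
N0, Np, Nm = b0.dag() * b0, bp.dag() * bp, bm.dag() * bm

N = 6                              # small atom number for a cheap demonstration
omega_R = 2 * np.pi * 14.5e3       # recoil frequency (rad/s)
Delta_c = -2 * np.pi * 1e9         # bar(Delta)_c (rad/s)
kappa = 2 * np.pi * 14.5e3         # high-finesse value quoted in the text
eta_ratio = 1.7                    # eta / eta_c
g_c = omega_R * eta_ratio**2 / (4 * N)                       # g_c', below Eq. (13)
gamma = -kappa * eta_ratio**2 * omega_R / (4 * Delta_c * N)  # Eq. (17), kappa << |Delta_c|

H = g_c * (2 * bp.dag() * bm.dag() * b0 * b0 + 2 * b0.dag() * b0.dag() * bp * bm
           + (2 * N0 - 1) * (Np + Nm) - 2 * N0) - omega_R * N0     # Eq. (13)
K = [bm.dag() * b0 + b0.dag() * bp, bp.dag() * b0 + b0.dag() * bm]  # K_pm, Eq. (17)

psi0 = tensor(basis(n_cut, N), basis(n_cut, 0), basis(n_cut, 0))   # all atoms at rest
tlist = np.linspace(0.0, 2.0 / (g_c * N), 101)

unitary = sesolve(H, psi0, tlist, e_ops=[N0 / N])
dissip = mcsolve(H, psi0, tlist, c_ops=[np.sqrt(2 * gamma) * k for k in K],
                 e_ops=[N0 / N], ntraj=50)
print(np.max(np.abs(unitary.expect[0] - dissip.expect[0])))  # small for kappa << |Delta_c|
```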
Along with solving the Lindblad equation, irreversible evolution of the system was studied using Monte Carlo wave function calculations [87], with jump operators \(\sqrt{2\gamma}K_{\pm}\). The influence of experimentally realistic \(\kappa\) values on the evolution of \(\langle\rho_{0}\rangle\) and the standard deviation \(\Delta\rho_{0}=\sqrt{\langle\rho_{0}^{2}\rangle-\langle\rho_{0}\rangle^{2}}\) is shown in Fig. 5. At the high finesse cavity value \(\kappa/2\pi=14.5\) kHz, the curves closely follow those of the unitarily evolving case. Increasing \(\kappa\) further, more noticeable deviations from the unitary case are observed. For \(\langle\rho_{0}\rangle\), the oscillations start going out of phase with the unitary case, with the oscillation amplitude reducing for longer times at larger \(\kappa\)'s. The standard deviation \(\Delta\rho_{0}\) does not dramatically increase for increasing \(\kappa\), which is a promising indication for potential experimental realizations.
|
2309.11251 | Closed form expressions for the Green's function of a quantum graph -- a
scattering approach | In this work we present a three step procedure for generating a closed form
expression of the Green's function on both closed and open finite quantum
graphs with general self-adjoint matching conditions. We first generalize and
simplify the approach by Barra and Gaspard [Barra F and Gaspard P 2001, Phys.
Rev. E {\bf 65}, 016205] and then discuss the validity of the explicit
expressions. For compact graphs, we show that the explicit expression is
equivalent to the spectral decomposition as a sum over poles at the discrete
energy eigenvalues with residues that contain projector kernel onto the
corresponding eigenstate. The derivation of the Green's function is based on
the scattering approach, in which stationary solutions are constructed by
treating each vertex or subgraph as a scattering site described by a scattering
matrix. The latter can then be given in a simple closed form from which the
Green's function is derived. The relevant scattering matrices contain inverse
operators which are not well defined for wave numbers at which bound states in
the continuum exists. It is shown that the singularities in the scattering
matrix related to these bound states or perfect scars can be regularised.
Green's functions or scattering matrices can then be expressed as a sum of a
regular and a singular part where the singular part contains the projection
kernel onto the perfect scar. | Tristan Lawrie, Sven Gnutzmann, Gregor Tanner | 2023-09-20T12:22:32Z | http://arxiv.org/abs/2309.11251v1 | # Closed form expressions for the Green's function of a quantum graph - a scattering approach
###### Abstract.
In this work we present a three step procedure for generating a closed form expression of the Green's function on both closed and open finite quantum graphs with general self-adjoint matching conditions. We first generalize and simplify the approach by Barra and Gaspard [Barra F and Gaspard P 2001, Phys. Rev. E **65**, 016205] and then discuss the validity of the explicit expressions. For compact graphs, we show that the explicit expression is equivalent to the spectral decomposition as a sum over poles at the discrete energy eigenvalues with residues that contain the projection kernel onto the corresponding eigenstate.
The derivation of the Green's function is based on the scattering approach, in which stationary solutions are constructed by treating each vertex or subgraph as a scattering site described by a scattering matrix. The latter can then be given in a simple closed form from which the Green's function is derived.
The relevant scattering matrices contain inverse operators which are not well defined for wave numbers at which bound states in the continuum exist. It is shown that the singularities in the scattering matrix related to these bound states or perfect scars can be regularised. Green's functions or scattering matrices can then be expressed as a sum of a regular and a singular part, where the singular part contains the projection kernel onto the perfect scar.
_Keywords:_ Quantum Graphs, Green's functions, Wave scattering.
## 1. Introduction
Quantum graphs as metric graphs endowed with a Schrodinger operator and related similar models have a long history in mathematics, physics and theoretical chemistry [1, 2, 3, 4, 5, 6, 7]. Due to the simplicity of the model and the richness of properties and effects it can represent, quantum graphs have grown into an important tool in physics and mathematics. In spectral theory, they allow for a rigorous treatment of topics that are usually related to the study of (self-adjoint) partial differential operators, see [8] for an introduction and overview. The scattering approach to quantum graphs was introduced in 1997 by Kottos and Smilansky [9] and led to a wide range of applications in quantum chaos, see [10] for an overview. In this approach, the graph vertices are treated as scattering sites from which stationary solutions (energy eigenstates) are constructed. This approach has also been used for many physical applications beyond quantum chaos, including meta-material design [11], modelling the vibrations of coupled plates [12], as well as in formulating quantum random walks [13, 14] and quantum search algorithms [15]. One advantage of the scattering approach is that eigenvalue conditions can be written in terms of a secular equation involving the determinant of a unitary matrix of finite dimension \(N\), where \(N\) typically equals twice the number of edges on the graph. Similarly, the scattering matrix of an open quantum graph can be given in terms of a closed form expression involving finite dimensional matrices of size \(N\)[16, 17].
In 2001, Barra and Gaspard [17] used the scattering approach to express the Green's function of a quantum graph as a sum over trajectories in the spirit of semiclassical quantum mechanics. At the time, it was not yet clear within the physics community which scattering matrices are connected to matching conditions related to a well-defined self-adjoint Schrodinger operator on the metric graph. We generalize and simplify the approach [17] by using a simple three step procedure that leads to the Green's function for general self-adjoint matching conditions for closed and open graphs with a finite number of edges. This directly provides a number of closed form expressions.

## 2. Quantum graphs

We consider a metric graph \(\mathcal{G}(\mathcal{V},\mathcal{E},L)\) consisting of a set of vertices \(\mathcal{V}\), a set of edges \(\mathcal{E}\) connecting them and a set \(L=\{\ell_{e}\}_{e\in\mathcal{E}}\) of edge lengths. Two types of graphs are considered:
i. Closed compact graphs where all edges are bonds and the number of edges \(N_{\mathcal{E}}=|\mathcal{E}|\) is finite. Here, both ends of each edge are connected to a vertex.

ii. Open scattering graphs which consist of a compact graph with the addition of a finite set of leads. The leads are connected to a single vertex at one end. One may write the edge set as a union \(\mathcal{E}=\mathcal{L}\cup\mathcal{B}\). With \(N_{\mathcal{L}}=|\mathcal{L}|\) and \(N_{\mathcal{B}}=|\mathcal{B}|\), one has \(N_{\mathcal{E}}=N_{\mathcal{B}}+N_{\mathcal{L}}\).
For each bond \(e\in\mathcal{B}\), we use a coordinate \(x_{e}\in[0,\ell_{e}]\) with some (arbitrary but fixed) choice of direction. The coordinate defines a position on an edge such that \(x_{e}=0\) and \(x_{e}=\ell_{e}\) correspond to the vertices connected by the bond. For a lead \(e\in\mathcal{L}\), coordinates \(x_{e}\in[0,\infty)\) are defined such that \(x_{e}=0\) corresponds to the vertex where the lead is attached. For each edge \(e\), we refer to the directed edges as \(e_{s}\) with \(s=\pm\) indicating the direction in which \(x_{e}\) increases (\(s=+\)) or decreases (\(s=-\)). A point on the graph is a pair \(\mathbf{x}=(e,x_{e})\) of an edge and a coordinate.
The metric graph is turned into a quantum graph by adding a Schrodinger operator \(\hat{H}\) which requires a set of boundary conditions on the graph vertices in order to become a self-adjoint problem. For this, we consider the Hilbert space \(L^{2}(\mathcal{G})\equiv\bigoplus_{e\in\mathcal{E}}L^{2}([0,\ell_{e}])\) of square integrable complex-valued functions \(\mathbf{\Phi}(\mathbf{x})=\{\phi_{e}(x_{e})\}_{e\in\mathcal{E}}\) and define
\[\left[\hat{H}\mathbf{\Phi}(\mathbf{x})\right]_{e}=-\frac{d^{2}}{dx_{e}^{2}} \phi_{e}(x_{e})+V_{e}(x_{e})\phi_{e}(x_{e}) \tag{1}\]
with a potential \(\boldsymbol{V}(\mathbf{x})=\{V_{e}(x_{e})\}_{e\in\mathcal{E}}\), that is, a real valued scalar function defined on \(\mathcal{G}\). We will only consider free Schrodinger operators, that is, negative Laplacians, where \(\boldsymbol{V}(\mathbf{x})=0\). To ensure that the second derivative is well defined and square integrable, one needs to restrict the domain of \(\hat{H}\) to an appropriate Sobolev space. Apart from this standard restriction, the domain of \(\hat{H}\) has to be further specified by appropriate boundary conditions at each vertex \(v\) in order for \(\hat{H}\) to define a self-adjoint operator. According to a theorem by Kostrykin and Schrader [26], the most general such boundary conditions at the vertex \(v\) may be written in the form
\[\sum_{\tilde{e}}\boldsymbol{A}_{e\tilde{e}}\phi_{\tilde{e}}(0)+\boldsymbol{B} _{e\tilde{e}}\frac{d\phi_{\tilde{e}}}{dx_{\tilde{e}}}(0)=0 \tag{2}\]
for any \(e\) connected to \(v\) and the sum extends over edges \(\tilde{e}\) connected to \(v\). (We assumed here for simplicity that \(x_{e}=0\) at the vertex for each edge \(e\) connected to \(v\).) The complex coefficients \(\boldsymbol{A}_{e\tilde{e}}\) and \(\boldsymbol{B}_{e\tilde{e}}\) refer to the elements \(e\tilde{e}\) of two square matrices \(\boldsymbol{A}\) and \(\boldsymbol{B}\) of dimension \(d_{v}\), the number of edges connected to \(v\). In [26], it was proven that the matching conditions yield a self-adjoint operator if and only if two conditions are satisfied. First, the set of equations needs to be independent, which means that the rectangular \(d_{v}\times 2d_{v}\) matrix \((\boldsymbol{A},\boldsymbol{B})\), i.e. \(\boldsymbol{A}\) and \(\boldsymbol{B}\) stacked horizontally, must have full rank \(d_{v}\). Second, the product \(\boldsymbol{A}\boldsymbol{B}^{\dagger}\) must be Hermitian, \(\boldsymbol{A}\boldsymbol{B}^{\dagger}=\boldsymbol{B}\boldsymbol{A}^{\dagger}\). The matrices \(\boldsymbol{A}\) and \(\boldsymbol{B}\) may be chosen independently for each vertex and we will often write \(\boldsymbol{A}^{(v)}\) and \(\boldsymbol{B}^{(v)}\) to indicate the vertex where these matrices act.
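The two conditions are easy to check numerically. The following small sketch (our own helper, with an assumed encoding of the Neumann-Kirchhoff conditions for a degree-three vertex as the test case) verifies both the rank and the Hermiticity condition:

```python
# Sketch (our own helper) of the Kostrykin-Schrader self-adjointness test:
# (A, B) stacked horizontally has full rank d_v, and A B^dagger is Hermitian.
import numpy as np

def is_self_adjoint_vertex(A: np.ndarray, B: np.ndarray, tol: float = 1e-12) -> bool:
    d = A.shape[0]
    full_rank = np.linalg.matrix_rank(np.hstack((A, B)), tol=tol) == d
    hermitian = np.allclose(A @ B.conj().T, B @ A.conj().T, atol=tol)
    return full_rank and hermitian

# Neumann-Kirchhoff conditions at a degree-3 vertex: continuity encoded in A,
# the vanishing sum of outward derivatives encoded in B.
A = np.array([[1, -1, 0], [0, 1, -1], [0, 0, 0]], dtype=complex)
B = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1]], dtype=complex)
print(is_self_adjoint_vertex(A, B))  # True
```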
The self-adjointness of \(\hat{H}\) implies a unitary evolution of the time-dependent Schrodinger equation \(i\frac{d}{dt}\boldsymbol{\Phi}(t)=\hat{H}\mathbf{\Phi}(t)\). The stationary solutions \(\mathbf{\Phi}(t)=e^{-iEt}\mathbf{\Psi}\) satisfy the (homogeneous) eigenproblem
\[\left[\left(E-\hat{H}\right)\mathbf{\Psi}(\mathbf{x})\right]_{e}=\left(E+\frac {d^{2}}{dx_{e}^{2}}\right)\psi_{e}(x_{e})=0. \tag{3}\]
Here, \(E\) is the energy. Self-adjointness implies furthermore that solutions to (3) only exist for real values of \(E\), and the set of all such (generalized) eigenvalues forms the spectrum of \(\hat{H}\). In the remainder we will only consider the positive part of the spectrum and write \(E=k^{2}>0\) with the wave number \(k>0\). In the following constructions, the energy appears as a variable that is not restricted to the spectrum.
Any solution to equation (3) fulfilling the prescribed boundary conditions at the vertices is expressed as a superposition of counter propagating plane waves, that is,
\[\begin{split}\psi_{e}(x_{e})=& a_{e_{-}}^{\text{in}}e^{- ikx_{e}}+a_{e_{+}}^{\text{out}}e^{ikx_{e}}\\ =& a_{e_{-}}^{\text{out}}e^{-ik(x_{e}-\ell_{e})}+a_{e _{+}}^{\text{in}}e^{ik(x_{e}-\ell_{e})}\\ =& a_{e_{-}}^{\text{in}}e^{-ikx_{e}}+a_{e_{+}}^{\text {in}}e^{ik(x_{e}-\ell_{e})}\.\end{split} \tag{4}\]
Here, \(a_{e_{\pm}}^{\text{in/out}}\) is the complex wave amplitude on edge \(e\) propagating in the direction of increasing (\(+\)) or decreasing (\(-\)) \(x_{e}\), heading in or out of a vertex. If \(e\) is a lead, only the amplitudes \(a_{e_{\pm}}^{\text{in/out}}\) at \(x_{e}=0\) are used.
Introducing the \(2N_{B}\)-dimensional diagonal length matrix
\[\mathbf{L}_{\tilde{e}_{\tilde{s}}e_{s}}=\delta_{e\tilde{e}}\delta_{s\tilde{s}}\ell_{e} \tag{5}\]
(where each edge length appears twice) the bond wave amplitudes can be mapped to one another by the diagonal square \(2N_{B}\)-dimensional matrix
\[\mathbf{T}(k)=e^{ik\mathbf{L}} \tag{6}\]
that takes account of the phase difference between wave amplitudes across all bonds, that is,
\[\mathbf{a}_{\mathcal{B}}^{\text{in}}=\mathbf{T}(k)\ \mathbf{a}_{\mathcal{B}}^{ \text{out}}. \tag{7}\]
Here, \(\mathbf{a}_{\mathcal{B}}^{\text{in/out}}\) refers to the \(2N_{\mathcal{B}}\) vector of plane wave coefficients on the directed bonds.
In addition, the graph wave amplitudes can be mapped onto one another across the vertices by taking account of the imposed vertex boundary conditions. For this one writes the matching conditions at a given vertex \(v\) in the form of a \(d_{v}\times d_{v}\) vertex scattering matrix \(\mathbf{\Sigma}^{(v)}\), that is,
\[\mathbf{a}^{(v),\text{out}}=\mathbf{\Sigma}^{(v)}\mathbf{a}^{(v),\text{in}} \tag{8}\]
where \(\mathbf{a}^{(v),\text{in/out}}\) are \(d_{v}\) dimensional vectors that collect all incoming/outgoing amplitudes of plane waves on the edges \(e\) in the neighborhood of vertex \(v\). With the prescribed boundary conditions given in Eq. (2), \(\mathbf{\Sigma}^{(v)}\) takes on the form
\[\mathbf{\Sigma}^{(v)}(k)=-\left(\mathbf{A}^{(v)}+ik\mathbf{B}^{(v)}\right)^{-1}\left( \mathbf{A}^{(v)}-ik\mathbf{B}^{(v)}\right). \tag{9}\]
For real \(k\) (\(E>0\)), this is a well-defined unitary matrix due to the conditions on \(\mathbf{A}^{(v)}\) and \(\mathbf{B}^{(v)}\) which imply that \(\mathbf{A}^{(v)}+ik\mathbf{B}^{(v)}\) is invertible. Note, however, that neither \(\mathbf{A}^{(v)}\) nor \(\mathbf{B}^{(v)}\) need to be invertible by themselves (in general neither is) and one needs to take care at \(k=0\), for instance, where it remains well defined as a limit. Another consequence is that the explicit dependence on \(k\) may drop for some choices of matching conditions. Indeed, this is the case for the so-called Neumann-Kirchhoff matching conditions most widely used in the literature [8, 9, 10]. They require continuity of the wave function at the vertex \(\phi_{e}(0)=\phi_{\hat{e}}(0)\) (for any \(e\) and \(\hat{e}\) connected to \(v\)) and a vanishing sum of outward derivatives on the edges connected to this vertex \(\sum_{e}\frac{d\phi_{e}}{dx_{e}}(0)=0\) (where the sum is over all edges connected to \(v\)). This yields
\[\mathbf{\Sigma}^{(v),\text{NK}}=-\mathbb{I}+\frac{2}{d_{v}}\mathbb{E}_{d_{v}}, \tag{10}\]
where \(\mathbb{I}\) is the identity matrix and \(\mathbb{E}_{d_{v}}\) is the matrix of dimension \(d_{v}\) with all entries equal to one.
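To make Eq. (9) concrete, a small numerical sketch (our own, reusing the assumed NK matrices from above) evaluates the vertex scattering matrix and verifies that it reproduces the \(k\)-independent Neumann-Kirchhoff form (10):

```python
# Sketch (our own helper) of the vertex scattering matrix of Eq. (9),
# checked against the closed Neumann-Kirchhoff form of Eq. (10).
import numpy as np

def vertex_sigma(A: np.ndarray, B: np.ndarray, k: float) -> np.ndarray:
    """Sigma^(v)(k) = -(A + ikB)^{-1} (A - ikB), Eq. (9)."""
    return -np.linalg.solve(A + 1j * k * B, A - 1j * k * B)

d = 3
A = np.array([[1, -1, 0], [0, 1, -1], [0, 0, 0]], dtype=complex)
B = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1]], dtype=complex)

sigma_nk = -np.eye(d) + (2 / d) * np.ones((d, d))          # Eq. (10)
print(np.allclose(vertex_sigma(A, B, k=1.3), sigma_nk))    # True; k-independent here
```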
It is worth noting that in the physics literature including [17], the stationary problem is often defined on a quantum graph by prescribing arbitrary unitary matrices \(\Sigma^{(v)}\) at the vertices \(v\). While this does in general not define an operator in a Hilbert space (self-adjoint or not) this is of obvious value for an effective description of a physical system if appropriate caution is used. For instance, one should not expect eigenstates to be orthogonal and time-dependent solutions obtained by superposition may not preserve probability (the norm). In some applications that focus on spectral
properties, for instance many applications in quantum chaos, these issues are not physically relevant, see [10] and many references therein. Moreover, they may be given physical meaning by assuming that a vertex stands for a hidden part of the system, such as a scattering region, thus also 'hiding' parts of the Hilbert space. In the following, we will assume that scattering matrices are of the form (9) that ensures a self-adjoint operator. Most of our results remain valid if arbitrary scattering matrices are prescribed as long as they do not depend explicitly on the wave number.
One may combine all vertex scattering matrices into a single (directed) edge scattering matrix \(\boldsymbol{\Sigma}\), such that
\[\mathbf{a}^{\mathrm{out}}=\boldsymbol{\Sigma}\;\mathbf{a}^{\mathrm{in}}. \tag{11}\]
Here, \(\mathbf{a}^{\mathrm{in/out}}\) is a \(2N_{B}+N_{L}\) dimensional vector that collects all the incoming/outgoing amplitudes for all graph bonds and leads. The scattering matrix elements are expressed in terms of the individual vertex scattering matrices \(\boldsymbol{\Sigma}^{(v)}\), such that, after ordering the directed edges in an appropriate way,
\[\boldsymbol{\Sigma}=\boldsymbol{\Pi}\begin{pmatrix}\boldsymbol{\Sigma}^{(1)}& 0&\ldots&0\\ 0&\boldsymbol{\Sigma}^{(2)}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&\boldsymbol{\Sigma}^{(N_{V})}\end{pmatrix}=\boldsymbol{\Pi}\, \boldsymbol{\hat{\Sigma}}\,. \tag{12}\]
Here, \(\boldsymbol{\Pi}\) is a permutation matrix that interchanges the two directions on a given edge with matrix elements given as
\[\boldsymbol{\Pi}_{\bar{e}_{\bar{s}}e_{s}}=\delta_{\bar{e}e}\delta_{\bar{s}(- s)}. \tag{13}\]
### Compact quantum graph eigenstates in the scattering representation
In the case of a compact quantum graph, we have \(\mathbf{a}_{\mathcal{B}}^{\mathrm{in/out}}\equiv\mathbf{a}^{\mathrm{in/out}}\). The two relations (7) and (11) combine to give one condition,
\[\mathbf{a}^{\mathrm{in}}=\mathbf{U}(k)\;\mathbf{a}^{\mathrm{in}}, \tag{14}\]
forming the \(2N_{\mathcal{B}}\) dimensional quantum map
\[\mathbf{U}(k)=\mathbf{T}(k)\boldsymbol{\Sigma}(k), \tag{15}\]
where we stress that the edge scattering matrix \(\boldsymbol{\Sigma}(k)\) can be \(k\) dependent. Non-trivial solutions to (14) exist for wave numbers \(k\) for which the quantum map \(\mathbf{U}\) has a unit eigenvalue, that is, for wave numbers that satisfy the secular equation
\[\xi(k)\equiv\det\left(\mathbb{I}-\mathbf{U}(k)\right)=0. \tag{16}\]
The positive (discrete) energy spectrum of the quantum graph corresponds one-to-one to the zeros of \(\xi(k)\) with \(k>0\)[9, 26, 27]. The corresponding eigenstates can be obtained from (14).
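As an illustration, the following self-contained sketch (our own directed-edge bookkeeping, equivalent to \(\mathbf{U}(k)=\mathbf{T}(k)\boldsymbol{\Sigma}(k)\) up to an ordering of the directed edges) scans the secular function for a compact graph of two Neumann-Kirchhoff vertices joined by two bonds, i.e. a cycle of circumference \(\ell_{1}+\ell_{2}\) with spectrum \(k_{n}=2\pi n/(\ell_{1}+\ell_{2})\):

```python
# Sketch (our own conventions): locate zeros of xi(k) = det(I - U(k)), Eq. (16).
import numpy as np

bonds = [(0, 1, 1.0), (0, 1, np.sqrt(2.0))]      # (tail vertex, head vertex, length)
n_dir = 2 * len(bonds)
origin, term, length = {}, {}, {}
for i, (u, v, l) in enumerate(bonds):
    origin[2 * i], term[2 * i], length[2 * i] = u, v, l              # direction +
    origin[2 * i + 1], term[2 * i + 1], length[2 * i + 1] = v, u, l  # direction -
deg = {}
for d in range(n_dir):
    deg[origin[d]] = deg.get(origin[d], 0) + 1

def quantum_map(k: float) -> np.ndarray:
    """U(k): propagate along the incoming directed edge, then NK-scatter."""
    M = np.zeros((n_dir, n_dir), dtype=complex)
    for d in range(n_dir):                        # incoming directed edge
        for dp in range(n_dir):                   # outgoing directed edge
            if origin[dp] == term[d]:
                back = 1.0 if dp == (d ^ 1) else 0.0   # d ^ 1 is the reversed edge
                M[dp, d] = (2.0 / deg[term[d]] - back) * np.exp(1j * k * length[d])
    return M

ks = np.linspace(0.05, 10.0, 4001)
xi = np.abs([np.linalg.det(np.eye(n_dir) - quantum_map(k)) for k in ks])
for i in range(1, len(ks) - 1):
    if xi[i] < 1e-2 and xi[i] <= xi[i - 1] and xi[i] <= xi[i + 1]:
        print(f"k = {ks[i]:.3f}")   # 2.603, 5.205, 7.808, ... (doubly degenerate)
```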
### Scattering states on open quantum graphs
Let us consider the positive energy states for open quantum graphs next. Generically, these consist of an \(N_{\mathcal{L}}\)-fold degenerate continuum of scattering states. Physically, the \(N_{\mathcal{L}}\)-fold degeneracy is obvious from the ability to choose \(N_{\mathcal{L}}\) independent incoming plane waves along the leads. To describe the scattering states, let us write the unitary edge scattering matrix in block form
\[\boldsymbol{\Sigma}(k)=\begin{pmatrix}\boldsymbol{\Sigma}(k)_{\mathcal{L} \mathcal{L}}&\boldsymbol{\Sigma}(k)_{\mathcal{L}\mathcal{B}}\\ \boldsymbol{\Sigma}(k)_{\mathcal{B}\mathcal{L}}&\boldsymbol{\Sigma}(k)_{ \mathcal{B}\mathcal{B}}\end{pmatrix}=\begin{pmatrix}\mathbb{I}&0\\ 0&\boldsymbol{\Pi}\end{pmatrix}\boldsymbol{\hat{\Sigma}}(k), \tag{17}\]
where the block-indices \(\mathcal{B}\) and \(\mathcal{L}\) refer to \(2N_{\mathcal{B}}\) directed bonds and \(N_{\mathcal{L}}\) leads. In the second equality, we have expressed this explicitly in terms of the matrix \(\boldsymbol{\hat{\Sigma}}(k)\) defined in (12) which is block-diagonal in the vertex scattering matrices and the permutation matrix \(\boldsymbol{\Pi}\) that interchanges the two directions for any two bonds as defined in (13). For an open quantum graph, \(\boldsymbol{\Pi}\) only acts on
bonds. Analogously to the compact case in Eq. (15), we introduce the unitary quantum map for an open graph, again expressed in block form,
\[\mathbf{U}(k)\equiv\begin{pmatrix}\mathbf{U}(k)_{\mathcal{L}\mathcal{L}}& \mathbf{U}(k)_{\mathcal{L}\mathcal{B}}\\ \mathbf{U}(k)_{\mathcal{B}\mathcal{L}}&\mathbf{U}(k)_{\mathcal{B}\mathcal{B}} \end{pmatrix}=\begin{pmatrix}\mathbf{\Sigma}(k)_{\mathcal{L}\mathcal{L}}& \mathbf{\Sigma}(k)_{\mathcal{L}\mathcal{B}}\\ \mathbf{T}(k)\mathbf{\Sigma}(k)_{\mathcal{B}\mathcal{L}}&\mathbf{T}(k) \mathbf{\Sigma}(k)_{\mathcal{B}\mathcal{B}}\end{pmatrix}. \tag{18}\]
The scattering states are spanned by the \(N_{L}\)-dimensional vector \(\mathbf{a}_{\mathcal{L}}^{\mathrm{in}}\) of incoming plane wave amplitudes on the leads. The outgoing amplitudes \(\mathbf{a}_{\mathcal{L}}^{\mathrm{out}}\) and the incoming amplitudes on the directed bonds \(\mathbf{a}_{\mathcal{B}}^{\mathrm{in}}\) then result from solving the set of linear equations
\[\begin{pmatrix}\mathbf{a}(k)_{\mathcal{L}}^{\mathrm{out}}\\ \mathbf{a}(k)_{\mathcal{B}}^{\mathrm{in}}\end{pmatrix}=\begin{pmatrix}\mathbf{U }(k)_{\mathcal{L}\mathcal{L}}&\mathbf{U}(k)_{\mathcal{L}\mathcal{B}}\\ \mathbf{U}(k)_{\mathcal{B}\mathcal{L}}&\mathbf{U}(k)_{\mathcal{B}\mathcal{B}} \end{pmatrix}\begin{pmatrix}\mathbf{a}_{\mathcal{L}}^{\mathrm{in}}\\ \mathbf{a}(k)_{\mathcal{B}}^{\mathrm{in}}\end{pmatrix} \tag{19}\]
which follows again from (7) and (11). Solving these equations, one obtains for the outgoing amplitudes on the leads
\[\mathbf{a}(k)_{\mathcal{L}}^{\mathrm{out}}=\boldsymbol{\sigma}(k)\mathbf{a}_{ \mathcal{L}}^{\mathrm{in}} \tag{20}\]
where the unitary graph scattering matrix is given as
\[\boldsymbol{\sigma}(k)=\mathbf{U}(k)_{\mathcal{L}\mathcal{L}}+\mathbf{U}(k)_{ \mathcal{L}\mathcal{B}}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{\mathcal{B }\mathcal{B}}}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}. \tag{21}\]
The plane wave amplitudes on the directed bonds can be expressed as
\[\mathbf{a}(k)_{\mathcal{B}}^{\mathrm{in}}=\boldsymbol{\rho}(k)\mathbf{a}_{ \mathcal{L}}^{\mathrm{in}} \tag{22}\]
with the rectangular \(2N_{\mathcal{B}}\times N_{\mathcal{L}}\) matrix
\[\boldsymbol{\rho}(k)=\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{\mathcal{B} \mathcal{B}}}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}. \tag{23}\]
The scattering matrix \(\boldsymbol{\sigma}(k)\) is related to the matrix \(\boldsymbol{\rho}(k)\) via
\[\boldsymbol{\sigma}(k)=\mathbf{U}(k)_{\mathcal{L}\mathcal{L}}+\mathbf{U}(k)_{ \mathcal{L}\mathcal{B}}\ \boldsymbol{\rho}(k). \tag{24}\]
We now have the required mathematical language for constructing Green's functions on quantum graphs.
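Both (21) and (23) are direct to implement once the blocks of \(\mathbf{U}(k)\) are at hand. A small sketch (our own helper functions) together with a numerical check that \(\boldsymbol{\sigma}\) is unitary whenever the quantum map is unitary and \(\mathbb{I}-\mathbf{U}(k)_{\mathcal{BB}}\) is invertible:

```python
# Sketch (our own helpers) of Eqs. (21), (23) and (24).
import numpy as np

def rho_matrix(U_BL: np.ndarray, U_BB: np.ndarray) -> np.ndarray:
    """Eq. (23): rho = (I - U_BB)^{-1} U_BL."""
    return np.linalg.solve(np.eye(U_BB.shape[0]) - U_BB, U_BL)

def graph_sigma(U_LL, U_LB, U_BL, U_BB) -> np.ndarray:
    """Eq. (21) via Eq. (24): sigma = U_LL + U_LB rho."""
    return U_LL + U_LB @ rho_matrix(U_BL, U_BB)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
U, _ = np.linalg.qr(X)          # a random 5x5 unitary standing in for U(k)
nL = 2                          # pretend the first two indices are leads
sigma = graph_sigma(U[:nL, :nL], U[:nL, nL:], U[nL:, :nL], U[nL:, nL:])
print(np.allclose(sigma.conj().T @ sigma, np.eye(nL)))  # True: sigma is unitary
```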
One may rightfully question whether the matrix \(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\) can always be inverted as required in equations (21) and (23). This is related to the existence of bound states in the continuum (a pure point spectrum in mathematical terms). In the absence of such bound states \(\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\) does not have a unit eigenvalue and the expression is valid for all wave numbers \(k>0\). We will return to the discussion of this expression in the presence of bound states, also known as perfect scars, later in Sec. 4.
## 3. The scattering approach to the Green's function
The Green's function may be considered as the integral kernel of the resolvent operator \((E-\hat{H})^{-1}\) which has singularities at the spectrum of \(\hat{H}\). It has poles at the discrete spectrum and a branch cut along the continuous spectrum.
For a given (complex) energy \(E=k^{2}\) and two points \(\mathbf{x}=(e,x_{e})\) and \(\mathbf{x}^{\prime}=(e^{\prime},x_{e^{\prime}}^{\prime})\) on a quantum graph, the Green's function \(G(\mathbf{x},\mathbf{x}^{\prime},E)\) satisfies the inhomogeneous equation
\[\left(E-\hat{H}\right)G(\mathbf{x},\mathbf{x}^{\prime},E)=\delta(\mathbf{x}, \mathbf{x}^{\prime})\equiv\begin{cases}\delta(x_{e}-x_{e^{\prime}}^{\prime})& \text{if }e=e^{\prime}\\ 0&\text{if }e\neq e^{\prime}\end{cases}, \tag{25}\]
where \(\hat{H}\) acts on \(\mathbf{x}\). The solution of this differential equation (25) with given self-adjoint matching conditions at the vertices may not be unique or not exist at all. The latter happens when the energy \(E\) belongs to the discrete real eigenvalue spectrum. For complex energies with a non-vanishing imaginary part, one can always find a unique square integrable solution and this then
coincides with the integral kernel of the resolvent operator. The relation to the resolvent operator gives rise to the symmetry
\[G(\mathbf{x},\mathbf{x}^{\prime};E)=G(\mathbf{x}^{\prime},\mathbf{x};E^{*})^{*}. \tag{26}\]
We focus on the Green's function \(G_{+}(\mathbf{x},\mathbf{x}^{\prime},E)\equiv G(\mathbf{x},\mathbf{x}^{\prime},E_{+})\) with positive real and imaginary parts: \(E_{+}=k_{+}^{2}=E_{r}+iE_{i}\) with \(0<E_{r}\in\mathbb{R}\) and \(0<E_{i}\in\mathbb{R}\). For real energies that are not in the (discrete or continuous) eigenvalue spectrum, we allow the imaginary part to vanish, that is, \(E_{i}=0\), as the Green's function is well defined in that case. Solutions at real energies in the continuous spectrum require the limit \(E_{i}\to 0^{+}\), which is always implied. If \(E_{r}\) belongs to the discrete eigenvalue spectrum, the Green's function has a pole \(G(\mathbf{x},\mathbf{x}^{\prime};E)\sim\frac{P(\mathbf{x},\mathbf{x}^{\prime})}{E_{i}}\) (with a non-vanishing function \(P(\mathbf{x},\mathbf{x}^{\prime})\)) preventing the limit \(E_{i}\to 0^{+}\) from existing. For brevity we write \(E=E_{+}\) and \(k=k_{+}\) during the following derivations.
To construct the Green's function, we exploit the fact that for all \(\mathbf{x}\neq\mathbf{x}^{\prime}\) the solutions to equation (25) are solutions to the homogeneous wave equation in (3). This allows one to express the solutions again as a linear superposition of counter propagating plane waves as expressed in (4). The set of unknown coefficients is then chosen to satisfy the imposed vertex boundary conditions as well as the appropriate boundary conditions at the delta function excitation \(\mathbf{x}=\mathbf{x}^{\prime}\). This procedure is detailed via a scattering approach in the following.
### Construction of the Green's function for compact graphs
The Green's function on a graph can be constructed in a three step procedure as illustrated in Fig. 1.
1. Define the graph and the coordinate of the delta function excitation \(\mathbf{x}^{\prime}=(e^{\prime},x^{\prime}_{e^{\prime}})\). The delta function acts as a source which we model by creating an auxiliary open scattering graph by "cutting out" the excited edge \(e^{\prime}\) and replacing it with two auxiliary leads.
2. Treat the auxiliary graph as a scattering site and construct a lead scattering matrix for energy \(E_{+}\). This allows one to determine the two outgoing lead wave amplitudes in terms of the two incoming wave amplitudes which are free parameters.
3. Take the scattering solution on the auxiliary leads at distances \(x^{\prime}_{e^{\prime}}\) and \(\ell_{e^{\prime}}-x^{\prime}_{e^{\prime}}\) from the vertices and "glue" these solutions together such that the differential equation (25) is satisfied yielding a Dirac \(\delta\)-function at the position \(\mathbf{x}^{\prime}\). This determines all free parameters and results in the Green's function \(G(\mathbf{x},\mathbf{x}^{\prime};E_{+})\).
Let us now go through these steps in detail:
Figure 1. This three step procedure is described in detail below.

_Step 1._ Consider a compact quantum graph \(\mathcal{G}(\mathcal{V},\mathcal{E},L)\) as defined in Sec. 2 which we wish to excite with a delta function at location \(\mathbf{x}^{\prime}=(e^{\prime},x^{\prime}_{e^{\prime}})\in\mathcal{G}\). Let us denote the vertex at \(x_{e^{\prime}}=0\) as the 'tail' vertex \(v_{T}\) and the vertex at \(x_{e^{\prime}}=\ell_{e^{\prime}}\) as the 'head' vertex \(v_{H}\). We begin by cutting the excited edge \(e^{\prime}\) and replacing it by two leads attached at \(v_{T}\) and \(v_{H}\), respectively, thus creating the auxiliary open scattering graph \(\mathcal{G}_{\mathrm{aux},e^{\prime}}=\mathcal{G}_{\mathrm{aux},e^{\prime}}(\mathcal{V},\mathcal{E}_{\mathrm{aux},e^{\prime}},L_{\mathrm{aux},e^{\prime}})\), where \(\mathcal{E}_{\mathrm{aux},e^{\prime}}=\mathcal{L}_{\mathrm{aux},e^{\prime}}\cup(\mathcal{B}\setminus\{e^{\prime}\})\) and \(L_{\mathrm{aux},e^{\prime}}=L\setminus\{\ell_{e^{\prime}}\}\). The coordinates on the leads are set to be \(x_{T}=x_{H}=0\) at the vertices \(v_{T}\) and \(v_{H}\), respectively. On each lead, the solutions are defined as
\[\begin{split}&\psi_{T}(x_{T})=a_{T}^{\mathrm{in}}e^{-ik_{+}x_{T}}+a_ {T}^{\mathrm{out}}e^{ik_{+}x_{T}},\\ &\psi_{H}(x_{H})=a_{H}^{\mathrm{in}}e^{-ik_{+}x_{H}}+a_{H}^{ \mathrm{out}}e^{ik_{+}x_{H}}.\end{split} \tag{27}\]
_Step 2._ Next, we construct the scattering states on the auxiliary graph. The quantum map of the auxiliary graph can then be written in the form Eq. (18) and only differs from the quantum map (15) of \(\mathcal{G}\) by excluding the rows corresponding to the excited edge \(e^{\prime}\). The wave amplitudes on the two leads are mapped from incoming to outgoing wave amplitudes by the graph scattering matrix \(\boldsymbol{\sigma}(k_{+})\) as defined in (20) with matrix elements
\[\begin{pmatrix}a_{\mathrm{H}}^{\mathrm{out}}\\ a_{\mathrm{T}}^{\mathrm{out}}\end{pmatrix}=\begin{pmatrix}\sigma(k_{+})_{HH}& \sigma(k_{+})_{HT}\\ \sigma(k_{+})_{TH}&\sigma(k_{+})_{TT}\end{pmatrix}\begin{pmatrix}a_{\mathrm{H }}^{\mathrm{in}}\\ a_{\mathrm{T}}^{\mathrm{in}}\end{pmatrix}. \tag{28}\]
The incoming wave amplitudes \(a_{\mathrm{H}}^{\mathrm{in}}\) and \(a_{\mathrm{T}}^{\mathrm{in}}\) are at this stage free parameters.
_Step 3._ We project the set of scattering solutions from the auxiliary graph onto the original graph by cutting the leads H and T at \(x_{\mathrm{T}}=x_{e^{\prime}}^{\prime}\) and \(x_{\mathrm{H}}=\ell_{e^{\prime}}-x_{e^{\prime}}^{\prime}\), then "gluing" the two ends together forming a single bond. The solution on \(e^{\prime}\) is then
\[\psi_{e^{\prime}}(x_{e^{\prime}})=\begin{cases}a_{\mathrm{T}}^{\mathrm{in}}e^ {-ik_{+}x_{e^{\prime}}}+\left(\sigma_{\mathrm{TH}}a_{\mathrm{H}}^{\mathrm{in}}+ \sigma_{\mathrm{TT}}a_{\mathrm{T}}^{\mathrm{in}}\right)e^{ik_{+}x_{e^{\prime}}}& \text{for }x_{e^{\prime}}<x_{e^{\prime}}^{\prime};\\ a_{\mathrm{H}}^{\mathrm{in}}e^{-ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}})}+ \left(\sigma_{\mathrm{HH}}a_{\mathrm{H}}^{\mathrm{in}}+\sigma_{\mathrm{HT}}a_ {\mathrm{T}}^{\mathrm{in}}\right)e^{ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}})}& \text{for }x_{e^{\prime}}>x_{e^{\prime}}^{\prime}.\end{cases} \tag{29}\]
One determines \(a_{\mathrm{H}}^{\mathrm{in}}\) and \(a_{\mathrm{T}}^{\mathrm{in}}\) by fulfilling equation (25) at \(x_{e^{\prime}}=x_{e^{\prime}}^{\prime}\); this leads to the following conditions:
i. continuity at \(x_{e^{\prime}}=x_{e^{\prime}}^{\prime}\)
\[\lim_{\alpha\to 0^{+}}\left[\psi_{e^{\prime}}(x_{e^{\prime}}^{\prime}+\alpha)- \psi_{e^{\prime}}(x_{e^{\prime}}^{\prime}-\alpha)\right]=0; \tag{30}\]
ii. a discontinuity of the derivatives of the form
\[\lim_{\alpha\to 0^{+}}\left[\frac{d\psi_{e^{\prime}}\left(x_{e^{\prime}}^{ \prime}+\alpha\right)}{dx_{e^{\prime}}}-\frac{d\psi_{e^{\prime}}\left(x_{e^{ \prime}}^{\prime}-\alpha\right)}{dx_{e^{\prime}}}\right]=1. \tag{31}\]
These two conditions result in a non-homogeneous system of linear equations for the two incoming scattering amplitudes. The unique solution of this system is
\[\begin{split} a_{\mathrm{T}}^{\mathrm{in}}=&\ \frac{e^{ik_{+}\ell_{e^{\prime}}}\left(e^{-ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}}^{\prime})}+\sigma_{\mathrm{HH}}e^{ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}}^{\prime})}-\sigma_{\mathrm{TH}}e^{ik_{+}x_{e^{\prime}}^{\prime}}\right)}{2ik_{+}\left[(1-e^{ik_{+}\ell_{e^{\prime}}}\sigma_{\mathrm{HT}})(1-e^{ik_{+}\ell_{e^{\prime}}}\sigma_{\mathrm{TH}})-e^{2ik_{+}\ell_{e^{\prime}}}\sigma_{\mathrm{HH}}\sigma_{\mathrm{TT}}\right]}\\ =&\ \frac{1}{2ik_{+}}\left[e^{ik_{+}x_{e^{\prime}}^{\prime}}\left[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{-}^{\prime}e_{-}^{\prime}}+e^{ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}}^{\prime})}\left[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{-}^{\prime}e_{+}^{\prime}}\right]\end{split}\tag{32a}\]

\[\begin{split} a_{\mathrm{H}}^{\mathrm{in}}=&\ \frac{e^{ik_{+}\ell_{e^{\prime}}}\left(e^{-ik_{+}x_{e^{\prime}}^{\prime}}+\sigma_{\mathrm{TT}}e^{ik_{+}x_{e^{\prime}}^{\prime}}-\sigma_{\mathrm{HT}}e^{ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}}^{\prime})}\right)}{2ik_{+}\left[(1-e^{ik_{+}\ell_{e^{\prime}}}\sigma_{\mathrm{HT}})(1-e^{ik_{+}\ell_{e^{\prime}}}\sigma_{\mathrm{TH}})-e^{2ik_{+}\ell_{e^{\prime}}}\sigma_{\mathrm{HH}}\sigma_{\mathrm{TT}}\right]}\\ =&\ \frac{1}{2ik_{+}}\left[e^{ik_{+}(\ell_{e^{\prime}}-x_{e^{\prime}}^{\prime})}\left[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{+}^{\prime}e_{+}^{\prime}}+e^{ik_{+}x_{e^{\prime}}^{\prime}}\left[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{+}^{\prime}e_{-}^{\prime}}\right]\end{split}\tag{32b}\]
The derivation of the expressions involving \((\mathbb{I}-\mathbf{U}(k_{+}))^{-1}\), the resolvent matrix of the quantum map, can be found in A. Inserting (32) into (29) and extending the solution to the entire graph
using (22), the Green's function of the compact graph \(\mathcal{G}\) can finally be written in the form
\[\begin{split} G(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\frac{1}{2k_{+}i}\bigg[&\delta_{ee^{\prime}}e^{ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e}-x_{e^{\prime}}^{\prime}-\ell_{e}+\ell_{e^{\prime}})}\left[\tfrac{\mathbf{U}(k_{+})}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{+}e_{+}^{\prime}}\\ &+e^{-ik_{+}(x_{e}-x_{e^{\prime}}^{\prime})}\left[\tfrac{\mathbf{U}(k_{+})}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{-}e_{-}^{\prime}}\\ &+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime}-\ell_{e})}\left[\tfrac{\mathbf{U}(k_{+})}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{+}e_{-}^{\prime}}\\ &+e^{-ik_{+}(x_{e}+x_{e^{\prime}}^{\prime}-\ell_{e^{\prime}})}\left[\tfrac{\mathbf{U}(k_{+})}{\mathbb{I}-\mathbf{U}(k_{+})}\right]_{e_{-}e_{+}^{\prime}}\bigg].\end{split}\tag{33}\]
This is our main result in this section. We give here, for the first time, a closed form expression of the Green's function on a graph following the recipe from Barra and Gaspard [17].
By formally expanding \(\frac{\mathbf{U}}{\mathbb{I}-\mathbf{U}}=\sum_{n=1}^{\infty}\mathbf{U}^{n}\), one may express the Green's function as a sum over paths \(p\) on the metric graph starting at \(\mathbf{x}^{\prime}\) and ending at \(\mathbf{x}\), that is,
\[G(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\frac{1}{2k_{+}i}\sum_{p}A_{p}(k_{+}) e^{iL_{p}k_{+}}\,. \tag{34}\]
Here, \(L_{p}\) is the metric length of the path and the amplitude \(A_{p}\) is the product of all scattering amplitudes along the trajectory. If \(e=e^{\prime}\), the direct path between \(x_{e^{\prime}}\) and \(x_{e^{\prime}}^{\prime}\) has \(L_{p}=|x_{e^{\prime}}-x_{e^{\prime}}^{\prime}|\) and \(A_{p}=1\). Eq. (34) is the starting point for the investigations in [17], which, however, makes it necessary to do an explicit summation over all possible paths - in general a cumbersome task. Note also that this expansion converges only if the imaginary part of \(k_{+}\) is positive and these expressions thus require a limit if used for real wave numbers. This is all well known for similar expansions into sums over paths in trace formulae and scattering systems, we refer to the textbook [8] and references therein.
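As a sanity check of (33), consider the simplest compact graph: a single loop of length \(\ell\) attached to one Neumann-Kirchhoff vertex of degree two, i.e. a circle. The following sketch (our own) compares the closed form against a direct truncated sum over winding paths of the type (34), for \(\mathrm{Im}\,k>0\) so that the path sum converges:

```python
# Self-contained check (our sketch, not from the paper) of Eq. (33) on a circle.
import numpy as np

l = 2.0
k = 3.7 + 0.05j                        # Im k > 0: the path expansion converges

# Quantum map in the 2-dim directed-edge basis; the degree-2 NK vertex is transparent.
U = np.exp(1j * k * l) * np.eye(2)
R = U @ np.linalg.inv(np.eye(2) - U)   # the matrix U / (I - U)

def G33(x: float, xp: float) -> complex:
    """Green's function from Eq. (33) with e = e' the loop edge."""
    return (np.exp(1j * k * abs(x - xp))
            + np.exp(1j * k * (x - xp)) * R[0, 0]        # (e_+, e'_+) entry
            + np.exp(-1j * k * (x - xp)) * R[1, 1]       # (e_-, e'_-) entry
            + np.exp(1j * k * (x + xp - l)) * R[0, 1]    # (e_+, e'_-), zero here
            + np.exp(-1j * k * (x + xp - l)) * R[1, 0]   # (e_-, e'_+), zero here
            ) / (2j * k)

def G_paths(x: float, xp: float, n_max: int = 200) -> complex:
    """Truncated sum over clockwise/anticlockwise windings, cf. Eq. (34)."""
    d = abs(x - xp)
    s = sum(np.exp(1j * k * (d + n * l)) + np.exp(1j * k * (l - d + n * l))
            for n in range(n_max))
    return s / (2j * k)

print(np.allclose(G33(0.3, 1.4), G_paths(0.3, 1.4)))  # True
```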
Finally, let us shortly discuss the pole structure of the Green's function. For a compact graph, the eigenvalue spectrum is a discrete countable set \(\{E_{0},E_{1},\dots\}\). Let us assume that there are no degeneracies and all eigenvalues are positive, that is, \(E_{n}>0\). The spectral decomposition of the Schrodinger operator \(\hat{H}\) allows us to write the resolvent operator as
\[(E_{+}-\hat{H})^{-1}=\sum_{n=0}^{\infty}\frac{\hat{P}_{n}}{E_{+}-E_{n}} \tag{35}\]
where \(\hat{P}_{n}\) is the projection operator onto the subspace spanned by the \(n\)-th eigenvector. For the Green's function this implies
\[G(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\sum_{n=0}^{\infty}\frac{P_{n}(\mathbf{ x},\mathbf{x}^{\prime})}{E_{+}-E_{n}} \tag{36}\]
where \(P_{n}(\mathbf{x},\mathbf{x}^{\prime})\) is the integral kernel of \(\hat{P}_{n}\). Let us now show that (33) and (36) are indeed equivalent. We start by considering the limit \(E_{+}\to E_{n}\) for some given eigenvalue \(E_{n}=k_{n}^{2}\) and by showing that the singular part of the Green's function (33) in this limit is given by \(\frac{P_{n}(\mathbf{x},\mathbf{x}^{\prime})}{E_{+}-E_{n}}\). Let us extract first the singular part of the matrix
\[\frac{\mathbf{U}(k_{+})}{\mathbb{I}-\mathbf{U}(k_{+})}\sim\frac{\mathbf{P}}{-i (k_{+}-k_{n})C}. \tag{37}\]
Here, \(\mathbf{P}=\mathbf{b}^{\text{in}}\mathbf{b}^{\text{in}\dagger}\) is the projection matrix onto the corresponding unit eigenvector, \(\mathbf{U}(k_{n})\mathbf{b}^{\text{in}}=\mathbf{b}^{\text{in}}\), and
\[C=\mathbf{b}^{\text{in}\dagger}\left[k_{n}\mathbf{L}+\sin(k_{n}\mathbf{L}) \mathbf{\Pi}\right]\mathbf{b}^{\text{in}}>0 \tag{38}\]
is a positive constant and \(\mathbf{L}\) is the \(2N_{\mathcal{B}}\) dimensional diagonal matrix with diagonal entries \(\ell_{e}\). We refer to B for a detailed derivation of (37) and (38). With \(2k_{+}(k_{+}-k_{n})\sim E_{+}-E_{n}\) one then finds
\[G(\mathbf{x},\mathbf{x}^{\prime},E_{+})\sim \frac{\left(a_{e-}^{\mathrm{in}}e^{-ik_{n}x_{e}}+a_{e_{+}}^{ \mathrm{in}}e^{ik_{n}(x_{e}-\ell_{e})}\right)^{*}\!\left(a_{e_{-}^{\prime}}^{ \mathrm{in}}e^{-ik_{n}x_{e^{\prime}}}+a_{e_{+}^{\prime}}^{\mathrm{in}}e^{ik_{n} (x_{e^{\prime}}-\ell_{e^{\prime}})}\right)}{C(E_{+}-E_{n})}\] \[= \frac{P_{n}(\mathbf{x},\mathbf{x}^{\prime})}{E_{+}-E_{n}}\, \tag{39}\]
where the last equality requires that the constant \(C\) gives the correct normalization of the projection kernel \(P_{n}(\mathbf{x},\mathbf{x}^{\prime})\). This is equivalent to \(\sum_{e\in\mathcal{E}}\int_{0}^{\ell_{e}}P_{n}((e,x_{e}),(e,x_{e}))dx_{e}=1\), which is easily checked by direct calculation. Repeating this calculation for \(E_{+}\) near all other energy eigenvalues shows that the expressions (33) and (36) have the same poles and the same residues. Both expressions can be continued analytically to the lower half plane where the imaginary part of the energy is negative. They are thus equivalent up to an entire function \(F(E)\), i.e., a function analytic in the whole complex plane. As both (33) and (36) vanish in the limit \(E_{i}\to\pm\infty\), the same must be true for their difference \(F(E)\). The only entire function that vanishes in these limits for all \(E_{r}\) is \(F(E)=0\).
### Construction of the Green's function for open scattering graphs
The construction of the Green's function on an open scattering graph follows analogously. In this case, our assumption that the energy has a positive imaginary part together with the requirement of square integrability leads to outgoing boundary conditions along the leads. That is, the amplitudes of incoming plane waves need to vanish, as these would lead to exponentially increasing contributions. These conditions are straightforward to implement and we can go through the same construction as for the compact graph. A short-cut is obtained by first replacing each lead \(e\in\mathcal{L}\) by an edge of finite length with a dangling vertex of degree one and choosing some self-adjoint boundary conditions at the dangling vertices. This results in an auxiliary compact quantum graph as described in the previous section. The Green's function of the auxiliary quantum graph is then given by (33). Clearly, the solution depends on the lengths that have been introduced for the leads as parameters. Next, one sends the introduced edge lengths to infinity. Because the imaginary part of the wave number is positive, \(\mathrm{Im}\,k_{+}>0\), the corresponding phase factors decay as \(e^{ik_{+}\ell_{e}}\to 0\) for \(\ell_{e}\to\infty\). In this limit, any dependence on the arbitrary choice of boundary conditions at the dangling vertices disappears and what remains is the Green's function of the open graph. We refer to C for the
details of the calculation which results in
\[G(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\frac{1}{2k_{+}i}\left[\delta_{e,e^{\prime}}\,e^{ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})}\left[\mathbf{U}(k_{+})_{\mathcal{L}\mathcal{L}}+\mathbf{U}(k_{+})_{\mathcal{L}\mathcal{B}}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})_{\mathcal{B}\mathcal{B}}}\mathbf{U}(k_{+})_{\mathcal{B}\mathcal{L}}\right]_{ee^{\prime}}\right]\quad\text{for }e,e^{\prime}\in\mathcal{L}, \tag{40}\]

with analogous expressions, obtained by propagating the amplitudes onto the bonds via \(\boldsymbol{\rho}(k_{+})\) from (23), when \(\mathbf{x}\) or \(\mathbf{x}^{\prime}\) lies on a bond. Comparing with (21), the Green's function with both points on leads can be written compactly in terms of the graph scattering matrix,

\[G(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\frac{1}{2k_{+}i}\left[\delta_{e,e^{\prime}}\,e^{ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})}\boldsymbol{\sigma}(k_{+})_{ee^{\prime}}\right],\qquad e,e^{\prime}\in\mathcal{L}. \tag{41}\]

## 4. Bound states in the continuum and perfect scars

Quantum graphs may support eigenstates that live on a subgraph \(\mathcal{S}\subset\mathcal{G}\) while the wave function vanishes identically on the remaining part \(\mathcal{R}\) of the graph; such states are known as perfect scars. A simple example is a subgraph \(\mathcal{S}\) that forms a cycle of edges, all of the same length \(\ell_{0}\), with Neumann-Kirchhoff conditions at the vertices. Choosing the wave number
\(\tilde{k}=2\pi/\ell_{0}\) (or any integer multiple of it), one may then set
\[\psi_{e}(x_{e})=\begin{cases}\pm\sin(\tilde{k}x_{e})&\text{if $e$ belongs to the cycle of $\mathcal{S}$;}\\ 0&\text{if $e$ belongs to $\mathcal{R}$.}\end{cases} \tag{42}\]
Here the signs \(\pm\) can be chosen to satisfy the flux conservation condition.
Since the union of \(\mathcal{S}\) and \(\mathcal{R}\) make up the total graph \(\mathcal{G}\), it is natural to express the quantum map in the block-form
\[\mathbf{U}(k)=\begin{pmatrix}\mathbf{U}(k)_{\mathcal{R}\mathcal{R}}&\mathbf{U} (k)_{\mathcal{RS}}\\ \mathbf{U}(k)_{\mathcal{SR}}&\mathbf{U}(k)_{\mathcal{SS}}\end{pmatrix} \tag{43}\]
with appropriate permutations applied. In general, there is a perfect scar on the subgraph \(\mathcal{S}\) at energy \(E=k^{2}>0\) if the block \(\mathbf{U}(k)_{\mathcal{SS}}\) has an eigenvector \(\mathbf{a}_{\mathcal{S}}^{\mathrm{in}}\) with unit eigenvalue, \(\mathbf{U}(k)_{\mathcal{SS}}\mathbf{a}_{\mathcal{S}}^{\mathrm{in}}=\mathbf{a}_{\mathcal{S}}^{\mathrm{in}}\). The unitarity of the full quantum map then implies that \(\mathbf{U}(k)_{\mathcal{RS}}\mathbf{a}_{\mathcal{S}}^{\mathrm{in}}=0\). One may extend \(\mathbf{a}_{\mathcal{S}}^{\mathrm{in}}\) to an eigenvector of the full map by setting \(\mathbf{a}_{\mathcal{R}}^{\mathrm{in}}=0\), resulting in vanishing wave amplitudes on edges that do not belong to \(\mathcal{S}\).
For open graphs, a perfect scar at a wave number \(k_{0}>0\) is a bound state in the continuum, and this situation is again straightforward to construct, for example by using the cycle example above. In this case, one may take \(\mathcal{R}\) to contain all leads and \(\mathcal{S}\) to be a subgraph containing a subset of the finite bonds.
Throughout the previous sections, we assumed that the matrix \(\mathbb{I}-\mathbf{U}(k)_{\mathcal{BB}}\) is invertible, which is generically the case as \(\mathbf{U}(k)_{\mathcal{BB}}\) is a block of a unitary matrix. However, a perfect scar exists if and only if \(\mathbf{U}(k)_{\mathcal{BB}}\) has an eigenvalue one at the wave number \(k=k_{0}\). Even in the case of "almost" perfect scars (with small nonzero entries of \(\mathbf{a}_{\mathcal{R}}^{\mathrm{in}}\)), inverting \(\mathbb{I}-\mathbf{U}(k)_{\mathcal{BB}}\) may cause large numerical errors. To deal with this issue, we describe a regularisation scheme for the scattering matrix in the following section. This is important when dealing with open quantum graphs and when constructing Green's functions both in the compact and the open case. The approach may also be used to find the regular part of the Green's function in compact quantum graphs when the energy is in the eigenvalue spectrum. (By the regular part, we refer to the Green's function with the contribution from the pole at this energy removed.) We will focus on the regularisation of the scattering matrix, as the other applications can be derived from there when needed.
### Regularization of the scattering approach at a bound state
We will show in this section that scattering solutions of the form (20) are well defined at \(k=k_{0}\) even in the presence of a bound state at that wave number. We show in D that the scattering matrix can be regularised across a whole \(k\) interval containing \(k_{0}\).
Consider a non-degenerate bound state at wave number \(k=k_{0}\) with wave amplitudes \(\mathbf{b}_{\mathcal{B}}^{\mathrm{in}}\) such that,
\[\mathbf{U}(k_{0})_{\mathcal{BB}}\,\mathbf{b}_{\mathcal{B}}^{\mathrm{in}}= \mathbf{b}_{\mathcal{B}}^{\mathrm{in}}. \tag{44}\]
As discussed in the previous section, the unitarity of the quantum map \(\mathbf{U}(k)\) implies
\[\mathbf{U}(k_{0})_{\mathcal{L}\mathcal{B}}\mathbf{b}_{\mathcal{B}}^{\mathrm{in}}=0\qquad\text{and}\qquad\mathbf{b}_{\mathcal{B}}^{\mathrm{in}\dagger}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}=0, \tag{45}\]
that is, incoming waves \(\mathbf{a}_{\mathcal{L}}^{\mathrm{in}}\) in the leads cannot couple into the bound state \(\mathbf{b}_{\mathcal{B}}^{\mathrm{in}}\) and the bound state cannot couple back out. Let us assume for simplicity that the perfect scar described by \(\mathbf{b}_{\mathcal{B}}^{\mathrm{in}}\) is not degenerate and introduce the idempotent, Hermitian \(2N_{\mathcal{B}}\times 2N_{\mathcal{B}}\) projection matrix
\[\mathbf{P}\equiv\mathbf{b}_{\mathcal{B}}^{\mathrm{in}}\mathbf{b}_{\mathcal{B}} ^{\mathrm{in}\dagger} \tag{46}\]
and its orthogonal complement
\[\mathbf{Q}=\mathbb{I}-\mathbf{P}. \tag{47}\]
The methods below can be generalised to situations where more than one perfect scar exists at the same wave number \(k_{0}\), for example if all edge lengths are rationally related in a large graph with Neumann-Kirchhoff matching conditions. Writing Eq. (22) in the form
\[\left(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\right)\mathbf{a}_{ \mathcal{B}}^{\text{in}}=\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}\mathbf{a}_{ \mathcal{L}}^{\text{in}}, \tag{48}\]
we find that the solution \(\mathbf{a}_{\mathcal{B}}^{\text{in}}\) is not unique at \(k=k_{0}\) as both
\[\mathbf{P}\left(\mathbb{I}-\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}\right)=0 \quad\text{and}\quad\mathbf{P}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}=0, \tag{49}\]
which follows directly from (45). This implies that for any solution \(\mathbf{a}_{\mathcal{B}}^{\text{in}}\) of Eq. (48), \(\mathbf{a}_{\mathcal{B}}^{\text{in}}+\alpha\mathbf{b}_{\mathcal{B}}^{\text{in}}\), \(\alpha\in\mathbb{C}\), is also a solution. However, a unique solution \(\tilde{\mathbf{a}}_{\mathcal{B}}^{\text{in}}\) exists for the reduced system of equations
\[\mathbf{Y}_{Q}(k_{0})\,\tilde{\mathbf{a}}_{\mathcal{B}}^{\text{in}}=\mathbf{ U}(k_{0})_{\mathcal{B}\mathcal{L}}\,\mathbf{a}_{\mathcal{L}}^{\text{in}}\quad \text{with}\quad\mathbf{Y}_{Q}(k_{0})=\mathbf{Q}\left(\mathbb{I}-\mathbf{U}( k_{0})_{\mathcal{B}\mathcal{B}}\right)\mathbf{Q}. \tag{50}\]
As \(\mathbf{Y}_{Q}(k_{0})\mathbf{b}_{\mathcal{B}}^{\text{in}}=0\), its standard inverse does not exist. One may invert it in the subspace orthogonal to \(\mathbf{b}_{\mathcal{B}}^{\text{in}}\). Let us define (with mild abuse of notation)
\[\mathbf{Y}_{Q}(k_{0})^{-1}=\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}\mathbf{Q}}\mathbf{Q} \tag{51}\]
as the unique \(2N_{\mathcal{B}}\times 2N_{\mathcal{B}}\) matrix satisfying \(\mathbf{Y}_{Q}(k_{0})^{-1}\mathbf{Y}_{Q}(k_{0})=\mathbf{Q}=\mathbf{Y}_{Q}(k_{0})\mathbf{Y}_{Q}(k_{0})^{-1}\) and \(\mathbf{Y}_{Q}(k_{0})^{-1}\mathbf{P}=0=\mathbf{P}\mathbf{Y}_{Q}(k_{0})^{-1}\). As \(\mathbf{U}(k_{0})_{\mathcal{L}\mathcal{B}}\mathbf{P}=0\), one obtains a well-defined scattering solution for Eq. (20), that is,
\[\mathbf{a}(k)_{\mathcal{L}}^{\text{out}}=\mathbf{U}(k_{0})_{\mathcal{L} \mathcal{B}}\,\tilde{\mathbf{a}}_{\mathcal{B}}^{\text{in}}. \tag{52}\]
We may thus write the scattering matrix (21) in the form
\[\boldsymbol{\sigma}(k_{0})=\mathbf{U}(k_{0})_{\mathcal{L}\mathcal{L}}+\mathbf{ U}(k_{0})_{\mathcal{L}\mathcal{B}}\mathbf{Y}_{Q}(k_{0})^{-1}\mathbf{U}(k_{0})_{ \mathcal{B}\mathcal{L}}. \tag{53}\]
For an in-depth discussion of the regularity of the scattering matrix as \(k\to k_{0}\), see D.
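Before turning to the worked examples, the regularised form (53) can be illustrated numerically. The following Python sketch is purely illustrative and not part of the derivation: the blocks anticipate the lasso graph of Sec. 5.1 at a scarred wave number, and a pseudo-inverse stands in for the restricted inverse \(\mathbf{Y}_{Q}^{-1}\).

```python
import numpy as np

# Minimal numerical sketch of the regularised scattering matrix (53).
# The blocks below are not generic: they anticipate the lasso graph of
# Sec. 5.1, evaluated at a scarred wave number (e^{ik l_2} = 1), where
# I - U_BB has the unit eigenvalue that breaks naive inversion.
U_LL = np.array([[-1/3]])
U_LB = np.array([[2/3, 2/3]])
U_BL = np.array([[2/3], [2/3]])
U_BB = np.array([[2/3, -1/3], [-1/3, 2/3]])

# perfect-scar eigenvector: U_BB b = b, Eq. (44)
w, v = np.linalg.eig(U_BB)
b = v[:, np.argmin(np.abs(w - 1))].reshape(-1, 1)

P = b @ b.conj().T            # projector, Eq. (46)
Q = np.eye(2) - P             # orthogonal complement, Eq. (47)

# Y_Q and its inverse on the Q subspace, Eqs. (50)-(51);
# the pseudo-inverse realises the restricted inversion numerically.
Y_Q = Q @ (np.eye(2) - U_BB) @ Q
Y_Q_inv = Q @ np.linalg.pinv(Y_Q) @ Q

sigma = U_LL + U_LB @ Y_Q_inv @ U_BL   # regularised sigma, Eq. (53)
print(sigma)   # [[1.0]], in agreement with sigma(k_n) = 1 from Eq. (59)
```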
## 5. Worked examples
In this section we explicitly construct the scattering matrices of two open quantum graphs which contain perfect scars. Expressions for the Green's function on the leads follow directly using (41).
### Open lasso
Consider the open lasso quantum graph illustrated in figure 2. The coordinate \(x_{1}\geq 0\) runs along the lead with \(x_{1}=0\) at the vertex \(v_{1}\) and the coordinate \(x_{2}\in[0,\ell_{2}]\) runs along the loop such that \(x_{2}=0\) and \(x_{2}=\ell_{2}\) are the endpoints at the vertex \(v_{1}\). At the vertex, we enforce
Figure 2. An open lasso graph constructed from two edges \(e_{1}\) and \(e_{2}\), where \(e_{1}\) is a lead and \(e_{2}\) is a bond. Both edges are connected to the same vertex \(v_{1}\), where edge \(e_{2}\) has both ends connected, forming a loop wherein bound states can exist in the continuum.
Neumann boundary conditions, as expressed in (10), leading to the quantum map written in block form as
\[\mathbf{U}(k)=\left(\begin{array}{c|cc}-\frac{1}{3}&\frac{2}{3}&\frac{2}{3}\\ \hline\frac{2e^{ik\ell_{2}}}{3}&\frac{2e^{ik\ell_{2}}}{3}&-\frac{e^{ik\ell_{2}}}{3}\\ \frac{2e^{ik\ell_{2}}}{3}&-\frac{e^{ik\ell_{2}}}{3}&\frac{2e^{ik\ell_{2}}}{3}\end{array}\right)\equiv\begin{pmatrix}\mathbf{U}_{\mathcal{L}\mathcal{L}}&\mathbf{U}_{\mathcal{L}\mathcal{B}}\\ \mathbf{U}(k)_{\mathcal{B}\mathcal{L}}&\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\end{pmatrix}. \tag{54}\]
In the construction of the scattering matrix and the Green's function, one needs to invert the matrix \(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\) which yields
\[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}}=\begin{pmatrix} \frac{3-2e^{ik{\ell_{2}}}}{\left(e^{ik{\ell_{2}}}-1\right)\left(e^{ik{\ell_{2 }}}-3\right)}&-\frac{e^{ik{\ell_{2}}}}{\left(e^{ik{\ell_{2}}}-1\right)\left(e^ {ik{\ell_{2}}}-3\right)}\\ -\frac{e^{ik{\ell_{2}}}}{\left(e^{ik{\ell_{2}}}-1\right)\left(e^{ik{\ell_{2}} }-3\right)}&\frac{3-2e^{ik{\ell_{2}}}}{\left(e^{ik{\ell_{2}}}-1\right)\left(e ^{ik{\ell_{2}}}-3\right)}\end{pmatrix} \tag{55}\]
and is well defined as long as \(e^{ik{\ell_{2}}}\neq 1\), that is, if \(k\neq k_{n}=2\pi n/{\ell_{2}}\) for \(n=1,2,\ldots\). The reason for this is the existence of perfect scars on the loop which here lead to bound states in the continuum of scattering states. These bound state wave functions are given as
\[\psi_{e_{1}}(x_{1})= 0, \tag{56a}\] \[\psi_{e_{2}}(x_{2})= \sqrt{\frac{2}{\ell_{2}}}\sin(k_{n}x_{2}). \tag{56b}\]
The continuum of scattering states exists for all wave numbers \(k>0\) and is given by
\[\psi_{e_{1}}(x_{1})= e^{-ikx_{1}}+\boldsymbol{\sigma}(k)e^{ikx_{1}}, \tag{57a}\] \[\psi_{e_{2}}(x_{2})= \boldsymbol{\rho}(k)_{2_{+}1}e^{ik(x_{2}-\ell_{2})}+\boldsymbol{\rho}(k)_{2_{-}1}e^{-ikx_{2}}. \tag{57b}\]
where
\[\boldsymbol{\rho}(k)=\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{\mathcal{B} \mathcal{B}}}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}=\begin{pmatrix}\frac{2e^{ ik{\ell_{2}}}}{3-e^{ik{\ell_{2}}}}\\ \frac{2e^{ik{\ell_{2}}}}{3-e^{ik{\ell_{2}}}}\end{pmatrix} \tag{58}\]
and
\[\boldsymbol{\sigma}(k)=\mathbf{U}_{\mathcal{L}\mathcal{L}}+\mathbf{U}_{ \mathcal{L}\mathcal{B}}\boldsymbol{\rho}(k)=\frac{3e^{ik{\ell_{2}}}-1}{3-e^{ ik{\ell_{2}}}}. \tag{59}\]
While the matrix \(\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}}\) is used to find \(\boldsymbol{\rho}(k)\) and \(\boldsymbol{\sigma}(k)\) in the scattering approach, the poles at \(k=k_{n}\) have disappeared in the final results. Note that bound states and scattering states are trivially orthogonal due to their symmetry under \(x_{2}\mapsto\ell_{2}-x_{2}\) (which can be viewed as a mirror symmetry of the lasso). The bound states are odd under this symmetry as \(\psi_{e_{1}}(x_{1})=0\) and \(\psi_{e_{2}}(x_{2})=-\psi_{e_{2}}(\ell_{2}-x_{2})\) at wave numbers \(k_{n}\). The scattering states are even under this symmetry for all wave numbers \(k>0\) as
\[\psi_{e_{2}}(x_{2})=\frac{4e^{ik\ell_{2}/2}}{3-e^{ik\ell_{2}}}\cos\left(k\frac{2x_{2}-\ell_{2}}{2}\right)=\psi_{e_{2}}(\ell_{2}-x_{2}). \tag{60}\]
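As a sanity check, the closed forms (58) and (59) can be reproduced by direct numerical inversion away from the scarred wave numbers; the short Python sketch below uses the arbitrary choices \(\ell_{2}=1\) and \(k=1.7\).

```python
import numpy as np

# Consistency sketch for the open lasso: assemble the blocks of Eq. (54),
# solve the scattering problem by direct inversion, and compare with the
# closed forms (58)-(59). The length l2 = 1 and k = 1.7 are arbitrary.
l2 = 1.0
k = 1.7                       # away from the scarred values k_n = 2*pi*n/l2
ph = np.exp(1j * k * l2)

U_LL = np.array([[-1/3 + 0j]])
U_LB = np.array([[2/3, 2/3 + 0j]])
U_BL = ph * np.array([[2/3], [2/3 + 0j]])
U_BB = ph * np.array([[2/3, -1/3], [-1/3, 2/3 + 0j]])

rho = np.linalg.solve(np.eye(2) - U_BB, U_BL)      # Eq. (58)
sigma = (U_LL + U_LB @ rho)[0, 0]                  # Eq. (59)

assert np.allclose(rho, 2 * ph / (3 - ph) * np.ones((2, 1)))
assert np.isclose(sigma, (3 * ph - 1) / (3 - ph))
assert np.isclose(abs(sigma), 1.0)   # unitarity of the 1x1 scattering matrix
```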
For completeness, we give the full Green's function for this example below, where \(x_{e}\) (or \(x^{\prime}_{e^{\prime}}\)) are either on the lead (\(e=e_{1}\)) or on the loop (\(e=e_{2}\)). Following on from the last line in (40), one
obtains, using the expressions in (54) and (55),
\[G_{\text{lasso}}(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\frac{1}{2k_{ +}i}\times\\ \begin{cases}e^{ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e }+x_{e^{\prime}}^{\prime})}\frac{3e^{ik_{+}\ell_{2}}-1}{3-e^{ik_{+}\ell_{2}}}& \text{if $e=e_{1}$ and $e^{\prime}=e_{1}$},\\ \frac{2}{3-e^{ik_{+}\ell_{2}}}e^{ik_{+}x_{e_{1}}}\left(e^{ik_{+}x_{e_{2}}^{\prime }}+e^{-ik_{+}(x_{e_{2}}^{\prime}-\ell_{2})}\right)&\text{if $e=e_{1}$ and $e^{\prime}=e_{2}$},\\ \frac{2}{3-e^{ik_{+}\ell_{2}}}e^{ik_{+}x_{e_{1}}^{\prime}}\left(e^{-ik_{+}x_{e _{2}}}+e^{ik_{+}(x_{e_{2}}-\ell_{2})}\right)&\text{if $e=e_{2}$ and $e^{\prime}=e_{1}$},\\ e^{ik_{+}|x_{e_{2}}-x_{e_{2}}^{\prime}|}+\frac{2e^{ik_{+}\ell_{2}}}{(e^{ik_{+} \ell_{2}}-1)(e^{ik_{+}\ell_{2}}-3)}\left[(2-e^{ik_{+}\ell_{2}})\cos(k_{+}(x_{e _{2}}-x_{e_{2}}^{\prime}))\right.\\ \left.\quad-\cos(k_{+}(x_{e_{2}}+x_{e_{2}}^{\prime}-\ell_{2}))\right]&\text{if $e=e_{2}$ and $e^{\prime}=e_{2}$}.\end{cases} \tag{61}\]
### Scattering states for an open 3-star with one lead
Consider the open T-junction quantum graph as illustrated in Figure 3. We choose the three coordinates such that \(x_{n}=0\) for \(n=1,2,3\) at the central vertex \(v_{1}\) with \(x_{n}=\ell_{n}\) at vertices \(v_{n},n=2,3\). We enforce Kirchhoff-Neumann boundary conditions at the central vertex as expressed in (10) and Dirichlet boundary conditions at \(v_{2},v_{3}\), that is, \(\Sigma^{(v_{n})}=-1,n=2,3\), leading to the quantum map
\[\mathbf{U}(k)=\begin{pmatrix}-\frac{1}{3}&0&0&\frac{2}{3}&\frac{2}{3}\\ \hline\frac{2e^{ik\ell_{2}}}{3}&0&0&-\frac{e^{ik\ell_{2}}}{3}&\frac{2e^{ik\ell_{2}}}{3}\\ \frac{2e^{ik\ell_{3}}}{3}&0&0&\frac{2e^{ik\ell_{3}}}{3}&-\frac{e^{ik\ell_{3}}}{3}\\ 0&-e^{ik\ell_{2}}&0&0&0\\ 0&0&-e^{ik\ell_{3}}&0&0\end{pmatrix}\equiv\begin{pmatrix}\mathbf{U}_{\mathcal{L}\mathcal{L}}&\mathbf{U}_{\mathcal{L}\mathcal{B}}\\ \mathbf{U}(k)_{\mathcal{B}\mathcal{L}}&\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\end{pmatrix}. \tag{62}\]
Figure 3. A 3-star with one lead consists of a central vertex \(v_{1}\) with three edges \(e_{n},n=1,2,3\), attached. Here, \(e_{1}\) is a lead and the other two edges \(e_{2}\) and \(e_{3}\) are bonds of lengths \(\ell_{2}\) and \(\ell_{3}\) ending in vertices \(v_{2}\) and \(v_{3}\).
Computing the scattering matrix and the Green's function in the scattering approach requires inverting the matrix \(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\), which is given as
\[\begin{split}&\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{ \mathcal{B}\mathcal{B}}}=\\ &\frac{1}{D}\begin{pmatrix}3-e^{2ik\ell_{3}}&-2e^{ik(\ell_{2}+ \ell_{3})}&-(1+e^{2ik\ell_{3}})e^{ik\ell_{2}}&2e^{ik\ell_{2}}\\ -2e^{ik(\ell_{2}+\ell_{3})}&3-e^{2ik\ell_{2}}&2e^{ik\ell_{3}}&-(1+e^{2ik\ell_{ 2}})e^{ik\ell_{3}}\\ -(3-e^{2ik\ell_{3}})e^{ik\ell_{2}}&2e^{ik(2\ell_{2}+\ell_{3})}&3-e^{2ik\ell_{3} }&-2e^{2ik\ell_{2}}\\ 2e^{ik(\ell_{2}+2\ell_{3})}&-(3-e^{2ik\ell_{2}})e^{ik\ell_{3}}&-2e^{2ik\ell_{3 }}&3-e^{2ik\ell_{2}}\end{pmatrix}\end{split} \tag{63}\]
where
\[D=3-e^{2ik\ell_{2}}-e^{2ik\ell_{3}}-e^{2ik(\ell_{2}+\ell_{3})}. \tag{64}\]
Note that for \(e^{2ik\ell_{2}}=e^{2ik\ell_{3}}=1\), one has \(D=0\), making the inverse ill defined. This can only happen if the bond lengths are rationally related, giving rise to a set of bound states in the continuum that vanish on the lead and are sinusoidal waves along the two bonds with a node at the vertex \(v_{1}\). In either case, the scattering states are given by
\[\psi_{e_{1}}(x_{1})= e^{-ikx_{1}}+\boldsymbol{\sigma}(k)e^{ikx_{1}}, \tag{65a}\] \[\psi_{e_{2}}(x_{2})= \boldsymbol{\rho}(k)_{2_{+}1}e^{ik(x_{2}-\ell_{2})}+\boldsymbol{\rho}(k)_{2_{-}1}e^{-ikx_{2}}, \tag{65b}\] \[\psi_{e_{3}}(x_{3})= \boldsymbol{\rho}(k)_{3_{+}1}e^{ik(x_{3}-\ell_{3})}+\boldsymbol{\rho}(k)_{3_{-}1}e^{-ikx_{3}}, \tag{65c}\]
where
\[\boldsymbol{\rho}(k)=\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)_{\mathcal{B} \mathcal{B}}}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}=\frac{2}{D}\begin{pmatrix} e^{ik\ell_{2}}\left(1-e^{2ik\ell_{3}}\right)\\ e^{ik\ell_{3}}\left(1-e^{2ik\ell_{2}}\right)\\ -e^{2ik\ell_{2}}\left(1-e^{2ik\ell_{3}}\right)\\ -e^{2ik\ell_{3}}\left(1-e^{2ik\ell_{2}}\right)\end{pmatrix} \tag{66}\]
and
\[\boldsymbol{\sigma}(k)=\mathbf{U}_{\mathcal{L}\mathcal{L}}+\mathbf{U}_{ \mathcal{L}\mathcal{B}}\boldsymbol{\rho}(k)=\frac{D^{*}}{D}e^{2ik(\ell_{2}+ \ell_{3})}. \tag{67}\]
The scattering states are then given as
\[\psi_{e_{1}}(x_{1})= e^{-ikx_{1}}+\frac{D^{*}}{D}e^{ik(x_{1}+2\ell_{2}+2\ell_{3})}, \tag{68a}\] \[\psi_{e_{2}}(x_{2})= \frac{2(1-e^{2ik\ell_{2}})(1-e^{2ik\ell_{3}})}{D}\frac{\sin(k(\ell_{2}-x_{2}))}{\sin(k\ell_{2})}, \tag{68b}\] \[\psi_{e_{3}}(x_{3})= \frac{2(1-e^{2ik\ell_{2}})(1-e^{2ik\ell_{3}})}{D}\frac{\sin(k(\ell_{3}-x_{3}))}{\sin(k\ell_{3})}. \tag{68c}\]
The scattering matrix is continuous due to \(1+\boldsymbol{\sigma}(k)=\frac{2(1-e^{2ik\ell_{2}})(1-e^{2ik\ell_{3}})}{D}\). It is straightforward to check that the scattering states also behave well near \(e^{2ik\ell_{2}}=e^{2ik\ell_{3}}=1\). Given the above scattering matrix constructions, the Green's function can be derived analogously to the previous example from equation (40).
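The same numerical cross-check works for the 3-star; the sketch below uses arbitrarily chosen, irrationally related lengths so that no bound state interferes with the inversion.

```python
import numpy as np

# Analogous sketch for the 3-star of Figure 3: assemble the blocks of
# Eq. (62) and check sigma against Eq. (67). The lengths are arbitrary
# and irrationally related, so no bound state interferes.
l2, l3 = 1.0, np.sqrt(2)
k = 2.3
p2, p3 = np.exp(1j * k * l2), np.exp(1j * k * l3)

# directed-bond ordering (2+, 3+, 2-, 3-) as in Eq. (62)
U_LL = np.array([[-1/3 + 0j]])
U_LB = np.array([[0, 0, 2/3, 2/3 + 0j]])
U_BL = np.array([[2 * p2 / 3], [2 * p3 / 3], [0j], [0j]])
U_BB = np.array([
    [0, 0, -p2 / 3, 2 * p2 / 3],
    [0, 0, 2 * p3 / 3, -p3 / 3],
    [-p2, 0, 0, 0],
    [0, -p3, 0, 0],
])

rho = np.linalg.solve(np.eye(4) - U_BB, U_BL)
sigma = (U_LL + U_LB @ rho)[0, 0]

D = 3 - p2**2 - p3**2 - (p2 * p3)**2                      # Eq. (64)
assert np.isclose(sigma, np.conj(D) / D * (p2 * p3)**2)   # Eq. (67)
assert np.isclose(abs(sigma), 1.0)
```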
## 6. Conclusion
To conclude, we present a simple three-step procedure for generating the Green's function on both closed and open finite quantum graphs. The procedure exploits the standard scattering approach, wherein the infinite sum over trajectories between a given source point and receiver point on the graph involves the inverse of a block component of the matrix defining the graph's quantum map. Generically, this matrix is sub-unitary and its inverse is well defined. Using this scattering representation, a closed-form expression for the Green's function is given here for the first time.
We also discuss the possibility of perfect scars and bound states in the continuum for which the existing approaches (based on sums over trajectories) diverge. We show that our closed expressions can be regularized in these cases. This regularization scheme is important also on a practical level, as scattering matrices of generic quantum graphs with NK matching conditions which do not have any exact bound states still have resonances. These can be arbitrarily close to bound states and they can lead to large errors in numerical investigations if not treated with care.
We restricted ourselves here to the positive energy domain, mainly to keep the discussion concise and relevant; generalizations to the negative energy domain follow along the same ideas but require extra care, as scattering matrices are no longer unitary. Another relevant extension of our results would be to graphs which do not have a finite number of edges (such as infinite periodic quantum lattices).
**Acknowledgement**
SG would like to acknowledge support by the COST action CA18232. TL thanks EPSRC for supporting his PhD studies.
Appendix A Derivation of coefficients in the Green's function in terms of the resolvent matrix of the quantum map
For any given edge \(e\in\mathcal{E}\), we will denote its complement as
\[\mathcal{E}^{e}\equiv\mathcal{E}\setminus\{e\}. \tag{69}\]
Analogously, we write \(\mathcal{B}^{e}=\mathcal{B}\setminus\{e\}\) if \(e\in\mathcal{B}\) or \(\mathcal{L}^{e}=\mathcal{L}\setminus\{e\}\) if \(e\in\mathcal{L}\). For any given edge \(e\), we may now write the quantum map in block form (after appropriate reordering of the directed edges), that is,
\[\mathbf{U}=\begin{pmatrix}\mathbf{U}_{ee}&\mathbf{U}_{e\mathcal{B}^{e}}\\ \mathbf{U}_{\mathcal{B}^{e}e}&\mathbf{U}_{\mathcal{B}^{e}\mathcal{B}^{e}} \end{pmatrix}\, \tag{70}\]
where \(\mathbf{U}_{ee}\), \(\mathbf{U}_{e\mathcal{B}^{e}}\), \(\mathbf{U}_{\mathcal{B}^{e}e}\) and \(\mathbf{U}_{\mathcal{B}^{e}\mathcal{B}^{e}}\) are matrices of dimension \(2\times 2\), \(2\times 2(N_{\mathcal{B}}-1)\), \(2(N_{\mathcal{B}}-1)\times 2\) and \(2(N_{\mathcal{B}}-1)\times 2(N_{\mathcal{B}}-1)\), respectively. Eliminating the \(\mathbf{a}_{\mathcal{B}^{e}}^{\text{in}}\) components in (14), we can write the quantization condition with the help of the unitary \(2\times 2\) matrix \(\mathbf{U}(k)^{\text{red},e}\) defined as
\[\mathbf{U}^{\text{red},e}=\mathbf{U}_{ee}+\mathbf{U}_{e\mathcal{B}^{e}}\left( \mathbb{I}-\mathbf{U}_{\mathcal{B}^{e}\mathcal{B}^{e}}\right)^{-1}\mathbf{U}_ {\mathcal{B}^{e}e}. \tag{71}\]
We also define an alternative reduced secular function
\[\xi(k)^{\text{red},e}\equiv\det\left(\mathbb{I}-\mathbf{U}(k)^{\text{red},e} \right), \tag{72}\]
which is related to \(\xi(k)\) defined in (16) through the identity
\[\xi(k)=\xi(k)^{\text{red},e}\det\left(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}^{ e}\mathcal{B}^{e}}\right). \tag{73}\]
The relation above is obtained using the decomposition
\[\mathbb{I}-\mathbf{U}=\begin{pmatrix}\mathbb{I}-\mathbf{U}^{\text{red},\text{ e}}&-\mathbf{U}_{e\mathcal{B}^{e}}\left(\mathbb{I}-\mathbf{U}_{\mathcal{B}^{e} \mathcal{B}^{e}}\right)^{-1}\\ 0&\mathbb{I}\end{pmatrix}\begin{pmatrix}\mathbb{I}&0\\ -\mathbf{U}_{\mathcal{B}^{e}e}&\mathbb{I}-\mathbf{U}_{\mathcal{B}^{e}\mathcal{ B}^{e}}\end{pmatrix}. \tag{74}\]
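Since (73) follows from the Schur-complement decomposition (74), it is a purely algebraic identity and can be verified on any unitary matrix; the following sketch uses a random \(6\times 6\) unitary with a \(2\times 2\) "edge" block.

```python
import numpy as np
from scipy.stats import unitary_group

# The factorisation (73) follows from the Schur-complement decomposition
# (74), so it is a purely algebraic identity; this sketch checks it on a
# random 6x6 unitary, splitting off a 2x2 "edge" block as in (70).
U = unitary_group.rvs(6, random_state=0)
U_ee, U_eB = U[:2, :2], U[:2, 2:]
U_Be, U_BB = U[2:, :2], U[2:, 2:]

# reduced map (71) and reduced secular function (72)
U_red = U_ee + U_eB @ np.linalg.solve(np.eye(4) - U_BB, U_Be)
xi_red = np.linalg.det(np.eye(2) - U_red)

xi = np.linalg.det(np.eye(6) - U)
assert np.isclose(xi, xi_red * np.linalg.det(np.eye(4) - U_BB))   # Eq. (73)
```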
Note that the reduced quantum map \(\mathbf{U}^{\text{red},e}\) is related to the quantum scattering matrix \(\boldsymbol{\sigma}(k)\) introduced in Eq. (28) by
\[\mathbf{U}^{\text{red},\text{e}}=\begin{pmatrix}\mathbf{U}_{e^{+}e^{+}}^{\text{ red},\text{e}}&\mathbf{U}_{e^{+}e^{-}}^{\text{red},\text{e}}\\ \mathbf{U}_{e^{-}e^{+}}^{\text{red},\text{e}}&\mathbf{U}_{e^{-}e^{-}}^{\text{ red},\text{e}}\end{pmatrix}=e^{ik\ell_{e}}\begin{pmatrix}\sigma_{\text{TH}}&\sigma_{\text{TT}}\\ \sigma_{\text{HH}}&\sigma_{\text{HT}}\end{pmatrix}. \tag{75}\]
In order to obtain the second line in (32), we note that the denominator in these expressions can be written in terms of the reduced secular function of the compact graph, that is,
\[\left[(1-e^{ik\ell_{e^{\prime}}}\sigma_{\text{HT}})(1-e^{ik\ell_{e^{\prime}}} \sigma_{\text{TH}})-e^{2ik\ell_{e^{\prime}}}\sigma_{\text{HH}}\sigma_{\text{TT} }\right]=\xi(k)^{\text{red},e^{\prime}}\, \tag{76}\]
where we use the \(e^{\prime}\) notation as in Sec. 3.1.
By writing out the resolvent of the reduced \(2\times 2\) quantum map, that is,
\[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}^{\mathrm{red},e^{\prime}}}\equiv\begin{pmatrix}1-\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{+}e^{\prime}_{+}}&-\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{+}e^{\prime}_{-}}\\ -\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{-}e^{\prime}_{+}}&1-\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{-}e^{\prime}_{-}}\end{pmatrix}^{-1}=\frac{1}{\xi^{\mathrm{red},e^{\prime}}}\begin{pmatrix}1-\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{-}e^{\prime}_{-}}&\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{+}e^{\prime}_{-}}\\ \mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{-}e^{\prime}_{+}}&1-\mathbf{U}^{\mathrm{red},e^{\prime}}_{e^{\prime}_{+}e^{\prime}_{+}}\end{pmatrix}\;, \tag{77}\]
we can relate the terms in (32) to matrix elements of the inverse of the reduced quantum map, using again (75). The expressions as given in Eq. (32) are then obtained by additionally observing
\[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}^{\mathrm{red},e^{\prime}}}=\left[ \frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}}\right]_{e^{\prime}e^{\prime}}\,, \tag{78}\]
which follows, for example, from the decomposition (74).
## Appendix B Details on the pole contribution to the Green's function in compact graphs
In this appendix, we want to give a detailed derivation of equations (37) and (38) that define the pole contribution of the Green's function at an energy eigenvalue \(E_{n}=k_{n}^{2}\). With the orthogonal projector \(\mathbf{Q}=\mathbb{I}-\mathbf{P}\) let us start by writing
\[\frac{\mathbf{U}(k_{+})}{\mathbb{I}-\mathbf{U}(k_{+})}= -\mathbb{I}+\frac{1}{\chi(k_{+})}\mathbf{P}\] \[+\mathbf{P}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})} \mathbf{Q}+\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\mathbf{P }+\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\mathbf{Q} \tag{79}\]
where
\[\chi(k_{+})=\left(\mathbf{b}^{\mathrm{in}\,\dagger}\frac{\mathbb{I}}{\mathbb{ I}-\mathbf{U}(k_{+})}\mathbf{b}^{\mathrm{in}}\right)^{-1} \tag{80}\]
and we have used that \(\mathbf{P}=\mathbf{b}^{\mathrm{in}}\mathbf{b}^{\mathrm{in}\,\dagger}\) is a rank one projector. We will show that, as \(k_{+}\to k_{n}\), the only singular term in (79) is contained in \(\frac{1}{\chi(k_{+})}\mathbf{P}\). Writing
\[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\left(\mathbf{P}+\mathbf{Q}\right)\left(\mathbb{I}-\mathbf{U}(k_{+})\right)=\mathbb{I}\;, \tag{81}\]
and multiplying it from left and right with either \(\mathbf{P}\) or \(\mathbf{Q}\) results in four equations that may be solved for
\[\chi(k_{+})= \mathbf{b}^{\mathrm{in}\,\dagger}\left[\mathbb{I}-\mathbf{U}(k_{+})-\mathbf{U}(k_{+})\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{+})\mathbf{Q}}\mathbf{Q}\mathbf{U}(k_{+})\right]\mathbf{b}^{\mathrm{in}} \tag{82a}\] \[\mathbf{P}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\mathbf{Q}= \frac{1}{\chi(k_{+})}\mathbf{P}\mathbf{U}(k_{+})\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{+})\mathbf{Q}}\mathbf{Q} \tag{82b}\] \[\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\mathbf{P}= \frac{1}{\chi(k_{+})}\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{+})\mathbf{Q}}\mathbf{Q}\mathbf{U}(k_{+})\mathbf{P} \tag{82c}\] \[\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k_{+})}\mathbf{Q}= \mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{+})\mathbf{Q}}\mathbf{Q}+\frac{1}{\chi(k_{+})}\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{+})\mathbf{Q}}\mathbf{Q}\mathbf{U}(k_{+})\mathbf{P}\mathbf{U}(k_{+})\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{Q}\mathbf{U}(k_{+})\mathbf{Q}}\mathbf{Q} \tag{82d}\]
using standard properties of orthogonal projectors such as \(\mathbf{P}^{2}=\mathbf{P}\), \(\mathbf{Q}^{2}=\mathbf{Q}\), and \(\mathbf{P}\mathbf{Q}=\mathbf{Q}\mathbf{P}=0\). Now let us write \(k=k_{n}+\delta k\) and consider \(\delta k\to 0\) using the Taylor expansion
\[\mathbf{U}(k_{n}+\delta k)=\mathbf{U}(k_{n})+\frac{d\mathbf{U}}{dk}(k_{n})\ \delta k+O((\delta k)^{2}). \tag{83}\]
The derivative of the quantum map \(\mathbf{U}(k)\) can be computed explicitly. The latter depends on the wave number via the phases \(e^{ik\ell_{e}}\) on each edge \(e\), and in general also via an explicit \(k\) dependence of
the vertex scattering matrices. For the vertex scattering matrices of the form (9), one finds, using standard matrix algebra,
\[\frac{d}{dk}\boldsymbol{\Sigma}^{(v)}(k)=\frac{1}{2k}\left(\mathbb{I}-\boldsymbol {\Sigma}^{(v)}(k)^{2}\right). \tag{84}\]
Then the derivative of \(\mathbf{U}(k)=e^{ik\mathbf{L}}\mathbf{\Pi}\boldsymbol{\Sigma}\) gives
\[\frac{d\mathbf{U}}{dk}(k)=i\mathbf{L}\mathbf{U}(k)+\frac{1}{2k}\left[e^{ik \mathbf{L}}\mathbf{\Pi}-\mathbf{U}(k)e^{-ik\mathbf{L}}\mathbf{\Pi}\mathbf{U}(k )\right]. \tag{85}\]
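The derivative formula (85) can be checked against finite differences; the sketch below does this for a one-loop graph with a \(\delta\)-type vertex scattering matrix \(\boldsymbol{\Sigma}(k)=\frac{2}{d+i\lambda/k}\mathbf{J}-\mathbb{I}\), which satisfies (84); whether this family coincides exactly with the form (9) is an assumption of the sketch.

```python
import numpy as np

# Finite-difference sketch of the derivative formula (85) on a one-loop
# graph (a circle with a single vertex of degree d = 2). The delta-type
# vertex scattering matrix Sigma(k) = 2/(d + i*lam/k) J - I satisfies
# Eq. (84); that it coincides with the form (9) is an assumption here.
ell, lam = 1.0, 0.7
Pi = np.array([[0.0, 1.0], [1.0, 0.0]])     # direction reversal e+ <-> e-

def U(k):
    Sigma = 2 / (2 + 1j * lam / k) * np.ones((2, 2)) - np.eye(2)
    return np.exp(1j * k * ell) * Pi @ Sigma

k, h = 1.3, 1e-6
dU_fd = (U(k + h) - U(k - h)) / (2 * h)     # central difference

L = ell * np.eye(2)
E = np.exp(1j * k * ell) * np.eye(2)        # e^{ikL}
dU_85 = 1j * L @ U(k) + (E @ Pi - U(k) @ np.linalg.inv(E) @ Pi @ U(k)) / (2 * k)

assert np.allclose(dU_fd, dU_85, atol=1e-6)  # Eq. (85)
```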
At this stage we may identify that the constant \(C\) stated in (38) is just
\[C=\frac{1}{i}\mathbf{b}^{\mathrm{in}^{\dagger}}\ \frac{d\mathbf{U}}{dk}(k_{n}) \mathbf{b}^{\mathrm{in}}. \tag{86}\]
The expressions (83) and (85) have the following implications
\[\mathbf{P}\mathbf{U}(k_{n}+\delta k)\mathbf{Q}= O(\delta k) \tag{87a}\] \[\mathbf{Q}\mathbf{U}(k_{n}+\delta k)\mathbf{P}= O(\delta k) \tag{87b}\] \[\chi(k_{n}+\delta k)= -iC\delta k+O((\delta k)^{2}) \tag{87c}\]
such that \(\mathbf{P}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)}\mathbf{Q}\), \(\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)}\mathbf{P}\) and \(\mathbf{Q}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}(k)}\mathbf{Q}\) are not singular in the limit \(\delta k\to 0\) and we are left with the singular part
\[\frac{\mathbf{U}(k_{n}+\delta k)}{\mathbb{I}-\mathbf{U}(k_{n}+\delta k)}=\frac {1}{-iC\delta k}\mathbf{P}+O((\delta k)^{0}) \tag{88}\]
which is equivalent to Eq. (37), which we wanted to prove in this appendix.
## Appendix C Details of the derivation of the Green's function in open scattering graphs
In this appendix, we give details of how the Green's function (40) for an open scattering graph \(\mathcal{G}\) can be derived from the Green's function (33) of an auxiliary compact graph \(\mathcal{G}_{\mathrm{aux}}\) by sending the lengths of those edges turning into leads to infinity. The auxiliary graph \(\mathcal{G}_{\mathrm{aux}}\) is obtained from the open graph \(\mathcal{G}\) by replacing each lead by an edge of finite length with a vertex of degree one at its other end. For simplicity, we place Neumann-Kirchhoff conditions at the vertices of degree one; the final results will not depend on this choice. For the sake of this derivation, we will slightly abuse notation and continue to refer to 'leads' and 'bonds' of the auxiliary graph. Let us also introduce the \(N_{\mathcal{L}}\)-dimensional diagonal matrix \(\mathbf{L}_{\mathcal{L}}=\mathrm{diag}(\ell_{e}:e\in\mathcal{L})\) that contains the edge lengths of the leads. We start from the Green's function for the auxiliary graph (33). It contains four matrix elements of the matrix \(\mathbf{R}=\frac{\mathbf{U}^{\mathrm{aux}}}{\mathbb{I}-\mathbf{U}^{\mathrm{aux}}}\), where we denote the (\(2(N_{\mathcal{B}}+N_{\mathcal{L}})\)-dimensional) quantum map of the auxiliary graph by \(\mathbf{U}^{\mathrm{aux}}\) in order to distinguish it from the (\(2N_{\mathcal{B}}+N_{\mathcal{L}}\)-dimensional) quantum map \(\mathbf{U}\) of the open graph. We suppress the dependence on \(k_{+}\) here, as it can be reintroduced easily at the end of the calculation. The standard way to continue the calculation is to decompose the involved matrices into blocks that correspond to three sets of directed edges: directed bonds \(\mathcal{B}\), outgoing leads \(\mathcal{L}_{+}\) and incoming leads \(\mathcal{L}_{-}\). For the quantum map of the auxiliary graph, the structure of the graph then implies
\[\mathbf{U}^{\mathrm{aux}}=\begin{pmatrix}\mathbf{U}^{\mathrm{aux}}_{\mathcal{L}_{+}\mathcal{L}_{+}}&\mathbf{U}^{\mathrm{aux}}_{\mathcal{L}_{+}\mathcal{L}_{-}}&\mathbf{U}^{\mathrm{aux}}_{\mathcal{L}_{+}\mathcal{B}}\\ \mathbf{U}^{\mathrm{aux}}_{\mathcal{L}_{-}\mathcal{L}_{+}}&\mathbf{U}^{\mathrm{aux}}_{\mathcal{L}_{-}\mathcal{L}_{-}}&\mathbf{U}^{\mathrm{aux}}_{\mathcal{L}_{-}\mathcal{B}}\\ \mathbf{U}^{\mathrm{aux}}_{\mathcal{B}\mathcal{L}_{+}}&\mathbf{U}^{\mathrm{aux}}_{\mathcal{B}\mathcal{L}_{-}}&\mathbf{U}^{\mathrm{aux}}_{\mathcal{B}\mathcal{B}}\end{pmatrix}=\begin{pmatrix}0&\mathbf{T}_{\mathcal{L}}\mathbf{U}_{\mathcal{L}\mathcal{L}}&\mathbf{T}_{\mathcal{L}}\mathbf{U}_{\mathcal{L}\mathcal{B}}\\ \mathbf{T}_{\mathcal{L}}&0&0\\ 0&\mathbf{U}_{\mathcal{B}\mathcal{L}}&\mathbf{U}_{\mathcal{B}\mathcal{B}}\end{pmatrix} \tag{89}\]
where four blocks vanish due to the connectivity of the auxiliary graph, the other four blocks can been identified with corresponding blocks of the quantum map of the open graph and we introduced \(\mathbf{T}_{\mathcal{L}}\equiv e^{ik_{+}\mathbf{L}_{\mathcal{L}}}\), an \(N_{\mathcal{L}}\)-dimensional diagonal matrix that contains the auxiliary lengths of the leads
in the phase. Note that \(\mathbf{T}_{\mathcal{L}}\to 0\) as the auxiliary lengths are sent to infinity. Writing the identity \(\mathbf{U}^{\mathrm{aux}}=\mathbf{R}-\mathbf{U}^{\mathrm{aux}}\mathbf{R}\) in terms of its blocks, one may express the blocks of \(\mathbf{R}\) in the form
\[\mathbf{R}=\begin{pmatrix}\mathbf{R}_{\mathcal{L}_{+}\mathcal{L}_{+}}&\mathbf{R}_{\mathcal{L}_{+}\mathcal{L}_{-}}&\mathbf{R}_{\mathcal{L}_{+}\mathcal{B}}\\ \mathbf{R}_{\mathcal{L}_{-}\mathcal{L}_{+}}&\mathbf{R}_{\mathcal{L}_{-}\mathcal{L}_{-}}&\mathbf{R}_{\mathcal{L}_{-}\mathcal{B}}\\ \mathbf{R}_{\mathcal{B}\mathcal{L}_{+}}&\mathbf{R}_{\mathcal{B}\mathcal{L}_{-}}&\mathbf{R}_{\mathcal{B}\mathcal{B}}\end{pmatrix}=\begin{pmatrix}\mathbf{T}_{\mathcal{L}}\boldsymbol{\sigma}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}\mathbf{T}_{\mathcal{L}}&\mathbf{T}_{\mathcal{L}}\boldsymbol{\sigma}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}&\mathbf{T}_{\mathcal{L}}\frac{\mathbb{I}}{\mathbb{I}-\boldsymbol{\sigma}\mathbf{T}_{\mathcal{L}}^{2}}\boldsymbol{\rho}^{\mathrm{out}}\\ \frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}\mathbf{T}_{\mathcal{L}}&\frac{\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}&\mathbf{T}_{\mathcal{L}}^{2}\frac{\mathbb{I}}{\mathbb{I}-\boldsymbol{\sigma}\mathbf{T}_{\mathcal{L}}^{2}}\boldsymbol{\rho}^{\mathrm{out}}\\ \boldsymbol{\rho}^{\mathrm{in}}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}\mathbf{T}_{\mathcal{L}}&\boldsymbol{\rho}^{\mathrm{in}}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}&\frac{\mathbf{U}_{\mathcal{B}\mathcal{B}}}{\mathbb{I}-\mathbf{U}_{\mathcal{B}\mathcal{B}}}+\boldsymbol{\rho}^{\mathrm{in}}\mathbf{T}_{\mathcal{L}}^{2}\frac{\mathbb{I}}{\mathbb{I}-\boldsymbol{\sigma}\mathbf{T}_{\mathcal{L}}^{2}}\boldsymbol{\rho}^{\mathrm{out}}\end{pmatrix} \tag{90}\]
where \(\boldsymbol{\sigma}\equiv\mathbf{U}_{\mathcal{L}\mathcal{L}}+\mathbf{U}_{\mathcal{L}\mathcal{B}}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}_{\mathcal{B}\mathcal{B}}}\mathbf{U}_{\mathcal{B}\mathcal{L}}\) is the scattering matrix of the open graph, \(\boldsymbol{\rho}^{\mathrm{in}}=\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}_{\mathcal{B}\mathcal{B}}}\mathbf{U}_{\mathcal{B}\mathcal{L}}\) and \(\boldsymbol{\rho}^{\mathrm{out}}=\mathbf{U}_{\mathcal{L}\mathcal{B}}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{U}_{\mathcal{B}\mathcal{B}}}\).
To proceed one chooses two points \(\mathbf{x}=(x_{e},e)\) and \(\mathbf{x}^{\prime}=(x_{e^{\prime}},e^{\prime})\) on the auxiliary graph \(\mathcal{G}^{\mathrm{aux}}\) and expresses the Green's function (33) of \(\mathcal{G}^{\mathrm{aux}}\) in terms of appropriate matrix elements of \(\mathbf{R}\) and then performs the limit \(\mathbf{T}_{\mathcal{L}}\to 0\). Let us do this explicitly for \(e,e^{\prime}\in\mathcal{L}\) and write (33) for this case in the form
\[\begin{split} 2k_{+}i\ G^{\mathrm{aux}}(\mathbf{x},\mathbf{x}^{\prime},E_{+})=&\ \delta_{e,e^{\prime}}\,e^{ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e}-x_{e^{\prime}}^{\prime})}\left[\mathbf{T}_{\mathcal{L}}^{-1}\mathbf{R}_{\mathcal{L}_{+}\mathcal{L}_{+}}\mathbf{T}_{\mathcal{L}}\right]_{ee^{\prime}}+e^{-ik_{+}(x_{e}-x_{e^{\prime}}^{\prime})}\left[\mathbf{R}_{\mathcal{L}_{-}\mathcal{L}_{-}}\right]_{ee^{\prime}}\\ &+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})}\left[\mathbf{T}_{\mathcal{L}}^{-1}\mathbf{R}_{\mathcal{L}_{+}\mathcal{L}_{-}}\right]_{ee^{\prime}}+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})}\left[\mathbf{R}_{\mathcal{L}_{-}\mathcal{L}_{+}}\mathbf{T}_{\mathcal{L}}\right]_{ee^{\prime}}\\ =&\ \delta_{e,e^{\prime}}\,e^{ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e}-x_{e^{\prime}}^{\prime})}\left[\frac{\boldsymbol{\sigma}\mathbf{T}_{\mathcal{L}}^{2}}{\mathbb{I}-\boldsymbol{\sigma}\mathbf{T}_{\mathcal{L}}^{2}}\right]_{ee^{\prime}}+e^{-ik_{+}(x_{e}-x_{e^{\prime}}^{\prime})}\left[\frac{\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}\right]_{ee^{\prime}}\\ &+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})}\left[\boldsymbol{\sigma}\frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}\right]_{ee^{\prime}}+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})}\left[\frac{\mathbb{I}}{\mathbb{I}-\mathbf{T}_{\mathcal{L}}^{2}\boldsymbol{\sigma}}\mathbf{T}_{\mathcal{L}}^{2}\right]_{ee^{\prime}}\end{split} \tag{91}\]
where we may now send the edge lengths of the leads to infinity \(\mathbf{T}_{\mathcal{L}}\to 0\). This results in
\[2k_{+}i\ G(\mathbf{x},\mathbf{x}^{\prime},E_{+})=\delta_{e,e^{\prime}}\,e^{ ik_{+}|x_{e}-x_{e^{\prime}}^{\prime}|}+e^{ik_{+}(x_{e}+x_{e^{\prime}}^{\prime})} \boldsymbol{\sigma}_{ee^{\prime}} \tag{92}\]
which is equivalent to the given expression for the open Green's function (40) if both points are on the leads. The other cases can be derived in the same way. This calculation is equivalent to formally expanding the Green's function of the auxiliary graph as a sum over trajectories. Sending the lengths of the leads to infinity is equivalent to only summing over trajectories that never travel through any lead from one end to the other - summing just these trajectories then gives back (40).
Appendix D Regularity of the scattering matrix \(\boldsymbol{\sigma}\) at a bound state in the continuum
Following on from the discussion in Sec. 4.2, we show here that the singularity of the scattering matrix \(\boldsymbol{\sigma}(k)\) and the coupling matrix \(\boldsymbol{\rho}(k)\), Eqs. (21) and (23), in the presence of a perfect scar (described by the eigenvector \(\mathbf{b}_{0}\)) can be lifted and that the solution is regular across a whole \(k\) interval containing \(k_{0}\).
### Closed expressions for \(\mathbf{P}\boldsymbol{\rho}(k)\)
First, we decompose the internal graph amplitudes of a scattering solution (22), that is, \(\mathbf{a}(k)_{\mathcal{B}}^{\mathrm{in}}=\boldsymbol{\rho}(k)\mathbf{a}_{ \mathcal{L}}^{\mathrm{in}}\), into components parallel and orthogonal to \(\mathbf{b}_{0}\),
\[\mathbf{P}\,\mathbf{a}(k)_{\mathcal{B}}^{\mathrm{in}}+\mathbf{Q}\,\mathbf{a}(k)_{ \mathcal{B}}^{\mathrm{in}}=\left(\mathbf{P}\boldsymbol{\rho}(k)+\mathbf{Q} \boldsymbol{\rho}(k)\right)\mathbf{a}_{\mathcal{L}}^{\mathrm{in}}, \tag{93}\]
where the projection operator and its orthogonal component are defined in (46) and (47). Starting from Eq. (48), we write
\[\mathbf{P}\left(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}} \right)\left(\mathbf{P}+\mathbf{Q}\right)\mathbf{a}_{\mathcal{B}}^{\mathrm{in}} = \mathbf{P}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}\,\mathbf{a}_{\mathcal{L }}^{\mathrm{in}},\] \[\mathbf{Q}\left(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}} \right)\left(\mathbf{P}+\mathbf{Q}\right)\mathbf{a}_{\mathcal{B}}^{\mathrm{in}} = \mathbf{Q}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}\,\mathbf{a}_{\mathcal{L }}^{\mathrm{in}},\]
which yields
\[\left(\mathbf{b}^{\mathrm{in}\,\dagger}_{\mathcal{B}}\left(\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\right)\mathbf{b}^{\mathrm{in}}_{\mathcal{B}}\right)\cdot\mathbf{P}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{B}}-\mathbf{P}\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\mathbf{Q}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{B}} = \mathbf{P}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{L}}, \tag{94a}\] \[-\mathbf{Q}\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\mathbf{P}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{B}}+\mathbf{Y}_{Q}(k)\mathbf{Q}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{B}} = \mathbf{Q}\mathbf{U}(k)_{\mathcal{B}\mathcal{L}}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{L}}, \tag{94b}\]
where \(\mathbf{Y}_{Q}(k)\) has been defined in (50). We have defined \(\mathbf{Y}_{Q}(k)^{-1}\) in (51) as the inverse on the reduced space spanned by \(\mathbf{Q}\). Note that these definitions are here extended to wave numbers close to \(k_{0}\), while \(\mathbf{P}\) and \(\mathbf{Q}\) do not depend on \(k\). We used the general relation \(\mathbf{P}\mathbf{A}\mathbf{P}=\left(\mathbf{b}^{\mathrm{in}\,\dagger}_{\mathcal{B}}\mathbf{A}\mathbf{b}^{\mathrm{in}}_{\mathcal{B}}\right)\cdot\mathbf{P}\) for a square matrix \(\mathbf{A}\). After rearranging (94b) by multiplying with \(\mathbf{Y}_{Q}(k)^{-1}\) and replacing \(\mathbf{a}(k)^{\mathrm{in}}_{\mathcal{B}}\) by \(\boldsymbol{\rho}(k)\,\mathbf{a}^{\mathrm{in}}_{\mathcal{L}}\), we obtain
\[\mathbf{Q}\boldsymbol{\rho}(k)=\mathbf{Y}_{Q}(k)^{-1}\mathbf{U}(k)_{\mathcal{B }\mathcal{B}}\mathbf{P}\boldsymbol{\rho}(k)+\mathbf{Y}_{Q}(k)^{-1}\mathbf{U}(k )_{\mathcal{B}\mathcal{L}}\,. \tag{95}\]
Given that \(\mathbf{b}^{\mathrm{in}\,\dagger}_{\mathcal{B}}\left(\mathbb{I}-\mathbf{U}(k) _{\mathcal{B}\mathcal{B}}\right)\mathbf{b}^{\mathrm{in}}_{\mathcal{B}}\) in (94a) is a scalar and after replacing \(\mathbf{Q}\,\mathbf{a}^{\mathrm{in}}_{\mathcal{B}}\) by \(\mathbf{Q}\boldsymbol{\rho}(k)\,\mathbf{a}^{\mathrm{in}}_{\mathcal{L}}\) using (95), one obtains after some further manipulations
\[\mathbf{P}\boldsymbol{\rho}(k)=\mathbf{P}\frac{\mathbb{I}+\mathbf{U}(k)_{ \mathcal{B}\mathcal{B}}\mathbf{Y}_{Q}(k)^{-1}}{\mathbf{b}^{\mathrm{in}\,\dagger }_{\mathcal{B}}\left[\mathbb{I}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}-\mathbf{ U}(k)_{\mathcal{B}\mathcal{B}}\mathbf{Y}_{Q}(k)^{-1}\mathbf{U}(k)_{\mathcal{B} \mathcal{B}}\right]\mathbf{b}^{\mathrm{in}}_{\mathcal{B}}}\mathbf{U}(k)_{ \mathcal{B}\mathcal{L}}. \tag{96}\]
In order to analyse the scattering solutions in the vicinity of the bound state, we consider wave numbers \(k\) close to \(k_{0}\) in the limit \(\delta k\equiv k-k_{0}\to 0\) in the matrices \(\boldsymbol{\sigma}(k)\) and \(\boldsymbol{\rho}(k)\). By construction we have \(\mathbf{Y}_{Q}(k)\mathbf{b}^{\mathrm{in}}_{\mathcal{B}}=0\), and \(\mathbf{Y}_{Q}(k)^{-1}\) has been defined on the subspace spanned by the projector \(\mathbf{Q}\) in order to remove the pole at \(k_{0}\). For wave numbers \(k\) sufficiently close to \(k_{0}\), this definition remains well defined due to the (assumed) non-degeneracy of the bound state, as the matrix is then free of poles.
### Expansion of \(\mathbf{P}\boldsymbol{\rho}(k)\) around \(k=k_{0}\)
We will show in the following that, as \(k\to k_{0}\) in (96), the denominator \(\mathbf{b}^{\mathrm{in}\,\dagger}_{\mathcal{B}}\left[\mathbb{I}-\mathbf{U}(k )_{\mathcal{B}\mathcal{B}}-\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\mathbf{Y}_{Q }(k)^{-1}\mathbf{U}(k)_{\mathcal{B}\mathcal{B}}\right]\mathbf{b}^{\mathrm{in}}_ {\mathcal{B}}\) vanishes but so does the numerator. We will show this for vertex scattering matrices of the form (9) by performing a Taylor expansion of both expressions around \(k=k_{0}\). For this, we need to find explicit expressions for the derivative of the blocks of the quantum map \(\mathbf{U}(k)\). The calculation of these is similar to the one performed in B using Eq. (84). When this equation is applied here to the full quantum map \(\mathbf{U}\), one obtains
\[\frac{d}{dk}\mathbf{U}(k)=\begin{pmatrix}0&0\\ 0&i\mathbf{L}\end{pmatrix}\mathbf{U}(k)+\frac{1}{2k}\left[\begin{pmatrix} \mathbb{I}&0\\ 0&e^{ik\mathbf{L}}\mathbf{\Pi}\end{pmatrix}-\mathbf{U}(k)\begin{pmatrix} \mathbb{I}&0\\ 0&e^{-ik\mathbf{L}}\mathbf{\Pi}\end{pmatrix}\mathbf{U}(k)\right]\, \tag{97}\]
where \(\mathbf{L}\) and \(\exp(-ik\mathbf{L})\) are \(2N_{\mathcal{B}}\)-dimensional diagonal matrices with diagonal entries \(\ell_{e}\) and \(\exp(-ik\ell_{e})\), respectively. Setting \(k=k_{0}+\delta k\), we find the expansions
\[\begin{split}\mathbf{U}(k_{0}+\delta k)_{\mathcal{B}\mathcal{B}}=&\ \mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}+i\delta k\,\mathbf{L}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}\\ &+\frac{\delta k}{2k_{0}}\left(e^{ik_{0}\mathbf{L}}\mathbf{\Pi}-\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}e^{-ik_{0}\mathbf{L}}\mathbf{\Pi}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}\right)\\ &-\frac{\delta k}{2k_{0}}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}\mathbf{U}(k_{0})_{\mathcal{L}\mathcal{B}}+O((\delta k)^{2}),\end{split}\tag{98a}\]
\[\begin{split}\mathbf{U}(k_{0}+\delta k)_{\mathcal{B}\mathcal{L}}=&\ \mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}+i\delta k\,\mathbf{L}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}\\ &-\frac{\delta k}{2k_{0}}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}e^{-ik_{0}\mathbf{L}}\mathbf{\Pi}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}-\frac{\delta k}{2k_{0}}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}\mathbf{U}(k_{0})_{\mathcal{L}\mathcal{L}}+O((\delta k)^{2})\,.\end{split}\tag{98b}\]
As \({\bf b}^{\rm in}_{\cal B}\) is a normalized eigenvector of \({\bf U}(k_{0})_{{\cal B}{\cal B}}\) with eigenvalue one and as \({\bf U}(k_{0})_{{\cal L}{\cal B}}{\bf b}^{\rm in}_{\cal B}=0\), \({\bf b}^{\rm in\,\dagger}_{\cal B}{\bf U}(k_{0})_{{\cal B}{\cal L}}=0\) due to the unitarity of \({\bf U}(k_{0})\), one gets
\[{\bf b}^{\rm in\,\dagger}_{\cal B}{\bf U}(k_{0}+\delta k)_{{\cal B}{\cal B}}{ \bf b}^{\rm in}_{\cal B}=1+i\delta k{\bf b}^{\rm in\,\dagger}_{\cal B}\left({ \bf L}+\frac{\sin(k_{0}{\bf L})}{k_{0}}{\bf\Pi}\right){\bf b}^{\rm in}_{\cal B} +O((\delta k)^{2}) \tag{99}\]
and
\[{\bf b}^{\rm in\,\dagger}_{\cal B}{\bf U}(k_{0}+\delta k)_{{\cal B}{\cal B}}{\bf Y}_{Q}(k_{0}+\delta k)^{-1}{\bf U}(k_{0}+\delta k)_{{\cal B}{\cal B}}{\bf b}^{\rm in}_{\cal B}=O((\delta k)^{2}). \tag{100}\]
The last two equations together give
\[{\bf b}^{\rm in\,\dagger}_{\cal B}\left[\mathbb{I}-{\bf U}(k)_{{ \cal B}{\cal B}}-{\bf U}(k)_{{\cal B}{\cal B}}{\bf Y}_{Q}(k)^{-1}{\bf U}(k)_{{ \cal B}{\cal B}}\right]{\bf b}^{\rm in}_{\cal B}\\ =\ -i\delta k\ {\bf b}^{\rm in\,\dagger}_{\cal B}\left[{\bf L}+ \frac{\sin({\bf L}k_{0})}{k_{0}}{\bf\Pi}\right]{\bf b}^{\rm in}_{\cal B}\,+\, O((\delta k)^{2})\,. \tag{101}\]
Analogously one finds
\[{\bf P}{\bf U}(k_{0}+\delta k)_{{\cal B}{\cal L}}=i{\bf P}{\bf L}{\bf U}(k_{0} )_{{\cal B}{\cal L}}\delta k-{\bf P}\frac{\delta k}{2k_{0}}e^{-ik_{0}{\bf L}} {\bf\Pi}{\bf U}(k_{0})_{{\cal B}{\cal L}}+O((\delta k)^{2}) \tag{102}\]
and
\[{\bf P}{\bf U}(k_{0}+\delta k)_{{\cal B}{\cal B}}{\bf Q}=\\ \delta k\ {\bf P}\left[i{\bf L}{\bf U}(k_{0})_{{\cal B}{\cal B}}+ \frac{1}{2k_{0}}{\bf\Pi}\left(e^{ik_{0}{\bf L}}-e^{-ik_{0}{\bf L}}{\bf U}(k_{ 0})_{{\cal B}{\cal B}}\right)\right]{\bf Q}+O((\delta k)^{2}) \tag{103}\]
which together yield
\[{\bf P}(\mathbb{I}+{\bf U}(k)_{{\cal B}{\cal B}}{\bf Y}_{Q}(k)^{-1 }){\bf U}(k)_{{\cal B}{\cal L}}=\\ i\delta k\ {\bf P}\left[{\bf L}-\frac{1}{2k_{0}i}{\bf\Pi}e^{-ik_{0}{\bf L}} \right]{\bf U}(k_{0})_{{\cal B}{\cal L}}\\ +i\delta k\ {\bf P}\left[\left({\bf L}{\bf U}(k_{0})_{{\cal B}{\cal B }}+{\bf\Pi}\frac{e^{ik_{0}{\bf L}}-e^{-ik_{0}{\bf L}}{\bf U}(k_{0})_{{\cal B}{ \cal B}}}{2k_{0}i}\right){\bf Y}_{Q}(k_{0})^{-1}\right]{\bf U}(k_{0})_{{\cal B} {\cal L}}+O((\delta k)^{2}). \tag{104}\]
Finally, we show that the term \({\bf b}^{\rm in\,\dagger}_{\cal B}\left({\bf L}+\frac{1}{k_{0}}\sin(k_{0}{\bf L}){\bf\Pi}\right){\bf b}^{\rm in}_{\cal B}\) in (101) does not vanish. This is essential for the limit \(\lim_{\delta k\to 0}{\bf P}\boldsymbol{\rho}(k_{0}+\delta k)\) to be well defined (and finite). Indeed, one has
\[{\bf b}^{\rm in\,\dagger}_{\cal B}\left({\bf L}+\frac{\sin(k_{0}{\bf L})}{k_{0}}{\bf\Pi}\right){\bf b}^{\rm in}_{\cal B}=\sum_{e\in{\cal B}}\left[\ell_{e}(|b_{e_{+}}|^{2}+|b_{e_{-}}|^{2})+\frac{\sin(k_{0}\ell_{e})}{k_{0}}\left(b^{*}_{e_{+}}b_{e_{-}}+b^{*}_{e_{-}}b_{e_{+}}\right)\right] \tag{105}\]
which is a sum of positive terms since (for \(k_{0}>0\))
\[\left|\frac{\sin(k_{0}\ell_{e})}{k_{0}\ell_{e}}\left(b^{*}_{e_{+}}b_{e_{-}}+b^{*}_{e_{-}}b_{e_{+}}\right)\right|\ <\ \left|b^{*}_{e_{+}}b_{e_{-}}+b^{*}_{e_{-}}b_{e_{+}}\right|\ \leq\ |b_{e_{+}}|^{2}+|b_{e_{-}}|^{2}\]
by the Cauchy-Schwarz inequality.
This means that the limit \({\bf P}\mathbf{\rho}(k_{0})\equiv\lim_{\delta k\to 0}{\bf P}\mathbf{\rho}(k_{0}+\delta k)\) is well defined and we obtain to leading order
\[{\bf P}\boldsymbol{\rho}(k_{0})=\frac{{\bf P}\left[\frac{1}{2i}{\bf\Pi}e^{-ik_{0}{\bf L}}-k_{0}{\bf L}-\left(k_{0}{\bf L}{\bf U}_{{\cal B}{\cal B}}+{\bf\Pi}\frac{e^{ik_{0}{\bf L}}-e^{-ik_{0}{\bf L}}{\bf U}_{{\cal B}{\cal B}}}{2i}\right){\bf Y}_{Q}^{-1}\right]}{{\bf b}^{\rm in\,\dagger}_{\cal B}\left[k_{0}{\bf L}+\sin(k_{0}{\bf L}){\bf\Pi}\right]{\bf b}^{\rm in}_{\cal B}}{\bf U}(k_{0})_{{\cal B}{\cal L}}. \tag{106}\]
For quantum graphs with vertex matching conditions leading to vertex scattering matrices not depending on the wave number (such as Neumann-Kirchhoff boundary conditions), this simplifies further to
\[\mathbf{P}\boldsymbol{\rho}(k_{0})=-\mathbf{P}\mathbf{L}\ \frac{\mathbb{I}+\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{B}}\mathbf{Y}_{Q}(k_{0})^{-1}}{\mathbf{b}_{\mathcal{B}}^{\mathrm{in}\dagger}\mathbf{L}\ \mathbf{b}_{\mathcal{B}}^{\mathrm{in}}}\mathbf{U}(k_{0})_{\mathcal{B}\mathcal{L}}. \tag{107}\]
Likewise, it can be shown that \(\mathbf{Q}\boldsymbol{\rho}\) in (95) and the scattering matrix in (24) are also well defined in an interval containing \(k_{0}\). In the limit \(k\to k_{0}\), we obtain for the latter the result (53) as expected.
In this regularization, we have explicitly used Eq. (84), which is valid precisely for scattering matrices that come from a self-adjoint matching condition. So one may wonder whether it remains valid for the large number of physical quantum graph models that define the quantum graph in terms of arbitrary prescribed scattering matrices (as, for instance, in [17]). In most of these physical cases, the scattering matrices are assumed to be constant with respect to \(k\), which implies that the right-hand side of Eq. (84) vanishes. It is easy to see that this leads to some simplifications in the preceding formulas and to a well-defined regularized scattering matrix. If one prescribes scattering matrices with some dependency on the wave number, then the regularity of the scattering matrices in the presence of bound states cannot be guaranteed in general. However, if the scattering matrix is an effective description derived from a more detailed self-adjoint system (whether that is a graph or a different type of model), then there exists a well-defined scattering matrix both physically and mathematically, essentially because the spectral decomposition of self-adjoint operators is always based on orthogonal projections, such that scattering states are always orthogonal to bound states. Showing the regularity in this case will require an analogous projection method but will generally require its own analysis. Conversely, a non-regular scattering matrix may be an indicator that a model is not physical in all respects (which does not necessarily mean that the model is bad, as long as its limitations are known).
Our assumption that the perfect scar is non-degenerate may also be lifted but leads to more cumbersome calculations: if the perfect scars do not overlap, one may regularise by first regularizing the scattering matrices of the corresponding non-overlapping subgraphs and then build up the full scattering matrix from there. Otherwise, the rank-one projector \(\mathbf{P}\) needs to be replaced by higher-rank projectors.
|
2309.08700 | Wasserstein Distributionally Robust Control Barrier Function using
Conditional Value-at-Risk with Differentiable Convex Programming | Control Barrier functions (CBFs) have attracted extensive attention for
designing safe controllers for their deployment in real-world safety-critical
systems. However, the perception of the surrounding environment is often
subject to stochasticity and further distributional shift from the nominal one.
In this paper, we present distributional robust CBF (DR-CBF) to achieve
resilience under distributional shift while keeping the advantages of CBF, such
as computational efficacy and forward invariance.
To achieve this goal, we first propose a single-level convex reformulation to
estimate the conditional value at risk (CVaR) of the safety constraints under
distributional shift measured by a Wasserstein metric, which is by nature
tri-level programming. Moreover, to construct a control barrier condition to
enforce the forward invariance of the CVaR, the technique of differentiable
convex programming is applied to enable differentiation through the
optimization layer of CVaR estimation. We also provide an approximate variant
of DR-CBF for higher-order systems. Simulation results are presented to
validate the chance-constrained safety guarantee under the distributional shift
in both first and second-order systems. | Alaa Eddine Chriat, Chuangchuang Sun | 2023-09-15T18:45:09Z | http://arxiv.org/abs/2309.08700v1 | Wasserstein Distributionally Robust Control Barrier Function using Conditional Value-at-Risk with Differentiable Convex Programming
###### Abstract
**Control Barrier Functions (CBFs) have attracted extensive attention for designing safe controllers for their deployment in real-world safety-critical systems. However, the perception of the surrounding environment is often subject to stochasticity and further distributional shift from the nominal one. In this paper, we present the distributionally robust CBF (DR-CBF) to achieve resilience under distributional shift while keeping the advantages of CBFs, such as computational efficiency and forward invariance.**
**To achieve this goal, we first propose a single-level convex reformulation to estimate the conditional value at risk (CVaR) of the safety constraints under distributional shift measured by a Wasserstein metric, which is by nature tri-level programming. Moreover, to construct a control barrier condition to enforce the forward invariance of the CVaR, the technique of differentiable convex programming is applied to enable differentiation through the optimization layer of CVaR estimation. We also provide an approximate variant of DR-CBF for higher-order systems. Simulation results are presented to validate the chance-constrained safety guarantee under the distributional shift in both first and second-order systems.**
## I Introduction
Autonomous systems are nowadays ubiquitous, from daily-life assistance with household chores and industrial production to space exploration, and they have significantly changed human society. For example, in 2020 there were 276 million motor vehicles registered in the U.S. [1], and the global market for industrial robots was estimated at around $55 billion and is projected to surpass $165 billion in 2028 [2]. However, there can often be perturbations, noise, or malicious attacks during sensing and perception, communication, and actuation. This issue is further exacerbated in unstructured and dynamic environments, such as off-road vehicles, multi-domain operations, and space exploration.
Consider a challenging rescue mission with a robot system after an earthquake. Due to the damage to the cyber-physical infrastructure, communication with the agent can be very limited or corrupted. Also, the mission can involve many unanticipated scenarios, such as road closures. Moreover, in future battlefields, military forces will be deployed in complex environments, including multi-domain operations (MDO) against adversaries that combine the traditional domains (e.g., air, maritime) with the information and electromagnetic domains. In MDO, the goal is to achieve superiority by "connecting distributed sensors, shooters, and data from all domains to joint forces, enabling the coordinated exercise of authority to integrate planning and synchronize convergence in time, space, and purpose" ([3], page 6). In such operations, pervasive uncertainty and perturbations from the environment, intentional, stealthy, or deceptive adversarial attacks from opponents, and cyber-physical dysfunction can all possibly lead to mission failures.
In aerial flight and space exploration, such scenarios are also common. A quadcopter deployed in the open world for wildfire monitoring and disaster relief can often encounter unexpected gusts. Moreover, the CADRE (Cooperative Autonomous Distributed Robotic Exploration) program from NASA JPL aims to achieve collaborative, autonomous exploration and formation sensing with an integrative pipeline of sensing, perception, communication, computing, and decision-making. Small rovers will be deployed to explore the lunar surface, share data, and cooperatively make decisions to eventually accomplish tasks such as generating a digital elevation map. Moreover, the first Mars helicopter, Ingenuity, was launched with the Mars rover Perseverance for more effective Mars exploration. Intuitively, conditions are much more restrictive on the Moon and Mars regarding atmospheric data, communication quality, the toughness of the terrain, and eventually the degradation of all equipment (e.g., perception sensors) on board without maintenance. As a result, the noise and uncertainty from sensing and perception, inter-vehicle communication, planning
and control, and actuation will bring significant challenges to collaborative decision-making and coordination. In other words, while a prior distribution of those quantities of interest is available, the actual distribution can shift. In summary, decision-making for autonomous systems with rigorous robustness and resilience is highly desired in many applications, especially those with humans in the loop. To this end, our overarching goal is to develop a distributed approach for distributionally robust control and decision-making for autonomous systems with applications in space exploration.
Due to the importance of decision-making under distributional shift, there exist many works on distributionally robust control for a single agent in the framework of model predictive control (MPC), mostly also using chance constraints such as the conditional value-at-risk [4, 5, 6, 7, 8]. There are also works based on approximate dynamic programming to achieve distributional robustness [9]. However, those methods often require solving a complex optimization problem, possibly in minimax form, making them inapplicable for efficient online control. Comparatively, control barrier functions (CBFs) [10] have attracted much attention, with diverse variants in different settings. One of the reasons is their computational efficiency, which only requires successively solving a convex quadratic program for each time step for general control-affine systems. Moreover, the forward invariance of safety satisfaction is also guaranteed via the control barrier condition. However, exactly combining chance-constrained distributional optimization and control barrier functions is not straightforward, and we aim to bridge the gaps.
Consider a safety constraint \(h(x,w)\leq 0\), where \(x\) is the state and \(w\) is the noise subject to distributional shift. The chance-constrained safety specification, \(\text{CVaR}\circ h(x,w)\), is often estimated via the conditional value at risk (CVaR) by solving optimization problems. Here "\(\circ\)" denotes function composition. This raises the first issue: estimating the CVaR under distributional shift is nontrivial; it is by nature a tri-level problem, comprising the CVaR optimization and the primal-dual optimization arising from the distributional constraint. To address this issue, we present an approximate approach that keeps tractability while avoiding over-conservatism, which eventually leads to a single-level convex program. The second issue is the forward invariance and satisfaction of \(\text{CVaR}\circ h(x,w)\) via control barrier functions. Naturally, the control barrier condition (CBC) that enforces the forward invariance of the safety constraint under distributional shift is of the form \(\text{CBC}\circ\text{CVaR}\circ h(x,w)\). It is known that the CBC needs the differentiation of its argument, which in this case is \(\text{CVaR}\circ h(x,w)\). However, with \(\text{CVaR}(\bullet)\) as an optimization layer, it is not immediately clear how to differentiate through it to construct the CBC. To circumvent this issue, the work in [11] estimates the conditional value at risk of the control barrier condition, instead of the CVaR estimate of the original constraint \(h(x,w)\). That is to say, a relaxed criterion is imposed by enforcing the chance-constrained control barrier condition (i.e., \(\text{CVaR}_{\alpha}\circ\text{CBC}\circ h(x,w)\)), instead of enforcing the forward invariance of the real chance-constrained safety constraint (i.e., \(\text{CBC}\circ\text{CVaR}_{\alpha}\circ h(x,w)\)). As a result, \(\text{CVaR}_{\alpha}\circ\text{CBC}\circ h(x,w)\) only needs the differentiation of \(h(x,w)\) itself, which is much easier compared to differentiating the optimization layer \(\text{CVaR}_{\alpha}\circ h(x,w)\) with respect to \(x\).
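To make the composition concrete, the following minimal Python sketch estimates the CVaR of sampled constraint values via the Rockafellar-Uryasev program; the sign and risk-level conventions here are illustrative and may differ from those adopted later, and distributional shift is not yet included.

```python
import numpy as np

# Minimal sample-based sketch of the Rockafellar-Uryasev program
# CVaR_a(Z) = min_t  t + E[(Z - t)_+] / (1 - a), with Z standing in for
# h(x, w) evaluated on N noise samples. No distributional shift is
# included yet, and the sign/risk-level conventions are illustrative.
def cvar(z, alpha):
    z = np.sort(z)
    t = z[int(np.ceil(alpha * len(z))) - 1]          # empirical VaR (optimal t)
    return t + np.maximum(z - t, 0.0).mean() / (1.0 - alpha)

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)        # stand-in for samples of h(x, w)
print(cvar(z, 0.95))                # approx. 2.06 for a standard normal
```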
### Contributions
To enable distributional robustness while keeping the advantages of the control barrier function, we make the following contributions to bridge the aforementioned gaps with distributionally robust control barrier functions.
* We present a formulation to simplify the tri-level optimization problem for estimating the CVaR into a single-level convex program, and then use differentiable convex programming to enable differentiation through the optimization layer \(\text{CVaR}\circ h(x,w)\).
* We construct the control barrier function to enforce the forward invariance of the chance-constrained safety constraint (i.e., \(\text{CVaR}\circ h(x,w)\)). As a result, it becomes possible to enforce \(\text{CBC}\circ\text{CVaR}_{\alpha}\circ h(x,w)\), which captures the essence of the problem and eventually guarantees the forward invariance and the satisfaction of the safety specification. We also provide an approximate method for higher-order systems.
* We present simulation results where the strength of the proposed distributionally robust control barrier function (i.e., remaining safe under distributional shift) in stochastic environments is demonstrated compared to the vanilla CBFs.
### Related Works
There are extensive existing works [12, 13] studying robust learning and control under perturbations, often formulated via \(l_{p}\) balls as \(\|x-x_{0}\|_{p}\leq\epsilon\), with \(x\) and \(x_{0}\) as the quantity of interest and its nominal value, respectively, and \(\epsilon\) as the radius of the \(l_{p}\) ball. More specifically, perturbations in observations [14], actions [15], and models [16], as well as those in the context of safe reinforcement learning (RL) [17, 18], have been well studied. In multi-agent RL, model uncertainty [19], adversary agents [19, 20], and beyond have been considered to achieve robustness. Correspondingly, the
distributional shift is defined as \(x\sim p(x),\ d(p(x),p_{0}(x))\leq\epsilon\), where \(p(x)\) and \(p_{0}(x)\) are the real and nominal probability distributions of \(x\), with \(d(\bullet,\bullet)\) a distance measure between two probability distributions and \(\epsilon\) the threshold. Compared to the \(l_{p}\) ball perturbation, the distributional shift admits a much larger space in which to find the worst adversarial behaviors, which the agent is expected to mitigate in a principled way. For distributionally robust single-agent learning and control under distributional shift, model-based approaches [4] such as approximate dynamic programming [5, 9] and model predictive control [6, 7, 8] have been proposed, with a chance-constrained criterion under a Wasserstein metric. Specifically, in the framework of model predictive control, many existing works on distributionally robust control use the conditional value-at-risk [4, 5, 6, 7, 8]. In the model-free regime, one line of work is to generate environments/tasks with distributional shifts for policy training to achieve robustness [21, 22, 23]. Moreover, to balance worst-case (robustness) and average performance, [24] trains policies over task groups by adding regularization to the worst possible outcomes. Additionally, in the presence of adversaries, a minimax/bi-level optimization formulation [25] is often employed for worst-case robustness, which is challenging to solve.
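For intuition on the ambiguity set \(d(p(x),p_{0}(x))\leq\epsilon\), the 1-Wasserstein distance between scalar empirical samples can be computed with off-the-shelf tools; the distributions and threshold in the sketch below are illustrative choices.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Sketch of the ambiguity-set membership check d(p, p0) <= eps for scalar
# samples, using SciPy's empirical 1-Wasserstein distance. The nominal and
# shifted distributions and the threshold eps are illustrative choices.
rng = np.random.default_rng(0)
w_nominal = rng.normal(0.0, 1.0, size=5000)   # nominal noise model p0
w_shifted = rng.normal(0.3, 1.2, size=5000)   # shifted distribution p

eps = 0.5
d = wasserstein_distance(w_nominal, w_shifted)
print(d, d <= eps)   # roughly 0.3-0.4 here, i.e., inside the eps-ball
```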
### Organization
This paper is organized as follows. Section I introduces the background and significance of the research, highlighting the motivation behind the study and its relevance. In Section II, we revisit fundamental concepts and definitions of control barrier functions, differentiable convex programming, and distributionally robust optimization. In Section III, we first develop the optimization problem to estimate the CVaR under distributional shift and calculate the relevant gradients; we then construct the control barrier function of the CVaR for first-order systems and provide an approximate method for higher-order systems. In Section IV, we present simulation results for first-order and second-order systems highlighting the advantage of the Distributionally Robust Control Barrier Function (DR-CBF). Finally, Section V summarizes our research and discusses some future ideas that warrant exploration.
## II Preliminary
### Control Barrier Function
Control Barrier Functions (CBFs) are mathematical tools used to ensure the safety and stability of dynamic systems. A CBF maps the current state of the system to a value that measures how far the system is from violating the desired safety constraint. The control law is then designed to enforce the CBF so that the system remains within the safe region. The control law is typically designed using a Lyapunov-based approach, where a Lyapunov function is chosen to drive the system toward the desired behavior and the problem is solved in the form of a Quadratic Program (QP), while the barrier function is incorporated into the QP to guarantee the desired safety specifications. Mathematically, consider the nonlinear control-affine system:
\[\dot{x}(t)=f(x(t))+g(x(t))u(t) \tag{1}\]
where \(f\) and \(g\) are globally Lipschitz, \(x\in\mathbb{R}^{n}\) and \(u\in\mathbb{R}^{m}\) are the states and control inputs, respectively, constrained in closed sets, with initial condition \(x(t_{0})=x_{0}\).
**Definition 1**: _[_10_]__\(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a barrier function for the set \(C=\{x\in\mathbb{R}^{n}:h(x)\geqslant 0\}\) if \(\exists\)\(\mathcal{K}\) function \(\alpha(\bullet)\) such that:_
\[\begin{split}\sup_{u\in U}\left[L_{f}h(x)+L_{g}h(x)u+\alpha(h(x)) \right]\geqslant 0\\ \inf_{int(C)}\left[\alpha(h(x))\right]\geqslant 0\quad\text{ and } \quad\lim_{\partial C}\alpha(h(x))=0\end{split} \tag{2}\]
Because not all systems are first-order in the inputs, higher-order control barrier functions can be used to constrain higher-order systems.
**Definition 2**: _[_26_]__For the nonlinear system (1) with an \(m\)-times differentiable function \(h(x)\) as a constraint, we define a sequence of functions \(\psi_{i}\) with \(i\in\{1,2,...,m\}\), starting from \(\psi_{0}=h(x)\):_
\[\psi_{i}(x,t)=\dot{\psi}_{i-1}(x,t)+\alpha_{i}\left(\psi_{i-1}(x,t)\right) \tag{3}\]
The function \(h(x)\) is a high-order control barrier function if there exist \(\mathcal{K}\) functions \(\alpha_{i}(\bullet)\) such that:_
\[\psi_{m}(x,t)\geqslant 0 \tag{4}\]
By solving the following Quadratic Program, we can leverage the CBF to enforce safety constraints, maintain stability, and prevent undesirable behavior:
\[\begin{split}\min_{u\in[\underline{u},\bar{u}]}& J(x,u)\\ \text{s.t.}&\frac{\partial h(x)}{\partial x}(f(x)+g( x)u)+\kappa(h(x))\geq 0\end{split} \tag{5}\]
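As a concrete illustration of the QP (5), the following is a minimal Python sketch of a single CBF-QP control step using cvxpy. The dynamics callables, the toy safe set, and all gains are illustrative placeholders, not values from this paper.

```python
import cvxpy as cp
import numpy as np

def cbf_qp_step(x, u_ref, f, g, h, dh_dx, kappa=1.0, u_max=1.0):
    """Solve the CBF-QP (5): stay close to a reference input u_ref while
    enforcing dh/dx (f(x) + g(x) u) + kappa * h(x) >= 0 for the safe set
    {x : h(x) >= 0}. f, g, h, dh_dx are user-supplied callables."""
    u = cp.Variable(len(u_ref))
    cbc = dh_dx(x) @ (f(x) + g(x) @ u) + kappa * h(x)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)),
                      [cbc >= 0, cp.abs(u) <= u_max])
    prob.solve()
    return u.value

# Toy single integrator xdot = u with safe set h(x) = 1 - ||x||^2 >= 0
x0 = np.array([0.5, 0.0])
print(cbf_qp_step(x0, u_ref=np.array([1.0, 0.0]),
                  f=lambda x: np.zeros(2), g=lambda x: np.eye(2),
                  h=lambda x: 1.0 - x @ x, dh_dx=lambda x: -2.0 * x))
```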
### Differentiable Convex Programming
Differentiable convex programming is a powerful technique that enables the computation of gradients of the solution of an optimization problem with respect to its parameters. This is achieved by applying matrix differentiation to the Karush-Kuhn-Tucker (KKT) conditions. A notable example is OptNet [27], which incorporates differentiable optimization problems within the architecture of neural networks. During training, the gradients of the objective function are computed and back-propagated through the network. In a broader sense, this methodology can be applied to differentiate through disciplined convex programs [28] by initially mapping them into cone programs [29], computing the gradients, and subsequently mapping back to the original problem. A common application of differentiable programming is learning the constraints of an optimization problem, such as convex polytopes or ellipsoid projections. The major advantage of differentiable optimization methods such as OptNet lies in their ability to optimize a wide range of challenging convex objectives that are typically difficult to handle using traditional optimization approaches. It is worth noting that the convex quadratic program (QP) in equation (5) can be differentiated through the KKT conditions [27], which serve as equivalent conditions for global optimality. According to the KKT conditions, at the optimal solution, the gradient of the Lagrangian function with respect to the primal variable must be zero. Consequently, by taking the partial derivative of the Lagrangian function with respect to the solution and extending it through the chain rule to the program's parameters, their gradients can be obtained. We have integrated differentiable optimization using the cvxpylayers package*, an extension of the cvxpy package based on an affine-solver-affine (ASA) approach. The ASA approach maps the optimization problem's objective and constraints to a cone program. For a generalized QP:
Footnote *: [https://github.com/cvxgrp/cvxpylayers](https://github.com/cvxgrp/cvxpylayers)
\[\begin{split}\min_{z}&\frac{1}{2}z^{T}Qz+q^{T}z\\ \text{s.t.}& Az=b\\ & Gz\leq h,\end{split} \tag{6}\]
we can write the Lagrangian of the problem as:
\[L(z,\nu,\lambda)=\frac{1}{2}z^{T}Qz+q^{T}z+\nu^{T}(Az-b)+\lambda^{T}(Gz-h) \tag{7}\]
where \(\nu\) are the dual variables on the equality constraints and \(\lambda\geq 0\) are the dual variables on the inequality constraints. The KKT conditions for stationarity, primal feasibility, and complementary slackness read:
\[\begin{split} Qz^{\star}+q+A^{T}\nu^{\star}+G^{T}\lambda^{\star} &=0\\ Az^{\star}-b&=0\\ D\left(\lambda^{\star}\right)\left(Gz^{\star}-h\right)& =0\end{split} \tag{8}\]
By differentiating these conditions, we can form the Jacobian of the problem as follows.
\[\left[\begin{array}{c}d_{z}\\ d_{\lambda}\\ d_{\nu}\end{array}\right]=-\left[\begin{array}{ccc}Q&G^{T}D\left(\lambda^{ \star}\right)&A^{T}\\ G&D\left(Gz^{\star}-h\right)&0\\ A&0&0\end{array}\right]^{-1}\left[\begin{array}{c}\left(\frac{\partial\ell}{ \partial z^{\star}}\right)^{T}\\ 0\\ 0\end{array}\right] \tag{9}\]
Furthermore, via the chain rule, we can obtain the derivatives of any loss function of interest with respect to any of the parameters of the QP.
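To make these mechanics concrete, here is a minimal sketch of differentiating through an instance of (6) (with \(Q=I\) and no equality constraints) using the cvxpylayers package mentioned above; the problem data are toy values chosen for illustration.

```python
import numpy as np
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Toy instance of (6) with Q = I: min 0.5||z||^2 + q'z  s.t.  G z <= h
n = 2
G = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
z = cp.Variable(n)
q = cp.Parameter(n)
h = cp.Parameter(3)
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(z) + q @ z),
                     [G @ z <= h])
layer = CvxpyLayer(problem, parameters=[q, h], variables=[z])

q_t = torch.tensor([1.0, -2.0], requires_grad=True)
h_t = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
z_star, = layer(q_t, h_t)    # forward pass: solve the QP
z_star.sum().backward()      # backward pass: differentiate the KKT system (8)-(9)
print(q_t.grad, h_t.grad)    # gradients of sum(z*) w.r.t. q and h
```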
### Distributionally Robust Optimization and Conditional Value at Risk
Distributionally Robust Optimization (DRO) is an approach to optimization under uncertainty that aims to find solutions that are robust against a wide range of possible probability distributions. Unlike traditional optimization methods that assume a known probability distribution for the uncertain parameters, DRO takes a more cautious approach by considering a set of possible distributions and optimizing for the worst-case scenario within that set. In DRO, the uncertain parameters are typically modeled as random variables with unknown distributions. The goal is to find a solution that performs well across all possible distributions within a certain ambiguity set. The ambiguity set represents the range of possible distributions and is defined based on certain metrics, such as statistical moments or the Wasserstein metric. The key idea in DRO is to find a solution that minimizes the expected cost, or maximizes the expected reward, under the worst-case distribution within the ambiguity set. This approach provides a robust solution that performs well regardless of the actual distribution of the uncertain parameters. DRO has applications in various domains, including operations research, finance, and machine learning. It can be used to optimize decisions in settings with uncertain data, such as supply chain management, portfolio optimization, or predictive modeling with uncertain inputs. However, solving DRO problems can be challenging due to the increased complexity introduced by the worst-case optimization: the optimization problem typically becomes non-convex and computationally demanding. Various techniques, such as convex relaxations, scenario approximations, or sample-based methods, are used to handle the associated computational challenges. In general, the value at risk of a random quantity \(h(x,w)\) (with \(h\) as the shorthand notation) at confidence level \(\alpha\) is:
\[\text{VaR}_{\alpha}(h):=\min\{\eta\in\mathbb{R}\mid\mathbb{P}(h\leq\eta)\geq \alpha\} \tag{10}\]
which can be interpreted as the worst-case risk at probability level \(\alpha\). Due to the complexity of solving for the VaR, we define a more tractable version, the conditional value at risk, which can be formulated as the following convex program:
\[\text{CVaR}_{\alpha}(h):=\min_{\eta\in\mathbb{R}}\mathbb{E}\left[\eta+\frac{ (h-\eta)_{+}}{1-\alpha}\right] \tag{11}\]
which can be subsequently reformulated into a tractable linear program:
\[\begin{split}\text{CVaR}_{\alpha}\left(h\right)\approx& \min_{\eta,s}\quad\eta+\frac{1}{(1-\alpha)N_{s}}\sum_{m=1}^{N_{s}} s^{m}\\ &\text{s.t.}\quad h-\eta\leq s^{m},\quad\forall m\in \mathbb{I}_{1:N_{s}}\\ &\quad 0\leq s^{m},\quad\forall m\in\mathbb{I}_{1:N_{s}}\end{split} \tag{12}\]
where \(\mathbb{I}_{1:N_{s}}\) denotes the index set of the \(N_{s}\) samples. In order to take into account unmeasured distributions, we introduce the Wasserstein metric and build an ambiguity set. This enables solving the problem for the worst-case scenario. For all distributions \(\mathcal{P}_{1},\mathcal{P}_{2}\in\mathcal{P}(\mathbb{W})\) we can define the Wasserstein metric as:
\[d_{\text{W}}\left(\mathcal{P}_{1},\mathcal{P}_{2}\right):=\min_{\kappa\in\mathcal{P}\left(\mathbb{W}^{2}\right)}\left\{\int_{\mathbb{W}^{2}}\left\|w_{1}-w_{2}\right\|\,\text{d}\kappa\left(w_{1},w_{2}\right)\ \middle|\ \Pi^{l}\kappa=\mathcal{P}_{l},\ l=1,2\right\} \tag{13}\]
Integrating the Wasserstein metric into the CVaR linear program, in order to optimize over the whole ambiguity set, results in the following optimization problem [6]:
\[\sup_{\mathcal{P}\in\mathbb{D}}\text{CVaR}_{\alpha}^{\mathcal{P}}(h)=\inf_{ \lambda\geq 0}\left\{\lambda\epsilon+\frac{1}{N_{s}}\sum_{m=1}^{N_{s}}\sup_{w \in\mathbb{W}}\left\{\left[h-\eta\right]_{+}-\lambda\left\|w-w^{m}\right\| \right\}\right\} \tag{14}\]
Overall, distributionally robust optimization provides a principled approach to decision-making under uncertainty, offers robustness guarantees, and can lead to more reliable and resilient solutions in uncertain environments.
## III Chance Constrained Distributionally Robust Control Barrier Functions with a Wasserstein Metric
### The Estimation of Conditional Value-at-Risk under Distributional Shifts: A Simplified Formulation
Consider a general dynamical system in the following form
\[\dot{x}=f\left(x,u\right) \tag{15}\]
where \(x\in\mathbb{R}^{n},u\in\mathbb{R}^{m}\), \(f:\mathbb{R}^{n+m}\rightarrow\mathbb{R}^{n}\) are the state, control input, and the dynamical transition function, respectively. Naturally, in many scenarios, there arise safety constraints, such as obstacle avoidance formulated as \(h(x)\leq 0\). Without loss of generality, only one constraint is considered here and our approach can be easily extended to multiple-constraint cases.
As the dynamics in (15) are deterministic, we consider stochastic constraints with additive noise, \(h(x,w)\leq 0\). It is then desirable to satisfy the constraint with as high a probability as possible, which first requires estimating the worst case of \(h(x,w)\) under stochasticity. Value-at-risk and its more tractable approximation, conditional value-at-risk (CVaR) [30], are often used to measure such risks. Mathematically, \(\text{CVaR}_{\alpha}(h(x,w))\leq 0\) implies that the constraint is satisfied with a probability of at least \(\alpha\) (i.e., \(\mathbb{P}(h(x,w)\leq 0)\geq\alpha\)). With \(N_{s}\) independent and identically distributed (i.i.d.) samples of the disturbance \(\{w^{m}\}_{m=1}^{N_{s}}\), we can get the corresponding samples of \(x\) at the current time step based on the dynamics (15) and the control input \(u\) of the last time step in an online setting. Time dependency is omitted, as the argument applies at all time instances. Then \(\text{CVaR}_{\alpha}\) can be estimated by solving the following linear program [30] with auxiliary variables \(s^{m}\)
\[\min_{\eta,s}\left\{\eta+\frac{1}{(1-\alpha)N_{s}}\sum_{m\in[N_{s}]}s^{m}\mid \text{ s.t. }h\left(x^{m},w^{m}\right)-\eta\leq s^{m},s^{m}\geq 0,\forall m\in[N_{s}] \right\}, \tag{16}\]
where \([N_{s}]=1,\ldots,N_{s}\) is the index set for the samples.
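For concreteness, the LP (16) for a scalar constraint takes only a few lines of cvxpy; the following is a sketch with illustrative sample data, not the paper's implementation.

```python
import numpy as np
import cvxpy as cp

def cvar_lp(h_samples, alpha):
    """Sample-based estimate of CVaR_alpha via the LP (16).
    h_samples: values h(x^m, w^m) at N_s i.i.d. disturbance samples."""
    Ns = len(h_samples)
    eta = cp.Variable()
    s = cp.Variable(Ns, nonneg=True)
    prob = cp.Problem(cp.Minimize(eta + cp.sum(s) / ((1 - alpha) * Ns)),
                      [h_samples - eta <= s])
    prob.solve()
    return prob.value

# e.g. h(x, w) = h(x) + w with h(x) = -0.5 and Gaussian disturbance samples
samples = -0.5 + 0.1 * np.random.randn(200)
print(cvar_lp(samples, alpha=0.9))  # roughly the mean of the worst 10% of samples
```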
To estimate the risk constraint \(\text{CVaR}_{\alpha}(h(x,w))\) via (16), one often requires many samples (i.e., large \(N_{s}\)) of the random variables, as in sample average approximation (SAA [31]) in data-driven stochastic optimization. However, collecting samples of the disturbance on physical robotic systems is too restrictive and unsafe, especially with humans in the loop. As such, limited samples will usually fail to capture the true distribution of the stochastic variable, admitting an ambiguity set. In other words, the actual distribution might shift from the estimated one. To effectively ensure robustness under such a distributional shift, Distributionally Robust Optimization (DRO [32, 33, 34]) is employed to solve the stochastic optimization problem by considering the worst case within the ambiguity set. Intuitively, metrics that measure the distance between two probability distributions are used to parameterize the ambiguity set, including the Kullback-Leibler divergence [35] and the Wasserstein metric [36]. Here the latter is adopted, as distributionally robust optimization provides a probabilistic guarantee of out-of-sample performance under a Wasserstein metric [37]. In the following, we show how \(\text{CVaR}_{\alpha}\) can be estimated with the Wasserstein ambiguity set.
Denote \(p_{0}(w)\) as the empirical distribution of the random variables \(w\) estimated from samples \(\{w^{m}\}_{m=1}^{N_{s}}\). Then the ambiguity set of the perturbed distribution \(p(w)\) from the nominal distribution \(p_{0}(w)\) under a Wasserstein metric [36] is expressed as \(\mathcal{P}=\{p(w)\mid W_{d}\left(p(w),p_{0}(w)\right)\leq\rho\}\), with \(W_{d}(\bullet,\bullet)\) as the Wasserstein distance and \(\rho\) as the threshold of such a shift. To satisfy the constraint, we have the following worst-case scenario in the ambiguity set as \(H(x):=\sup_{p\in\mathcal{P}}\text{CVaR}_{\alpha}^{p}(h(x,w))\leq 0\). Then based on the definition of \(\text{CVaR}_{\alpha}\), it can be further reformulated as
\[\begin{split}\sup_{p\in\mathcal{P}}\text{CVaR}_{\alpha}^{p}(h(x, w))&\leq\min_{\eta}\left\{\eta+\frac{1}{1-\alpha}\sup_{p}\left\{ \left[h(x,w)-\eta\right]_{+}-\lambda W_{d}\left(p(w),p_{0}(w)\right)\right\} \right\}\\ &=\min_{\eta}\left\{\eta+\frac{1}{1-\alpha}\frac{1}{N_{s}}\sum_{ m\in[N_{s}]}\sup_{w}\left\{\left[h(x,w)-\eta\right]_{+}-\lambda\left\|w-w^{m} \right\|\right\}\right\}\end{split} \tag{17}\]
where \([\bullet]_{+}=\max(0,\bullet)\). In the first inequality, we take the ambiguity set constraint as a penalty with \(\lambda>0\) to remove one layer of the optimization problem (i.e., eliminating the minimization over \(\lambda\)). However, the inner maximization is over the infinite-dimensional probability measure of \(w\) and is hence intractable. With Kantorovich duality [37], it is equivalently reformulated in the equality as an optimization problem over the finite-dimensional space of \(w\). Note that \([\bullet]_{+}\) can be transformed into linear constraints with slack variables as in (16), turning (17) into an equivalent convex quadratic programming (QP) problem. Note also that the problem in (17) would lead to a bi-level optimization problem if combined with the optimal control design. Due to the difficulty of solving bi-level optimization, most existing works plainly list the constraints in (17) by removing the minimization over \(\eta\) in the overall optimal control problem. This leads to conservatism (e.g., replacing \(\min_{x}f(x)\leq 0\) by \(f(x)\leq 0\)). Here we address this issue in a principled way to keep tractability while mitigating over-conservatism, by efficiently solving the supremum problem in (17) so as to reduce it from a bi-level problem to a single-level convex program. To achieve this, it is further assumed that the noise is additive, \(h(x,w)=h(x)+w\), where \(w\in\mathbb{R}^{1}\) lies in a closed convex set and follows a Gaussian distribution. This leads to an analytical solution for the supremum problem, with two cases depending on the \([\bullet]_{+}\) operator.
**Case 1: \(h(x)+w-\eta\leq 0\).**
In this case, the optimal solution of \(\sup_{w}\left\{-\lambda\left\|w-w^{m}\right\|\right\}\) with respect to \(w\) is achieved at \(w^{*}=w^{m}\). Hence, it results in the following linear program:
\[\begin{split}\sup_{p\in\mathcal{P}}\text{CVaR}_{\alpha}^{p}(h(x,w))&=\min_{\eta}\left\{\eta+\frac{1}{1-\alpha}\frac{1}{N_{s}} \sum_{m\in[N_{s}]}s^{m}\right\}\\ \text{s.t.}&\ h(x)+w^{*}-\eta\leq s^{m},\\ &\ h(x)+w^{*}-\eta\leq 0,\\ &\ 0\leq s^{m}.\end{split} \tag{18}\]
In this case, \(s^{m}\) will be zero (due to the minimization and the nature of the first two constraints). Then the estimate of \(\sup_{p\in\mathcal{P}}\text{CVaR}_{\alpha}^{p}(h(x,w))\) will be the worst case of \(h(x)+w^{m}\), intuitively leading to conservatism under the distributional shift of \(w\).
**Case 2: \(h(x)+w-\eta\geq 0\).**
In this case, the optimal solution of the linear program \(\sup_{w}h(x,w)-\eta-\lambda\left\|w-w^{m}\right\|\) with respect to \(w\) is achieved at the vertices of the polytopic feasible set formulated via the linear constraints, including the bounds. Therefore, the sample value \(w^{m}\) or the bounds of the set, \(\underline{w}\) and \(\widetilde{w}\), are the possible solutions of the supremum operator. As a result, we arrive at the following linear program:
\[\begin{split}\sup_{p\in\mathcal{P}}\text{CVaR}_{\alpha}^{p}(h(x,w))&=\min_{\eta}\left\{\eta+\frac{1}{1-\alpha}\frac{1}{N_{s}}\sum_{m\in[N_{s}]}(s^{m}+L^{m})\right\}\\ \text{s.t.}\quad&\left\{\begin{array}{l}h(x)+\underline{w}-\eta\leq s^{m},\\ h(x)+\underline{w}-\eta+\lambda(\underline{w}-w^{m})\leq L^{m},\\ h(x)+\underline{w}-\eta\geq 0,\end{array}\right.\\ &\left\{\begin{array}{l}h(x)+\widetilde{w}-\eta\leq s^{m},\\ h(x)+\widetilde{w}-\eta-\lambda(\widetilde{w}-w^{m})\leq L^{m},\\ h(x)+\widetilde{w}-\eta\geq 0,\end{array}\right.\\ &\left\{\begin{array}{l}h(x)+w^{m}-\eta\leq s^{m},\\ h(x)+w^{m}-\eta\leq L^{m},\\ h(x)+w^{m}-\eta\geq 0,\end{array}\right.\end{split} \tag{19}\]
For each potential solution, we get a set of constraints to satisfy in (19). For the term with the absolute value operator, i.e., \(-\left\|w-w^{m}\right\|\), it reduces to \((\underline{w}-w^{m})\) and \(-(\widetilde{w}-w^{m})\), with \(\underline{w}\) and \(\widetilde{w}\) as the lower and upper bounds of \(w\), respectively. Also, since \(h(x)+\underline{w}-\eta\geq 0\) and \(h(x)+\underline{w}-\eta\leq s^{m}\), the constraint \(0\leq s^{m}\) becomes trivial and can thus be removed. Note that the problem in (17) only estimates the risk constraint and would lead to a bi-level optimization problem if combined with the optimal control design. Due to the difficulty of solving bi-level optimization, most existing works just plainly list the objective (without the minimization) and the constraints in (17) as extra constraints over \(\eta\) in the overall optimal control problem. This leads to conservatism (e.g., replacing \(\min_{\eta}f(\eta)\leq 0\) by \(f(\eta)\leq 0\)). Here we address this issue in a principled way to keep tractability while mitigating over-conservatism, by integrating it with control barrier functions. In the following section, we show how to use the _optimal value_ of \(\sup_{p\in\mathcal{P}}\mathrm{CVaR}_{\alpha}^{p}(h(x,w))\) and its derivative to construct the control barrier conditions.
### Distributionally Robust Control Barrier Functions via Differentiable Convex Programming
Consider the following nonlinear control-affine system \(\dot{x}=f(x)+g(x)u\), where \(f\) and \(g\) are locally Lipschitz, \(x\in D\subset\mathbb{R}^{n}\) is the state and \(u\in U\subset\mathbb{R}^{m}\) is the set of admissible inputs. The safety set is defined as \(\mathcal{C}=\{x\in D\subset\mathbb{R}^{n}\ |\ h(x,w)\leq 0\}\) with \(\mathcal{C}\subset D\). Then \(h\) is a zeroing control barrier function (CBF) [10] if there exists an extended class-\(\mathcal{K}_{\infty}\) function \(\kappa\) such that for the above control system
\[\sup_{u\in U}\left(L_{f}h(x,w)+L_{g}h(x,w)u+\kappa(h(x,w))\right)\leq 0, \forall x\in D \tag{20}\]
where \(L_{f}h(x)=\left(\frac{\partial h(x,w)}{\partial x}\right)^{T}f(x)\) is the Lie derivative. Note that, for consistency, \(h(x,w)\leq 0\) (instead of \(h(x,w)\geq 0\) as in the CBF literature) defines the safety set here; as such, it is "\(\leq\)" rather than "\(\geq\)" in (20). The control barrier condition (CBC) in (20) ensures the forward invariance of the constraint \(h(x,w)\) and has been extensively studied, with many variants. Forward invariance means that the violation of the safety constraint can only shrink if the system starts outside the safety set, and that the system remains inside otherwise.
In terms of distributionally robust CBFs, the work in [11] estimates the conditional value at risk of the control barrier condition in (20), instead of the CVaR estimate of the original constraint \(h(x,w)\) as in (17). That is to say, the former applies a relaxed criterion by enforcing the chance-constrained control barrier condition (i.e., \(\mathrm{CVaR}_{\alpha}\circ\mathrm{CBC}\circ h(x,w)\)), instead of enforcing the forward invariance of the real chance-constrained safety constraint (i.e., \(\mathrm{CBC}\circ\mathrm{CVaR}_{\alpha}\circ h(x,w)\)). Here we use "\(\circ\)" to denote function composition to avoid many layers of parentheses. While \(\mathrm{CBC}\circ\mathrm{CVaR}_{\alpha}\circ h(x,w)\) can capture the essence of the problem, it brings new challenges: one must differentiate through the optimization layer \(\mathrm{CVaR}_{\alpha}\circ h(x,w)\) over \(x\) (non-trivial), whereas \(\mathrm{CVaR}_{\alpha}\circ\mathrm{CBC}\circ h(x,w)\) only requires differentiating \(h(x,w)\) itself (much easier). We combine distributionally robust control with control barrier functions to enforce the forward invariance of the risk estimate \(\mathrm{CVaR}_{\alpha}\circ h(x,w)\). As discussed before, the estimate of \(\mathrm{CVaR}_{\alpha}(h(x,w))\) in (17) is a convex quadratic program, which we need to differentiate through over \(x\) to construct the control barrier condition in (20). Leveraging recent advances in differentiable convex optimization [38, 39, 40], we can formulate our problem as a disciplined parameterized program and use the _cvxpylayers_ package to map it into a cone program and differentiate the KKT conditions at the optimal solution, obtaining the partial derivatives of the CVaR solution with respect to the problem's parameters; see details in Section II.B. As a result, we are able to calculate \(\frac{\partial\mathrm{CVaR}_{\alpha}(h(x,w))}{\partial x}\) and construct the control barrier condition as follows
\[\min_{u}\left\{J\left(x,u\right)\ |\ \ \mathrm{s.t.}\ \frac{\partial\mathrm{CVaR}_{ \alpha}(h(x,w))}{\partial x}\left(f\left(x\right)+g\left(x\right)u\right)+ \kappa\left(\mathrm{CVaR}_{\alpha}(h(x,w))\right)\leq 0,\forall m\in\left[N_{s} \right]\right\}, \tag{21}\]
where \(J\left(x,u\right)\) is the loss function, such as reference trajectory tracking (e.g., \(\left\|x-x_{ref}\right\|_{2}^{2}\)), a Lyapunov function for goal-reaching, optimal fuel consumption (e.g., \(\left\|u\right\|_{2}^{2}\)), etc. Problem (21) is often a convex quadratic program, with a quadratic loss function \(J\left(x,u\right)\) as exemplified and a linear CBC constraint inherited from the general CBF context in (20). Note that the dynamics are not explicitly included in the optimization, and thus are not variables, as the CBF only considers a single time step forward. The single-step system propagation follows the dynamics in (15) to get the next state with the control input \(u\) from (21). In this way, we implicitly integrate the optimization problem in (17) for the risk estimate as part of the optimal control problem (21), rather than just listing its constraints. Algorithm 1 summarizes the steps of the distributionally robust control barrier function.
```
1:Require:\(\alpha,\lambda\), samples
2:while Not converged do
3: Initialize \(x\), \(w^{N_{s}}\), \(h(x,w)\)
4: Calculate CVaR using (18) or (19)
5: Get \(\frac{\partial\text{CVaR}_{\alpha}(h(x,w))}{\partial x}\) by backpropagating through the QP using differentiable convex programming (Section II.B)
6: Solve for optimal input \(u^{*}\) using the QP in (21) with CVaR and its derivative \(\frac{\partial\text{CVaR}_{\alpha}(h(x,w))}{\partial x}\)
7: Update \(x\gets x+f(x,u)\Delta t\)
8:if \(x==x_{\text{final}}\) then
9: Break
10:endif
11:endwhile
```
**Algorithm 1** Distributionally Robust Control Barrier Function
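As a sketch of steps 5-6 of Algorithm 1 (with illustrative function names and data, not the paper's code): once the CVaR value and its state gradient have been obtained by backpropagating through the CVaR program, the control input follows from the QP (21).

```python
import numpy as np
import cvxpy as cp

def dr_cbf_step(x, u_ref, f, g, cvar_val, cvar_grad, kappa=1.0):
    """One control step of (21): minimize a tracking cost subject to the
    control barrier condition built on CVaR_alpha(h(x,w)); cvar_val and
    cvar_grad are assumed to come from differentiable convex programming
    (Section II.B)."""
    u = cp.Variable(len(u_ref))
    cbc = cvar_grad @ (f(x) + g(x) @ u) + kappa * cvar_val
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), [cbc <= 0])
    prob.solve()
    return u.value

# Toy single integrator xdot = u with a hypothetical CVaR estimate
x0 = np.array([1.0, 0.0])
print(dr_cbf_step(x0, u_ref=np.array([0.5, 0.5]),
                  f=lambda x: np.zeros(2), g=lambda x: np.eye(2),
                  cvar_val=-0.2, cvar_grad=np.array([0.8, 0.1])))
```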
### High-Order System: an Approximate Method
Higher-order control barrier functions (HOCBFs) are an extension of traditional control barrier functions to higher-order systems. An HOCBF incorporates higher-order derivatives of the system's states, which allows the consideration of more complex dynamics, and hence the handling of more intricate safety requirements while enabling systems to avoid undesirable behaviors. Using the same approach for a higher-order system proves challenging due to the need for two successive differentiations through the linear programs (18) and (19). However, we can get a good approximation by first differentiating \(h(x,w)\) analytically to obtain the first layer of the HOCBF (Definition 2); then we can calculate the CVaR of the resulting first-layer barrier and backpropagate to get the gradients. We construct the first barrier analytically:
\[\psi(x,w)=\dot{h}(x,w)+\kappa(h(x,w)) \tag{22}\]
then we can calculate the CVaR using:
\[\sup_{p\in\mathcal{P}}\text{CVaR}_{\alpha}^{p}(\psi(x,w))=\min_{\eta}\left\{\eta+\frac{1}{1-\alpha}\frac{1}{N_{s}}\sum_{m\in[N_{s}]}\sup_{w\in\mathbb{W}}\left\{[\psi(x,w)-\eta]_{+}-\lambda\left\|w-w^{m}\right\|\right\}\right\} \tag{23}\]
It can be further integrated in the optimal control problem, as in (21), in the following way
\[\min_{u}\left\{J\left(x,u\right)\mid\text{ s.t. }\frac{\partial\text{CVaR}_{\alpha}( \psi(x,w))}{\partial x}\left(f\left(x\right)+g\left(x\right)u\right)+\kappa \left(\text{CVaR}_{\alpha}(\psi(x,w))\right)\leq 0,\forall m\in[N_{s}]\right\} \tag{24}\]
Although the resulting CBF is an approximation, enforcing \(\text{CBC}\circ\text{CVaR}_{\alpha}\circ\text{CBC}\circ h(x,w)\) instead of \(\text{CBC}\circ\text{CBC}\circ\text{CVaR}_{\alpha}\circ h(x,w)\), its performance was comparable to that of the first-order DR-CBF. We defer the exact method for high-order systems, which requires higher-order differentiable convex programming techniques, to future work.
## IV Simulations and Results
In this section, we assess the performance of the DR-CBF in several scenarios involving first-order and second-order systems. Our approach is compared to a conventional CBF approach while keeping all other configurations identical, and the advantages of the proposed DR-CBF in maintaining safety under distributional shift are presented. Note that in both simulations we use the first approximation of the CVaR, (18), for its computational ease and its compromise between optimality and robustness.
### Dubins Car: A First-Order System Case Study
To evaluate the DR-CBF, we used the first-order Dubins car environment with the following kinematics:
\[\left(\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{array}\right)=\left[\begin{array}{ccc}\cos\theta&-\sin \theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{array}\right]\left(\begin{array}{c}v_{x}\\ v_{y}\\ \omega\end{array}\right), \tag{25}\]
where \(v_{x}\) and \(v_{y}\) are the velocities along the \(x\) and \(y\) axes of the car's frame, \(\theta\) is the heading angle, and \(\omega\) is the angular velocity.
In order to go from an initial state \(r_{0}=[x_{0},y_{0},\theta_{0}]^{T}\) to a final state \(r_{f}=[x_{f},y_{f},\theta_{f}]^{T}\), we use a Lyapunov function \(\frac{1}{2}(r-r_{f})^{2}\), resulting in \(J(r,u)=(r-r_{f})u+\frac{1}{2}(r-r_{f})^{2}\). We describe the safe region as the area outside a circular obstacle in the middle of the car's trajectory, with additive noise: \(h(r,w)=\rho^{2}-\|r-r_{obs}\|_{2}^{2}+w\leq 0\). To keep the QP in (21) in a standard Control Lyapunov Function (CLF) form, we rewrite it as an explicit Quadratic Program and integrate the DR-CBF:
\[\begin{split}\min_{u\in[\underline{u},\bar{u}],\delta}& u^{T}Qu+q\,\delta^{2}\\ \text{s.t.}&(r-r_{f})u+\frac{1}{2}(r-r_{f})^{2} \leq\delta\\ &\frac{\partial\text{CVaR}_{\alpha}(h(x,w))}{\partial x}\left( f\left(x\right)+g\left(x\right)u\right)+\kappa\left(\text{CVaR}_{\alpha}(h(x,w)) \right)\leq 0\end{split} \tag{26}\]
where \(\delta\) is a relaxation factor for the CLF, to allow some divergence from reaching the final point when the safety of the car is compromised and the CBF needs to take over.
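A minimal cvxpy sketch of one step of (26) is given below; the weights, bounds, and the CVaR value and gradient are illustrative placeholders, not the coefficients used in our experiments.

```python
import numpy as np
import cvxpy as cp

def dubins_clf_drcbf_step(r, r_f, g_x, cvar_val, cvar_grad,
                          u_bound=1.0, Q=np.eye(3), q_pen=100.0, kappa=1.0):
    """One step of (26): CLF tracking toward r_f, softened by delta, with a
    hard DR-CBF constraint. For the kinematics (25), rdot = g_x @ u with g_x
    the rotation matrix, so the CBC reads dCVaR/dr (g_x u) + kappa * CVaR."""
    u = cp.Variable(3)                  # [v_x, v_y, omega]
    delta = cp.Variable(nonneg=True)    # CLF relaxation
    e = r - r_f
    clf = e @ (g_x @ u) + 0.5 * float(e @ e)
    cbc = cvar_grad @ (g_x @ u) + kappa * cvar_val
    prob = cp.Problem(cp.Minimize(cp.quad_form(u, Q) + q_pen * cp.square(delta)),
                      [clf <= delta, cbc <= 0, cp.abs(u) <= u_bound])
    prob.solve()
    return u.value

theta = 0.3
g_x = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
print(dubins_clf_drcbf_step(np.array([0.0, 0.0, theta]), np.array([2.0, 1.0, 0.0]),
                            g_x, cvar_val=-0.3, cvar_grad=np.array([0.5, -0.2, 0.0])))
```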
Figure 1 presents trajectories for the standard CBF and our distributionally robust CBF, with the shaded area showing the contours of the circumference noise. The standard CBF successfully avoids the obstacle but stays amid the noisy region, resulting in a fluctuating trajectory affected by the noise. The DR-CBF, on the other hand, takes a more conservative trajectory, avoiding the noisy region as well.
Figure 1: The normal CBF trajectory crosses the noisy region and fluctuates while the DR-CBF avoids the noisy region by a safe margin.
### Quadcopter: A Second-Order System Case Study
The approximate method for high-order systems was demonstrated on a 2D quadcopter environment with the following dynamics:
\[\begin{split}\ddot{x}&=\frac{T_{r}+T_{l}}{m}\sin\theta \\ \ddot{y}&=\frac{T_{r}+T_{l}}{m}\cos\theta-g\\ \ddot{\theta}&=(T_{r}-T_{l})\frac{L}{J}\end{split} \tag{27}\]
where \(x\) is the horizontal position of the quadcopter's frame, \(y\) is the vertical position, \(\theta\) is the orientation of the quadcopter, \(T_{r},T_{l}\) are the right and left rotor thrusts (control inputs), \(L\) is the arm length of the quadcopter, and \(J\) is its moment of inertia. We simulate a circular reference-tracking problem, where the quadcopter comes across four obstacles along its circular path, with noise added to their circumference: \(h(r,w)=\rho^{2}-\|r-r_{obs}\|_{2}^{2}+w\leq 0\), with \(r=[x,y]\). Since this is a second-order system, two successive differentiations are required to make the control input appear in the CBF. We evaluate the performance of the CBF and the DR-CBF in tracking the trajectory and avoiding the obstacles by a safe margin.
In Figure 2, the CBF is shown to stay close to the obstacle, crossing the noisy region (magnified picture), while the DR-CBF starts steering clear of the obstacle earlier to avoid the noise and keeps a safer distance from it. Due to the agility of the quadcopter, we see a throwing motion after avoiding the obstacle for both algorithms, which reflects a slight delay in returning to the reference trajectory. Table 1 summarizes the values of the coefficients used in each problem.
Figure 2: The DR-CBF follows a more conservative trajectory to avoid the potentially unsafe region compared to normal CBF.
## V Conclusion
In this paper, we devised a distributionally robust control barrier function for stochastic constraints, using the conditional value at risk and differentiable convex programming. The proposed framework results in a safer and more robust variant of the CBF. In future work, we want to explore methods for the following settings: (1) more complex, non-additive, and multidimensional noise, which makes the supremum problem harder to solve; (2) solving the optimization under distributional constraints exactly with the primal-dual method, without taking the dual variable as a constant penalty coefficient, which requires re-casting the optimization over the dual variable in a tractable way; and (3) exact methods for higher-order systems, which require higher-order differentiable convex programming techniques.
## Appendix
|
2308.00713 | A Guide to the Risk-Averse Gambler and Resolving the St. Petersburg
Paradox Once and For All | We use three kinds of computations: simulation, numeric, and symbolic, to
guide risk-averse gamblers in general, and offer particular advice on how to
resolve the famous St. Petersburg paradox. | Lucy Martinez, Doron Zeilberger | 2023-07-31T16:14:11Z | http://arxiv.org/abs/2308.00713v2 | # A Guide to the Risk-Averse Gambler and Resolving the St. Petersburg Paradox Once and For All
###### Abstract
We use three kinds of computations: Simulation, Numeric, and Symbolic, to guide risk-averse gamblers in general, and offer particular advice on how to resolve the famous St. Petersburg paradox.
## 1 The Famous Saint Petersburg Paradox
In the original 'infinitarian' version of the St. Petersburg paradox [3], a gambler, let's call him Nick, is tossing a fair coin. If it lands on Heads, he gets two ducats, and has to leave the casino. Otherwise he stays, and tosses the coin again, and if it lands on Heads, he gets four ducats, and has to leave. The reward doubles each time, while he stays at the casino, so if he lasted \(k\) rounds, he takes home \(2^{k}\) ducats.
**Question:** How much should Nick be willing to pay as 'entrance fee' to the casino?
Nick's expected gain is
\[\frac{1}{2}\cdot 2+\frac{1}{4}\cdot 4+\frac{1}{8}\cdot 8+\ldots\,=\,\sum_{i=1}^{ \infty}\frac{1}{2^{i}}\cdot 2^{i}\,=\,\sum_{i=1}^{\infty}1\,=\,\infty.\]
So Nick should be willing to pay any amount, \(M\), (even a billion ducats), since his expected gain would still be \(\infty-M=\infty\).
Obviously, Nick should only be willing to pay a small amount for the privilege of playing. This is the original version of the famous St. Petersburg paradox, that puzzled some of the best minds in probability and economics. Just looking at the references of Wikipedia one can see (in addition to other luminaries, including Laplace), three Nobel Prize winners in economics (Kenneth Arrow, Robert Aumann, and Paul Samuelson).
### Supporting Maple package and output
All the results in this article were obtained by the use of the Maple package
[https://sites.math.rutgers.edu/~zeilberg/tokhniot/StPete.txt](https://sites.math.rutgers.edu/~zeilberg/tokhniot/StPete.txt)
that also requires the data set
[https://sites.math.rutgers.edu/~zeilberg/tokhniot/StPeteData.txt](https://sites.math.rutgers.edu/~zeilberg/tokhniot/StPeteData.txt)
(in the same directory in your computer), whose output files, along with links to diagrams, are available from the front of this article
[https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/stpete.html](https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/stpete.html)
### A quick resolution of the St. Petersburg Paradox
The whole thing is utter nonsense, since it involves an infinite sum, and infinity is a meaningless concept. Besides, life is obviously finite, so the original version of this paradox is just gibberish.
## 2 The Finite (and hence Meaningful) Version of the Saint Petersburg Paradox
Fix, once and for all, a positive integer \(k\), and stipulate that if Nick lasted all the \(k\) rounds, i.e. the coin tosses were all Tails, he would also get \(2^{k}\) ducats, so his expected gain is
\[\sum_{i=1}^{k}\frac{1}{2^{i}}\cdot 2^{i}+\frac{1}{2^{k}}\cdot 2^{k}\,=\,\sum_{i=1 }^{k+1}1\,=\,k+1.\]
Hence, the conventional wisdom of rational choice theory is that he should be willing to pay any amount \(n<k+1\) for the privilege of playing, since his expected gain, \(k+1-n\), would be positive. Once again, Nick should not accept this bet if he is only allowed one shot, since his probability of losing money is very high, and he hates to lose (after all, he is _risk averse_). Of course, if you want to make money gambling, even if the odds are in your favor, you should be willing to tolerate some positive chance of losing, but if Nick can insist on being able to play this game many times, then the Central Limit Theorem would guarantee that his chance of exiting the casino a loser can be made as small as he wishes.
**Question**: For a given risk-averseness, i.e. the maximum probability \(\epsilon\) of winding up a loser that Nick is willing to take, how many rounds exactly should he insist on?
## 3 Simulation
Stephen Wolfram famously said that formulas and equations are _passé_; long live simulation (aka Monte-Carlo, another casino!). Indeed, in the bad old days before computers (poor Count Buffon!), it was impractical to do efficient simulations in real time. In other words, before actually playing for real, have a dry run. Once the gambler decides to insist on being able to repeat the gamble \(n\) times, he can repeat each such \(n\)-times game \(N\) times, and see what happens. The larger the \(N\), the better the estimate.
If \(n\) is large enough, then he would wind up not losing any money most of the time, but once in a while he would lose some money, and he hates to lose.

He can then count how many of the \(N\) 'meta-times' were winning, and hence estimate the probability that he will not lose any money with this stipulated \(n\).
Now with computers, one can do it very fast, without any calculations! Our Maple package, StPete.txt does it for you, dear gambler. First, we have a macro, StPetePT(n,M), that inputs \(n\), the number of allowed rounds in one game of the St. Petersburg game, and the "entrance fee", \(M\). It outputs the probability table of the outcomes of the game. So when \(M=0\), it outputs the probability table of outcomes when there is no fee. For the rest of the paper, we remind the reader that Maple syntax requires the user to use a semicolon whenever a macro is used. For example, trying
lprint(StPetePT(6,0)); would output
[[2, 1/2], [4, 1/4], [8, 1/8], [16, 1/16], [32, 1/32], [32, 1/32]] meaning that with probability \(\frac{1}{2}\) you get 2 ducats, with probability \(\frac{1}{4}\) you get 4 ducats,..., with probability \(\frac{1}{32}\) you get 32 ducats, and again with probability \(\frac{1}{32}\) you get 32 ducats (of course, we could have combined these two last outcomes, but for the sake of clarity we prefer to keep it that way).
As noted above, the expected gain is 6, hence it is still a good deal to have entrance fee 5. Typing
lprint(StPetePT(6,5)); would output
[[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]] meaning that with probability \(\frac{1}{2}\) you would lose 3 ducats, with probability \(\frac{1}{4}\) you would lose 1 ducat, with probability \(\frac{1}{8}\) you will win 3 ducats etc. Note that if you can only play it once, your probability of losing money is \(\frac{3}{4}\), how scary! But with the protection of the law of large numbers, we can try and see what happens, by pure simulation, if you play it many times. Procedure
Simu1(M,n) takes any such probability table \(M\) and runs the gamble \(n\) times and outputs your total gain from this one \(n\)-times run. Most often, if \(n\) is large enough,
you would wind up winning at least some money, but once in a while you would be a loser. Procedure
Simu(M,n,N)
runs Simu1(M,n) \(N\) times, and returns the total gain, that should be close to \(N\) times the expected gain of \(M\) (assumed positive), followed by the estimated probability that you will win some money. For example with the above probability table,
M:=[[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]]; typing (once our Maple package StPete.txt has been read),
Simu([[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]],100,1000);
outputs something like
101.5120000, 0.9150000000.
This means that the estimated probability of not losing any money is 0.915. Of course, this is only an estimate, and every time you get something slightly different. Doing it again, gave us:
103.2560000, 0.9200000000.
We will soon see, using symbolic computation, that the exact probability is 0.9088286275.... So the drawback of simulation (no offense to Wolfram) is that it is only approximate, and also, quite time-consuming, even with a fast computer.
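For readers who prefer Python to Maple, here is a rough analogue of StPetePT, Simu1, and Simu (the Python names and float probabilities are ours; the Maple package works with exact rationals):

```python
import random

def st_pete_pt(n, fee):
    """Probability table of the n-round St. Petersburg gamble with entrance fee:
    lasting i rounds pays 2^i for i < n, and surviving every round pays 2^(n-1)."""
    table = [(2 ** i - fee, 0.5 ** i) for i in range(1, n)]
    table.append((2 ** (n - 1) - fee, 0.5 ** (n - 1)))
    return table

def simu1(table, n):
    """One session: total gain over n independent plays of the gamble."""
    gains, probs = zip(*table)
    return sum(random.choices(gains, weights=probs, k=n))

def simu(table, n, N):
    """N sessions of n plays each: mean session gain and the estimated
    probability of winning some money in a session."""
    totals = [simu1(table, n) for _ in range(N)]
    return sum(totals) / N, sum(t > 0 for t in totals) / N

print(st_pete_pt(6, 5))                   # [(-3, 0.5), (-1, 0.25), (3, 0.125), ...]
print(simu(st_pete_pt(6, 5), 100, 1000))  # roughly (100, 0.91), as above
```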
## 4 Elementary Symbolic Computation
Recall that a probability table for a gamble is a list of pairs of the form:
\[M=[[M_{1},p_{1}],\ldots,[M_{r},p_{r}]].\]
This means that with probability \(p_{1}\) you would get \(M_{1}\) dollars (or ducats, or whatever), with probability \(p_{2}\) you would get \(M_{2}\) dollars,...., and with probability \(p_{r}\) you would get \(M_{r}\) dollars. In general \(M_{1},\ldots,M_{r}\) are any real
numbers, but for the sake of simplicity, let's assume that they are integers. Of course, in real life, currency is discrete, so this assumption is not unrealistic. Note that some of the \(M_{i}\) are negative, otherwise the decision whether to play would be a no-brainer. If you gamble you should be willing to lose once in a while. Also, the probabilities \(p_{i}\) are all non-negative, and add-up to one,
\[p_{1}+p_{2}+\cdots+p_{r}\,=\,1.\]
The probability generating function (henceforth pgf), in the (formal) variable \(x\), is the following Laurent polynomial (i.e. a polynomial that can also have negative exponents, for example \(p(x)=1/x+x\)),
\[P_{M}(x)=\sum_{i=1}^{r}p_{i}x^{M_{i}}.\]
For example, for the above St. Petersburg gamble with six rounds and entrance fee 5,
M=[[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]] the pgf is
\[P(x)\,=\,\frac{1}{2}\cdot x^{-3}\,+\,\frac{1}{4}\cdot x^{-1}+\frac{x^{3}}{8}+ \frac{x^{11}}{16}+\frac{x^{27}}{16}.\]
This is implemented in procedure PGF(M,x), in our Maple package.
Since you are risk-averse, you are interested in the probability of not losing money, or even better, winning some. Let's denote by \(P(x)^{+}\) the sum of the coefficients whose exponents are positive, then
\[\left(\sum_{i=1}^{r}p_{i}x^{M_{i}}\right)^{+}=\sum_{1\leq i\leq r,\;M_{i}>0}p_{i}.\]
In the above running example, if you only play \(M\) once, your probability of winning some money is only \(\frac{1}{4}\). But, if you insist on the privilege of playing the gamble a pre-decided number of times, \(n\), then your probability of winning some money is
\[\left(P(x)^{n}\right)^{+}.\]
Maple is very good at raising a Laurent polynomial to high powers, expanding them, and then adding up the coefficients of the terms with positive exponents.
This is implemented in procedure
ProbPos(M,n) in our Maple package StPete.txt. For example, to get the probability of winning some money if you are allowed to gamble 100 times in the above gamble, type:
ProbPos([[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]],100); getting immediately that the exact probability is
\[\frac{6125492831448122153753381305179491123116907379470526605886323646825}{6739986666787659948666753771754907668409286105635143120275902562304},\]
or more usefully, in decimals,
\[0.9088286275,\]
confirming the estimates that we got above using simulation. So if you play this gamble 100 times, you know that with probability more than ninety percent you would win some money. But, being risk-averse, this is too risky! If you insist on being allowed to play exactly 200 times, then typing
evalf(ProbPos([[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]],200)); would tell you that that the probability of not losing is 0.9733818383, and if you really want to play it safe, and insist on playing 1000 times (if you can spare the time)
evalf(ProbPos([[-3, 1/2], [-1, 1/4], [3, 1/8], [11, 1/16], [27, 1/32], [27, 1/32]],1000)); would give you the very reassuring 0.9999947442.
### Simplified Gambles
To simplify matters, and still preserve the St. Petersburg spirit, let's consider the family of gambles whose probability table, let's call it \(G_{i}\), is
\[G_{i}:=\left[\left[-1,\frac{i-1}{i}\right],\left[i,\frac{1}{i}\right]\right],\]
whose probability generating function, let's call it \(P_{i}(x)\) is
\[P_{i}(x)=\frac{i-1}{i}x^{-1}+\frac{1}{i}x^{i}.\]
Note that the expected gain is positive, namely \(\frac{1}{i}\), and it is equivalent (changing currency) to the gamble
\[\left[\left[-i,\frac{i-1}{i}\right],\left[i^{2},\frac{1}{i}\right]\right],\]
whose expected gain is 1 unit. Let's experiment with \(G_{10}\), and the above procedure ProbPos.
If you play 100 times: the probability of not losing is gotten by typing
evalf(ProbPos([[-1,9/10],[10,1/10]],100)); giving 0.5487098346, while if you play 500 times, typing
evalf(ProbPos([[-1,9/10],[10,1/10]],500));
would still be the fairly low 0.7453107394. Wouldn't it be nice if we could quickly compute the sequence
\[\left\{ProbPos(M,n)\right\}\]
for \(n\leq N_{0}\), for any desired \(N_{0}\)?
Luckily, thanks to the Almkvist-Zeilberger algorithm [1] we can compute many terms very fast, as we will see in the next section.
## 5 Advanced Symbolic Computation
For any Laurent polynomial \(P(x)\), we have
\[(P(x))^{(+)} = \sum_{j=1}^{\infty}\operatorname{Coeff}_{x^{j}}(P(x))\,=\,\sum_{j=1 }^{\infty}\int_{|x|=1}\frac{1}{2\pi i}\frac{P(x)}{x^{j+1}}\,dx\] \[= \frac{1}{2\pi i}\int_{|x|=1}P(x)\sum_{j=1}^{\infty}\frac{1}{x^{j +1}}\,dx\quad\text{ by Cauchy's residue theorem}\] \[= \frac{1}{2\pi i}\int_{|x|=1}\frac{P(x)}{x(x-1)}\,dx,\]
where \(\operatorname{Coeff}_{x^{j}}(P(x))\) is the coefficient of \(x^{j}\) in the Laurent polynomial \(P(x)\).
Given the one-shot, \(M\), with its pgf \(P_{M}(x)\), we are interested in the probability of winding up with at least some money after \(n\) repeats. In other words we are interested in the sequence
\[\frac{1}{2\pi i}\int_{|x|=1}\frac{(P_{M}(x))^{n}}{x(x-1)}\,dx,\quad n\in \mathbb{N}.\]
Thanks to the Almkvist-Zeilberger algorithm [1] (see [2] for a lucid and engaging exposition), such sequences always satisfy a linear recurrence equation with polynomial coefficients. This algorithm is included in StPete.txt. The function call is
OpeProbPos(M,n,Sn)
where \(M\) is the probability table, \(n\) (a symbol!) is the number of repeats, and \(Sn\) is the symbol denoting the shift operator in \(n\). It also returns the initial conditions. These operators are very complicated, and it is better not to show them to humans. But the computer can use them to compute this sequence very fast. It turns out that eventually, the risk-averse gambler would have to repeat the gamble so many times, that it would be impractical, and he should refuse to play.
## 6 Numerics: The Central Limit Theorem to the Rescue
The advantage of 'elementary symbolic computation', and of the more efficient and much faster 'advanced symbolic computation', is that it gives you the exact desired probability. Alas, as the number of repeats \(n\) gets larger, sooner or later it takes too long. It turns out that for sufficiently large \(n\), the Central Limit Theorem gives you good approximations.
Given a gamble \(M=[[M_{1},p_{1}],\ldots,[M_{r},p_{r}]]\), define, as usual
\[\mu :=\sum_{i=1}^{r}p_{i}\,M_{i},\] \[\sigma^{2} :=\sum_{i=1}^{r}p_{i}\,(M_{i}-\mu)^{2}.\]
Then the probability of not losing after \(n\) repeats is approximately

\[\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{\mu\sqrt{n/2}}{\sigma}\right)\right)\]

where \(\operatorname{erf}(x)\) is the error function, built into Maple, defined by \(\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\). This is implemented in procedure ProbPosA. For small \(n\) it is not so good, but it gets better as \(n\) gets larger. For example:
evalf(ProbPos([[-1,9/10],[10,1/10]],100));
gives 0.5487098346, while
evalf(ProbPosA([[-1,9/10],[10,1/10]],100));
gives 0.6190666158. Not very good!
evalf(ProbPos([[-1,9/10],[10,1/10]],1000));
gives 0.8417618586, while its Central Limit Theorem approximation gives 0.8310356673, much better!
evalf(ProbPos([[-1,9/10],[10,1/10]],10000));
gives 0.9988718721, while the approximation is 0.9987784576, very close! Furthermore the latter is much faster!
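The approximation itself is a one-liner; below is a minimal Python analogue of ProbPosA (our own sketch, using math.erf), which reproduces the numbers above.

```python
import math

def prob_pos_clt(table, n):
    """Central Limit Theorem approximation of the probability of a positive
    total gain after n independent plays of the gamble in `table`."""
    mu = sum(p * g for g, p in table)
    var = sum(p * (g - mu) ** 2 for g, p in table)
    z = mu * math.sqrt(n / 2.0) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z))

table = [(-1, 9/10), (10, 1/10)]
print(prob_pos_clt(table, 100))    # ~0.619, vs the exact 0.5487
print(prob_pos_clt(table, 10000))  # ~0.99878, vs the exact 0.99887
```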
## 7 Data Files
Using our Maple package, we prepared lots of useful data files to guide the risk-averse gambler. They are all in the front of this article:
[https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/stpete.html](https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/stpete.html).
It also contains nice pictures. Enjoy!
## 8 Graphs
In this section, we provide graphs for the risk-averseness with different probability tables and gambles.
The following list describes each of the graphs in figures 1(a), 1(b), 1(c), 1(d), 1(e), and 1(f):
1. Given: in one shot, the probability of losing one dollar is \(1/2\) and the probability of winning \(2\) dollars is \(1/2\). The graph illustrates the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(200\).
2. Given: in one shot, your probability of losing one dollar is \(2/3\) and the probability of winning \(3\) dollars is \(1/3\). The graph illustrates the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(600\).
3. Given: in one shot, your probability of losing one dollar is \(3/4\) and the probability of winning \(4\) dollars is \(1/4\). The graph illustrates the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(700\).
4. Given: in one shot, your probability of losing one dollar is \(7/8\) and the probability of winning \(8\) dollars is \(1/8\). The graph illustrates the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(3000\).
5. Given: in one shot, your probability of losing one dollar is \(8/9\) and the probability of winning \(9\) dollars is \(1/9\). The graph illustrates the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(3000\).
6. Given: in one shot, your probability of losing one dollar is \(9/10\) and the probability of winning \(10\) dollars is \(1/10\). The graph illustrates the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(3000\).
Figure 1: The risk-averseness graphs for the corresponding gambles.

The following list describes each of the graphs in figures 2(a), 2(b) and 2(c):

1. The risk-averseness graph for the St. Petersburg Gamble with \(7\) rounds and entrance fee \(7\). The graph represents the probability of not losing money if you insist on playing \(n\) times, for \(n\) from \(1\) to \(300\), where the probability table is \([[-5,1/2],[-3,1/4],[1,1/8],[9,1/16],[25,1/32],[57,1/64],[121,1/128],[121,1/128]]\).
2. The approximate risk-averseness graph for the St. Petersburg Gamble with 7 rounds and entrance fee 7, using the Central Limit Theorem approximation. The graph represents the probability of not losing money if you insist on playing \(n\) times, for \(n\) from 1 to 2000, where the probability table is the same as in item 1.
3. The approximate risk-averseness graph for the St. Petersburg Gamble with 11 rounds and entrance fee 11, using the Central Limit Theorem approximation. The graph represents the probability of not losing money if you insist on playing \(n\) times, for \(n\) from 1 to 2000, where the probability table is \([[-9,1/2],[-7,1/4],[-3,1/8],[5,1/16],[21,1/32],[53,1/64],[117,1/128],[245,1/256],[501,1/512],[1013,1/1024],[2037,1/2048],[2037,1/2048]]\).
|
2309.08066 | Morphologically-Aware Consensus Computation via Heuristics-based
IterATive Optimization (MACCHIatO) | The extraction of consensus segmentations from several binary or
probabilistic masks is important to solve various tasks such as the analysis of
inter-rater variability or the fusion of several neural network outputs. One of
the most widely used methods to obtain such a consensus segmentation is the
STAPLE algorithm. In this paper, we first demonstrate that the output of that
algorithm is heavily impacted by the background size of images and the choice
of the prior. We then propose a new method to construct a binary or a
probabilistic consensus segmentation based on the Fr\'{e}chet means of
carefully chosen distances which makes it totally independent of the image
background size. We provide a heuristic approach to optimize this criterion
such that a voxel's class is fully determined by its voxel-wise distance to the
different masks, the connected component it belongs to and the group of raters
who segmented it. We compared extensively our method on several datasets with
the STAPLE method and the naive segmentation averaging method, showing that it
leads to binary consensus masks of intermediate size between Majority Voting
and STAPLE and to different posterior probabilities than Mask Averaging and
STAPLE methods. Our code is available at
https://gitlab.inria.fr/dhamzaou/jaccardmap . | Dimitri Hamzaoui, Sarah Montagne, Raphaële Renard-Penna, Nicholas Ayache, Hervé Delingette | 2023-09-14T23:28:58Z | http://arxiv.org/abs/2309.08066v2 | [
###### Abstract
The extraction of consensus segmentations from several binary or probabilistic masks is important to solve various tasks such as the analysis of inter-rater variability or the fusion of several neural network outputs. One of the most widely used methods to obtain such a consensus segmentation is the STAPLE algorithm. In this paper, we first demonstrate that the output of that algorithm is heavily impacted by the background size of images and the choice of the prior. We then propose a new method to construct a binary or a probabilistic consensus segmentation based on the Frechet means of carefully chosen distances which makes it totally independent of the image background size. We provide a heuristic approach to optimize this criterion such that a voxel's class is fully determined by its voxel-wise distance to the different masks, the connected component it belongs to and the group of raters who segmented it. We compared extensively our method on several datasets with the STAPLE method and the naive segmentation averaging method, showing that it leads to binary consensus masks of intermediate size between Majority Voting and STAPLE and to different posterior probabilities than Mask Averaging and STAPLE methods. Our code is available at [https://gitlab.inria.fr/dhamzaou/jaccardmap](https://gitlab.inria.fr/dhamzaou/jaccardmap).
Dimitri Hamzaoui ([https://orcid.org/0000-0003-2775-8594](https://orcid.org/0000-0003-2775-8594))
Universite Cote d'Azur, Inria, Epione Team, Sophia Antipolis, France
Carnegie Mellon, Universite Cote d'Azur, Inria, Epione Team, Sophia Antipolis, France
Universite Cote d'Azur, Inria, Epione Team, Sophia Antipolis, France
Universite Cote d'Azur, Inria, Epione Team, Sophia Antipolis, France
Consensus, Distance, Heuristics, Optimization, STAPLE
## 1 Introduction
The fusion of several segmentations into a single consensus segmentation is a classical problem in the field of medical image analysis related to the need to merge multiple segmentations provided by several clinicians into a single "consensus" segmentation. This problem has been recently revived by the development of deep learning and the multiplication of ensemble methods based on neural networks (Isensee et al., 2021). One of the most well-known methods to obtain a consensus segmentation is the STAPLE algorithm (Warfield
et al., 2004), where an Expectation-Maximization algorithm is used to jointly construct a consensus segmentation and to estimate the raters' performances, posed in terms of sensitivities and specificities. The seminal STAPLE method (Warfield et al., 2004), which creates a probabilistic consensus from a set of binary segmentations, was followed by several follow-up works. For instance, Asman and Landman (2012) replaced global indices of performance by spatially dependent performance fields, and Commowick et al. (2012) combined STAPLE with a sliding window approach to allow spatial variations of rater performances. Another improvement consisted in introducing the original image intensity information (Asman and Landman, 2013). Several alternatives to STAPLE were proposed, with a large diversity of approaches. Some of them use a generative model but with different properties. For example, Audelan et al. (2020) modeled raters' input maps by heavy-tailed distributions whose parameters are estimated by variational calculus, and Sabuncu et al. (2010) presented a model using a random field learnt on the whole set to model the interaction between the intensity maps and the corresponding label maps. Methods based on deep learning were also conceived, as in Zhang et al. (2020), where two CNNs are trained jointly to simultaneously estimate the consensus segmentation and each rater's performance via an estimation of their spatial confusion matrices. Also, in Ji et al. (2021), the authors incorporate the expertise level of each rater and specific modules to better take disagreements between raters into account. However, those methods do not lead to explainable results, and they require collecting preliminary training data on a substantial number of cases, which makes them unsuitable for small datasets. In addition to those complex methods, several studies (Rohlfing and Maurer, 2007; Aljabar et al., 2009) show that simple majority voting (MV) can remain a suitable pick. However, STAPLE and its simple yet robust probabilistic model remains the go-to method for consensus segmentation estimation (Warfield et al., 2004; Dewalle-Vignion et al., 2015), despite suffering from several limitations, some of them already addressed in the literature (Asman and Landman, 2012; Commowick et al., 2012; Asman and Landman, 2013) and some, to the best of our knowledge, never raised before.
In this article, we first analytically characterize the dependence of the STAPLE algorithm on the size of the background image and on the choice of the prior consensus probability. We then introduce an alternative consensus segmentation method, coined MACCHIatO, which is based on the minimization of the squared distance between each binary segmentation and the consensus. After choosing a distance between binary or probabilistic shapes, the consensus is thus posed as the estimation of the Frechet mean of this distance (an extension of centroids to metric spaces), which is independent of the size of the background image for a well-chosen distance. We show that the adoption of specific heuristics based on morphological distances (i.e. voxel-wise distances to the different binary masks based on morphological operations) during the optimization provides a novel, globally consistent binary or probabilistic consensus method that creates masks of intermediate size between those of Majority Voting and STAPLE.
This work extends our MICCAI-UNSURE 2022 paper (Hamzaoui et al., 2022) by (1) adding the Dice coefficient and its soft surrogates as distances between binary sets, (2) providing more mathematical details on the baseline models and on STAPLE's dependence on the background size and prior choice, (3) adding experiments and a dataset to justify the choice of the selected heuristics and to analyze the impact on the consensus volume and computational time, and (4) expanding the discussion in various ways, including detailing the limitations of the proposed approach.
## 2 Estimation of a soft or hard consensus from binary segmentations
In the remainder, we consider the problem of generating a consensus segmentation \(T_{n}\), \(1\leq n\leq N\), given \(K\) binary segmentations \(\mathcal{S}=\{S^{1},...,S^{K}\}\), \(S^{k}_{n}\in\{0,1\}\), of size \(N\) provided by each rater \(k\). The consensus segmentation may be either a _hard_ binary segmentation \(T_{n}\in\{0,1\}\) or a _soft_ probabilistic segmentation \(\tilde{T}_{n}\in[0,1]\), the tilde sign indicating that we are dealing with a continuous probabilistic consensus value rather than a binary one. Given a soft consensus, one can easily generate a hard consensus by thresholding the soft consensus voxels at the \(0.5\) level. Yet, this raises the issue of dealing with voxels that are exactly at the \(0.5\) value, which can either be set arbitrarily to one of the 2 classes or set aside in a third class.
In terms of probabilistic framework, the main approach is to consider that each observed binary segmentation \(S^{k}\) results from a random process applied on a consensus segmentation \(T\), which is captured by the likelihood distribution \(p(S^{k}|T,\theta_{k})\) also involving some parameters \(\theta_{k}\) specific to each rater \(k\). A prior probability on the consensus \(p(T)\) is also defined, related to the general _a priori_ knowledge about the consensus segmentation. Then a hard consensus can be obtained as a maximum likelihood estimate \(T=\arg\max_{M}p(\mathcal{S}|M)\) or a maximum _a posteriori_ estimate \(T=\arg\max_{M}p(\mathcal{S}|M)p(M)\), whereas a soft consensus is obtained as the posterior probability \(p(\tilde{T}|\mathcal{S})=p(\mathcal{S}|\tilde{T})p(\tilde{T})/p(\mathcal{S})\). The parameters \(\theta_{k}\) are estimated by maximum likelihood for hard consensuses or by maximum marginal likelihood for soft ones.
We make use of the following notations: \(\text{FP}_{k}\), \(\text{TP}_{k}\), \(\text{FN}_{k}\), and \(\text{TN}_{k}\) are respectively the number of false positives, true positives, false negatives, and true negatives between the observed mask \(S^{k}\) and the consensus \(T\), e.g. \(\text{FP}_{k}=\sum_{n=1}^{N}S^{k}_{n}(1-T_{n})\).
We consider majority voting (MV) and ML STAPLE (Maximum Likelihood STAPLE, a binary version of STAPLE) as baseline methods to create a hard consensus, whereas mask averaging (MA) and the STAPLE algorithm are the baseline approaches for soft consensus estimation. We describe below the hypotheses, in terms of probability distributions, associated with those baseline models and discuss their limitations.
### Majority Voting and Mask Averaging Models
We first make the hypothesis of voxel independence, i.e. that the binary value of each voxel of an observed segmentation mask \(S^{k}\) is independent of the values of the other voxels: \(p(S^{k}|T)=\prod_{n=1}^{N}p(S^{k}_{n}|T_{n})\). Furthermore, we consider that the prior and likelihood probabilities are simple Bernoulli distributions with the same parameter \(b_{n}\in[0,1]\): \(p(S^{k}_{n}=1|b_{n})=p(T_{n}=1|b_{n})=b_{n}\). This means that the probability parameter \(b_{n}\) is potentially different for each voxel, but the same for all raters: \(\theta_{k}=\theta=\{b_{n}\}\). Also, the observed masks \(\mathcal{S}\) do not directly depend on the consensus but share the same distribution.
Therefore the likelihood of observing the whole segmentation data is then
\[p(\mathcal{S}|\theta)=\prod_{k=1}^{K}\prod_{n=1}^{N}b_{n}^{S^{k}_{n}}(1-b_{n} )^{1-S^{k}_{n}}=\prod_{n=1}^{N}b_{n}^{S^{+}_{n}}(1-b_{n})^{S^{-}_{n}}\]
where \(S_{n}^{+}\) (resp. \(S_{n}^{-}=K-S_{n}^{+}\)) is the number of times voxel \(n\) is equal to 1 (resp. 0) in the observed segmentation masks \(S^{k}\), \(1\leq k\leq K\). After maximizing the likelihood, one trivially gets the Bernoulli parameter as \(b_{n}=p(S_{n}^{k}=1|b_{n})=p(T_{n}=1|b_{n})=\frac{S_{n}^{+}}{K}\), leading to the Mask Averaging consensus formula, where the probability of having a foreground voxel is the frequency of positive voxels in the observed masks \(S^{k}\). To estimate the hard consensus, one needs to maximize \(p(T_{n}|b_{n})\), thus leading to majority voting: \(T_{n}=1\) if \(S_{n}^{+}>S_{n}^{-}\) and \(T_{n}=0\) if \(S_{n}^{+}<S_{n}^{-}\).
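A minimal NumPy sketch (illustrative, not the authors' code) of the two baseline consensuses derived above; the tie-breaking convention for an even number of raters is an arbitrary choice of this sketch.

```python
import numpy as np

def mask_averaging(masks):
    """Soft consensus: per-voxel frequency of foreground votes, b_n = S_n^+ / K.

    masks: array of shape (K, ...) with binary values in {0, 1}.
    """
    return masks.mean(axis=0)

def majority_voting(masks):
    """Hard consensus: foreground wherever more than half of the raters agree.

    Ties (S_n^+ == S_n^-) are resolved in favour of the background here,
    an arbitrary convention when K is even.
    """
    K = masks.shape[0]
    return (masks.sum(axis=0) * 2 > K).astype(np.uint8)

# Toy example: K = 3 raters on a 1-D "image" of 4 voxels.
S = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0]])
print(mask_averaging(S))   # [1.    0.667 0.333 0.   ]
print(majority_voting(S))  # [1 1 0 0]
```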
**Limitations.** Majority voting and mask averaging are simple and easy-to-understand mechanisms to choose a consensus. Yet they suffer from the fact that this decision is purely local, without any influence from the neighboring pixels. This can lead to situations where the hard consensus includes some isolated voxels or has very irregular boundaries. This is especially true for mask averaging, which does not have any mechanism to enforce inter-rater consistency and relies on the implicit assumption that the neighbors of a segmented voxel are likely to be segmented, which is not the case at the boundaries. Another limitation of majority voting arises when the number of raters \(K\) is even: many decisions are then ambiguous, with as many foreground as background votes. Finally, those simple models assume that all raters contribute equally to the consensus, which may not be the case. In particular, an underperforming rater will bias the soft consensus obtained with mask averaging.
### STAPLE model
In the STAPLE algorithm (Warfield et al., 2004), all voxels are also assumed independent but the probability that \(S_{n}^{k}\) is equal to \(T_{n}\) depends on whether \(T_{n}\) is a background or foreground voxel, and on the rater \(k\). More precisely, \(p(S_{n}^{k}=T_{n}|T_{n}=1)=p_{k}\) and \(p(S_{n}^{k}=T_{n}|T_{n}=0)=q_{k}\) where \(p_{k}\) is the sensitivity of rater \(k\) and \(q_{k}\) its specificity.
**Prior consensus.** The consensus prior probability is here supposed to factorize as a product of voxel prior values \(w_{n}\): \(p(T)=\prod_{n=1}^{N}p(T_{n})\) with \(p(T_{n}=1)=w_{n}\). The original STAPLE paper (Warfield et al., 2004) also introduced an Ising Markov random field model as a prior consensus probability to enforce that a voxel prior value depends on that of its neighbors. However, this approach leads to iteratively solving graph-cut problems and is not available in most widely used STAPLE implementations. Instead, the original paper assumes simple independent priors that lead to closed-form updates. Choosing \(w_{n}=w=\frac{1}{2}\) is a non-informative prior, but another common choice is to have a spatially uniform value \(w_{n}=w=\frac{1}{NK}\sum_{n,k}S_{n}^{k}\), which is the average relative size of the foreground object in the observed segmentation masks. We further consider more general priors of the form \(w=\frac{A}{N^{\alpha}}\), with \(A\) a constant independent of the image size, and \(\alpha\in\mathbb{N}\) an exponent. The non-informative case \(w_{n}=0.5\) corresponds to \(\alpha=0\), while the average object size corresponds to \(\alpha=1\).
**Maximum likelihood STAPLE (ML STAPLE).** The likelihood of the observed data simply writes as \(\mathcal{L}(T,\theta)=\prod_{k=1}^{K}p_{k}^{\text{TP}_{k}}(1-p_{k})^{\text{FN}_{k}}q_{k}^{\text{TN}_{k}}(1-q_{k})^{\text{FP}_{k}}\) and does not involve the prior on the consensus. There is no closed-form expression for the estimation of the rater parameters \((p_{k},q_{k})\) and the hard consensus \(T\) maximizing the likelihood. But an iterative maximization of the likelihood is possible by setting its derivatives to zero, which leads to the update equations:
\[p_{k}=\frac{\text{TP}_{k}}{\text{TP}_{k}+\text{FN}_{k}} q_{k}=\frac{\text{TN}_{k}}{\text{TN}_{k}+\text{FP}_{k}} \tag{1}\] \[s_{n}^{+}=\prod_{k=1}^{K}p_{k}^{S_{n}^{k}}(1-p_{k})^{1-S_{n}^{k}} s_{n}^{-}=\prod_{k=1}^{K}q_{k}^{1-S_{n}^{k}}(1-q_{k})^{S_{n}^{k}}\] (2) \[T_{n}=1\text{ if }s_{n}^{+}>s_{n}^{-} T_{n}=0\text{ if }s_{n}^{+}<s_{n}^{-}\]
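For concreteness, the alternating updates of Eqs. 1 and 2 can be sketched as follows. This is an illustrative implementation written from the equations above, not a reference one; the majority-voting initialization, the `eps` guard and the iteration cap are assumptions of this sketch.

```python
import numpy as np

def ml_staple(masks, n_iter=50, eps=1e-7):
    """Binary (maximum-likelihood) STAPLE, alternating Eqs. (1) and (2).

    masks: (K, N) binary array of flattened segmentations; returns a hard
    consensus T, initialized here from majority voting.
    """
    K, N = masks.shape
    T = (masks.sum(axis=0) * 2 > K).astype(float)
    for _ in range(n_iter):
        TP = (masks * T).sum(axis=1)
        FN = ((1 - masks) * T).sum(axis=1)
        TN = ((1 - masks) * (1 - T)).sum(axis=1)
        FP = (masks * (1 - T)).sum(axis=1)
        p = TP / np.maximum(TP + FN, eps)  # sensitivities, Eq. (1)
        q = TN / np.maximum(TN + FP, eps)  # specificities, Eq. (1)
        # log s_n^+ and log s_n^- of Eq. (2), computed in log space
        log_sp = (masks * np.log(p + eps)[:, None]
                  + (1 - masks) * np.log(1 - p + eps)[:, None]).sum(axis=0)
        log_sm = ((1 - masks) * np.log(q + eps)[:, None]
                  + masks * np.log(1 - q + eps)[:, None]).sum(axis=0)
        T_new = (log_sp > log_sm).astype(float)
        if np.array_equal(T_new, T):
            break
        T = T_new
    return T.astype(np.uint8)
```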
**Maximum marginal likelihood (MML STAPLE).** The _marginal likelihood_ or _evidence_ writes as \(p(\mathcal{S}|\theta)=\prod_{n=1}^{N}\big(w_{n}\prod_{k}p_{k}^{S_{n}^{k}}(1-p_{k})^{1-S_{n}^{k}}+(1-w_{n})\prod_{k}q_{k}^{1-S_{n}^{k}}(1-q_{k})^{S_{n}^{k}}\big)\) and is only a function of the rater parameters \(\theta_{k}\). Its maximization is not tractable in closed form, but the expectation-maximization algorithm provides a way to reach a local maximum. The E-step consists of evaluating the posterior probability from Bayes' law with the currently estimated sensitivities and specificities:
\[u_{n}=p(\tilde{T}_{n}=1|\theta,\mathcal{S})=\frac{w_{n}\prod_{k}p_{k}^{S_{n}^{k}}(1-p_{k})^{1-S_{n}^{k}}}{w_{n}\prod_{k}p_{k}^{S_{n}^{k}}(1-p_{k})^{1-S_{n}^{k}}+(1-w_{n})\prod_{k}q_{k}^{1-S_{n}^{k}}(1-q_{k})^{S_{n}^{k}}} \tag{3}\]
The M-step updates the parameters \(p_{k}\) and \(q_{k}\) as follows :
\[p_{k}=\frac{\sum_{n,S_{n}^{k}=1}u_{n}}{\sum_{n}u_{n}}=\frac{\text{sTP}_{k}}{ \text{sFN}_{k}+\text{sTP}_{k}}\hskip 28.452756ptq_{k}=\frac{\sum_{n,S_{n}^{k}=0 }(1-u_{n})}{\sum_{n}(1-u_{n})}=\frac{\text{sTN}_{k}}{\text{sTN}_{k}+\text{sFP} _{k}} \tag{4}\]
where \(\text{sTP}_{k}\), \(\text{sTN}_{k}\), \(\text{sFP}_{k}\), and \(\text{sFN}_{k}\) are the "soft extensions" of the numbers of true positive, true negative, false positive, and false negative voxels for rater \(k\).
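The EM iteration of Eqs. 3 and 4 can be sketched in the same style; again a minimal illustration rather than the SimpleITK implementation used later in the experiments, with a scalar prior `w` and a mask-averaging initialization as assumptions of the sketch.

```python
import numpy as np

def mml_staple(masks, w=0.5, n_iter=50, eps=1e-7):
    """Probabilistic STAPLE: EM on the marginal likelihood (Eqs. 3-4).

    masks: (K, N) binary array; w: scalar prior w_n (0.5 = non-informative).
    Returns the soft consensus u_n = p(T_n = 1 | S).
    """
    u = masks.mean(axis=0)  # start from mask averaging
    for _ in range(n_iter):
        # M-step, Eq. (4): sensitivities/specificities from soft counts
        p = (masks * u).sum(axis=1) / np.maximum(u.sum(), eps)
        q = ((1 - masks) * (1 - u)).sum(axis=1) / np.maximum((1 - u).sum(), eps)
        # E-step, Eq. (3), in log space for numerical stability
        log_f = (masks * np.log(p + eps)[:, None]
                 + (1 - masks) * np.log(1 - p + eps)[:, None]).sum(axis=0)
        log_b = ((1 - masks) * np.log(q + eps)[:, None]
                 + masks * np.log(1 - q + eps)[:, None]).sum(axis=0)
        u = 1.0 / (1.0 + np.exp(np.log(1 - w) + log_b - np.log(w) - log_f))
    return u
```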
#### 2.2.1 Influence of the prior term
We can better understand the influence of the prior when estimating the probability of belonging to the consensus by writing the logit \(\text{logit}(u_{n})=\ln\left(\frac{u_{n}}{1-u_{n}}\right)\) of Eq. 3:
\[\text{logit}\left(u_{n}\right)=\text{logit}(w_{n})+\sum_{k,S_{n}^{k}=1}\log \left(\frac{p_{k}}{1-q_{k}}\right)+\sum_{k,S_{n}^{k}=0}\log\left(\frac{1-p_{k}} {q_{k}}\right) \tag{5}\]
Thus, we see that to estimate \(u_{n}\), each rater \(k\) with a foreground voxel "votes" with a (usually) positive quantity \(\log\left(\frac{p_{k}}{1-q_{k}}\right)\), whereas each rater with a background voxel "votes" with a (usually) negative quantity \(\log\left(\frac{1-p_{k}}{q_{k}}\right)\). The prior term \(\text{logit}(w_{n})\) then biases this vote depending on whether \(w_{n}\) is greater or smaller than \(\frac{1}{2}\).
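As a small numerical illustration of Eq. 5, the per-rater vote weights can be computed directly (the sensitivity and specificity values below are arbitrary):

```python
import numpy as np

p = np.array([0.95, 0.80])  # sensitivities of two hypothetical raters
q = np.array([0.99, 0.90])  # specificities
w_fg = np.log(p / (1 - q))  # weight of a foreground vote in logit(u_n)
w_bg = np.log((1 - p) / q)  # weight of a background vote
# Rater 0 (p=0.95, q=0.99) pushes logit(u_n) up by ~+4.55 per foreground
# vote and down by ~-2.99 per background vote.
print(w_fg, w_bg)
```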
#### 2.2.2 Influence of the background size
In many cases, the size \(N\) of images that contain the objects delineated by the raters is arbitrary since it can be the size of the original image (with a large value of \(N\)) or the size of a restricted region of interest (with a small value of \(N\)). It is therefore important to estimate the influence of the background size, i.e. the number of true negative voxels \(\text{TN}_{k}\), in the estimation of the hard and soft consensus.
**Influence on the hard consensus.** Based on Eqs. 1 and 2, the sensitivities and the coefficients \(s_{n}^{+}\) are not influenced by \(\text{TN}_{k}\), but the specificities are. More precisely, we have \(q_{k}=1-\frac{\text{FP}_{k}}{\text{TN}_{k}}+O((\text{TN}_{k})^{-2})\), and therefore the quantity \(s_{n}^{-}\) tends towards 0 when \(\text{TN}_{k}\) reaches large values. This implies that the hard consensus converges towards the union of all observed segmentation masks when the background size becomes large.
**Influence on the soft consensus.** The posterior probabilities \(u_{n}\) and the specificities \(q_{k}\) are mainly impacted by the increase of the background size, while the sensitivities are only marginally influenced. The nature of the soft consensus depends on the \(\alpha\) exponent of the prior expression \(w_{n}=\frac{A}{N^{\alpha}}\), and in particular we have:
\[\text{logit}\left(u_{n}\right)=\Big(\sum_{k=1}^{K}S_{n}^{k}-\alpha\Big)\log N+\log A+\sum_{k,S_{n}^{k}=1}\ln\left(\frac{p_{k}}{\text{sFP}_{k}}\right)+\sum_{k,S_{n}^{k}=0}\ln\left(1-p_{k}\right)+O(N^{-2})\]
A direct consequence of this formula is that the background size impacts the obtained consensus, as can be seen in Fig. 1a, where the consensus obtained when applying STAPLE on a bounding box tightly surrounding the organ (referred to as _Focused STAPLE_ in Fig. 1a) appears smaller and with more non-binary values than the one computed on the whole image (referred to as _Full size STAPLE_ in Fig. 1a). Comparisons between STAPLE consensuses computed on both volumes are available in Tab. 10 in the appendices. Moreover, as seen in Fig. 1b, the soft consensus obtained with a large background size depends on the value of \(\alpha\), with larger \(\alpha\) corresponding to smaller consensuses. The detailed proof is presented in Appendix A.
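This dependence is easy to reproduce numerically, e.g. by reusing the `mml_staple` sketch above (with its non-informative prior \(w=0.5\), i.e. \(\alpha=0\)): padding the same masks with extra background voxels changes the soft consensus. The random masks and the padding amount below are arbitrary choices of the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
core = (rng.random((5, 200)) < 0.5).astype(float)  # 5 raters, 200 voxels
padded = np.concatenate([core, np.zeros((5, 20000))], axis=1)

u_small = mml_staple(core)          # consensus on the tight region
u_large = mml_staple(padded)[:200]  # same voxels, large background
print(np.abs(u_small - u_large).max())  # substantially non-zero
```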
**Removing the influence of the background size.** We explore under which conditions the STAPLE model leads to consensus estimations that are independent of the background size. A first simplification of the model is to assume that all raters perform equally: \(p_{k}=p\), \(q_{k}=q\). In this case, the global specificity maximizing the likelihood is \(q=\frac{\sum_{k=1}^{K}\text{TN}_{k}}{\sum_{k=1}^{K}\text{TN}_{k}+\text{FP}_{k}}\), which is still dependent on the size of the background through \(\text{TN}_{k}\).
A second simplification is to consider that each rater's sensitivity and specificity are equal, i.e. \(p_{k}=q_{k}=\gamma_{k}\). This implies that the rater performance is independent of whether the consensus voxel is in the background or the foreground. In this case, the parameter \(\gamma_{k}\) can be interpreted as an accuracy parameter, and its optimization leads to \(\gamma_{k}=\frac{\text{TP}_{k}+\text{TN}_{k}}{N}\). It is easy to see that in that case, \(\frac{s_{n}^{+}}{s_{n}^{-}}=\left(\frac{\gamma_{k}}{1-\gamma_{k}}\right)^{S_{n}^{+}-S_{n}^{-}}\), and therefore the maximum likelihood is equivalent to majority voting when \(\gamma_{k}>\frac{1}{2}\), which is independent of the background size. With this simplification, and from Eq. 5, the soft consensus obtained by maximizing the marginal likelihood with a non-informative prior \(w_{n}=\frac{1}{2}\) is such that \(\text{logit}(u_{n})=(S_{n}^{+}-S_{n}^{-})\,\text{logit}(\gamma_{k})\). The value of \(\gamma_{k}\) depends on the background size, but whether a voxel is more likely to be a foreground voxel (\(u_{n}>\frac{1}{2}\)) does not depend on the background size.
#### 2.2.3 Limitations
The STAPLE algorithm addresses the problem of taking into account the performance of raters when building a consensus segmentation. However, this approach has the drawback
of being dependent on the choice of the prior and on the background size. This dependence of the STAPLE consensus can be explained by the fact that it is a generative model which has to explain the foreground and the background voxels separately. When assuming that the rater performance is the same in both background and foreground, the model becomes equivalent to majority voting. This dependence is a subject of concern, as STAPLE is often used as a standard in label fusion works. To improve the robustness of comparisons with novel methods and decrease the impact of this hidden hyperparameter, researchers may compute the STAPLE consensus using several bounding boxes, or at least indicate the size of the bounding box on which STAPLE was applied.
The use of local sliding windows in STAPLE, as in Commowick et al. (2012), can somewhat mitigate the background-size effect, but smaller structures in images can still be impacted, and the window size remains a hyperparameter that is difficult to set.
## 3 MACCHIatO framework
### Main approach description
In the previous section, we have seen that only the majority voting and mask averaging algorithms lead to a consensus that is independent of the background size. Yet, those algorithms are purely local at the voxel level and can lead to irregular boundaries or isolated voxels.
In this section we introduce a new framework to compute soft and hard consensuses that are i) invariant from the background size and ii) dependent on the global morphology of each binary object. This approach is coined MACCHIatO for Morphologically-Aware Consensus Computation via Heuristics-based IterATive Optimization.
Figure 1: Impact of STAPLE hyperparameters and background size on the soft consensus

**Distance-based approach.** We formulate the estimation of a hard consensus \(T\) as the minimization of the sum of the squared distances between the consensus \(T\) and each observed binary mask \(S^{k}\):
\[T=\arg\min_{M\in\{0,1\}^{N}}\sum_{k=1}^{K}d(M,S^{k})^{2} \tag{6}\]
where \(d(T,S^{k})\) is a distance as defined in Deza and Deza (2016) between the two masks \(S^{k}\) and \(T\). This is equivalent to estimating the consensus as a maximum likelihood where the likelihood can be written as \(p(S^{k}|T)\propto\exp(-\lambda d(T,S^{k})^{2})\). We can note that the squared sum \(\sum_{k=1}^{K}d(M,S^{k})^{2}\) also corresponds to the definition of a Frechet variance. Based on this interpretation, \(T\) appears as the Frechet mean of \(\mathcal{S}\) i.e. its centroid in the metric space defined by \(d\).
**Link with baseline models.** In Section 2.2.2, we have seen that when the sensitivity and specificity are equal, the maximization of the STAPLE model leads to the majority voting algorithm. In this case, we can write the likelihood \(p(S^{k}|T)=\gamma_{k}^{\text{TP}_{k}+\text{TN}_{k}}(1-\gamma_{k})^{\text{FP}_{k}+\text{FN}_{k}}\) (where \(\gamma_{k}\) is the accuracy parameter), which is a product of \(N\) independent Bernoulli distributions. Since the Bernoulli distribution is a member of the exponential family (Dai et al., 2013), it can also be written as \(p(S^{k}|T)\propto\exp(-\lambda_{k}(\text{FP}_{k}+\text{FN}_{k}))\) where \(\lambda_{k}=\text{logit}(\gamma_{k})\). The number of false positives plus false negatives \(\text{FP}_{k}+\text{FN}_{k}\) is the number of elements of the symmetric difference between the two sets \(S^{k}\) and \(T\): \(\text{FP}_{k}+\text{FN}_{k}=|T\Delta S^{k}|=|(T\cup S^{k})\setminus(T\cap S^{k})|\), also called the _Hamming distance_ in information theory. Thus, by choosing \(d(T,S^{k})=\sqrt{|T\Delta S^{k}|}\), the maximum likelihood leads to the majority voting consensus (as detailed in Appendix B).
**Soft consensus framework.** In the baseline models, soft consensuses were obtained as posterior probabilities of the consensus given the observed binary masks. However, with the likelihoods \(p(S^{k}|\tilde{T})\propto\exp(-\lambda d(\tilde{T},S^{k})^{2})\), the computation of the posterior \(p(\tilde{T}|\mathcal{S})\) may not be tractable due to the difficulty of computing the normalization constant. Instead, we propose to approximate \(p(\tilde{T}_{n}|\mathcal{S})\) by the quantity \(\tilde{U}_{n}\in[0,1]\) such that \(\tilde{U}\in[0,1]^{N}\) minimizes the quantity:
\[\tilde{U}=\arg\min_{\tilde{X}\in[0,1]^{N}}\sum_{k=1}^{K}d^{s}(\tilde{X},S^{k}) ^{2} \tag{7}\]
where \(d^{s}(\tilde{X},S^{k})\) is a distance between the probabilistic array \(\tilde{X}\) and the binary mask \(S^{k}\). More precisely, the distances \(d^{s}(\tilde{X},S^{k})\) considered are _soft surrogates_ of the distance between binary sets \(d(\tilde{X},S^{k})\), such that \(d^{s}(\tilde{X},S^{k})^{2}=d(\tilde{X},S^{k})^{2}\) when \(\tilde{X}\in\{0,1\}^{N}\). For instance, the distance \(d^{s}(\tilde{X},S^{k})=\|\tilde{X}-S^{k}\|\) is a soft surrogate of the (square root of the) Hamming distance, since \(|\tilde{X}\Delta S^{k}|=\|\tilde{X}-S^{k}\|^{2}\) on binary sets. Besides, it is clear that the mask averaging (MA) method is the soft consensus minimizing the squared sum \(\sum_{k=1}^{K}\|\tilde{U}-S^{k}\|^{2}\).
**Optimization approach.** The estimation of the soft and hard consensus is independent of the background size if the distance \(d(T,S^{k})\) is invariant to the number of true negatives. Besides, unlike in the MV and MA algorithms, the optimization cannot be performed at the voxel level when the distance cannot be split voxelwise. Instead of optimizing over the whole foreground object, we choose to consider each connected component separately in order to obtain more coherent results. Finally, we further split the optimization into subcrowns, with various heuristics to speed up the computation.
### Distances between binary masks
We detail below the distances between binary sets that we consider and their associated soft surrogates. We mainly focus on distances based on two widely used measures of overlap between binary segmentations: the Jaccard and Dice coefficients.
**Jaccard distance.** The Jaccard coefficient (aka IoU) between binary masks \(A\) and \(B\in\{0,1\}^{N}\) is defined as \(\text{Jac}(A,B)=\frac{|A\cap B|}{|A\cup B|}\). In Kosub (2019), it is shown that its complement to 1, \(\text{dist}_{J}(A,B)=1-\text{Jac}(A,B)=\frac{|A\Delta B|}{|A\cup B|}\), is a metric between binary sets satisfying the triangular inequality. Several formulations of soft surrogates exist that extend the Jaccard distance. We focus specifically on two of them: the Soergel metric (Spath, 1981; Deza and Deza, 2016), \(d_{\text{Sg}}(x,y)=\frac{\sum_{i}\left(\max(x_{i},y_{i})-\min(x_{i},y_{i})\right)}{\sum_{i}\max(x_{i},y_{i})}\), which satisfies the triangular inequality but is not differentiable, and the widely-used Tanimoto distance (Willett et al., 1998; Deza and Deza, 2016; Leach and Gillet, 2007), \(d_{\text{Tan}}(x,y)=1-\frac{\sum_{i}x_{i}y_{i}}{\sum_{i}\left(x_{i}^{2}+y_{i}^{2}-x_{i}y_{i}\right)}=\frac{\|x-y\|^{2}}{\|x-y\|^{2}+\langle x,y\rangle}\).
**Dice coefficient.** It is defined as \(\text{DSC}(A,B)=\frac{2|A\cap B|}{|A|+|B|}\) and is widely used in image segmentation as a performance index. Indeed, the Dice index is equal to the F1-score and corresponds to the harmonic mean of the sensitivity and the positive predictive value. It is closely related to the Jaccard coefficient, as \(\text{DSC}(A,B)=\frac{2\text{Jac}(A,B)}{1+\text{Jac}(A,B)}\). The Dice distance \(\text{dist}_{D}(A,B)=1-\text{DSC}(A,B)\) is a near-metric, i.e. it respects a relaxed form of the triangular inequality (Gragera and Suppakitpaisarn, 2018). Soft surrogates of the Dice distance have been developed, especially as loss functions in deep learning. We consider in the remainder two main extensions of the Dice distance (Ma et al., 2021) to non-binary sets, defined as \(d_{p\text{SD}}(x,y)=1-\frac{2\sum_{i}x_{i}y_{i}}{\sum_{i}x_{i}^{p}+\sum_{i}y_{i}^{p}}\) where \(p\in\{1,2\}\).
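These surrogates translate directly into code. The sketch below (illustrative, not the released implementation) implements the Soergel, Tanimoto and soft Dice distances and checks that, on binary inputs, they reduce to the hard Jaccard and Dice distances; the `eps` guard implementing the null-distance convention between two empty sets is an assumption of the sketch.

```python
import numpy as np

def soergel(x, y, eps=1e-12):
    """Soergel metric: a non-differentiable soft surrogate of the Jaccard distance."""
    num = (np.maximum(x, y) - np.minimum(x, y)).sum()
    return num / max(np.maximum(x, y).sum(), eps)

def tanimoto(x, y, eps=1e-12):
    """Tanimoto distance: differentiable soft surrogate of the Jaccard distance."""
    num = ((x - y) ** 2).sum()
    return num / max(num + (x * y).sum(), eps)

def soft_dice(x, y, p=2, eps=1e-12):
    """d_pSD of the text: soft Dice distance with exponent p in {1, 2}."""
    return 1.0 - 2.0 * (x * y).sum() / max((x ** p).sum() + (y ** p).sum(), eps)

# On binary inputs, they reduce to the hard Jaccard / Dice distances.
a = np.array([1, 1, 0, 0], float)
b = np.array([1, 0, 1, 0], float)
print(soergel(a, b), tanimoto(a, b), soft_dice(a, b))  # 0.667 0.667 0.5
```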
By construction, all those distances only depend on segmented voxels and are independent of the background size. Note that both distances are extended so as to give a null distance between two empty sets. When using those distances in the Frechet variance computation, the inclusion of voxels segmented by a large number of raters (resp. a few raters) decreases (resp. increases) its value. The different formulations of the MACCHIatO framework are summarized in Table 1.
| Hard consensus method | Soft consensus method | Distance | Soft surrogate | Computation level |
| --- | --- | --- | --- | --- |
| Majority Voting | Mask Averaging | \(|A\Delta B|\) | \(\|x-y\|\) | Voxel level |
| ML STAPLE | MML STAPLE | NA | NA | Image level |
| MACCHIatO-J | MACCHIatO-TJ | Jaccard \(d_{J}\) | Tanimoto \(d_{\text{Tan}}\) | Connected-component level |
| MACCHIatO-J | MACCHIatO-SJ | Jaccard \(d_{J}\) | Soergel \(d_{\text{Sg}}\) | Connected-component level |
| MACCHIatO-D | MACCHIatO-1SD | Dice \(d_{D}\) | \(d_{1\text{SD}}\) | Connected-component level |
| MACCHIatO-D | MACCHIatO-2SD | Dice \(d_{D}\) | \(d_{2\text{SD}}\) | Connected-component level |

Table 1: Distances between binary sets and their soft surrogates considered to compute hard and soft consensuses with the MACCHIatO framework
### Heuristic computation based on morphological distance and crowns
**Domain of optimization.** Since the distances listed in the previous section are independent of the number of true negatives, their computation can be restricted to the union of all rater masks: \(\mathcal{E}_{\mathcal{S}}=\{n|\sum_{k=1}^{K}S_{n}^{k}>0\}\). Furthermore, we consider that to decide whether a voxel belongs to the consensus, one should only take into account the regional context associated with the connected component surrounding that voxel, since far-away components may not be relevant. We therefore choose to minimize the Frechet variances of Eqs. 6 and 7 separately for each connected component \(St\) of the mask union \(\mathcal{E}_{\mathcal{S}}\). In practice, we minimize the _Local Mean Squared Distance_ between \(\mathcal{S}\) and the consensus: \(\text{LMSD}_{d}(\mathcal{S},M)=\sum_{St\subset\mathcal{E}_{\mathcal{S}}}\frac{1}{K}\sum_{k}d(S_{\|St}^{k},M_{\|St})^{2}\), where \(S_{\|St}^{k}\) (resp. \(M_{\|St}\)) is the restriction of the binary mask \(S^{k}\) (resp. \(M\)) to the connected component \(St\). A benefit of this choice is that the determination of the Frechet mean behaves similarly to a structure-wise MV, as the Frechet mean of components segmented by less than half of the raters is the null set. However, contrary to MV, raters who do not segment a component kept by the majority of raters do not bias its consensus segmentation, as their contribution to the associated LMSD, \(\frac{1}{K}\delta_{\emptyset}(M_{\|St})\), does not depend on the non-empty candidate \(M_{\|St}\) and thus does not impact the Frechet mean. To lighten notations, we drop the \(St\) index in the remainder, which is equivalent to considering that \(\mathcal{E}_{\mathcal{S}}\) has a single connected component.
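A direct, unoptimized way to evaluate this criterion is sketched below, using `scipy.ndimage.label` to enumerate the connected components of the mask union; the `jaccard_dist` helper and the handling of empty restrictions are illustrative choices of this sketch.

```python
import numpy as np
from scipy import ndimage

def jaccard_dist(a, b):
    """Hard Jaccard distance, with the convention d(empty, empty) = 0."""
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union

def lmsd(masks, M, dist):
    """Local Mean Squared Distance LMSD_d(S, M): for each connected
    component of the mask union, the mean over raters of the squared
    distance between the restricted masks."""
    comp, n_comp = ndimage.label(masks.any(axis=0))
    total = 0.0
    for c in range(1, n_comp + 1):
        idx = comp == c
        total += np.mean([dist(S[idx], M[idx]) ** 2 for S in masks])
    return total
```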
**Subcrown-based optimization.** The minimization of the Frechet variance is a combinatorial problem with a complexity of \(2^{|\mathcal{E}_{\mathcal{S}}|}\) for the naive approach. Furthermore, it may admit several global minima when the number of raters \(K\) is small. For those reasons, we propose instead to seek a local minimum of the Frechet variance by introducing some heuristics into the optimization. With this approach, the local minimum has a lower computational cost and, by construction, is maximally connected to avoid isolated voxels.
More precisely, instead of a computationally expensive per-voxel minimization of the Frechet variance, we decompose the set \(\mathcal{E}_{\mathcal{S}}\) into a set of _subcrowns_ that take into account the global morphological relationships between the rater masks. The formal definition of subcrowns requires the specification of distance maps \(Dm_{\mathcal{N}}(S^{k})\) to each binary mask \(S^{k}\) on \(\mathcal{E}_{\mathcal{S}}\), according to a chosen neighborhood \(\mathcal{N}\). The neighborhood can be either the 4- or 8-connectivity (resp. 6- or 26-connectivity) in 2D (resp. 3D). The distance \(Dm_{\mathcal{N}}(S^{k})\) is set to 0 for all voxels inside the object \(S^{k}\).
The global morphological distance map is the sum of those distance maps
\[D_{\mathcal{S}}^{\mathcal{N}}=\sum_{S^{k}\in\mathcal{S}}Dm_{\mathcal{N}}(S^{k})\]
for all raters on \(\mathcal{E}_{\mathcal{S}}\). A _crown_ \(C_{td}^{\mathcal{N}}\) is defined as the set of voxels having the same global morphological distance \(td\). Those crowns realize a partition of \(\mathcal{E}_{\mathcal{S}}\) (\(\mathcal{E}_{\mathcal{S}}=\coprod_{td}C_{td}^{\mathcal{N}}\)), and the 0-crown corresponds by construction to the intersection of all masks in \(\mathcal{S}\).
We further split each crown into a set of _subcrowns_ by grouping the voxels that have been segmented by the same set of raters. In other words, a subcrown corresponds to a set of voxels located at the same morphological distance from the intersection of all rater masks
and which have been segmented by the same group of raters, as seen in Fig. 2a. Formally, a subcrown is noted \((C_{td}^{\mathcal{N}})^{g}\) where the superscript \(g\) corresponds to a group of raters and subcrowns realize a partition of a crown :
\[C_{td}^{\mathcal{N}}=\coprod_{g\in\mathcal{P}(\llbracket 1,K\rrbracket)}(C_{td}^{ \mathcal{N}})^{g},\text{with }(C_{td}^{\mathcal{N}})^{g}=\{n|n\in C_{td}^{\mathcal{N}}\ \&\ \forall k\ S_{n}^{k}=(k\in g)\} \tag{8}\]
where \(\mathcal{P}(\llbracket 1,K\rrbracket)\) is the power set (i.e. the set of all subsets) of the first K integers.
The process for the construction of subcrowns is illustrated in Fig. 2a.
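The distance maps and subcrowns can be sketched with SciPy's chamfer distance transform, whose 'chessboard' metric matches the 8/26-connectivity case ('taxicab' would match 4/6-connectivity). This is an illustrative reconstruction, not the released code, and it assumes no rater mask is entirely empty.

```python
import numpy as np
from scipy import ndimage

def subcrown_labels(masks):
    """Morphological distance map D_S^N and subcrown codes (cf. Fig. 2a).

    masks: (K, ...) binary array.  Returns (td_map, g_map): the summed
    chamfer distance to all rater masks (-1 outside the union E_S) and an
    integer whose bit k is set iff rater k segmented the voxel.
    """
    masks = masks.astype(bool)
    union = masks.any(axis=0)
    # Dm_N(S^k): 0 inside S^k, chamfer distance to S^k outside
    # ('chessboard' = 8/26-connectivity); assumes no mask is empty.
    D = sum(ndimage.distance_transform_cdt(~m, metric='chessboard')
            for m in masks)
    g = np.zeros(union.shape, dtype=np.int64)
    for k, m in enumerate(masks):
        g |= m.astype(np.int64) << k
    # a subcrown = union voxels sharing the same (distance, rater-group) pair
    return np.where(union, D, -1), np.where(union, g, 0)
```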
### Hard consensus algorithm
The optimization proceeds in a greedy fashion by iteratively removing or adding subcrowns to the current estimate of the consensus until the \(\text{LMSD}_{d}\) criterion stops decreasing. In Alg. 1, we use two concurrent strategies: either we start from the union of all masks and then remove subcrowns of decreasing distances (a strategy illustrated in Fig. 2b), or we start with the crown at the minimum distance and then add subcrowns of increasing distances. Both growing and shrinking strategies are applied, as the greedy process can lead to different results, and we keep the consensus associated with the minimum \(\text{LMSD}_{d}\) among both strategies and the null set. The latter is also tested in a last stage, since the distance of a set \(M\) to the null set is \(\delta_{\emptyset}(M)\), for both Dice and Jaccard distances. This discontinuity is not compatible with the iterative process and calls for an independent test.
Examples of consensuses obtained with this strategy can be seen in Fig. 3. The resulting consensus leads to a consistent grouping, since all voxels belonging to the same connected component, having the same morphological distance, and segmented by the same group of raters will end up in the same class. Alternative optimization approaches could have been based on adding or removing single voxels (smaller than subcrowns) or whole crowns (larger than subcrowns). While voxel-based minimization would be very time-consuming, especially in 3D, crown-based optimization would conversely lead to suboptimal results, as crowns can be fairly large. The Morphologically-Aware Consensus Computation via Heuristics-based IterATive Optimization (MACCHIatO) algorithm is thus designed as a good compromise between computational efficiency and consistency, with a number of iterations growing exponentially with \(K\) but remaining far below the naive \(2^{|\mathcal{E}_{\mathcal{S}}|}\) complexity.
### Soft consensus algorithm
The estimation of a probabilistic or soft consensus is based on the minimization of the sum of square surrogate distances as displayed in Eq. 7 and the optimization is split for each connected component of the mask union \(\mathcal{E}_{\mathcal{S}}\).
The _soft MACCHIatO_ algorithm extends the previous approach to minimize the criterion \(\text{LMSD}_{d^{s}}(\tilde{T},\mathcal{S})\). A brute-force approach would lead to the optimization of a sum of \(K\) rational polynomials over a set of \(|\mathcal{E}_{\mathcal{S}}|\) scalars. Instead, we proceed in a greedy manner, separately on each connected component of \(\mathcal{E}_{\mathcal{S}}\), by starting with the mean consensus and optimizing successively subcrowns of increasing distances.
Figure 2: (a) Construction of the heuristics. From left to right: segmentations by 3 raters (red, green, and blue); computation of the associated distance maps \(Dm_{\mathcal{N}}(S^{k})\); merging into the morphological distance map \(D_{\mathcal{S}}^{\mathcal{N}}\) restricted to the voxels segmented at least once; subdivision into subcrowns (1 color = 1 subcrown) based on morphological distance and raters. (b) An iteration of the shrinking approach, with the selection of subcrowns and the evaluation of their contribution to the LMSD\({}_{d}\). (c) Application of mask averaging and soft MACCHIatO on a toy example with three segmentations (red, green, and blue contours). After thresholding, averaging gives an empty segmentation whereas the soft MACCHIatO method is more inclusive and outputs one connected component.
**Input:** \(\mathcal{S}\) segmentation maps, \(\mathcal{N}\) neighborhood, \(d\) distance
**Result:** \(T\)
**Initialization:** computation of \(D_{\mathcal{S}}^{\mathcal{N}}\); \(td_{u}=\max(D_{\mathcal{S}}^{\mathcal{N}})\), \(td_{i}=\min(D_{\mathcal{S}}^{\mathcal{N}})\); \(T^{u}=\bigcup_{k}S^{k}\); \(T^{i}=\{n|(D_{\mathcal{S}}^{\mathcal{N}})_{n}=td_{i}\}\)
**while** \(\mathrm{LMSD}_{d}(T^{u},\mathcal{S})\) _decreases_ **do** // Shrinking strategy
 **for** \(g\in\mathcal{P}(\llbracket 1,K\rrbracket)\) **do**
  **if** \(\mathrm{LMSD}_{d}(T^{u}\setminus(C_{td_{u}}^{\mathcal{N}})^{g},\mathcal{S})<\mathrm{LMSD}_{d}(T^{u},\mathcal{S})\) **then** \(T^{u}\leftarrow T^{u}\setminus(C_{td_{u}}^{\mathcal{N}})^{g}\)
 **end**
 \(td_{u}\leftarrow\max(\{x\in D_{\mathcal{S}}^{\mathcal{N}}|x<td_{u}\})\)
**end**
**while** \(\mathrm{LMSD}_{d}(T^{i},\mathcal{S})\) _decreases_ **do** // Growing strategy
 **for** \(g\in\mathcal{P}(\llbracket 1,K\rrbracket)\) **do**
  **if** \(\mathrm{LMSD}_{d}(T^{i}\cup(C_{td_{i}}^{\mathcal{N}})^{g},\mathcal{S})<\mathrm{LMSD}_{d}(T^{i},\mathcal{S})\) **then** \(T^{i}\leftarrow T^{i}\cup(C_{td_{i}}^{\mathcal{N}})^{g}\)
 **end**
 \(td_{i}\leftarrow\min(\{x\in D_{\mathcal{S}}^{\mathcal{N}}|x>td_{i}\})\)
**end**
\(T\leftarrow\arg\min_{M\in\{T^{u},T^{i},\emptyset\}}\mathrm{LMSD}_{d}(M,\mathcal{S})\)

**Algorithm 1:** Hard consensus algorithm.
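For concreteness, a single-sweep simplification of the shrinking strategy of Alg. 1 can be sketched by reusing the `lmsd` and `subcrown_labels` helpers above; unlike Alg. 1, this illustration visits each distance level only once and omits the growing strategy and the final comparison with the empty set.

```python
import numpy as np

def hard_macchiato_shrink(masks, dist):
    """Single-sweep shrinking strategy of Alg. 1 (illustrative simplification).

    Starts from the union of all rater masks and removes, from the outermost
    morphological distance inwards, every subcrown whose removal decreases
    the LMSD criterion.  masks: (K, ...) binary array.
    """
    td_map, g_map = subcrown_labels(masks)   # sketch above
    T = masks.astype(bool).any(axis=0)
    best = lmsd(masks, T, dist)              # sketch above
    for td in sorted(np.unique(td_map[td_map >= 0]), reverse=True):
        for g in np.unique(g_map[td_map == td]):
            candidate = T & ~((td_map == td) & (g_map == g))
            score = lmsd(masks, candidate, dist)
            if score < best:
                T, best = candidate, score
    return T
```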
Figure 3: Comparison of several hard consensus methods on a 2D slice with 5 raters, using MV, ML STAPLE and both hard MACCHIatO methods. On the left is indicated the number of raters who segmented each pixel.
All subcrowns of increasing distances are iteratively considered until \(\text{LMSD}_{d^{s}}(\tilde{T},\mathcal{S})\) stops decreasing. For each subcrown \(r=(C_{td}^{\mathcal{N}})^{g}\), we seek the scalar value \(p_{r}\in[0,1]\) that minimizes
\[p_{r}=\arg\min_{x\in[0,1]}\text{LMSD}_{d^{s}}(\tilde{T}_{(td,g),x},\mathcal{S}),\,\text{with}\,\,(\tilde{T}_{(td,g),x})_{n}=\left\{\begin{array}{ll}x&\text{if }n\in r\\ \tilde{T}_{n}&\text{otherwise}\end{array}\right..\]
The algorithm is described in Alg. 2; it iteratively optimizes each subcrown from the inside to the outside of the \(\mathcal{E}_{\mathcal{S}}\) set. We have observed no gain in combining a growing and a shrinking exploration of the subcrowns, contrary to Alg. 1. For the optimization of the scalar problem above, we use the SLSQP algorithm (Kraft, 1988) implemented in Scipy v1.7.3 (Virtanen et al., 2020). Resulting consensuses can be seen in Figs. 4, 6 and 7.
```
Input: S segmentation maps, N neighborhood, d^s distance
Result: T~
Initialization: computation of D_S^N ; T~ = (1/K) * sum_k S^k
while LMSD_{d^s}(T~, S) decreases do
    for td in D_S^N in increasing order do
        for g in P([1, K]) do
            p = argmin_{x in [0, 1]} LMSD_{d^s}(T~_{(td,g),x}, S)
                with T~_{(td,g),x} = x on (C_td^N)^g and T~ elsewhere
            T~ <- T~_{(td,g),p}
        end for
    end for
end while
```
**Algorithm 2:** Soft consensus algorithm
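The inner scalar minimization of Alg. 2 can be sketched as follows; the `dist2` callable (the squared surrogate distance restricted to the current connected component) and the function name are assumptions of this illustration, which mirrors the paper's SLSQP choice.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_subcrown(masks, T, idx, dist2):
    """Inner step of Alg. 2: fit a single scalar value on subcrown `idx`.

    idx: boolean array selecting the subcrown; dist2(S, T) should return
    the squared soft surrogate distance d^s(S, T)^2 on the current
    connected component (an assumption of this sketch).
    """
    def objective(x):
        T_try = T.copy()
        T_try[idx] = x[0]
        return sum(dist2(S, T_try) for S in masks)

    res = minimize(objective, x0=[float(T[idx].mean())],
                   bounds=[(0.0, 1.0)], method='SLSQP')
    T = T.copy()
    T[idx] = res.x[0]
    return T
```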
Figure 4: Comparison of several soft consensus methods on a 2D case with 5 raters using MA, STAPLE and MACCHIatO with different distances.
## 4 Results
### Datasets and Implementation Details
We tested our method on 3 datasets:
* A private database of transition zones of prostate T2w MR images, composed of 40 cases segmented by 5 raters.
* The publicly available MICCAI MSSEG 2016 dataset of Multiple Sclerosis lesion segmentations (Commowick et al., 2018) from brain MR images, with 15 subjects segmented by 7 raters.
* The publicly available SCGM dataset (Prados et al., 2017), with 40 spinal cords and their grey matter segmented by 4 raters. We used the whole spinal cord segmentation (SCGM-SC) and the grey matter segmentation (SCGM-GM).
Images from the private dataset (resp. the MSSEG dataset, the SCGM dataset) have a size of [80-288]\(\times\)[320-640]\(\times\)[320-640] voxels (resp. [144-261]\(\times\)[224-512]\(\times\)[224-512] voxels and [3-28]\(\times\)[100-655]\(\times\)[100-776] voxels). From the private dataset, it was possible to extract bounding boxes of size [58-227]\(\times\)[53-184]\(\times\)[62-180] voxels. Similarly, we were able to extract from SCGM-SC bounding boxes of size [3-20]\(\times\)[15-90]\(\times\)[24-131] voxels. From the 3D private dataset, we created a 2D subset by extracting, for each patient, a single slice located at the base of the prostate, since this region is subject to high inter-rater variability (Becker et al., 2019; Montagne et al., 2021).
Examples for each dataset of segmentations by the different raters of the same case are available in Appendix C (Fig. 8).
**Implementation details.** In the remainder, STAPLE results were produced using the algorithm implemented in SimpleITK v2.0.2 (Lowekamp et al., 2013). All MACCHIatO methods used the 8-connectivity (2D) or 26-connectivity (3D) neighborhood. The MACCHIatO code is available at [https://gitlab.inria.fr/dhamzaou/jaccardmap](https://gitlab.inria.fr/dhamzaou/jaccardmap)
### Heuristics relevance
In Section 3.3, we presented the subcrown-based heuristics that drives the optimization of the local mean squared distance criterion. Subcrowns group voxels based on three properties: their morphological distance, the connected component they belong to, and the raters who segmented them. To check whether this heuristics is appropriate, we compared it with two alternatives:
* The first alternative iteratively minimizes the LMSD\({}_{d}\) at the crown level (as defined in subsection 3.3 and represented in Fig. 2a), without any rater-related property.
* The other one iteratively processes each voxel separately.
We compared the 3 heuristics by computing a soft consensus (with the Tanimoto distance) on the toy example of Fig. 5, and we display their optimized value of LMSD\({}_{d^{s}}\) and their computation time in Table 2. Furthermore, since the size of \(\mathcal{E}_{\mathcal{S}}\) is small, we could estimate the true minimizer of LMSD\({}_{d^{s}}\), which involves the optimization of \(|\mathcal{E}_{\mathcal{S}}|\) parameters.
Unlike the crown-based heuristics, the subcrown-based and voxel-based heuristics appear to compute a consensus close to the real LMSD\({}_{d^{s}}\) minimizer. In addition, the subcrown method is significantly faster than the voxel-based approach.
We also compared the three heuristics on two datasets in Table 3. The crown-based heuristics is the fastest to compute but yields the highest LMSD\({}_{d^{s}}\) criterion, whereas the voxel-based method requires far more computation time than the subcrown-based heuristics, and even several hours for some Prostate 3D cases. Surprisingly, on average, the subcrown-based heuristics reaches a lower LMSD\({}_{d^{s}}\) criterion than the voxel-based method, although the difference can hardly be seen in the produced consensus. On those datasets, we were not able to estimate the true minimizer of LMSD\({}_{d^{s}}\), due to the high memory resources those computations would require.
### Comparison with baseline methods
**Comparison of inter-rater variabilities.** A first set of experiments consists of measuring the impact of the choice of the consensus method when computing a measure of inter-rater variability. More precisely, we compute the average precision, recall, and F1-score between the hard consensus (considered as ground truth) and each rater segmentation. Those metrics have been computed on the MSSEG dataset, where there are potentially large disagreements between raters. Table 4 reports those metrics averaged over all lesions of all images, a lesion corresponding to a connected component of the mask union \(\mathcal{E_{S}}\). The MV consensus has the highest recall and lowest precision, which can be interpreted as the MV consensus being smaller than those of the other methods. Conversely, the STAPLE consensus has the largest precision and lowest recall, thus corresponding to a larger consensus. In terms of F1-score, MV and the MACCHIatO methods obtain similar metrics, slightly higher for MACCHIatO-D (0.449).

In addition, we also compared the methods on the number of connected components. To do so, we took each consensus as ground truth and computed the average precision, recall, and F1-score of each rater for lesion detection (considering a non-null intersection with the rater's segmentation as sufficient for detection). We performed this experiment on the MSSEG dataset, as it is our only dataset with several connected components per case. Table 5 reports those metrics averaged over all patients. The MV consensus has the highest detection recall and lowest detection precision, which can be interpreted as the MV consensus discarding some lesions kept by the other methods. Conversely, the STAPLE consensus has the largest precision and lowest recall, thus corresponding to the presence of lesions rarely segmented by the raters. In terms of F1-score, MV and the MACCHIatO methods are close to each other, with the highest value for MACCHIatO-D (0.894).
| Measure | ML STAPLE | MV | MACCHIatO-J | MACCHIatO-D |
| --- | --- | --- | --- | --- |
| Precision | 0.976 | 0.497 | 0.562 | 0.570 |
| Recall | 0.273 | 0.817 | 0.769 | 0.758 |
| F1-score | 0.297 | 0.437 | 0.448 | 0.449 |

Table 4: Averaged lesion-wise measures on the MSSEG dataset for all hard consensus methods

| Measure | ML STAPLE | MV | MACCHIatO-J | MACCHIatO-D |
| --- | --- | --- | --- | --- |
| Precision | 0.994 | 0.887 | 0.914 | 0.931 |
| Recall | 0.643 | 0.967 | 0.931 | 0.930 |
| F1-score | 0.746 | 0.892 | 0.888 | 0.894 |

Table 5: Measures of lesion detection on the MSSEG dataset for all hard consensus methods

**Comparison of consensus areas or volumes.** In Table 6, we compare the relative size of the hard consensuses on all datasets, taking the MV consensus as reference. On average, all methods lead to consensuses of larger size than MV. For the MACCHIatO methods, the difference with the MV consensus is modest on a massive organ (prostate) but significant for small lesions (\(>\)16%). The ML STAPLE method generates much larger consensuses than MV, especially when dealing with small lesions. Note that for the MSSEG dataset, ML STAPLE is computed on the whole image, thus with a large background size. Finally, the MACCHIatO-D and MACCHIatO-J methods lead to consensuses of similar size, without any clear ordering. Table 7 compares the soft areas or volumes of the soft consensuses (given by \(\sum_{n=1}^{N}\tilde{U}_{n}\)) generated by all methods, taking mask averaging as reference. Fig. 6 illustrates those soft consensuses on the MSSEG dataset. The variation of volumes is smaller for soft consensuses than for hard ones. In general, the MA method produces the smallest volumes, and STAPLE the largest ones. The methods using surrogate Dice or Jaccard distances give similar volumes, although Soergel and \(d_{1SD}\) diverge more on the MSSEG dataset. We also compare the sizes of the thresholded maps \(\tilde{U}_{n}>0.5\), which show similar trends to their soft counterparts.
For both hard and soft consensuses, the largest differences between the different methods are observed on the MSSEG dataset, followed by SCGM-GM.
| Dataset | Jaccard (size var.) | Dice (size var.) | ML STAPLE (size var.) | Jaccard (freq. \(>\) \(|\)MV\(|\)) | Dice (freq. \(>\) \(|\)MV\(|\)) | ML STAPLE (freq. \(>\) \(|\)MV\(|\)) |
| --- | --- | --- | --- | --- | --- | --- |
| Prostate 3D | +0.4% | +0.6% | +22% | 87.5% | 85% | 100% |
| MSSEG | +19% | +16% | +151% | 100% | 93% | 100% |
| SCGM-SC | +2.36% | +2.30% | +11% | 97.5% | 97.5% | 100% |
| SCGM-GM | +17% | +15% | +47% | 100% | 100% | 100% |

Table 6: Left: average size variation on 3D datasets for hard consensuses, with Majority Voting serving as the reference size. Right: percentage of cases where the computed consensus is strictly larger than the MV consensus. Red color indicates that for this setting, all cases are at least of equal size.

Figure 6: Two consecutive slices of an MSSEG sample on which we applied STAPLE (pink), Majority Voting (purple) and MACCHIatO-TJ (green contour) (a, c), and for each voxel of those slices the number of raters who segmented them (b, d). We can note that some zones (highlighted by brown squares) were selected by soft MACCHIatO-TJ although less than a majority of raters segmented them.

| Dataset (soft volume var.) | TJ | SJ | 2SD | 1SD | STAPLE |
| --- | --- | --- | --- | --- | --- |
| Prostate 3D | +0.4% | +0.1% | +0.1% | +0.7% | +10% |
| Thresholded | +0.1% | +0.07% | +0.09% | +0.03% | +11% |
| MSSEG | +4% | +16% | +2% | -3% | +43% |
| Thresholded | +8% | +37% | +4% | +11% | +68% |
| SCGM-SC | -0.4% | +0.5% | -0.5% | +0.3% | +4% |
| Thresholded | +1% | +1.3% | +0.9% | +0.9% | +5.7% |
| SCGM-GM | +1.2% | +4.4% | +1% | +2.9% | +8.6% |
| Thresholded | +13% | +16% | +11% | +14% | +19% |

| Dataset (freq. \(>\) \(|\)MA\(|\)) | TJ | SJ | 2SD | 1SD |
| --- | --- | --- | --- | --- |
| Prostate 3D | 80% | 65% | 60% | 80% |
| Thresholded | 22.5% | 12.5% | 7.5% | 7.5% |
| MSSEG | 87% | 100% | 73% | 33% |
| Thresholded | 93% | 100% | 80% | 93% |
| SCGM-SC | 10% | 52.5% | 5% | 37.5% |
| Thresholded | 35% | 67.5% | 25% | 27.5% |
| SCGM-GM | 92.5% | 95% | 92.5% | 82.5% |
| Thresholded | 100% | 100% | 100% | 100% |

Table 7: Top: average soft volume variation on 3D datasets for soft consensuses, with MA serving as the reference. Bottom: percentage of cases where the obtained consensus has a higher volume than the MA consensus. Red color indicates, for the thresholded case, that for this setting, all cases are at least of equal size.

We recorded in Table 8 the cumulative running time for STAPLE and the soft MACCHIatO methods to generate a consensus for all structures of our datasets. We did not consider MA, as it requires far less computation than the other methods. Among the considered algorithms, STAPLE is in general the fastest, being approximately 2-3 times faster than the MACCHIatO methods. The exception is the computation time on SCGM, which always involves small structure sizes and large image sizes.

| Dataset | TJ | SJ | 2SD | 1SD | STAPLE |
| --- | --- | --- | --- | --- | --- |
| Prostate 2D | 11.1s | 14.6s | 7.4s | 9.8s | 2.3s |
| Prostate 3D | 15m02s | 12m52s | 9m19s | 9m48s | 4m17s |
| MSSEG | 14m29s | 11m31s | 11m42s | 11m13s | 3m38s |
| SCGM-SC | 16.7s | 15.1s | 14s | 14.3s | 40.6s |
| SCGM-GM | 14.1s | 12.8s | 12.4s | 13.3s | 34.7s |

Table 8: Computation time of the continuous methods on all datasets
### Entropy of soft consensus
In Figs. 3 and 7, we show examples of soft consensuses on the prostate and grey matter datasets. It appears that the MACCHIatO-SJ and MACCHIatO-1SD methods often assign to subcrowns probability values very close to 0 or 1, despite being soft consensus methods. To confirm this behaviour, we compared on all 3D datasets the Shannon entropy \(-\sum_{n}\left(\tilde{U}_{n}\log\tilde{U}_{n}+(1-\tilde{U}_{n})\log(1-\tilde{U}_{n})\right)\) obtained by MA and by the four soft MACCHIatO methods. Table 9 confirms the strongly binary behavior of the MACCHIatO-SJ and MACCHIatO-1SD methods, while MACCHIatO-TJ and MACCHIatO-2SD have a spread similar to that of mask averaging. We thus classify the surrogate distances into two families: those associated with low-entropy consensuses (Soergel, \(d_{1SD}\)) and those generating high-entropy consensuses (Tanimoto, \(d_{2SD}\)).
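A minimal sketch of this entropy computation (names are illustrative):

```python
import numpy as np

def consensus_entropy(u, eps=1e-12):
    """Total Shannon entropy of a soft consensus map u with values in [0, 1],
    quantifying how 'binary' a method's output is (cf. Table 9)."""
    u = np.clip(u, eps, 1 - eps)
    return float(-(u * np.log(u) + (1 - u) * np.log(1 - u)).sum())
```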
| Dataset | MA | TJ | SJ | 2SD | 1SD |
| --- | --- | --- | --- | --- | --- |
| Prostate 3D | 63850 | 63658 | 6928 | 63799 | 19361 |
| MSSEG | 41295 | 37377 | 3805 | 37720 | 6107 |
| SCGM-SC | 2401 | 2467 | 259 | 2483 | 305 |
| SCGM-GM | 757 | 736 | 97 | 736 | 118 |

Table 9: Mean entropy on 3D datasets for the soft MACCHIatO methods. The MA entropy is given as a reference.

### Discussion

Experiments confirmed the dependence of the STAPLE method on the background size, as shown in Fig. 1a and Appendix A (Tab. 10). We also observed that hard consensuses obtained by MACCHIatO were generally slightly larger than those obtained by MV, particularly with MACCHIatO-J, which almost never produces consensuses smaller than MV's. This can be explained by the fact that the MACCHIatO consensus may include voxels segmented by less than half of the raters (as seen in Figs. 3 and 6). Finally, STAPLE consensuses are always larger than both the MACCHIatO and MV ones. Similar observations can be made on soft consensuses, but with a smaller difference between methods on soft volumes compared to hard volumes. The MACCHIatO methods by construction create consensuses, independent of the background size, that minimize the local average squared (soft) Dice or Jaccard distance between the consensus and the rater masks for each connected component. Furthermore, they produce masks that differ from those of the MV and STAPLE methods, in general with larger volumes than MV consensuses and smaller volumes than STAPLE ones. Finally, the MACCHIatO algorithms are in general more computationally expensive than the MV or STAPLE algorithms, but only to a reasonable extent (about 2 or 3 times more). In this article, we deliberately chose not to decide between soft and hard consensus. From our perspective, the choice of method should be based on the users' motivations and the downstream task. If the users solely aim to generate a binary mask for visualization purposes or inter-rater variability studies, they can opt for the hard consensus method. However, if they wish to incorporate uncertainty modeling and obtain more refined results, the soft consensus methods would be more suitable.
Similarly, the choice of distances should align with the intended objectives. If users prioritize a solid mathematical foundation for the method, they may opt for the Jaccard (hard) and Soergel (soft) metrics since, contrary to the other distances used, they respect the triangular inequality. Alternatively, the Tanimoto distance can be used for uncertainty assessment, as MACCHIatO-TJ outputs more non-binary values than MACCHIatO-SJ. Users also have the flexibility to use the more commonly employed Dice instead of Jaccard if they prefer. In summary, we have presented a range of methods within a consistent framework and elaborated on their characteristics; the specific configuration is ultimately left to the users, based on their individual requirements and preferences.
It can also be noted that the size variation observed on a dataset seems to be correlated with its inter-rater variability, the observed differences being larger on the MSSEG and SCGM-GM datasets than on the others.
In this article, we always considered 8-connectivity in 2D cases and 26-connectivity in 3D cases, as it performed better in preliminary experiments. However, the use of other neighborhoods (such as the 4-neighborhood in 2D, or the 6- and 18-neighborhoods in 3D) could be envisaged. Moreover, we did not consider the case of highly anisotropic images, as in the SCGM dataset, where an anisotropy ratio greater than 10 in the voxel size is encountered. For those cases, a 2.5D approach could be considered, consisting of applying our method to each slice independently. Comparisons between 2.5D and 3D neighborhoods on SCGM are available in Appendix D.

Figure 7: Impact of the choice of the distance on the computed soft MACCHIatO consensus on a SCGM-GM example
The proposed method has several limitations. First, we only considered a binary segmentation problem. Extension to multiclass segmentation could be foreseen using for instance the generalization method presented in Crum et al. (2006) and Sudre et al. (2017). Second, the considered distances between binary sets are based on region overlap measures (Dice, Jaccard indices) and discard distances between boundaries such as Hausdorff Distance (HD). Our experiments based on HD were not conclusive.
The reasons for this may be similar to the ones described in Karimi and Salcudean (2019): instability of methods minimizing a distance defined only by the largest error, sensitivity of HD to outliers, and difficulties in its optimization. To mitigate those effects, we made some tests using two of the Hausdorff alternatives defined in Karimi and Salcudean (2019), based respectively on distance maps and on erosion, to no avail.
Third, the proposed criterion LMSD\({}_{d}\) weights all raters equally for all connected components, unlike the STAPLE algorithm. It is possible to extend the MACCHIatO framework by attributing weights to raters based on their precision and recall (as those measures are independent of the background size), either at the local or the global level. Yet, this extension would require additional optimization steps, since the weights depend on the current estimate of the consensus.
Extending the MACCHIatO method to generate consensuses from \(K\) (soft) probability maps instead of binary segmentations is not straightforward. Indeed, while minimizing the Frechet variance of Eq. 7 remains well-posed, we can no longer restrict its computation to the set \(\mathcal{E}_{\mathcal{S}}\) nor define subcrowns as optimization blocks. An alternative method that we explored in our prior work (Audelan et al., 2020) is to map probabilities to real values through a link function (e.g. a logit function) and then use robust parametric models (t-distributions) to fuse the probability maps.
## 5 Conclusion
In this paper, we have shown that the STAPLE method is impacted by the image background size and the choice of the prior law. We have also introduced a new background-size-independent method to generate a consensus based on Jaccard- and Dice-based distances, thus extending the majority voting and mask averaging methods. More precisely, the generated masks minimize the average squared Jaccard or Dice distance between the consensus and each rater segmentation. The MACCHIatO algorithms are efficient and provide consistent masks by taking into account local morphological configurations between rater masks. The consensus masks are usually larger than those generated by the majority voting or mask averaging methods but smaller than those produced by STAPLE. Therefore, based on the experiments performed on three datasets, we believe that the hard and soft MACCHIatO algorithms are good alternatives to MV-based and STAPLE-based methods to define consensus segmentations.
## Acknowledgments
This work has been supported by the French government, through the 3IA Cote d'Azur Investments and UCA DS4H Investments in the Future project managed by the National Research Agency (ANR) with the reference numbers ANR-19-P3IA-0002 and ANR-17-EURE-0004, and by the Health Data Center of the AP-HP (Assistance Publique-Hopitaux de Paris). Private data was extracted from the Clinical Data Warehouse of the Greater Paris University Hospitals (Assistance Publique-Hopitaux de Paris). We thank Julien Castelneau, software engineer at Inria, for his help in the development of the MedInria software (MedInria - medical image visualization and processing software by Inria, [https://med.inria.fr/](https://med.inria.fr/), RRID: SCR_001462). The authors are grateful to the OPAL infrastructure of Universite Cote d'Azur for providing resources and support. We also thank Alexandre Allera, Malek Ezziane, Anna Luzurier, Raphaelle Quint and Mehdi Kalai for providing prostate segmentations, Yann Fraboni and Etrit Haxholli for insightful discussions, and Federica Cruciani and Lucia Innocenti for feedback.
This paper is dedicated to the memory of our dear colleague Olivier Commowick who has been very active and innovative in the domain of data fusion.
## Ethical Standards
The work follows appropriate ethical standards in conducting research and writing the manuscript, following all applicable laws and regulations regarding treatment of animals or human subjects.
## Conflicts of Interest
We declare we do not have conflicts of interest.
|
2301.13502 | Parity-violation in bouncing cosmology | We investigate the possibility of the enhancement of parity-violation signal
in bouncing cosmology. Specifically, we are interested in deciding which phase
should generate the most significant parity-violation signals. We find that the
dominant contribution comes from the bouncing phase, while the contraction
phase has a smaller contribution. Therefore, bouncing cosmology can enhance the
parity-violation signals during the bouncing phase. Moreover, since the
bouncing phase has the highest energy scale in bouncing cosmology, we can also
probe new physics at this scale by studying the parity-violation effect. | Mian Zhu, Yong Cai | 2023-01-31T09:43:22Z | http://arxiv.org/abs/2301.13502v2 | # Parity-violation in bouncing cosmology
###### Abstract
We investigate the possibility of the enhancement of parity-violation signal in bouncing cosmology. Specifically, we are interested in deciding which phase should generate the most significant parity-violation signals. We find that the dominant contribution comes from the bouncing phase, while the contraction phase has a smaller contribution. Therefore, bouncing cosmology can enhance the parity-violation signals during the bouncing phase. Moreover, since the bouncing phase has the highest energy scale in bouncing cosmology, we can also probe new physics at this scale by studying the parity-violation effect.
###### Contents
* I Introduction
* II Model
* II.1 Action
* II.2 Bouncing background
* II.3 The effective action on high energy scale
* III Tensor perturbation
* III.1 Formalism
* III.2 Dynamics of tensor perturbation
* III.3 Parity violation signal
* III.4 Comment on the resulting signal
* III.5 Semi-analytic investigation
* IV Conclusion and Outlook
* V Acknowledgement
## I Introduction
The primordial gravitational waves (GWs) might encode rich information about the very early universe, which may help distinguish between different scenarios of the primordial universe, including inflation and its alternatives. Chirality is a distinct characteristic of GWs, which could be manifested in parity-violating theories of gravity. Recently, it was found that the polarization data of Planck and WMAP [1; 2; 3] may hint at parity-violating physics in the cosmic microwave background, though further confirmation is required. Explorations of parity-violating primordial GWs have attracted considerable interest, see e.g. [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], see also [37; 38; 39; 40; 41; 42; 43].
In single-field slow-roll inflation models where the inflaton is non-minimally coupled to a parity-violating term, such as the gravitational Chern-Simons (gCS) term [44; 45], the effect
of parity-violation is suppressed by the slow-roll condition. However, in modifications or alternatives to single-field slow-roll inflation, the slow-roll condition could be violated at least momentarily. As a result, the effect of parity-violation could be enhanced due to the dynamical coupling of the scalar field to the parity-violating term, see e.g. [46] for the enhanced parity-violating GWs caused by violation of the null energy condition (NEC) [47] during inflation. Therefore, observations of a parity-violating GW background might provide us with a new way to identify physics beyond single-field slow-roll inflation.
Bouncing cosmology, as a possible solution to the initial cosmological singularity problem of inflation and the Big Bang cosmology, has attracted a lot of interest [48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74]. In the bouncing scenario, the universe originates from a contracting phase and enters an expanding phase after going through a non-singular bounce, where the NEC is violated. One important issue is the ghost and gradient instabilities in the bouncing phase, which are a generic feature in a large class of theories [75; 76; 77]. To acquire a healthy bouncing model, new physics effective during the bouncing phase is introduced [78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95].
In principle, we may explore the new physics by studying its phenomenological predictions. Unfortunately, the signals from the new physics, which is generically effective only in the bouncing phase, are suppressed by the short duration of the bounce. For example, in [90], it is found that the new physics at the bouncing phase has a negligible contribution to the power spectrum. Consequently, in many studies of non-singular cosmology [96; 97; 98; 99; 100; 101; 102; 103], the signals from the bouncing phase are small and the main phenomenological contribution comes from the contraction phase, so it is difficult to probe the new physics in the bouncing phase 1. Specifically, in previous literature addressing the parity-violation effect in bouncing cosmology [10], the bouncing phase is directly replaced by a simple junction condition, so there is only a contraction and an expansion in that scenario.
Footnote 1: Some counterexamples come from the quantum bounce models [104; 105; 106]. However, these are beyond our scope since we consider purely classical bouncing cosmology.
It is then interesting to study if the parity-violation effect could be generated in bouncing cosmology, especially during the bouncing phase. Intuitively, the derivative of the scalar field is non-trivial around the bouncing phase, which may be able to amplify the effect of parity-violation, as long as the scalar field is non-minimally coupled to a parity-violating term. Additionally, the effective sound horizon of the primordial GW mode could also
be nontrivial during the bouncing phase2, especially when chirality of the GW mode is considered. Therefore, we expect the non-trivial parity-violation signals to come from the bouncing phase.
Footnote 2: The bouncing phase is defined by \(dH/dt\geq 0\), where \(H\) is the Hubble parameter.
In this paper, we investigate the parity-violation effect in a toy bouncing model, where the source term is taken to be a gCS action coupled to the scalar field. We are especially interested in the following question: which phase, the contraction or the bouncing phase, dominates the enhancement of the parity-violation effect? As we will see in section III.3, the bouncing phase can generate non-trivial parity-violation signals, while the contraction phase has a negligible effect. Moreover, the enhancement is sensitive to the detailed physics during the bouncing phase, so in principle we can probe the new physics during the bounce through parity-violation. Therefore, our result is twofold: we can not only explain the possibly observed parity-violation signal in the framework of bouncing cosmology, but also provide a possible way to probe new physics at the bouncing phase by studying its imprint on parity-violation signals.
The paper is organized as follows. In section II we briefly introduce our model. After the basic formalism for tensor perturbation in III.1, we numerically evaluate the dynamics of tensor perturbation in section III.2 and the parity-violation signal in section III.3. We comment on some conceptual issues about our result in section III.4 and explain our numerical result in a semi-analytical way in section III.5. From the semi-analytical argument, we find that our numerical result should be qualitatively valid for a large variety of bouncing models, although the numerics are taken in a toy bouncing model. We finally conclude in section IV.
Throughout this paper, we take the sign of the metric to be \((-,+,+,+)\). We will take \(\hbar=1\), \(c=1\), \(M_{p}^{2}=(8\pi G)^{-1}=1\), so that all quantities are in Planck units. The canonical kinetic term is defined as \(X\equiv-\nabla_{\mu}\phi\nabla^{\mu}\phi/2\), such that \(X=\dot{\phi}^{2}/2\) at the background level.
## II Model
### Action
We take the action to be
\[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{p}^{2}}{2}R+\mathcal{L}_{H}+\mathcal{L}_{G}+ \mathcal{L}_{HE}\right]. \tag{1}\]
The term \(\mathcal{L}_{H}\) is responsible for setting the background evolution, where we set
\[\mathcal{L}_{H}=M_{p}^{2}f_{1}(\phi)X+f_{2}(\phi)X^{2}-M_{p}^{4}V(\phi)\, \tag{2}\]
which suffices to fix the background dynamics. In section II.2, we will use specific coupling functions \(f_{1}\) and \(f_{2}\) to construct a cosmological bouncing model.
The \(\mathcal{L}_{G}\) term is the gravitational CS term, with
\[\mathcal{L}_{G}=\frac{f_{3}(\phi)}{8}R\wedge R=\frac{f_{3}(\phi)}{8}\epsilon^ {\alpha\beta\rho\sigma}R_{\alpha\beta\mu\nu}R_{\rho\sigma}^{\ \ \mu\nu}, \tag{3}\]
where \(\epsilon^{\alpha\beta\rho\sigma}\) is the four-dimensional Levi-Civita symbol with \(\epsilon^{0123}=-1/\sqrt{-g}\).
Finally, the term \(\mathcal{L}_{HE}\) represents the action effective at some high energy scale. Since generic bouncing models suffer from ghost or gradient instability problems [75; 76; 99], such terms are required to eliminate these instabilities. We discuss this term in detail in section II.3.
We mention that in (1) we scale the scalar field \(\phi\) to be dimensionless so that the coupling functions \(f_{i}\) are dimensionless.
### Bouncing background
It is well-known that the gCS term will not contribute to the background dynamics. We shall assume that the correction term \(\mathcal{L}_{HE}\) also satisfies this criterion. Therefore, Friedmann's equations are totally determined by the Einstein-Hilbert action and the \(\mathcal{L}_{H}\) term. In a flat FLRW background
\[ds^{2}=-dt^{2}+a^{2}(t)d\vec{x}^{2}\, \tag{4}\]
we have
\[3M_{p}^{2}H^{2}=\frac{M_{p}^{2}}{2}f_{1}\dot{\phi}^{2}+\frac{3}{4}f_{2}\dot{ \phi}^{4}+M_{p}^{4}V(\phi)\, \tag{5}\]
\[-2M_{p}^{2}\dot{H}=M_{p}^{2}f_{1}\dot{\phi}^{2}+f_{2}\dot{\phi}^{4}\, \tag{6}\]
or in terms of the scalar field \(\phi\):
\[\left(M_{p}^{2}f_{1}+3\beta\dot{\phi}^{2}\right)\ddot{\phi}+3H\dot{\phi}\left(M_ {p}^{2}f_{1}+\beta\dot{\phi}^{2}\right)+M_{p}^{4}\frac{dV}{d\phi}+\frac{M_{p}^ {2}}{2}\frac{df_{1}}{d\phi}\dot{\phi}^{2}=0. \tag{7}\]
Now we choose a similar ansatz as that from [85; 90]:
\[f_{1}(\phi)=1-\frac{g}{\cosh\omega_{1}\phi}\,\ f_{2}=\beta\equiv\text{const} \,\ V(\phi)=-\frac{V_{0}}{\cosh\omega_{V}\phi}\, \tag{8}\]
where the background dynamics are well-studied. In the initial state of the universe where \(\dot{\phi}\to 0\) and \(\phi\rightarrow-\infty\), the universe undergoes an Ekpyrotic contraction [107]
\[\phi\simeq-\frac{1}{\omega_{V}}\ln\frac{\omega_{V}^{4}V_{0}t^{2}}{\omega_{V}^ {2}-6}\,\ a(t)=a_{-}\left(\frac{t-t_{c}}{t_{-}-t_{c}}\right)^{\frac{2}{ \omega_{V}^{2}}}. \tag{9}\]
The Ekpyrotic phase frees us from conceptual issues of bouncing cosmology [108], at the cost of requiring \(\omega_{V}^{2}>6\). Note that we set \(t=0\) to be the bouncing point, i.e. the stage where the scale factor is minimal, so we need an integration constant \(t_{c}\) to correctly describe \(a\). We also use the minus sign to denote the end of the Ekpyrotic phase, e.g. \(a_{-}\) is the scale factor at the end of the Ekpyrotic contraction.
When \(|\phi|\ll 1\), the hyperbolic function approaches 1, and if we take \(g>1\), the \(f_{1}X\) term reverses sign and the NEC can be violated. The non-singular bounce phase starts when the NEC is violated, and the universe transits from contraction to expansion. The dynamics during the bouncing phase are generically complicated, but for a short bounce, i.e. a bouncing phase of short enough duration, the following parameterization can be valid
\[H=\gamma M_{p}^{2}t\,\ \gamma=\text{const.}>0\ \rightarrow\ a=a_{0}e^{\frac{1}{2} \gamma M_{p}^{2}t^{2}}\, \tag{10}\]
where we have set \(a_{0}=a(0)\), which is the scale factor at the bouncing point.
After the bouncing phase, the universe comes to an expansion phase, where the scale factor behaves as
\[a(t)=a_{+}\left(\frac{t-t_{e}}{t_{+}-t_{e}}\right)^{\frac{1}{3}}\,\ H(t)=\frac{1}{3(t-t_{e})}\, \tag{11}\]
where we similarly use the "\(+\)" sign to denote the end of the bouncing phase, and \(t_{e}\) is another integration constant.
We shall comment more on the expansion phase. Notice that the factor \(aH\) from (11) is proportional to \((t-t_{e})^{-\frac{2}{3}}\). Hence, for any wave mode that is initially sub-horizon at \(t=t_{+}\),
it will remain sub-horizon throughout the expansion phase. This is in contrast with the usual expectation that the primordial perturbation should leave the horizon in the expansion phase (as in inflation), freeze in, and re-enter the horizon at a later stage to set the initial condition for structure formation.
However, the parity-violation signal is highly dependent on the expansion phase that follows the bounce. In this paper, we want to compare the generation of parity-violation between the contraction phase and the bouncing phase, so we wish to obtain a result independent of the subsequent expansion phase. Unfortunately, we do not have a precise way to define when the bouncing phase ends, so it is hard to read off the parity-violation status directly at the end of the bouncing phase. The advantage of our expansion phase (11) is that the wave modes of interest will always remain in the sub-horizon region. Thus, their dynamics can be approximately described by the harmonic oscillator equation \(u_{k}^{\prime\prime}+c_{T}^{2}k^{2}u_{k}=0\), whose general solution is simply
\[u_{k}\simeq u_{k,+}e^{ik\tau}+u_{k,-}e^{-ik\tau}. \tag{12}\]
Figure 1: The background dynamics with the specific parameters (13). The upper channel shows the evolution of the Hubble parameter and background energy density, while the lower channel shows the dynamics of the scalar field \(\phi\). The bouncing phase happens at around \(t=0\) where \(H\) quickly transfers from negative to positive.
The information on the parity-violation status when the bounce ends is encoded in the coefficients \(u_{k,\pm}\), and we see that the expansion phase only changes their relative phase. Thus, we may alternatively extract the physics of parity-violation at the end of the bouncing phase by tracing the statistical properties of the tensor perturbation during the expansion phase. We shall elaborate on this point in section III.2.
We depict the background dynamics in figure 1, where we've adopted the following parameters
\[g=1.5\,\ \beta=2\,\ V_{0}=10^{-7}\,\ \omega_{1}=10\,\ \omega_{V}=\sqrt{10}. \tag{13}\]
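For readers who wish to reproduce the qualitative behavior of figure 1, a minimal numerical sketch is given below. It integrates the scalar-field equation (7) together with \(\dot{H}\) from (6), using the ansatz (8) and the parameters (13) in Planck units (\(M_{p}=1\)), starting on the Ekpyrotic attractor (9). The starting time `t0` and the solver tolerances are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of (13), Planck units (M_p = 1)
g, beta, V0 = 1.5, 2.0, 1e-7
w1, wV = 10.0, np.sqrt(10.0)

f1  = lambda p: 1.0 - g / np.cosh(w1 * p)
df1 = lambda p: g * w1 * np.tanh(w1 * p) / np.cosh(w1 * p)
dV  = lambda p: V0 * wV * np.tanh(wV * p) / np.cosh(wV * p)  # V = -V0 / cosh(wV*phi)

def rhs(t, y):
    phi, pi, H = y                                 # pi = dphi/dt
    # scalar-field EOM (7) solved for ddphi, and dH/dt from (6)
    ddphi = -(3.0 * H * pi * (f1(phi) + beta * pi**2) + dV(phi)
              + 0.5 * df1(phi) * pi**2) / (f1(phi) + 3.0 * beta * pi**2)
    dH = -0.5 * (f1(phi) * pi**2 + beta * pi**4)
    return [pi, ddphi, dH]

# Start on the Ekpyrotic attractor (9) deep in the contraction (t0 illustrative)
t0 = -5.0e4
phi0 = -np.log(wV**4 * V0 * t0**2 / (wV**2 - 6.0)) / wV
pi0 = -2.0 / (wV * t0)            # dphi/dt on the attractor
H0 = 2.0 / (wV**2 * t0)           # contracting: H < 0

sol = solve_ivp(rhs, (t0, 2.0e4), [phi0, pi0, H0], rtol=1e-10, atol=1e-14)
i0 = np.argmin(np.abs(sol.y[2]))
print(f"H crosses zero (bounce) near t = {sol.t[i0]:.1f}")
```

The initial data are consistent with the Friedmann constraint (5) on the attractor; near the bounce, where \(\dot{\phi}\) peaks, tighter tolerances (or a stiff solver) may be needed.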
### The effective action on high energy scale
As shown in figure 1, the background energy density in the bouncing phase is much higher than in the other phases. Hence, it is natural to introduce some actions effective only at a high energy scale to eliminate the instability problem. In the context of effective field theory (EFT) of non-singular cosmology [78; 79], certain EFT operators such as \(R^{(3)}\delta g^{00}\) can help to evade the instabilities without altering the background dynamics.
However, when it comes to the realization of such EFT operators, the dynamics of tensor perturbations are generally influenced by such high-energy corrections. For example, in [81; 85; 86; 87], where the EFT operator \(R^{(3)}\delta g^{00}\) is written in a covariant form, a non-minimal coupling appears and the propagation speed of GWs is changed accordingly [109].
There are also other approaches for \(\mathcal{L}_{HE}\) to eliminate the instabilities [82; 93; 95; 110; 111; 112; 113], which generically change either the background dynamics or the propagation of gravitational waves. It would be hard to combine all these approaches in a unified description.
In this paper, we start with the simplest case, where \(\mathcal{L}_{HE}\) influences neither the background dynamics nor the propagation of gravitational waves (this is the case of the EFT approach in [78]). The point is that we use this case as a fiducial one to examine whether the bouncing phase contributes more to the parity-violation than the contraction phase. If this is true, we would have the opportunity to distinguish the above approaches through the GW signals.
## III Tensor perturbation
### Formalism
Now we come to the tensor mode. We've assumed that \(\mathcal{L}_{HE}\) doesn't contribute to the tensor mode, so the quadratic action for tensor perturbation is
\[S_{T}^{(2)}=\frac{M_{p}^{2}}{8}\int d\tau d^{3}x\left\{a^{2}\left[\gamma_{ij}^{\prime 2}-(\partial\gamma_{ij})^{2}\right]-\frac{f_{3}^{\prime}}{M_{p}^{2}}\epsilon_{ijk}\left[(\partial_{i}\gamma_{jl})^{\prime}(\gamma_{k}^{l})^{\prime}-\partial_{i}\partial_{l}\gamma_{jq}\partial^{l}\gamma_{k}^{q}\right]\right\}\, \tag{14}\]
where we've defined the conformal time \(\tau\equiv\int dt/a\), and a prime denotes differentiation with respect to \(\tau\). Before proceeding, we note that the gCS term is suppressed by \(M_{p}^{-2}\), so it becomes important at high energy scales. Moreover, \(f_{3}^{\prime}=a\dot{\phi}f_{3,\phi}\), and figure 1 shows that \(\dot{\phi}\) is non-trivial only during the bouncing phase. Thus, we can intuitively guess that the gCS term should be important during the bouncing phase.
We work in the Fourier space where
\[\gamma_{ij}(\tau,\vec{x})=\sum_{s=L,R}\int\frac{d^{3}k}{(2\pi)^{3}}\gamma_{k}^{(s)}(\tau)p_{ij}^{(s)}(\vec{k})e^{i\vec{k}\cdot\vec{x}}\, \tag{15}\]
with the polarization tensor satisfying
\[p_{ij}^{(R)}p^{ij(R)}=p_{ij}^{(L)}p^{ij(L)}=0\,\ p_{ij}^{(R)}p^{ij(L)}=2\,\ ik_{l} \epsilon^{qlj}p_{ij}^{(s)}=k\lambda^{(s)}p_{i}^{q(s)}. \tag{16}\]
The polarization mode is decided by the parameter \(\lambda\), such that
\[\lambda^{(L)}=-1\,\ \lambda^{(R)}=1\,\ \lambda^{(N)}=0\, \tag{17}\]
and here for convenience, we've defined a new \(N\) mode to represent the non-parity-violation case.
Finally, the parity-violation is evaluated by the chiral parameter
\[\Delta\chi\equiv\frac{P_{T}^{(L)}-P_{T}^{(R)}}{P_{T}^{(L)}+P_{T}^{(R)}}\, \tag{18}\]
where \(P_{T}^{(s)}\) is the power spectrum of the corresponding polarization mode. Although the difference \(P_{T}^{(L)}-P_{T}^{(R)}\) is of observational interest, the absolute value of \(P_{T}^{(s)}\) is highly dependent on the detailed bouncing model (for example, the tensor spectral index in our model (8) depends on the model parameter \(\omega_{V}\)[90]). Thus, for our purpose of comparing the parity-violation effect from different phases, we focus on the parameter \(\Delta\chi\).
### Dynamics of tensor perturbation
The dynamical equation for the tensor mode \(\gamma_{k}^{(s)}\) is
\[u_{k}^{(s)^{\prime\prime}}+\left(k^{2}-\frac{z_{T}^{(s)^{\prime\prime}}}{z_{T}^{ (s)}}\right)u_{k}^{(s)}=0\, \tag{19}\]
where we define the Mukhanov-Sasaki variable
\[u_{k}^{(s)}\equiv z_{T}^{(s)}\gamma_{k}^{(s)}\,\ z_{T}^{(s)}\equiv\frac{a}{2}\sqrt{1-\lambda^{(s)}\frac{k}{a}\frac{f_{3,\phi}\phi^{\prime}}{aM_{p}^{2}}}\, \tag{20}\]
and the sound speed is set to unity for all polarization modes. Notice that we require the term under the square root to be non-negative; otherwise, there will be ghost modes [114].
Initially, all the perturbation modes of observational interest are on sub-horizon scales, where the \(k^{2}\) term in (19) dominates. Thus, we can take the vacuum initial condition
\[u_{k}^{(s)}=\frac{e^{-ik\tau}}{\sqrt{2k}}\,\ \tau\rightarrow-\infty. \tag{21}\]
We can combine the equations (19) and (21) to get the dynamics of \(\gamma_{ij}\).
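A compact sketch of how (19) with the initial condition (21) can be integrated numerically is shown below. For self-containedness, the background quantity \(z_{T}^{(s)\prime\prime}/z_{T}^{(s)}\) is supplied as a user-defined callable; here it is a toy Gaussian peak standing in for the curves of figure 2 (illustrative numbers, not outputs of the actual background run), and the complex mode function is split into real and imaginary parts for the ODE solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

def evolve_mode(k, zpp_over_z, tau_span):
    """Integrate u'' + (k^2 - z''/z) u = 0 from the vacuum state (21)."""
    def rhs(tau, y):
        ur, ui, vr, vi = y                     # u and u' (real/imag parts)
        w2 = k**2 - zpp_over_z(tau)            # effective frequency squared
        return [vr, vi, -w2 * ur, -w2 * ui]

    tau0 = tau_span[0]
    u0 = np.exp(-1j * k * tau0) / np.sqrt(2 * k)    # vacuum mode (21)
    du0 = -1j * k * u0
    y0 = [u0.real, u0.imag, du0.real, du0.imag]
    return solve_ivp(rhs, tau_span, y0, rtol=1e-9, atol=1e-12)

# Toy z''/z: a sharp peak around the bounce (illustrative profile only)
peak = lambda tau: 5e-3 * np.exp(-(tau / 50.0) ** 2)
sol = evolve_mode(k=1e-2, zpp_over_z=peak, tau_span=(-2e3, 2e3))
u = sol.y[0] + 1j * sol.y[1]
print("amplitude gain across the peak:", abs(u[-1]) / abs(u[0]))
```

Repeating the run with the \(\lambda^{(s)}\)-dependent profiles of the \(L\) and \(R\) modes yields the polarization-dependent enhancement discussed below.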
Firstly, we evaluate the term \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) numerically, with the specific gCS coupling \(f_{3}(\phi)=\phi\). Moreover, we notice that the result depends only on the physical wavenumber \(k/a_{0}\) instead of \(k\), as long as we rescale the term \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) by a factor \(a_{0}^{-2}\). At this point, we set a specific scale \(k/a_{0}=10^{-2}\), comparable to the maximum magnitudes of \(\dot{\phi}\) and \(H\).
We depict the term \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) as a function of cosmic time in figure 2, rescaled by a factor \(a_{0}^{-2}\). Outside the bouncing phase, the \(L\) and \(R\) modes are almost identical; while during the
Figure 2: The function \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) as a function of time for different polarization modes, rescaled by a factor \(a_{0}^{-2}\). The left figure shows the time evolution of the whole cosmological history, and the right figure shows the dynamics near the bouncing point.
bouncing phase, the two polarization modes differ significantly, and the amplitude of the \(L/R\) modes is one order of magnitude beyond that of the unpolarized mode \(N\).
Now we come to the mode function \(u_{k}^{(s)}\). As explained at the end of section II.2, the modes initially on sub-horizon scales at \(t=t_{+}\) will stay in the sub-horizon region during the expansion phase. Their evolution can then be approximated as
\[u_{k}^{(s)}\simeq u_{k,+}^{(s)}e^{ik\tau}+u_{k,-}^{(s)}e^{-ik\tau}\, \tag{22}\]
so the amplitude of the mode function will oscillate during this phase.
We depict the dynamics of \(|u_{k}^{(s)}|/|u_{k}^{(N)}|\) for different values of \(k/a_{0}\) in figure 3. As we can see, for a large scale such as \(k/a_{0}=10^{-4}\), the mode quickly becomes super-horizon during the bouncing phase, and \(|u_{k}^{(s)}|/|u_{k}^{(N)}|\) approaches a constant. For an intermediate scale like \(k/a_{0}=10^{-3}\), the mode is sub-horizon but \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) is still comparable to \(k^{2}/a_{0}^{2}\), so the dynamics are oscillatory but not strictly of the form (22). For a small scale like \(k/a_{0}=10^{-2}\), the oscillatory feature is strong.
We now conclude that, for sufficiently large wavenumbers, physical quantities such as the mode function (and hence the tensor perturbation \(\gamma\) and the parameter \(\Delta\chi\)) at the end of the bouncing phase can be represented by their statistical properties in the expansion phase, since the expansion phase only adds an oscillating feature to them.
One additional advantage of our treatment is that the horizon-crossing condition is in principle determined by the behavior of \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\). While this term is highly non-trivial in the bouncing phase, it simplifies to \(a^{\prime\prime}/a\) in the expansion phase, where we have a simple expression.
### Parity violation signal
With the dynamics of the mode function, we can evaluate the corresponding tensor power spectrum. Notice that the tensor spectrum also depends on \(z_{T}^{(s)}\), which carries the information on the different polarizations, so we should first evaluate \(\gamma_{k}^{(s)}\equiv u_{k}^{(s)}/z_{T}^{(s)}\).
However, in our case, the function \(z_{T}^{(s)}\) differs only slightly between polarizations. As shown in figure 4, \(z_{T}^{(s)}\) for the \(L\) and \(R\) modes have a maximum difference of order \(10^{-2}\). Hence, we may simply take
\[\frac{P_{T}^{(L)}}{P_{T}^{(R)}}=\frac{|\gamma_{k}^{(L)}|^{2}}{|\gamma_{k}^{(R) }|^{2}}=\frac{|u_{k}^{(L)}|^{2}}{|u_{k}^{(R)}|^{2}}\frac{|z_{T}^{(R)}|^{2}}{|z_ {T}^{(L)}|^{2}}\simeq\frac{|u_{k}^{(L)}|^{2}}{|u_{k}^{(R)}|^{2}}\ \rightarrow\ \Delta\chi\simeq\frac{|u_{k}^{(L)}|^{2}-|u_{k}^{(R)}|^{2}}{|u_{k}^{(L)}|^{2}+ |u_{k}^{(R)}|^{2}}\, \tag{23}\]
with a loss of precision no more than \(\mathcal{O}(10^{-2})\).
Now we can work out \(\Delta\chi\). Since \(|u_{k}^{(s)}|\) is oscillating, we expect \(\Delta\chi\) to oscillate as well, as shown in figure 5. As stated at the end of section III.2, we represent the parity-violation state at the end of the bouncing phase by the statistical properties of \(u_{k}\) (and hence \(\Delta\chi\)) during the expansion phase. Our strategy is, for each fixed \(k/a_{0}\), to take the value of
Figure 4: The dynamics of \(z_{T}^{(s)}\) in the whole cosmological history. The scale is chosen to be \(k/a_{0}=10^{-2}\). We see from the left figure that \(z_{T}^{(s)}\) are almost identical, and for convenience, we also plot the \(z_{T}^{(s)}\) for \(L\) and \(R\) mode respectively.
Figure 5: The parity-violation status as a function of \(t\). The oscillatory feature is expected due to the behavior of \(|u_{k}^{(s)}|\). Note that we adopted a smaller range of \(t\) for large \(k/a_{0}\), otherwise the whole picture would be totally filled.
\(\Delta\chi\)'s amplitude \(\mathcal{A}_{\Delta\chi}\) scaled by a factor \(1/\sqrt{2}\), i.e. \(\mathcal{A}_{\Delta\chi}/\sqrt{2}\), to represent the corresponding \(\Delta\chi\) at the end of the bouncing phase, \(\Delta\chi_{b}\). Then, we can depict the dependence of \(\Delta\chi_{b}\) on the physical wavenumber \(k/a_{0}\) in figure 6.
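The extraction of \(\Delta\chi_{b}\) from the oscillatory signal can be scripted as below; it follows (23) together with the \(\mathcal{A}_{\Delta\chi}/\sqrt{2}\) convention just described. The time series `uL` and `uR` stand for late-time mode functions of the two polarizations and are generated here as placeholders of the form (22), with an arbitrary small asymmetry.

```python
import numpy as np

def delta_chi_b(uL, uR):
    """Estimate Delta chi at the end of the bounce from late-time samples.

    Uses (23): Delta chi(t) = (|uL|^2 - |uR|^2) / (|uL|^2 + |uR|^2),
    then represents Delta chi_b by the oscillation amplitude over sqrt(2).
    """
    chi = (np.abs(uL)**2 - np.abs(uR)**2) / (np.abs(uL)**2 + np.abs(uR)**2)
    amp = 0.5 * (chi.max() - chi.min())        # oscillation amplitude A_chi
    return amp / np.sqrt(2)

# Placeholder late-time modes of the form (22), with a small L/R asymmetry
tau = np.linspace(0.0, 4e3, 4000)
k = 1e-2
uL = 1.05 * np.exp(1j * k * tau) + 0.30 * np.exp(-1j * k * tau)
uR = 1.00 * np.exp(1j * k * tau) + 0.28 * np.exp(-1j * k * tau)
print(f"Delta chi_b ~ {delta_chi_b(uL, uR):.3e}")
```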
We see from figure 6 that parity-violation can be induced in the bouncing phase, and for large \(k/a_{0}\) the parameter \(\Delta\chi\) can be large enough (i.e. of order \(10^{-2}\)) to generate detectable parity-violation signals.
### Comment on the resulting signal
Before proceeding, we shall comment on the results from section III.3 and clarify some potentially confusing points.
Firstly, we stress again that the signal obtained in the last section is in fact a representation of the signal at the end of the bouncing phase. In order to confront the result with observations, we need to design a more realistic expansion phase. It is then possible that a large parity-violation signal at \(t=t_{+}\) is suppressed by the subsequent expansion phase. Thus, at the current stage, what we can conclude is that a parity-violation feature can be produced in the bouncing phase, where the energy scale is the highest in bouncing cosmology, and that it could potentially be detectable.
Besides, we see in figure 6 that \(\Delta\chi\) is proportional to \(k/a_{0}\), which seems to be in contrast with the result from [46], where parity-violation signals are also generated by some NEC
Figure 6: The parameter \(\Delta\chi\) as a function of physical wavenumber \(k/a_{0}\). Notice that for smaller \(k/a_{0}\), the behavior of \(u_{k}^{(s)}\) would differ more from (22), so \(\Delta\chi\) would also receive more influence from the expansion phase. Thus we shall treat the data at smaller \(k/a_{0}\) with less confidence.
violation phase, but \(\Delta\chi\) is non-trivial only at selected wavelengths (see also [10; 11]), while our result seems to be valid for a wide range of wavelengths. This is because the scenario considered in [46] is in an inflationary background. The NEC violation happens between two inflationary phases, and thus the NEC violation phase corresponds to a specific range of wavenumbers \(a_{-}H_{-}<k<a_{+}H_{+}\), where the \(\pm\) signs stand for the beginning and end of the NEC violation phase, so \(k_{\pm}=a_{\pm}H_{\pm}\) are the wave modes that exactly cross the Hubble horizon at \(t=t_{\pm}\). However, in our case, the bouncing point is characterized by \(H(0)=0\), where all modes are inside the "Hubble horizon" \(1/H\to\infty\). Thus, we expect all modes to "feel" the parity-violating physics during the bouncing phase.
Actually, the result displayed in figure 6 is consistent with that obtained in the bounce-inflation scenario [10], where the effect of parity-violation measured by \(\Delta\chi\) is proportional to \(k/\mathcal{H}_{*}\) for the GW modes which exit the horizon during the contracting and bouncing phases (i.e., before the inflationary phase), though the bouncing phase is assumed to be negligibly short in [10].
Finally, one may naturally ask: if \(\Delta\chi\) is proportional to \(k/a_{0}\) as in figure 6, shouldn't \(\Delta\chi\) reach higher orders like \(\mathcal{O}(1)\), resulting in an unreasonably large parity-violation signal? The point is that we have to cut off at some \(k\), for at least two reasons. Firstly, to avoid the appearance of ghosts, we require \(z_{T}^{(s)}\) to be real, so
\[\left|\frac{k}{a}\frac{\dot{\phi}}{M_{p}^{2}}\right|<1\ \to\ \frac{k}{a_{0}}<\max \left(\frac{\dot{\phi}}{M_{p}^{2}}\right)\, \tag{24}\]
and we have to cut off smaller scales. Besides, the effective description of our universe as a homogeneous and isotropic ideal fluid breaks down at sufficiently small scales, i.e. at large enough \(k\). This means that the value of \(a_{0}\) cannot be arbitrary. Instead, it should have a proper value such that the parity-violation happens at the correct scale, and the value of \(k/a_{0}\) always satisfies the condition (24) for reasonable \(k\).
We shall further mention that, in our toy model, the wave modes displayed in figure 6 are in the sub-horizon region. However, in a realistic model, the tensor mode will experience decay when evolving toward the horizon during the expansion phase. Smaller scales exit the horizon at a later time, so they experience more decay. Thus, although \(\Delta\chi\) is approximately proportional to \(k/a_{0}\) at the end of the bouncing phase, it is possible that smaller scales receive more suppression in the following expansion phase, and the parity-violation effect is important only at some intermediate scales.
### Semi-analytic investigation
Although we have numerically verified the existence of parity-violation signals from the bouncing phase, we wish to briefly explain the result analytically. Fortunately, the duration of the bouncing phase is short (see figure 1), so we may adopt the parametrization (10). Moreover, in cosmic time, we have
\[\frac{z_{T}^{(s)^{\prime\prime}}}{z_{T}^{(s)}}=a^{2}\left[\frac{\ddot{z}_{T}^{( s)}}{z_{T}^{(s)}}+H\frac{\dot{z}_{T}^{(s)}}{z_{T}^{(s)}}\right]\,\quad z_{T}^{(s)}=\frac{a}{2}\sqrt{1- \lambda^{(s)}\frac{k}{a}\frac{\dot{\phi}}{M_{p}^{2}}}\simeq\frac{a}{2}-\lambda ^{(s)}\frac{k}{4}\frac{\dot{\phi}}{M_{p}^{2}}\, \tag{25}\]
and we may write the expression as follows
\[\frac{z_{T}^{(s)^{\prime\prime}}}{z_{T}^{(s)}}\simeq a^{2}\left[\frac{\ddot{a} +H\dot{a}}{2z_{T}^{(s)}}-\lambda^{(s)}\frac{k}{4}\frac{\ddot{\phi}H+\dddot{ \phi}}{M_{p}^{2}z_{T}^{(s)}}\right]. \tag{26}\]
The term \(\ddot{a}+H\dot{a}\) is suppressed by a factor \(t^{2}\), while the \(H\ddot{\phi}\) term is suppressed by a factor \(t\), so we focus on the term \(\dddot{\phi}\). Now, \(\dot{\phi}\) is a \(\delta\)-like function, so we expect \(\ddot{\phi}\) to have a positive peak at \(t<0\) and a negative peak at \(t>0\). Subsequently, \(\dddot{\phi}\) should first have a positive peak at \(t<0\), then a negative peak at \(t>0\), finally followed by a second positive peak. We illustrate this point by depicting both \(\ddot{\phi}\) and \(z_{T}^{(L)^{\prime\prime}}/z_{T}^{(L)}\) in figure 7, observing that they have exactly the same features.
We conclude that the features of \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) come from those of \(\dddot{\phi}\), which are in turn determined by the \(\delta\)-function-like behavior of \(\dot{\phi}\). The mode function receives a non-trivial enhancement accordingly.
To intuitively understand how the peaks of \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) affect the tensor mode, we may approximately take each peak as a \(\delta\)-like function. For simplicity, we take the realization of
these peaks to be a linear function
\[\left(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\right)_{\rm peak}\simeq b|t-t_{c}|\,\ t_{p-}<t_{c}<t_{p+}\,\ b>0\,\ t\in(t_{p-},t_{p+})\, \tag{27}\]
so for each region the dynamical equation for the mode function becomes (for convenience, we temporarily take \(t_{c}=0\); also \(t\simeq\tau\) during the bouncing phase since \(a\) is almost constant)
\[u_{k}^{(s)^{\prime\prime}}+\left(k^{2}\pm bt\right)u_{k}^{(s)}=0\, \tag{28}\]
whose general solution is a combination of Airy functions
\[u_{k}^{(s)}=c_{1}\,\mathrm{Ai}\left(\frac{-k^{2}\mp bt}{|b|^{\frac{2}{3}}}\right)+c_{2}\,\mathrm{Bi}\left(\frac{-k^{2}\mp bt}{|b|^{\frac{2}{3}}}\right). \tag{29}\]
In figure 8 we depict the behavior of the Airy functions. When the argument is negative, both Airy functions oscillate. When the argument is positive, one branch grows while the other decays. Thus, the amplitude of \(u_{k}^{(s)}\) will be enhanced in the positive-argument region.
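This behavior can be checked directly with SciPy's Airy implementation. The sketch below evaluates the two branches of (29) (upper sign) on both sides of the turning point \(k^{2}=b|t|\), with toy values of \(k\) and \(b\) chosen for illustration.

```python
import numpy as np
from scipy.special import airy

k, b = 0.05, 1e-3                        # toy wavenumber and peak slope
t = np.linspace(-10.0, 10.0, 5)
x = (-k**2 + b * t) / b**(2.0 / 3.0)     # Airy argument in (29), upper sign

Ai, Aip, Bi, Bip = airy(x)               # returns Ai, Ai', Bi, Bi'
for ti, xi, ai, bi in zip(t, x, Ai, Bi):
    regime = "oscillatory" if xi < 0 else "growing/decaying"
    print(f"t = {ti:+6.2f}  x = {xi:+7.2f}  Ai = {ai:+.3e}  Bi = {bi:+.3e}  ({regime})")
```

For negative arguments both branches oscillate with bounded amplitude, while for positive arguments \(\mathrm{Bi}\) grows and \(\mathrm{Ai}\) decays, which is the amplification mechanism described above.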
Note that the parametrization in (27) is rough, so at the current stage, we cannot go further without the detailed expression of the peaks. Thus we can only conclude that the peaks in \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) enhance the amplitude of tensor perturbation, and different polarization modes receive different enhancement due to the microscopic physics in the bouncing phase, which causes the parity-violation.
Next, we shall intuitively explain why \(\Delta\chi\) has a linear dependence on \(k/a_{0}\). For this purpose, we refer to figure 9. For large \(k/a_{0}\), the three peak values of \(z_{T}^{(s)^{\prime\prime}}/z_{T}^{(s)}\) are approximately linear in \(k/a_{0}\), so we expect the enhancement of \(u_{k}^{(s)}\) to also depend on \(k/a_{0}\) linearly. For smaller \(k/a_{0}\), when the peak values of the \(L\) and \(R\) modes are comparable to that of
Figure 8: Airy function
the \(N\) mode, the second peak destroys the linear relationship, so we expect the linear dependence of \(\Delta\chi\) on \(k/a_{0}\) to be ruined. Thus, the fitted function \(\Delta\chi\) in figure 6 is slightly convex instead of perfectly straight.
Finally, we emphasize how generic our result should be. The suppression of parity-violation signals during the contraction phase comes from the smallness of \(\dot{\phi}\). Although the dynamics of \(\dot{\phi}\) rely on the details of the contraction phase, for mainstream bouncing models like the matter bounce (a contraction phase dominated by stiff matter with small \(\dot{\phi}\), as in [59; 115]) and the Ekpyrotic bounce (the case described by our model, where \(\dot{\phi}=-2/\omega_{V}t\)), \(|\dot{\phi}|\) is always small for large \(|t|\). Alternatively, a large \(\dot{\phi}\) would correspond to a higher energy scale, so the parity-violation effect is suppressed in the contraction phase because of the low energy scale. Notice that the contraction phase will always have a lower energy scale than the bouncing phase as long as we consider a classical bounce model, where the contraction phase starts from an initially classical configuration. Hence, the smallness of the parity-violation signal in the contraction phase should be valid at least for many bouncing models.
We may understand the smallness of \(\dot{\phi}\) in the contraction phase by alternative arguments. One common mechanism for NEC violation is ghost condensation [116], where the kinetic Lagrangian \(\mathcal{L}(X)\) has a non-trivial stationary point at \(X\neq 0\) with a negative vacuum expectation value (VEV). The contraction phase corresponds to the configuration of the false vacuum \(X=0\), while the bouncing phase corresponds to the true vacuum. Thus, a small \(\dot{\phi}\) is expected in this mechanism. Moreover, if the bouncing phase has a short duration, we also expect \(\dot{\phi}\) to have a sharp peak, whose magnitude is related to the VEV of the kinetic Lagrangian.
In view of the above argument, we see that a short duration of the bouncing phase can
lead to both the sharp peak of \(\dot{\phi}\) and the vanishing of the terms other than \(\dddot{\phi}\) in (26). A short bounce is also generically favored by the ghost-free condition for the scalar mode, i.e. that the coefficient of \(\ddot{\phi}\) in (7) does not cross 0. One popular way to evade the scalar ghost is to let the bouncing phase be short enough that the bounce ends before the coefficient approaches 0 [56]. In this case, the duration is severely constrained.
In conclusion, we find that certain characteristics of our toy model, i.e. \(\dot{\phi}\) small in the contraction phase, one single sharp peak for \(\dot{\phi}\) in the bouncing phase, and short duration of bouncing phase, are generic in many bouncing models. We then expect our conclusion to be also valid for these bouncing models.
## IV Conclusion and Outlook
We investigate the possible parity-violation signals in bouncing cosmology, generated by a coupling between the gCS term and the scalar field which triggers the bounce. Through numerical studies of a toy bouncing model, we find that the parity-violation signals are enhanced during the bouncing phase. Moreover, we study the numerical result in a semi-analytical way and find that the result obtained in the toy model can be generalized to a wide range of bouncing models.
The significance of our result is twofold. On the one hand, we provide a possible mechanism for the generation of parity-violation signals in the framework of bouncing cosmology, enabling us to explain parity-violating physics in the GW background. On the other hand, since the parity-violation signals come from the bouncing phase, where the energy scale is the highest and new physics is believed to exist, our result provides a possible way to explore the new physics through parity-violation signals. To the best of our knowledge, our result is distinct from many other phenomenological approaches, where the imprint from the bouncing phase is minimized.
The current work is a preliminary check of parity-violation physics in bouncing cosmology. A number of follow-up studies remain for the future.
Firstly, since the tensor spectrum is dependent on the physics of the contraction phase and expansion phase, it is important to construct a realistic bouncing model to predict the parity-violation signals in the real world and confront them with observations. For example, for the contraction phase, we may take either an Ekpyrotic contraction or a matter contraction;
for the expansion phase, we may take either an inflationary phase, as in bounce-inflation models, or an expansion dominated by radiation, such that standard cosmology begins exactly when the bouncing phase ends. Furthermore, the physical scale at which the effect of parity-violation appears depends on the scale of the bounce and on a complete construction of the evolution of the universe, which should also be addressed in future studies in order to confront the observations.
Secondly, we shall study the physics with the high-energy correction \(\mathcal{L}_{HE}\) specified. In this paper, we studied the specific case where \(\mathcal{L}_{HE}\) has negligible impact on both the background dynamics and the propagation of gravitational waves. To probe the physics of \(\mathcal{L}_{HE}\), we shall choose a specific form of \(\mathcal{L}_{HE}\), study its effects at both the background and perturbative levels, and identify its possible unique imprints.
Finally, there are issues beyond our current framework. For example, we are working with a classical bounce; what would happen for a quantum bounce? Besides, there could also be parity-violation signals from the coupling between the E mode and B mode. It is interesting to ask if our results hold in this scenario. Last but not least, it is interesting to consider alternative parity-violation mechanisms [21; 24; 36; 25]. These questions are open for future study.
## V Acknowledgement
We thank Shingo Akama, Yi-Fu Cai, Chao Chen, Alexander Ganz, Chunshan Lin, Astuhisa Ota, Yun-Song Piao, Yi Wang and Yunlong Zheng for their helpful discussions and comments. M. Z. is supported by grant No. UMO 2018/30/Q/ST9/00795 from the National Science Centre, Poland. Y. C. is supported in part by the National Natural Science Foundation of China (Grant No. 11905224), the China Postdoctoral Science Foundation (Grant No. 2021M692942), and Zhengzhou University (Grant No. 32340282).
|
2309.07355 | Space-Time Adaptive Processing for radars in Connected and Automated
Vehicular Platoons | In this study, we develop a holistic framework for space-time adaptive
processing (STAP) in connected and automated vehicle (CAV) radar systems. We
investigate a CAV system consisting of multiple vehicles that transmit
frequency-modulated continuous-waveforms (FMCW), thereby functioning as a
multistatic radar. Direct application of STAP in a network of radar systems
such as in a CAV may lead to excess interference. We exploit time division
multiplexing (TDM) to perform transmitter scheduling over FMCW pulses to
achieve high detection performance. The TDM design problem is formulated as a
quadratic assignment problem which is tackled by power method-like iterations
and applying the Hungarian algorithm for linear assignment in each iteration.
Numerical experiments confirm that the optimized TDM is successful in enhancing
the target detection performance. | Zahra Esmaeilbeig, Kumar Vijay Mishra, Mojtaba Soltanalian | 2023-09-13T23:53:03Z | http://arxiv.org/abs/2309.07355v3 | # Space-Time Adaptive Processing in Connected and
###### Abstract
In this study, we develop a holistic framework for space-time adaptive processing (STAP) in connected and automated vehicle (CAV) radar systems. We investigate a CAV system consisting of multiple vehicles that transmit frequency-modulated continuous-waveforms (FMCW), thereby functioning as a multistatic radar. Direct application of STAP in a network of radar systems such as in a CAV may lead to excess interference. We exploit time division multiplexing (TDM) to perform transmitter scheduling over FMCW pulses to achieve high detection performance. The TDM design problem is formulated as a quadratic assignment problem which is tackled by power method-like iterations and applying the Hungarian algorithm for linear assignment in each iteration. Numerical experiments confirm that the optimized TDM is successful in enhancing the target detection performance.
Zahra Esmaeilbeig\({}^{\star}\), Kumar Vijay Mishra\({}^{\dagger}\), and Mojtaba Soltanalian\({}^{\star}\)\({}^{\star}\)ECE Department, University of Illinois Chicago, USA
\({}^{\dagger}\)United States DEVCOM Army Research Laboratory, USA
This work was sponsored in part by the National Science Foundation Grant ECCS-1809225, and in part by the Army Research Office, accomplished under Grant Number W911NF-22-1-0263. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
## 1 Introduction
Recent innovations in connected and automated vehicle (CAV) systems provide new opportunities to enable better sensing of the environment. CAVs offer advanced collision avoidance through onboard sensor technologies like radar, cameras, and lidar [1]. Platooning in the CAV systems is when vehicles travel in groups with very short inter-vehicle spacing. The connectivity among the vehicles in a platoon enables a CAV to receive information from surrounding vehicles and infrastructure [2]. In this regard, a network of automotive radar systems assisting each other in sensing the environment is enabled. Whilst a single vehicle radar may suffer from obstruction, fading, or lack of radial velocity component of the target with respect to the radar, it is unlikely that this will be the case with multi-vehicle systems with different transmitter-target-receiver paths, also known as networked radar in the literature [3, 4].
In this study, we aim to leverage the cooperative capabilities of transportation systems--specifically, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications--to develop a distributed space-time adaptive processing (STAP) scheme for CAVs. Each vehicle is assumed to travel at a speed and in a direction that can be different from those of the other vehicles. However, the properties of the environment, positioning and velocity information of the vehicles in the platoon are assumed to be accessible at each vehicle through V2V communication, therefore making a cooperative STAP scheme feasible. We formulate the cooperative STAP as a decentralized multistatic target detection problem [5].
Direct application of STAP can reduce target detection performance because of high sidelobes and susceptibility to interference [6, 7, 8, 9, 10, 11, 12]. On the transmitter side, the interference can be addressed by transmitting well-designed radar signals that are nearly orthogonal to each other in the spectral or temporal domains [13]. In order to enhance robustness against interference, we propose a time division multiplexing (TDM) framework in which, among all the transmitters in the CAV, only one is scheduled to transmit during each pulse. In other words, we assume that frequency-modulated continuous-waveform (FMCW) signals with similar properties but with active or silent chirps are generated. We consider an extended version of the transmitter scheduling framework proposed in [7] and demonstrate that the transmitter scheduling problem is a quadratic assignment problem (QAP), which we address by means of power method-like iterations. At each iteration, the problem reduces to a linear assignment problem that is efficiently solved by the Hungarian algorithm [14].
The rest of this paper is organized as follows. In the next section, we introduce the system model for the cooperative STAP in CAVs. In Section 3, we formulate the design problem for obtaining the optimal TDM in our system. Section 4 presents our approach based on the Hungarian algorithm to optimize the TDM matrix. We evaluate our methods via numerical experiments in Section 5 and conclude the paper in Section 6.
_Notation:_ Throughout this paper, we use bold lowercase and bold uppercase letters for vectors and matrices, respectively. The \((m,n)\)-th element of the matrix \(\mathbf{A}\) is \(\mathbf{A}_{mn}\). The sets of complex and real numbers are \(\mathbb{C}\) and \(\mathbb{R}\), respectively; \((\cdot)^{\top}\), \((\cdot)^{*}\) and \((\cdot)^{\dagger}\) are the vector/matrix transpose, conjugate, and Hermitian transpose, respectively. The trace of a matrix is \(\mathrm{Tr}(\cdot)\); the function \(\mathrm{diag}(\cdot)\) returns the diagonal elements of the input matrix; and \(\mathrm{Diag}(\cdot)\) and \(\mathrm{Blkdiag}(\cdot)\) produce a diagonal/block-diagonal matrix with the same diagonal entries/blocks as their vector/matrix arguments. The Hadamard (element-wise) and Kronecker products are \(\odot\) and \(\otimes\), respectively. The \(l_{2}\)-norm of \(\mathbf{a}\) and the Frobenius norm of \(\mathbf{A}\) are denoted by \(\left\|\mathbf{a}\right\|_{2}\) and \(\left\|\mathbf{A}\right\|_{F}\), respectively. \(\mathrm{vec}_{M,N}^{-1}(\mathbf{a})\) reshapes the input vector \(\mathbf{a}\in\mathbb{C}^{MN\times 1}\) into a matrix \(\mathbf{A}\in\mathbb{C}^{M\times N}\) such that \(\mathrm{vec}\left(\mathbf{A}\right)=\mathbf{a}\).
## 2 System Model
We consider a network of \(K\) cooperative vehicles, each equipped with radars that have \(N\) transmit antennas and \(M\) receive antennas arranged as uniform linear arrays (ULAs). Each vehicle transmits \(L\) FMCW chirps during a coherent processing interval (CPI) of duration \(T\), with the same bandwidth \(B\) and chirp time \(T_{c}\). The transmit waveform at the \(l\)-th pulse of one
transmitter antenna is
\[s(t,l)=\text{rect}\left(\frac{t-lT_{c}}{T}\right)e^{\mathrm{j}2\pi\left[f_{c}+ \frac{H}{2}\left(t-lT_{c}\right)\right](t-lT_{c})}. \tag{1}\]
We denote the position of the \(n\)-th transmitter on vehicle \(k\) by \(\mathbf{p}_{{}_{T,kn}}\in\mathds{R}^{2\times 1}\) and the position of the \(m\)-th receiver on vehicle \(i\) by \(\mathbf{p}_{{}_{R,im}}\in\mathds{R}^{2\times 1}\). We further assume that the target at position \(\mathbf{p}_{{}_{t}}\in\mathds{R}^{2\times 1}\) is moving with velocity \(\mathbf{v}_{t}\in\mathds{R}^{2\times 1}\). The Doppler velocity of the target with respect to vehicle \(k\) is
\[\nu_{{}_{k}}=\mathbf{v}_{t}^{\top}\mathbf{p}_{{}_{tk}}, \tag{2}\]
where \(\mathbf{p}_{{}_{tk}}=[\sin\theta_{{}_{tk}},\cos\theta_{{}_{tk}}]^{\top}\) is the direction vector of the target with DoA \(\theta_{{}_{tk}}\) with respect to vehicle \(k\).
The range from the \(n\)-th transmitter on vehicle \(k\) to the target is
\[R_{{}_{k}}(t,l,n)=\left\|\mathbf{p}_{{}_{t}}-\mathbf{p}_{{}_{T,kn}}\right\|_{2}+\nu_{{}_{k}}(t+(n-1+(l-1)N)T_{c}), \tag{3}\]
and the range from the target to the \(m\)-th Rx on vehicle \(i\) is
\[R_{{}_{i}}(t,l,m)=\left\|\mathbf{p}_{{}_{t}}-\mathbf{p}_{{}_{R,im}}\right\|_ {2}+\nu_{{}_{i}}(t+(m-1+(l-1)M)T_{c}). \tag{4}\]
Consequently, the delay introduced into the signal is
\[\tau_{{}_{ki}}=\frac{R_{{}_{k}}(t,l,n)+R_{{}_{i}}(t,l,m)}{c}, \tag{5}\]
where \(c\) is the speed of light.
We assume that each pair of vehicles \(i,k\in\{1,\ldots,K\}\) is connected via V2V communication links. In automotive radar, the signal processing flow sequentially comprises sampling, range estimation, Doppler processing, and DoA estimation. In STAP, after range processing, the Doppler and DoA are processed simultaneously by means of 2D adaptive matched filters [16, 5]. Therefore, after sampling the signal backscattered from the target, i.e. \(s(t-\tau_{{}_{ki}},l)\), and estimating the range, the received signal at the designated range bin at the \(m\)-th Rx on vehicle \(i\) from the \(n\)-th Tx on vehicle \(k\) is
\[s_{{}_{ki}}(l,n,m)= \alpha_{k}e^{-\mathrm{j}\frac{2\pi f_{c}}{c}\nu_{k}((l-1)N+(n-1) )T_{c}}\] \[e^{-\mathrm{j}\frac{2\pi f_{c}}{c}\nu_{i}((l-1)M+(m-1))T_{c}}\] \[e^{\mathrm{j}2\pi f_{c}}\Big{(}\mathbf{p}_{{}_{T,kn}}^{\top} \mathbf{p}_{{}_{tk}}\Big{)}e^{\mathrm{j}2\pi f_{c}}\Big{(}\mathbf{p}_{{}_{R,im }}^{\top}\mathbf{p}_{{}_{tk}}\Big{)}, \tag{6}\]
where the delay in (5) is expanded with respect to the first element of the ULA in a manner similar to [7], and \(\alpha_{k}\) is the complex target reflection factor. We introduce the Doppler steering vector as
\[\mathbf{a}_{{}_{d,N}}(\nu)=\left[1,e^{-\mathrm{j}\frac{2\pi f_{c}\nu}{c}NT_{c }},\ldots,e^{-\mathrm{j}\frac{2\pi f_{c}\nu}{c}(L-1)NT_{c}}\right]^{\top}, \tag{7}\]
and the auxiliary Doppler steering vector as
\[\mathbf{a}_{{}_{D,M}}(\nu)=\left[1,e^{-\mathrm{j}\frac{2\pi f_{c}\nu}{c}T_{c }},\ldots,e^{-\mathrm{j}\frac{2\pi f_{c}\nu}{c}(M-1)T_{c}}\right]^{\top}. \tag{8}\]
The transmit array steering vector at vehicle \(k\) is
\[\mathbf{a}_{{}_{T,k}}(\theta)=\left[e^{\mathrm{j}2\pi f_{c}\left(\mathbf{p}_{{} _{T,k1}}^{\top}\mathbf{p}_{{}_{tk}}\right)},\ldots,e^{\mathrm{j}2\pi f_{c} \left(\mathbf{p}_{{}_{T,kN}}^{\top}\mathbf{p}_{{}_{tk}}\right)}\right]^{\top}, \tag{9}\]
and the receive array steering vector at vehicle \(i\) is
\[\mathbf{a}_{{}_{R,i}}(\theta)=\left[e^{\mathrm{j}2\pi f_{c}\left(\mathbf{p}_{{} _{R,i1}}^{\top}\mathbf{p}_{{}_{ti}}\right)},\ldots,e^{\mathrm{j}2\pi f_{c} \left(\mathbf{p}_{{}_{R,iM}}^{\top}\mathbf{p}_{{}_{ti}}\right)}\right]^{\top}. \tag{10}\]
The snapshot signal received at vehicle \(i\) from the \(n\)-th Tx on vehicle \(k\) is
\[\mathbf{s}_{{}_{ki}}(n)= \left[\mathbf{a}_{{}_{d,N}}(\nu_{k})\odot\mathbf{a}_{{}_{d,M}}( \nu_{i})\right]\] \[\otimes\left[\mathbf{a}_{{}_{R,i}}(\theta)\odot\mathbf{a}_{{}_{D,M} }(\nu_{i})\right]\in\mathds{C}^{LM\times 1}. \tag{11}\]
By stacking the echoes from all N Tx on vehicle \(k\), we obtain
\[\mathbf{s}_{{}_{ki}} =\begin{bmatrix}\mathbf{s}_{{}_{ki}}(1)\\ \vdots\\ \mathbf{s}_{{}_{ki}}(N)\end{bmatrix}\] \[=\left(\mathbf{a}_{{}_{T,k}}(\theta)\odot\mathbf{a}_{{}_{D,N}}(\nu _{k})\right)\otimes\Big{(}\left(\mathbf{a}_{{}_{d,N}}(\nu_{k})\odot\mathbf{a}_{{} _{d,M}}(\nu_{i})\right)\] \[\otimes\left(\mathbf{a}_{{}_{R,i}}(\theta)\odot\mathbf{a}_{{}_{D,M }}(\nu_{i})\right)\Big{)}\in\mathds{C}^{NLM\times 1}. \tag{12}\]
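A sketch of assembling the space-time steering vector of (12) with NumPy follows. The carrier, chirp time, array sizes, geometry, and target kinematics are placeholder values, the helper names (`dop_steer`, `array_steer`) are ours, and the spatial phase uses the standard \(f_{c}/c\) convention assumed for (9)-(10).

```python
import numpy as np

c = 3e8
fc = 77e9                      # carrier (typical automotive band, assumption)
Tc = 50e-6                     # chirp duration (assumption)
N, M, L = 4, 4, 16             # Tx antennas, Rx antennas, pulses

def dop_steer(nu, step, length):
    # Doppler steering of (7)/(8): phase increment nu * step * Tc per entry
    n = np.arange(length)
    return np.exp(-1j * 2 * np.pi * fc * nu / c * n * step * Tc)

def array_steer(pos, theta):
    # spatial steering of (9)/(10); pos: (num_ant, 2) element positions [m]
    d = np.array([np.sin(theta), np.cos(theta)])
    return np.exp(1j * 2 * np.pi * fc / c * pos @ d)

# placeholder geometry: half-wavelength ULAs, target DoA and Doppler speeds
lam = c / fc
tx_pos = np.stack([np.arange(N) * lam / 2, np.zeros(N)], axis=1)
rx_pos = np.stack([np.arange(M) * lam / 2, np.zeros(M)], axis=1)
theta, nu_k, nu_i = np.deg2rad(10.0), 12.0, 8.0    # rad, m/s, m/s

# s_k of (12): (a_Tk ⊙ a_{D,N}) ⊗ ((a_{d,N} ⊙ a_{d,M}) ⊗ (a_Ri ⊙ a_{D,M}))
a_T = array_steer(tx_pos, theta) * dop_steer(nu_k, 1, N)      # Tx part
a_fast = dop_steer(nu_k, N, L) * dop_steer(nu_i, M, L)        # slow-time part
a_R = array_steer(rx_pos, theta) * dop_steer(nu_i, 1, M)      # Rx part
s_k = np.kron(a_T, np.kron(a_fast, a_R))                      # NLM x 1
print(s_k.shape)                                              # (256,)
```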
As illustrated in Fig. 1, we focus on a scenario wherein vehicle \(i\), serving as the lead in the platoon, receives assistance from all other vehicles for its sensing task. The target is in the field of view (FoV) of all vehicles, and therefore the whole CAV can perform as a multistatic radar to sense it. To be succinct, hereafter we drop the subscript \(i\); i.e., without loss of generality, \(\mathbf{s}_{ki}\) is replaced by \(\mathbf{s}_{k}\). In order to decide whether a target is present in a particular known range-cell, we perform binary hypothesis testing between \(\mathcal{H}_{0}\) (target-free hypothesis) and \(\mathcal{H}_{1}\) (target-present hypothesis), i.e.,
\[\mathcal{H}_{0}:\quad\mathbf{y}_{k}=\mathbf{n}_{k}\] \[\mathcal{H}_{1}:\quad\mathbf{y}_{k}=\alpha_{k}\mathbf{s}_{k}+ \mathbf{n}_{k}, \tag{13}\]
where \(\alpha_{k}\) is the complex target reflectivity factor and \(\mathbf{n}_{k}\) is the noise and interference with covariance \(\mathbf{R}_{k}\)[17]. The log-likelihood ratio test statistic is given by [5],
\[\zeta=\sum_{k=1}^{K}\frac{|\mathbf{s}_{k}^{\mathrm{H}}\mathbf{R}_{k}^{-1}\mathbf{y}_{k}|^{2}}{\frac{|\alpha_{k}|^{2}}{2}+\mathbf{s}_{k}^{\mathrm{H}}\mathbf{R}_{k}^{-1}\mathbf{s}_{k}}\ \underset{\mathcal{H}_{0}}{\overset{\mathcal{H}_{1}}{\gtrless}}\ \gamma, \tag{14}\]
where \(\mathbf{C}_{k}=\mathbf{s}_{k}^{\mathrm{H}}\mathbf{R}_{k}^{-1}\mathbf{s}_{k}\) and \(\gamma\) is the detection threshold. The probabilities of detection and false alarm are obtained, respectively, as
\[\begin{split}\mathtt{P}_{\mathtt{D}}&=\Pr\left\{ \zeta>\gamma|\mathcal{H}_{1}\right\}=1-\Pr\left\{\zeta\leq\gamma|\mathcal{H}_{ 1}\right\}=1-F_{\zeta|\mathcal{H}_{1}}(\gamma|\mathcal{H}_{1}),\\ \mathtt{P}_{\mathtt{FA}}&=\Pr\left\{\zeta>\gamma| \mathcal{H}_{0}\right\}=1-\Pr\left\{\zeta\leq\gamma|\mathcal{H}_{0}\right\}=1 -F_{\zeta|\mathcal{H}_{0}}(\gamma|\mathcal{H}_{0}),\end{split} \tag{16}\]
where \(F_{\zeta|\mathcal{H}}(\cdot)\) is the cumulative distribution function of the test statistic, which follows a hypo-exponential distribution. By carefully observing (14), one can verify that the weighting \((\frac{|\alpha_{k}|^{2}}{2}+\mathbf{s}_{k}^{\mathrm{H}}\mathbf{R}_{k}^{-1}\mathbf{s}_{k})^{-1}\) accounts for the contribution of vehicle \(k\) to the test statistic. If \(\mathbf{s}_{k}^{\mathrm{H}}\mathbf{R}_{k}^{-1}\mathbf{s}_{k}\ll\frac{|\alpha_{k}|^{2}}{2}\), we can interpret this as the signal propagated from vehicle \(k\) not reaching the receiver. Therefore, for the sake of interpretation, here we assume \(\mathbf{s}_{k}^{\mathrm{H}}\mathbf{R}_{k}^{-1}\mathbf{s}_{k}\gg\frac{|\alpha_{k}|^{2}}{2}\) for all vehicles, i.e. \(k\in\{1,\ldots,K\}\). Under this condition, the mean of the test statistic under \(\mathcal{H}_{1}\) becomes
\[\mathbb{E}\left\{\zeta|\mathcal{H}_{1}\right\}=\sum_{k=1}^{K}\left(1+2|\alpha_{k}|^{2}\mathbf{C}_{k}\right) \tag{17}\]
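To make the detector concrete, a small Monte Carlo sketch of the statistic (14) is given below, with white interference (\(\mathbf{R}_{k}=\mathbf{I}\)), random placeholder steering vectors, and toy dimensions; none of the numerical values are from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 64                           # vehicles, snapshot length (toy sizes)
alpha = np.full(K, 0.2)                # target reflectivities (placeholder)
S = [rng.standard_normal(D) + 1j * rng.standard_normal(D) for _ in range(K)]
Rinv = [np.eye(D) for _ in range(K)]   # white interference assumed

def zeta(Y):
    """Test statistic (14) for one snapshot set Y = [y_1, ..., y_K]."""
    val = 0.0
    for yk, sk, Ri, ak in zip(Y, S, Rinv, alpha):
        Ck = (sk.conj() @ Ri @ sk).real
        val += abs(sk.conj() @ Ri @ yk) ** 2 / (abs(ak) ** 2 / 2 + Ck)
    return val

def noise():
    return [(rng.standard_normal(D) + 1j * rng.standard_normal(D)) / np.sqrt(2)
            for _ in range(K)]

z0 = [zeta(noise()) for _ in range(2000)]                        # H0 runs
z1 = [zeta([a * s + n for a, s, n in zip(alpha, S, noise())])    # H1 runs
      for _ in range(2000)]
gamma = np.quantile(z0, 0.99)          # threshold for P_FA ~ 1e-2
print("P_D ~", np.mean(np.array(z1) > gamma))
```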
The signal received at vehicle \(i\) from all vehicles, i.e., \(k\in\{1,\ldots,K\}\), is thus given by
\[\mathbf{s}=\begin{bmatrix}\mathbf{s}_{1}\\ \vdots\\ \mathbf{s}_{K}\end{bmatrix}\in\mathbb{C}^{KNLM}, \tag{18}\]
where \(\mathbf{s}\) is also equivalent to the 2D space-time steering vector of the target as observed at vehicle \(i\).
## 3 TDM Design
Since multiple signals share the same channel, the transmitted signals need to be orthogonal to be distinguishable at the receiver. For this reason, within a CPI the antennas need to take turns transmitting. We propose to incorporate TDM by designing a transmitter scheduling matrix for the platoon of vehicles. In this particular TDM scheme, at most one antenna within the platoon is allowed to transmit during each pulse. We reformulate the steering vectors under the TDM scheme. If \(\mathbf{e}_{k_{n}}\) is a one-hot vector of size \(L\times 1\) with a single element equal to unity and the remaining elements zero, then under TDM, (11) becomes
\[\mathbf{s}_{ki}(n)=\begin{bmatrix}\mathbf{e}_{k_{m}}\odot\mathbf{a}_{d,N}( \nu_{k})\odot\mathbf{a}_{d,M}(\nu_{i})\end{bmatrix}\otimes\begin{bmatrix} \mathbf{a}_{R,i}(\theta)\odot\mathbf{a}_{{}_{D,M}}(\nu_{i})\end{bmatrix}, \tag{19}\]
where the unity element in \(\mathbf{e}_{k_{n}}\) indicates the pulse at which transmitter antenna \(n\) on vehicle \(k\) transmits. Let \(\mathbf{J}_{k}=\left[\mathbf{e}_{k_{1}}|\ldots|\mathbf{e}_{k_{N}}\right]\in\{0,1\}^{L\times N}\). Following the same procedure leading to (12), we can write the TDM-modulated steering vector as
\[\bar{\mathbf{s}}_{k}=\left(\operatorname{vec}\left(\mathbf{J}_{k}\right)\otimes\mathbf{1}_{M}\right)\odot\Big{(}\left(\mathbf{a}_{{}_{T,k}}(\theta)\odot\mathbf{a}_{{}_{D,N}}(\nu_{k})\right)\otimes\left(\mathbf{a}_{{}_{d,N}}(\nu_{k})\odot\mathbf{a}_{{}_{d,M}}(\nu_{i})\right)\otimes\left(\mathbf{a}_{{}_{R,i}}(\theta)\odot\mathbf{a}_{{}_{D,M}}(\nu_{i})\right)\Big{)}=\left(\operatorname{vec}\left(\mathbf{J}_{k}\right)\otimes\mathbf{1}_{M}\right)\odot\mathbf{s}_{k}\, \tag{20}\]
and (18), after applying the TDM, becomes
\[\begin{split}\bar{\mathbf{s}}&=\begin{bmatrix}\left(\operatorname{vec}\left(\mathbf{J}_{1}\right)\otimes\mathbf{1}\right)\odot\mathbf{s}_{1}\\ \vdots\\ \left(\operatorname{vec}\left(\mathbf{J}_{K}\right)\otimes\mathbf{1}\right)\odot\mathbf{s}_{K}\end{bmatrix}=\left(\operatorname{vec}\left(\left[\mathbf{J}_{1}|\ldots|\mathbf{J}_{K}\right]\right)\otimes\mathbf{1}_{M}\right)\odot\mathbf{s}\\ &=\left(\operatorname{vec}\left(\mathbf{J}\right)\otimes\mathbf{1}_{M}\right)\odot\mathbf{s}\in\mathbb{C}^{KNLM\times 1}, \end{split} \tag{21}\]
where \(\mathbf{J}=\left[\mathbf{J}_{1}|\ldots|\mathbf{J}_{K}\right]\in\{0,1\}^{L\times KN}\) is the waveform selection matrix. Without loss of generality, we design a TDM scheme in which the antennas in the whole platoon take turns transmitting over the pulses. Under this scheme, for a CPI of length \(L=KN\), the antenna selected to transmit at pulse \(l\) excludes all other antennas from transmitting, and as a consequence the waveform selection matrix \(\mathbf{J}\) is a permutation matrix of size \(L\times L\). We intend to maximize the target detection performance, using the mean of the test statistic as the design criterion. Consequently, the TDM design problem is
\[\mathcal{P}_{1}:\text{ }\underset{\mathbf{J}}{\text{minimize}} \mathbb{E}\left\{\zeta|\mathcal{H}_{1}\right\}\] subject to \[\sum_{p}\mathbf{J}_{pn}=1,\qquad p,n\in\{1,\ldots,L\};\] \[\sum_{n}\mathbf{J}_{pn}=1;\] \[\mathbf{J}_{pn}\in\{0,1\}. \tag{22}\]
## 4 Solution Methodology
In this section, we first demonstrate that \(\mathcal{P}_{1}\) is a QAP [19], a combinatorial optimization problem that is NP-hard in its general form. Consequently, we introduce a computationally efficient procedure to obtain a local optimum of the QAP. Our proposed method takes advantage of the power method-like iterations introduced in [20], which resemble the well-known power method for computing the dominant eigenvalue and eigenvector pairs of matrices.
We accumulate the interference covariance matrices in \(\mathbf{R}\) such that \(\mathbf{R}=\text{Blkdiag}\left(\mathbf{R}_{1},\ldots,\mathbf{R}_{K}\right)\). By substituting (21) in (17) we obtain
\[\begin{split}\mathbb{E}\left\{\zeta|\mathcal{H}_{1}\right\}&=\bar{\mathbf{s}}^{\mathrm{H}}\mathbf{R}^{-1}\bar{\mathbf{s}}\\ &=\left(\operatorname{vec}\left(\mathbf{J}\right)\otimes\mathbf{1}_{M}\right)^{\mathrm{H}}\operatorname{Diag}\left(\mathbf{s}\right)^{\mathrm{H}}\mathbf{R}^{-1}\operatorname{Diag}\left(\mathbf{s}\right)\left(\operatorname{vec}\left(\mathbf{J}\right)\otimes\mathbf{1}_{M}\right)\\ &=\left(\operatorname{vec}\left(\mathbf{J}\right)\otimes\mathbf{1}_{M}\right)^{\mathrm{H}}\mathbf{Q}\left(\operatorname{vec}\left(\mathbf{J}\right)\otimes\mathbf{1}_{M}\right)\\ &=\operatorname{vec}\left(\mathbf{J}\right)^{\mathrm{H}}\mathbf{G}^{\mathrm{H}}\mathbf{Q}\,\mathbf{G}\operatorname{vec}\left(\mathbf{J}\right), \end{split} \tag{23}\]
where
\[\mathbf{Q}=\operatorname{Diag}\left(\mathbf{s}\right)^{\mathrm{H}}\mathbf{R}^{-1}\operatorname{Diag}\left(\mathbf{s}\right), \tag{24}\] \[\mathbf{G}=\left(\mathbf{I}_{KNL}\otimes\mathbf{1}_{M}\right)\mathbf{K}_{L,KN}, \tag{25}\]
and \(\mathbf{K}_{L,KN}\) is the commutation matrix satisfying
\[\mathbf{K}_{L,KN}\operatorname{vec}\left(\mathbf{J}\right)=\operatorname{vec}\left(\mathbf{J}^{\top}\right). \tag{26}\]
The above algebraic manipulations cast \(\mathcal{P}_{1}\) as the equivalent QAP
\[\mathcal{P}_{2}:\text{ }\underset{\mathbf{J}\in\mathbf{\Omega}}{\text{minimize}} \operatorname{vec}\left(\mathbf{J}\right)^{\text{H}}\mathbf{S}\;\operatorname{ vec}\left(\mathbf{J}\right), \tag{27}\]
where \(\mathbf{S}=\mathbf{G}^{\text{H}}\mathbf{Q}\mathbf{G}\) and \(\mathbf{\Omega}\) is the set of permutation matrices i.e.
\[\mathbf{\Omega}=\left\{\mathbf{J}\;\middle|\;\sum_{p}\mathbf{J}_{pn}=1,\quad\sum_{n}\mathbf{J}_{pn}=1,\quad\mathbf{J}_{pn}\in\{0,1\},\quad p,n\in\{1,\ldots,L\}\right\}. \tag{28}\]
By performing diagonal loading, i.e., substituting \(\mathbf{S}\) with the positive semi-definite matrix \(\bar{\mathbf{S}}=\lambda_{m}\mathbf{I}-\mathbf{S}\), where \(\lambda_{m}\) is the maximum eigenvalue of \(\mathbf{S}\), \(\mathcal{P}_{2}\) is turned into the equivalent maximization problem:
\[\mathcal{P}_{3}:\;\underset{\mathbf{J}\in\mathbf{\Omega}}{\text{ maximize}}\quad\mathrm{vec}\left(\mathbf{J}\right)^{\mathrm{H}}\bar{\mathbf{S}}\; \mathrm{vec}\left(\mathbf{J}\right). \tag{29}\]
One can locally optimize \(\mathcal{P}_{3}\) by resorting to _power method-like_ iterations of the form [20, 21]:
\[\mathcal{P}_{4}:\;\underset{\mathbf{J}^{(s+1)}\in\mathbf{\Omega}}{\text{minimize}} \quad\|\operatorname{vec}\left(\mathbf{J}^{(s+1)}\right)-\bar{\mathbf{S}}\operatorname{vec}\left(\mathbf{J}^{(s)}\right)\|_{2} \tag{30}\] \[\equiv \underset{\mathbf{J}^{(s+1)}\in\mathbf{\Omega}}{\text{minimize}} \quad\|\mathbf{J}^{(s+1)}-\operatorname{vec}_{L,L}^{-1}\left(\bar{\mathbf{S}}\operatorname{vec}\left(\mathbf{J}^{(s)}\right)\right)\|_{\mathrm{F}}.\]
We define the matrix \(\mathbf{C}^{(s)}=-\mathrm{vec}_{L,L}^{-1}\left(\bar{\mathbf{S}}\;\mathrm{vec} \left(\mathbf{J}^{(s)}\right)\right)\). It is straightforward to see
\[\begin{split}\|\mathbf{J}^{(s+1)}+\mathbf{C}^{(s)}\|_{\mathrm{F}}^{2}&=\operatorname{Tr}\left((\mathbf{J}^{(s+1)}+\mathbf{C}^{(s)})^{\mathrm{H}}(\mathbf{J}^{(s+1)}+\mathbf{C}^{(s)})\right)\\ &=\operatorname{Tr}\left(\mathbf{I}+\mathbf{C}^{(s)\mathrm{H}}\mathbf{C}^{(s)}\right)+\operatorname{Tr}\left(\mathbf{C}^{(s)\mathrm{H}}\mathbf{J}^{(s+1)}+\mathbf{J}^{(s+1)\mathrm{H}}\mathbf{C}^{(s)}\right)\\ &=\operatorname{Tr}\left(\mathbf{I}+\mathbf{C}^{(s)\mathrm{H}}\mathbf{C}^{(s)}\right)+2\operatorname{Tr}\left(\mathbf{J}^{(s+1)}\mathbf{C}^{(s)\mathrm{H}}\right), \end{split} \tag{31}\]
where we used the orthogonality property of permutation matrices, i.e., \(\mathbf{J}^{(s+1)\mathrm{H}}\mathbf{J}^{(s+1)}=\mathbf{I}\), in the first equality. Consequently, \(\mathcal{P}_{4}\) is equivalent to
\[\mathcal{P}_{5}:\;\;\underset{\mathbf{J}^{(s+1)}\in\mathbf{\Omega}}{\text{ minimize}}\quad\mathrm{Tr}\left(\mathbf{J}^{(s+1)}\mathbf{C}^{(s)\mathrm{H}} \right). \tag{32}\]
Note that the above problem is in fact a _linear assignment problem_ with cost matrix \(\mathbf{C}^{(s)\mathrm{H}}\), which can be solved efficiently using the _Hungarian algorithm_, also known as the _Munkres assignment algorithm_ [14], whose efficient implementations run in \(\mathcal{O}(L^{3})\) time. Our final proposed algorithm for transmitter scheduling in CAVs based on power method-like iterations is presented in Algorithm 1. As shown in [20], the objective \(f(\mathbf{J})=\operatorname{vec}\left(\mathbf{J}\right)^{\mathrm{H}}\bar{\mathbf{S}}\operatorname{vec}\left(\mathbf{J}\right)\) is non-decreasing through the power method-like iterations and convergent in terms of the objective value. Consequently, we use the stopping criterion \(\left|\left[f(\mathbf{J}^{(s+1)})-f(\mathbf{J}^{(s)})\right]/f(\mathbf{J}^{(s)})\right|<\epsilon\) for the algorithm.
```
1:Input The overall steering vector of the CAV \(\mathbf{s}\)
2:Initialization \(\mathbf{J}^{(0)}\in\mathbf{\Omega}\), \(s=0\)
3:\(\bar{\mathbf{S}}=\lambda_{m}\mathbf{I}-\mathbf{S}\)
4:While\(\big{|}\left[\;f(\mathbf{J}^{(s+1)})-f(\mathbf{J}^{(s)})\;\right]\big{/}f(\mathbf{J}^{(s)})\;\big{|}\geq\epsilon\)do
5:\(\mathbf{C}^{(s)}=-\mathrm{vec}_{L,L}^{-1}\left(\bar{\mathbf{S}}\;\mathrm{vec }\left(\mathbf{J}^{(s)}\right)\right)\)
6:\(\mathbf{J}^{(s+1)}\leftarrow\text{Hungarian}(\mathbf{C}^{(s)\mathrm{H}})\)
7:\(s\gets s+1\)
8:\(\mathbf{J}_{\text{opt}}\leftarrow\mathbf{J}^{(s)}\)
9:Output\(\mathbf{J}_{\text{opt}}\)
```
**Algorithm 1** Power method-like iterations for transmitter scheduling in CAVs.
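For reference, a minimal Python sketch of Algorithm 1 is given below. It assumes the diagonally loaded matrix \(\bar{\mathbf{S}}\) (of size \(L^{2}\times L^{2}\)) has been precomputed, takes the real part of the cost matrix, and solves each linear assignment with SciPy's Hungarian solver; all variable names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tdm_schedule(S_bar, L, eps=1e-6, max_iter=1000, seed=0):
    """Power method-like iterations for P3: locally maximize
    vec(J)^H S_bar vec(J) over L x L permutation matrices J."""
    rng = np.random.default_rng(seed)
    J = np.eye(L)[rng.permutation(L)]       # random initial permutation

    vec = lambda A: A.reshape(-1, order="F")        # column-stacking vec()
    f = lambda J: np.real(vec(J) @ S_bar @ vec(J))  # objective value

    f_old = f(J)
    for _ in range(max_iter):
        # C^(s) = -unvec(S_bar vec(J^(s)))
        C = -(S_bar @ vec(J)).reshape(L, L, order="F")
        # P5: minimize Tr(J C^H), a linear assignment with cost Re(C)
        rows, cols = linear_sum_assignment(np.real(C))
        J_new = np.zeros((L, L))
        J_new[rows, cols] = 1.0
        f_new = f(J_new)
        if abs((f_new - f_old) / f_old) < eps:      # stopping criterion
            return J_new
        J, f_old = J_new, f_new
    return J
```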
## 5 Numerical Investigation
We carried out numerical experiments to evaluate the performance of the proposed algorithm for TDM in CAVs. We considered a platoon consisting of \(K=3\) vehicles, each equipped with \(M=N=8\) Tx and Rx antennas arranged as uniform linear arrays. All vehicles are assumed to have FMCW radar systems operating at carrier frequency \(f_{c}=77\) GHz, bandwidth \(B=150\) MHz and chirp time \(T_{c}=8\)\(\mu\)s. The vehicles are moving with 2D velocities \(\mathbf{v}_{1}=[20,20]\) m/s, \(\mathbf{v}_{2}=[-10,-20]\) m/s and \(\mathbf{v}_{3}=[30,15]\) m/s. Algorithm 1 is performed to design the TDM for a CPI of length \(L=24\) pulses. The receiver operating characteristics (RoC) associated with the described CAV when one, two or three of the vehicles are cooperating and transmitting with optimized TDM are illustrated in Fig. 2. The results demonstrate significant improvement in comparison with sequential transmission, where the transmitters on each vehicle are activated in order of their indices \(\{1,2,\ldots,N\}\), i.e., the TDM matrix is the identity.
## 6 Summary
Vehicle-to-vehicle communication is a fundamental capability that enables CAVs to perform distributed STAP. A cooperative STAP scheme for the vehicles in a CAV platoon was developed in this paper. Furthermore, we introduced a TDM scheme for orthogonal transmission of the antennas in the CAV platoon. The transmitter scheduling was formulated as a quadratic assignment problem. By means of the well-known power method-like iterations, we showed that the TDM design problem can be reduced to a linear assignment problem in each iteration, which the efficient Hungarian algorithm can effectively address. Through numerical simulations, we confirmed the efficacy of the suggested TDM approach in enhancing target detection performance.
|
2309.11537 | GWAK: Gravitational-Wave Anomalous Knowledge with Recurrent Autoencoders | Matched-filtering detection techniques for gravitational-wave (GW) signals in
ground-based interferometers rely on having well-modeled templates of the GW
emission. Such techniques have been traditionally used in searches for compact
binary coalescences (CBCs), and have been employed in all known GW detections
so far. However, interesting science cases aside from compact mergers do not
yet have accurate enough modeling to make matched filtering possible, including
core-collapse supernovae and sources where stochasticity may be involved.
Therefore the development of techniques to identify sources of these types is
of significant interest. In this paper, we present a method of anomaly
detection based on deep recurrent autoencoders to enhance the search region to
unmodeled transients. We use a semi-supervised strategy that we name
Gravitational Wave Anomalous Knowledge (GWAK). While the semi-supervised nature
of the problem comes with a cost in terms of accuracy as compared to supervised
techniques, there is a qualitative advantage in generalizing experimental
sensitivity beyond pre-computed signal templates. We construct a
low-dimensional embedded space using the GWAK method, capturing the physical
signatures of distinct signals on each axis of the space. By introducing signal
priors that capture some of the salient features of GW signals, we allow for
the recovery of sensitivity even when an unmodeled anomaly is encountered. We
show that regions of the GWAK space can identify CBCs, detector glitches and
also a variety of unmodeled astrophysical sources. | Ryan Raikman, Eric A. Moreno, Ekaterina Govorkova, Ethan J Marx, Alec Gunny, William Benoit, Deep Chatterjee, Rafia Omer, Muhammed Saleem, Dylan S Rankin, Michael W Coughlin, Philip C Harris, Erik Katsavounidis | 2023-09-20T18:00:00Z | http://arxiv.org/abs/2309.11537v1 | # GWAK: Gravitational-Wave Anomalous Knowledge with Recurrent Autoencoders
###### Abstract
Matched-filtering detection techniques for gravitational-wave (GW) signals in ground-based interferometers rely on having well-modeled templates of the GW emission. Such techniques have been traditionally used in searches for compact binary coalescences (CBCs), and have been employed in all known GW detections so far. However, interesting science cases aside from compact mergers do not yet have accurate enough modeling to make matched filtering possible, including core-collapse supernovae and sources where stochasticity may be involved. Therefore the development of techniques to identify sources of these types is of significant interest. In this paper, we present a method of anomaly detection based on deep recurrent autoencoders to enhance the search region to unmodeled transients. We use a semi-supervised strategy that we name _"Gravitational Wave Anomalous Knowledge"_ (GWAK). While the semi-supervised nature of the problem comes with a cost in terms of accuracy as compared to supervised techniques, there is a qualitative advantage in generalizing experimental sensitivity beyond pre-computed signal templates. We construct a low-dimensional embedded space using the GWAK method, capturing the physical signatures of distinct signals on each axis of the space. By introducing signal priors that capture some of the salient features of GW signals, we allow for the recovery of sensitivity even when an unmodeled anomaly is encountered. We show that regions of the GWAK space can identify CBCs, detector glitches and also a variety of unmodeled astrophysical sources.
* July 2023
_Keywords_: Machine Learning, Semi-supervised Learning, Anomaly Detection, Gravitational-Wave physics, Autoencoders
## 1 Introduction
Since the original observation of gravitational waves (GW) [1] by Advanced LIGO [2] and Advanced VIRGO [3], and with the recent introduction of KAGRA [4], more than 90 gravitational-wave events [5, 6, 7] have been catalogued to date, fundamentally transforming our way of observing the universe. While all of the detected signals thus far correspond to the coalescence of binary black hole (BBH), binary neutron star (BNS), or black hole - neutron star (BHNS) mergers [8, 9, 10, 11, 12], there is growing interest in the detection of unmodeled signals, which are not described by any known theoretical waveforms, computationally prohibitive to simulate, or are stochastic in nature. These signals may originate from sources such as core-collapse supernovae (CCSN) [13], as well as exotic sources such as cosmic strings [14, 15], axion stars [16], neutron star glitches, or primordial black holes [17, 18]. Detection of such sources could lead to new insights into the fundamental physics of the universe.
Typical methods for detecting GWs rely on matched filtering techniques [19], which compare the observed data to a known signal template. These methods require precise knowledge of the signal prior, such as the waveform and the parameters of the source, in order to detect the signal. While matched filtering is a well-established and powerful method for detecting gravitational waves, it has certain limitations. For example, matched filtering is sensitive only to signals that match the known templates, and may miss signals with different waveforms or parameters. This makes matched filtering unsuitable for unmodeled signal searches.
The GW community has developed several unmodeled approaches that do not rely on a specific waveform model. Some of these currently used by the International Gravitational-wave Network (IGWN) include cWB [20, 21], which searches for and reconstructs GW transient signals without relying on a specific waveform model. Another framework, oLIB [22] uses the Q transform to decompose GW strain data into several time-frequency planes of constant quality factors. The pipeline flags data segments containing excess power and searches for clusters of these segments to identify possible GW candidate events. MLy (read as "Emily") [23] is a machine-learning-based search for generic sub-second-duration transient GW signals in the 20 to 500 Hz frequency band; MLy works by utilizing convolutional neural networks (CNNs) trained to recognize signals that are simultaneous and coherent between detectors.
In this paper, we explore the use of a method introduced in Ref. [24] deployed within the High Energy Physics community for the development of an anomaly detection pipeline for data collected by GW observatories. We introduce _"Gravitational Wave Anomalous Knowledge"_ (GWAK), a strategy for anomaly search that combines deep learning (DL) techniques with prior information on potential signals to improve the sensitivity of detection. The GWAK algorithm is based on the intuition that unknown transient sources should loosely resemble known signals, as well as be coherent between present GW detectors. We apply the GWAK method to GW datasets, by introducing signal priors that capture some of the salient features of GW signatures, allowing for the
recovery of sensitivity even when the observed signal does not match the known priors.
Because GWAK does not rely on precise prior knowledge of the signal and can detect signals with unknown waveforms or parameters by matching incoming data streams with salient features (cross-correlation, oscillations, etc.) that are generic to broad types of GWs, GWAK is more robust and powerful for the detection of unknown signals. As such, it can be used as a complementary approach to methods such as matched filtering for detecting transient GWs, and it has the potential to improve the sensitivity of GW detection systems to sources of this type.
This paper is organized as follows: in Section 2, we provide a brief review of deep learning in GW detection and previous work on machine-learning anomaly detection. In Section 3, we present the GWAK method for constructing embedded spaces for anomalous searches and the autoencoder architectures used to build these embedded spaces, with the data used for this study described in Section 3.2. In Section 4, we discuss the performance of the GWAK method on real GW data. Conclusions and next steps are provided in Section 5.
The code used to analyze data and generate results and plots is available online.1
Footnote 1: [https://github.com/ML4GW/gwak](https://github.com/ML4GW/gwak)
## 2 Related work
Deep learning approaches for GW detection are well explored [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. However, these methods typically rely on supervised learning techniques, which provide competitive efficiency by exploiting neural network nonlinearity and the information provided by ground-truth labels. By construction, these methods rely on a realistic simulation of the signal generated by a specific kind of source, which is assumed upfront. With supervised DL approaches, there is no guarantee of generalizability to an out-of-training event, what we refer to as "anomalies."
Autoencoders are commonly implemented for a variety of GW applications. They can be used for non-linear subtraction of noise from incoming time-series of GW strain [36, 37, 38] in real-time [39]. Additionally, autoencoders are used in generative models, speeding up the computation of GW waveforms relative to very computationally expensive numerical relativity simulations [40]. Finally, due to the transient nature of single-detector artifacts, "glitches", autoencoders can be used for glitch classification in an unsupervised manner [41].
There have also been explorations into unsupervised GW detection to enhance detection capabilities beyond signal templates and simulation. Initial studies with a Convolutional Neural Network (CNN) based autoencoder [42] and Long-Short Term Memory (LSTM) based autoencoder [43] show that unsupervised detection is possible. Both studies rely on learning the typical features of the background. Once a model is trained, it is then used to evaluate the similarity between background priors and new unseen data. To do this, the algorithms use reconstruction loss, computed by comparing the original signal and the signal outputted by the model trained on the
background prior, as a detection statistic. By comparing the reconstruction error of new data with a threshold corresponding to an allowed false alarm rate (FAR), data points that deviate enough from the normal pattern are identified as anomalous. This relies on the autoencoder's inability to reconstruct any potential signal that deviates from the background, or generally the prior on which the model was trained, triggering a high reconstruction loss. This approach has been shown to effectively detect out-of-training anomalies in GW datasets and has the potential to improve the performance of anomaly detection systems. However, for a specific signal, an algorithm trained with an unsupervised procedure on unlabeled data is typically less accurate than a supervised classifier trained on labeled data.
We build upon this approach by training several autoencoders; each with a different signal prior to enhance signal sensitivity. The signals that were chosen to be used in this paper are described in Sec. 3.2. Unlike an earlier study [43] which used simulated Gaussian background as a proof of concept, our method is trained with real background data. This is significantly more challenging than just simulated Gaussian noise, and therefore no direct comparison of the results presented here to the previous ones can be made.
Although the autoencoder architecture can implicitly learn detector correlation, a related method explicitly leverages the correlation across detectors [44]. The method trains detector correlation with a white-noise burst (WNB) [45, 46] signal prior, which should be harder to detect than any other type of signal given its lack of a distinctive morphology. Similarly to [44], we choose not to rely on our autoencoders to learn the correlation between the two detector sites. Instead, we directly compute and include the correlation in our final metric that is used to find signal events as another axis in the GWAK embedded space. As such, the present version of our method is applicable to the case of aligned GW detectors. However, more off-plane GW detectors are being proposed, and a future area of work is generalizing the method to an arbitrary detector network.
## 3 GWAK algorithm
GWAK (reads: _guac_) builds on the concept of semi-supervision, pulling on concepts from both supervised and unsupervised learning. The semi-supervised method is manifested by using simulated signals as approximations for anomalous-unmodeled signals in the GWAK embedded space. We use five classes of datasets that can help us to build an informative space to search for these unmodeled signals. A separate unsupervised autoencoder network is then trained on each class of samples separately, resulting in a low-dimensional "GWAK" space consisting of the coherence metrics between the autoencoder inputs and outputs for each autoencoder, which is then used to search for anomalous signals. This approach is particularly useful for detecting new physics phenomena, where the signal prior is unknown but the simulation of some potential signal pattern, like BBH and sine-Gaussians, is available. The GWAK method results
in classes of anomalous signals inhabiting different regions of the continuous GWAK space, each being reconstructed differently by the five autoencoders. Searches can then be performed on the lower-dimensional embedded space in the regions that anomalies are expected to inhabit.
### Network architectures
One of the main advantages of LSTM autoencoders is their ability to handle sequential data with temporal dependencies. This makes them suitable for anomaly detection in time-series data, such as GWs, speech signals, and sensor data. The LSTM autoencoder consists of an encoder and a decoder, where the encoder maps the input sequence to a fixed-length vector, and the decoder maps the vector back to the original sequence. We use a similar architecture to that optimized in [43].
For the signal classes, we used the LSTM autoencoder as described above, with bottleneck sizes of 4, 8 and 8 for the binary black hole (BBH), low-frequency sine-Gaussian (SG) and high-frequency SG signals, respectively. For the background classes, we used a fully connected dense model. This was done as the signal classes have temporal behavior
Figure 1: The 2D and 3D schematic depicting the main idea behind the GWAK algorithm. For plotting purposes we only show one background and two signal axes, while in this paper we are actually using 5 (2 for background and 3 for signals), and the main idea remains the same. All the fruits (orange, avocado and apple) are “signals” that we would like to select, broccoli represents background that we would like to suppress and the duck represents true anomalies, such as detector malfunctioning, and glitches. Shaded pink and green region depicts selection regions that would correspond to different FARs. For higher FAR (pink shading) more signal-like anomalies would be picked up (orange, avocado and apple), while for a selection that corresponds to a lower FAR (green shading), only “avocado signal” would be picked up.
to exploit, whereas glitches have smaller-scale, localized features. The total number of trainable parameters is 510,324 for the BBH LSTM AE, 511,672 each for the SG 64–512 Hz and SG 512–1024 Hz AEs, 243,352 for the background AE, and 241,302 for the glitch AE.
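As an illustration of the models just described, below is a minimal PyTorch sketch of a seq2seq LSTM autoencoder. Only the bottleneck sizes and the (batch, 200 samples, 2 detectors) input shape are taken from the text; the hidden size and exact encoder/decoder layout are our own simplifications.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encode a (batch, 200, 2) strain window into a small bottleneck,
    then decode it back to a reconstruction of the same shape."""
    def __init__(self, n_channels=2, hidden=32, bottleneck=4, seq_len=200):
        super().__init__()
        self.seq_len = seq_len
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True)
        self.to_bottleneck = nn.Linear(hidden, bottleneck)
        self.decoder = nn.LSTM(bottleneck, hidden, batch_first=True)
        self.to_output = nn.Linear(hidden, n_channels)

    def forward(self, x):
        _, (h, _) = self.encoder(x)            # h: (1, batch, hidden)
        z = self.to_bottleneck(h[-1])          # (batch, bottleneck)
        z = z.unsqueeze(1).repeat(1, self.seq_len, 1)
        y, _ = self.decoder(z)                 # (batch, seq_len, hidden)
        return self.to_output(y)               # reconstruction

model = LSTMAutoencoder(bottleneck=4)          # e.g. the BBH model
x = torch.randn(8, 200, 2)                     # 8 two-detector windows
loss = nn.L1Loss()(model(x), x)                # MAE reconstruction loss
```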
### Data samples
The dataset used in this study was collected by the LIGO Hanford (H1) and LIGO Livingston (L1) [47] gravitational-wave detectors during the first half of the third observing run (O3a), which took place between 1st April 2019 and 1st Oct 2019. We specifically used publicly available data between GPS times of 1238166018 and 1238170289, right at the beginning of the run. Next, the time-series data were downsampled from 16384 Hz to 4096 Hz and processed to remove, and collect into a separate dataset, transient instrumental artifacts (glitches), using the excess-power identification algorithm Omicron [48]. We used \(Q_{min}=3.3166\), \(Q_{max}=108\), and \(f_{min}=32\) Hz for the Omicron algorithm. We then took 4 s segments of the data without noise artifacts to
Figure 2: Graphical illustration of LSTM autoencoder, preserving the traditional encoder and decoder structure, which allows for a reconstruction loss detection statistic.
serve as the baseline for the injection of signals. As such, we created five different classes of data as proxies for signals and background signatures:
* Simulated BBH signals using IMRPhenomPv2 [49, 50, 51] injected into the real background noise, as shown in Fig. 5(top left). The simulation parameters are given in Table 1.
* Background from O3a with the DQSegDB state flag DCS-ANALYSIS_READY_C01:1 applied and with excess power glitches [48] and known GW events removed, as shown in Fig. 4 (bottom).
* A generic low-frequency sine-Gaussian signal model used to simulate generic GW sources, as shown in Fig. 5 (middle left).
* A generic high-frequency sine-Gaussian signal model used to simulate generic GW sources, as shown in Fig. 5 (bottom left).
* Transient instrumental glitches (often of unknown origin) flagged by Omicron [48] as having excess power, as shown in Fig. 4 (top).
To create samples of BBH and SG signals, we used numerical simulations to generate
Figure 3: Graphical illustration of DNN autoencoder, preserving the traditional encoder and decoder structure, which allows for a reconstruction loss detection statistic.
\(h_{+}\) and \(h_{\times}\) polarization modes. We then sampled sky localizations uniformly in the sky, projected the polarization modes onto the sky location, and injected the projected modes into the two LIGO detectors.
The BBH sample is generated with the parameters and priors [52] shown in Table 1, taken from the bilby precessing-spins BBH prior, and the sine-Gaussian samples are generated with the parameters and priors shown in Table 1.
We employed a series of digital signal processing techniques to prepare data for training autoencoders in the context of GW detection. Specifically, we first applied a whitening filter to normalize the data with respect to one hour of surrounding background data. This filter effectively suppressed frequency regions dominated by noise and reduced effects from spectral lines (see Footnotes 1 and 2). Moreover, we implemented a band-pass filter within the frequency range of 30-1500 Hz to further attenuate noise outside of the most sensitive frequency range of GW instruments. After applying these filters, we removed 1 s intervals from each end of the data samples to eliminate any edge effects from preprocessing. The remaining 2 s samples, each containing either an injected signal, pure background, a low/high frequency sine-Gaussian, or a glitch artifact, were used to generate training data.
Footnote 1: [https://gwosc.org/s6speclines/](https://gwosc.org/s6speclines/)
Footnote 2: [https://dcc.ligo.org/LIGO-T1500415/public](https://dcc.ligo.org/LIGO-T1500415/public)
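The preprocessing chain can be sketched with gwpy as follows; the GPS interval is illustrative, and `whiten()` here estimates the PSD from the segment itself rather than from an hour of surrounding data as done in our pipeline.

```python
from gwpy.timeseries import TimeSeries

# Fetch a 4 s stretch of public O3a Hanford data (interval illustrative)
raw = TimeSeries.fetch_open_data("H1", 1238166018, 1238166022,
                                 sample_rate=16384)
data = raw.resample(4096)         # 16384 Hz -> 4096 Hz
data = data.whiten()              # suppress noise-dominated regions
data = data.bandpass(30, 1500)    # keep the most sensitive band
t0 = data.t0.value
data = data.crop(t0 + 1, t0 + 3)  # remove 1 s of edge effects per side

# One 200-sample (~50 ms) training window, normalized per sample
window = data.value[:200]
window = window / window.std()
```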
To obtain a set of windows suitable for training, we extracted 200 data-points (total duration of 50 ms sampled at 4096 Hz) from each sample. Our experimentation revealed
\begin{table}
\begin{tabular}{l|l|c c c} \hline & Parameter & Prior & Limits & Units \\ \hline \end{tabular}
\end{table}
Table 1: Parameters and priors used to generate the BBH and sine-Gaussian samples.
that training the autoencoders with input lengths greater than 200 data-points resulted in degraded performance, due to the difficulty of capturing long-term behavior in an LSTM, especially on an evolving signal. While reducing the number of data-points below 200 could enhance computational efficiency, it reduces the autoencoder's ability to learn the evolution of a shorter-duration signal.
To optimize the data processing and facilitate learning by the network, the data are normalized to have a standard deviation of one on a sample-per-sample basis. This normalization was undertaken mainly because the neural networks struggled to learn from unnormalized samples. The strongest example of this is the glitch dataset, where strain magnitude can reach amplitudes hundreds to thousands of times above the background.
For each of the non-coherent classes (background and glitch), the dataset is split
Figure 4: Example of GWAK classes: glitch (top) and background (bottom) strains. The light blue shading highlights an example region that is passed as input to the autoencoders for training.
into three parts: 80,000 training samples (80%), 10,000 validation samples (10%), and 10,000 test samples (10%). For each signal class (BBH, low frequency SG and high frequency SG), we generate 5 sub-datasets, each with a specified SNR injection range, as shown in Fig. 6. Each of these sub-datasets has an identical splitting procedure: 80,000 training samples (80%), 10,000 validation samples (10%), and 10,000 test samples (10%). The training and validation datasets were used for the autoencoders to create loss values on which to update and score the networks respectively. The test samples were used for the recreation plots, as in Fig. 7, as well as to show the GWAK feature space, Fig. 10.
Signal events needed to build the GWAK space were created by injecting simulated GWs on top of artifact-free detector noise. This provides an analogous situation to a real
Figure 5: Example of signal-like classes: BBH (top left), WNB (top, middle right), sine-gaussian (middle, bottom left) and Supernova (bottom right) strains. The light blue shading highlights an example region that is passed as input to the autoencoders for training.
GW, in which the strain from the incoming wave is recorded in combination with the detector noise. We do not explore the case of detector artifacts coincident with transient astrophysical signals. The injection of the signal also accounts for the difference in GW time of arrival at each detector, owing to the light travel time for a given sky localization of the signal; at a sampling rate of 4096 Hz this is significant, corresponding to a maximum of 40 samples.
### Autoencoder Training
To train an autoencoder, the input sequence is passed through the encoder and the decoder, and the model is optimized to minimize the reconstruction error between the input and the output sequence. Once the model is trained, it can be used to identify data points that deviate from the normal pattern by comparing the reconstruction error of new data with a threshold.
To train our five autoencoders, each corresponding to one of the data classes, we used two different schemes. To train the glitch and background classes, we used the same dataset for all 200 epochs, the Adam [53] optimizer, and the Mean Absolute Error (MAE) loss, computed between the autoencoder input and its reconstruction. To train the binary black hole and the low- and high-frequency sine-Gaussian classes, we used a curriculum scheme. Over 200 epochs, we used 5 different datasets, each with identical injections but different uniform SNR priors: \(U[192,384]\) (epochs 1-40), \(U[96,192]\) (epochs 41-80), \(U[48,96]\) (epochs 81-120), \(U[24,48]\) (epochs 121-160), and \(U[12,24]\) (epochs 161-200), as shown in Fig. 6. Here, we used the Adam optimizer, reset after each step of the curriculum, and computed the error between the autoencoder output of a noisy input (an injection into the real background with specified SNR) and the "clean," noiseless template. This trains the autoencoders to reconstruct the signal itself, without any noise. To compute the validation loss at each epoch, we used a subset of the \(U[12,24]\) SNR dataset for each curriculum, providing a fair comparison of the validation loss across curricula. The MAE loss \(\mathbf{L}\) between original data \(\mathbf{D}\) and reconstruction \(\mathbf{R}\) is given by
\[\mathbf{L}=\frac{1}{N}\frac{1}{2}\frac{1}{200}\sum_{i=1}^{N}\sum_{j=1,2}\sum_{ k=1}^{200}|\mathbf{D}_{i}[j,k]-\mathbf{R}_{i}[j,k]|\]
\(\mathbf{D}_{i}[j,k]\) represents the i-th sample of the original data, taken from the j-th detector at the k-th timestep, and likewise \(\mathbf{R}_{i}[j,k]\) represents the i-th sample of the autoencoder output, taken from the j-th detector at the k-th timestep. The loss curves are shown in Fig. 6. The example of autoencoder reconstruction obtained with pre-trained autoencoders is shown in Fig. 7, and recreation samples of other training classes are shown at the end of the paper.
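Schematically, the curriculum amounts to the loop below (continuing the PyTorch sketch above); `make_loader` is a hypothetical helper yielding (noisy, clean) pairs of injections drawn with the given SNR prior.

```python
import torch

snr_curriculum = [(192, 384), (96, 192), (48, 96), (24, 48), (12, 24)]
model = LSTMAutoencoder(bottleneck=4)                 # e.g. the BBH model

for lo, hi in snr_curriculum:                         # 40 epochs per range
    optimizer = torch.optim.Adam(model.parameters())  # reset each step
    loader = make_loader(snr_range=(lo, hi))          # hypothetical helper
    for epoch in range(40):
        for noisy, clean in loader:                   # target: noiseless template
            optimizer.zero_grad()
            loss = torch.nn.functional.l1_loss(model(noisy), clean)
            loss.backward()
            optimizer.step()
```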
### Feature extraction
While the MAE loss was used in training, at evaluation time we opted for frequency-domain features, computed between the input and the autoencoder output. The intuition for computing features in the frequency domain is that the signal autoencoders were trained with clean, noiseless targets. Since the reconstruction is a noiseless signal, and a 50 ms window of a signal occupies only a narrow range of frequencies, the reconstruction will be a localized peak in frequency space. On the contrary, the original signal, especially one of low SNR, will contain both the
Figure 6: Autoencoder training and validation losses for signal classes, using curriculum learning to progressively reduce validation loss. The validation loss for each training SNR range is computed on the validation data from the last SNR step. In solid/dashed colours are the training/validation losses for BBH (blue), SG 64–512 Hz (salmon) and SG 512–1024 Hz (dark yellow). A light green shaded region depicts the SNR range for each step of training, spanning the range of injected SNR corresponding to each curriculum. For example, the first curriculum contains injections in the SNR range [192, 384]. At the transition between curricula, we see a small spike in the training loss, especially for the SG 512–1024 Hz model. This is due to the transition to a “more difficult” training dataset, as the lower SNR leads to a less distinguishable signal. With the transition between curricula, we see a sharp drop in validation loss over the course of a few epochs. This is due to the fact that the validation set is maintained for each autoencoder through training as belonging to the lowest - \(U\)[12, 24] SNR range. With the transition to a new curriculum with a lower SNR range, the training data for that curriculum will more closely match the validation set, explaining the rapid drop.
narrow-bandwidth frequency feature along with noise distributed approximately evenly throughout the entire frequency range. When computing the MAE between the original input and reconstructed output, the presence of high-frequency noise inflates the loss, since the autoencoder, by design, does not fit noise. To bypass this, we chose to compute a dot product in frequency space. In the frequency regime where the true signal exists, the original and reconstructed signals will be similar, and as such the dot product will yield a high value. In the other frequency regimes, where the signal is not present, the reconstructed signal is close to zero, and as such these noisy contributions are removed.
We choose two features per autoencoder as follows. Let \(H_{O},L_{O},H_{R},L_{R}\) correspond to the original Hanford and Livingston signals and the reconstructed Hanford and Livingston signals, respectively, from a single autoencoder. Each is a 200-datapoint segment, sampled at 4096 Hz. We then take the Fourier transform of each, yielding \(\widetilde{H_{O}},\widetilde{L_{O}},\widetilde{H_{R}},\widetilde{L_{R}}\). The two features, per autoencoder, are \(|\widetilde{H_{O}}\cdot\widetilde{H_{R}}|\) and \(|\widetilde{L_{O}}\cdot\widetilde{L_{R}}|\). We also used a general "frequency-space correlation" feature, \(|\widetilde{H_{O}}\cdot\widetilde{L_{O}}|\), to complement the Pearson correlation of Sec. 3.5. Here \(|\widetilde{A}\cdot\widetilde{B}|\) represents the magnitude of the dot product of two complex vectors, namely the Fourier transforms of \(A\) and \(B\). Training an autoencoder to optimize directly on the \(|\widetilde{H_{O}}\cdot\widetilde{H_{R}}|\), \(|\widetilde{L_{O}}\cdot\widetilde{L_{R}}|\) features proved too simple a task: each autoencoder would consistently learn the largest feature in Fourier space, leading to autoencoders generalizing too well, i.e., not being specific enough to their respective training class. By scoring the network's performance in the time domain, i.e., with MAE between the input and reconstructed output, the network must accurately recreate more details, such as temporal offsets, signal evolution, and the corresponding frequency components, forcing specificity to the class. Finally, by
Figure 7: Example of recreation on injected BBH signal, with the noise-less template also shown. The recreation of the BBH autoencoder (in blue) follows closely the original signal injection (in brown) as expected, since it was trained to reconstruct the noiseless input. While background (in purple), glitches (in green), SG 64–512 Hz and SG 512–1024 Hz clearly fail to reconstruct the injected BBH signal, as expected.
training with MAE, it provides us with a clear visual picture of the autoencoder output, which we can easily compare against the input; this would not necessarily be the case if the frequency-domain features were used directly as the loss.
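In code, the per-autoencoder evaluation features and the frequency-domain correlation reduce to a few lines; the dictionary layout and the use of the conjugated dot product (`np.vdot`) are our own conventions.

```python
import numpy as np

def freq_feature(a, b):
    """|a~ . b~|: magnitude of the (conjugated) dot product of FFTs."""
    return np.abs(np.vdot(np.fft.rfft(a), np.fft.rfft(b)))

def gwak_features(H_o, L_o, recons):
    """recons maps each of the 5 autoencoder names to its (H_r, L_r)
    reconstructions of the 200-sample window (a sketch)."""
    feats = []
    for name in ["bbh", "background", "glitch", "sg_low", "sg_high"]:
        H_r, L_r = recons[name]
        feats += [freq_feature(H_o, H_r), freq_feature(L_o, L_r)]
    feats.append(freq_feature(H_o, L_o))  # frequency-domain correlation
    return np.array(feats)                # Pearson is appended separately
```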
### Pearson cross-correlation
To derive a comprehensive metric for inference, we incorporated information regarding the cross-correlation between the two detector sites, in conjunction with the GWAK information. Given that any astrophysical signal will invariably manifest in both detector sites, the correlation of the measured strains is of critical significance for the signal search procedure. Although we utilized information from both detectors during the GWAK space training phase, we opted to directly incorporate cross-correlation information in our final metric.
To accomplish this, we employed the Pearson correlation coefficient [54]. Specifically, we computed the Pearson correlation coefficient between the Hanford and inverted Livingston strains by selecting the maximum correlation coefficient over all possible time shifts for a 200-datapoint window. Since the physical separation between the detectors is about 3000 km, corresponding to a time of flight of 10 ms, we must iterate over all possible temporal shifts within 10 ms to contain the correct time delay. At a sampling rate of 4096 Hz, this corresponds to a shift of 40 datapoints in either direction. This iteration over all possible time shifts is an advantage of the Pearson correlation coefficient over the frequency-domain correlation coefficient, and as such we chose to include both in our GWAK space. The time delay itself is set by the sky location of the gravitational-wave source. The Livingston strain is inverted to account for the detectors' relative orientations [55]. The Pearson correlation coefficient is a widely used statistical measure of the strength of a linear relationship between two variables, in this case the strain measurements from the two sites. The coefficient ranges from \(-1\) to \(1\), with values close to \(1\) indicating a strong positive correlation, values close to \(-1\) indicating a strong negative correlation, and values close to \(0\) indicating a lack of correlation between the two variables.
Given two detector streams, the Pearson cross-correlation at time t was computed via
\[\mathbf{P}=\max_{\Delta}\frac{\sum_{k=t-100}^{t+100}(H_{k}-\langle H\rangle)\cdot(L_{k-\Delta}-\langle L\rangle)\cdot(-1)}{\sqrt{\left(\sum_{i=t-100}^{t+100}(H_{i}-\langle H\rangle)^{2}\right)\left(\sum_{j=t-100-\Delta}^{t+100-\Delta}(L_{j}-\langle L\rangle)^{2}\right)}},\]
\[\langle H\rangle=\frac{1}{200}\sum_{i=t-100}^{t+100}H_{i},\qquad\langle L\rangle=\frac{1}{200}\sum_{j=t-100-\Delta}^{t+100-\Delta}L_{j}.\]
The multiplicative factor of \(-1\) implements the inversion of the Livingston strain due to the relative detector orientation. The range of \(\Delta\) is within the maximum time of flight in units of datapoints, so \(\Delta\in[-40,40]\). In addition to the autoencoder frequency-domain features, this yields 5 (autoencoders) \(\times\) 2 (features per autoencoder) \(+1\) (Pearson) \(+1\) (frequency-domain correlation) \(=12\) overall features.
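A direct NumPy sketch of this maximum-over-shifts computation, with the Livingston stream sign-flipped, is given below; indexing conventions are ours.

```python
import numpy as np

def shifted_pearson(H, L, center, max_shift=40, half_window=100):
    """Max Pearson correlation between Hanford and inverted Livingston
    over shifts of up to +/- 40 samples (10 ms at 4096 Hz)."""
    h = H[center - half_window : center + half_window]
    best = -1.0
    for delta in range(-max_shift, max_shift + 1):
        seg = -L[center - half_window - delta : center + half_window - delta]
        best = max(best, np.corrcoef(h, seg)[0, 1])
    return best
```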
### Artificial background with timeslides
To create background data, we used a technique called timeslides. This involves temporally shifting the data from one detector relative to the other by at least 10 ms, the light travel time between detectors, guaranteeing that no astrophysical correlation is present. As the background dataset for Sec. 3.7, we computed 8 hours' worth of timeslides. To evaluate our entire algorithm and report false alarm rates for detections, we computed one year's worth of timeslides.
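The timeslide construction itself is simple; a sketch (using a circular shift as our own simplification) is:

```python
import numpy as np

def timeslides(H, L, n_slides, shift=4096):
    """Yield astrophysically uncorrelated background pairs by shifting
    Livingston relative to Hanford by multiples of `shift` samples
    (1 s here, far beyond the 10 ms light travel time)."""
    for i in range(1, n_slides + 1):
        yield H, np.roll(L, i * shift)
```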
### Linear combination optimization
To construct a final metric which combines the information from the above 12 features, we opted for a linear combination of those values to produce one final metric value. Given that we are trying to generalize to unknown anomalous signals, opting for a simple linear model aims to reduce any bias towards known signal regions. To find optimal values for the parameters of the linear classifier, we used a simple linear Support Vector Machine (SVM), which optimizes the binary classification of background and signal classes. The classification by a linear SVM is simply given by \(\vec{W}^{T}\vec{X}+b\), where \(\vec{W}\) represents the learned weight vector, of the same dimensionality as the GWAK vector \(\vec{X}\), which contains the 12 GWAK features, and \(b\) represents a bias term. For the background dataset, we used 8 hours' worth of timeslides as described in Sec. 3.6. From this stretch of timeslides we also computed a set of normalization coefficients: for each of the 12 features, the mean and standard deviation of that feature across the 8 hours of analysis. These were used to rescale each feature independently to zero mean and unit standard deviation. This helped with training the linear classifier since, while the Pearson correlation coefficient is O(1), the frequency-domain coefficients can be O(1000). For the signal dataset, we generated a new dataset comprising 6 classes - BBH, SG (64-512 Hz), SG (512-1024 Hz), low-frequency white noise burst (40-400 Hz), high-frequency white noise burst (400-1000 Hz), and supernova [56] (85 \(M_{\odot}\) progenitor mass, SFHo equation of state) - each with an SNR prior of \(U(10,100)\).
We trained this linear fit for 5000 epochs, with a learning rate of 0.01, using the Adam optimizer. Fig. 8 shows the learned coefficients for each of the 12 GWAK features. This serves as a sanity check that the impact of each feature on the final metric score is as we expect. Signals are classified via a more negative value, so the features corresponding to the signal autoencoders should have negative coefficients, as should the correlation features, since a higher correlation score is more indicative of an astrophysical event. On the contrary, autoencoders corresponding to non-astrophysical data (background or glitch) should have positive coefficients, since a stronger relationship to those classes is less indicative of a signal. These intuitions are all confirmed in Fig. 8, so we pass this sanity check.
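A sketch of fitting the linear metric is shown below; we use a hinge-style objective as a stand-in for the SVM fit, with signals labeled \(-1\) and background \(+1\) so that more negative outputs are more signal-like.

```python
import torch

def fit_linear_metric(X, t, epochs=5000, lr=0.01):
    """X: (n, 12) standardized GWAK features; t: +1 background, -1 signal."""
    X = torch.as_tensor(X, dtype=torch.float32)
    t = torch.as_tensor(t, dtype=torch.float32)
    w = torch.zeros(X.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        metric = X @ w + b                                # final metric values
        loss = torch.clamp(1 - t * metric, min=0).mean()  # hinge loss
        loss.backward()
        opt.step()
    return w.detach(), b.detach()
```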
### Smoothing for the final metric
As the GWAK algorithm assigns a single metric value to each \(50\,\mathrm{ms}\) window, it does not naively carry sensitivity to signals longer than the \(50\,\mathrm{ms}\) window. A longer signal will have multiple evaluation points, but to compute the FAR only the lowest metric value is taken. To increase sensitivity to signals longer than one window, we convolved the timeseries of final-metric evaluations with uniform kernels of varying size, the idea being that sensitivity to a given anomalous signal is maximized when the kernel length is of order the signal length. We present the detection efficiency using this method in Fig. 15.
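The smoothing step amounts to a moving average over the metric timeseries; a short sketch:

```python
import numpy as np

def smoothed_min_metric(metric, kernel_len):
    """Convolve the final-metric timeseries with a uniform kernel and
    return the most signal-like (lowest) smoothed value."""
    kernel = np.ones(kernel_len) / kernel_len
    return np.convolve(metric, kernel, mode="valid").min()
```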
## 4 Results
This section describes how our proposed methodology can be used to discover anomalous events. Additionally, we evaluate the effectiveness of our semi-supervised approach in detecting several potential sources of GWs, without using information about these signals during the training phase.
### Background and glitch mitigation
As the goal of this method is to identify anomalous signals in the background, mitigating the identification of non-astrophysical data as signal-like improves sensitivity to real signals by reducing the corresponding false alarm rate. In particular, detector artifacts or "glitches" can often pose a problem, as they can have signal-like morphology [57, 58] as well as high SNR in a single detector. This serves as the motivation for training a glitch autoencoder, which should be able to recognize single-detector artifacts reliably. Upon "recognition" of a glitch via the glitch-autoencoder frequency-domain features, an automatic down-weight is given to the event in question. In addition, as
Figure 8: Learned coefficients of a linear combination of the 12 GWAK features. As expected, the features corresponding to the signal autoencoders have more negative values, as well as the correlation features, as a higher correlation score is more representative of an astrophysical event. While background and glitch features have more positive values since a stronger relationship to those classes will be less indicative of the signal.
glitches are uncorrelated between detectors, occurring locally, the use of both correlation features also helps to mitigate false alarms caused by glitches. Since, during a glitch, there is no correlation between one detector stream and the other within the maximum light travel time, those features will have small values, and will similarly down-weight the glitch to a less signal-like score.
### Anomaly metric
We employ the linear combination method outlined in Sec. 3.7, which includes the two frequency-domain features per autoencoder, the frequency-domain correlation, and the Pearson cross-correlation presented in Sec. 3.5. The example of a full GWAK pipeline is shown in Fig. 9.
The resulting coefficients of the linear classifier, physically describing a hyperplane, are then used to project points from the 12-dimensional feature space down to a one-dimensional space via the dot product, or geometrically the perpendicular distance from that point to the classifying hyperplane. This one-dimensional real number is our final metric value. Since our classifier was trained to predict 0 for signals and 1 for backgrounds, a more negative final metric value corresponds to a more "signal-like" or louder input, whereas a less negative/positive metric value indicates background or no signal of interest. Once we compute the final metric value for a hypothetical signal, we would like to know the corresponding false alarm rate. This corresponds to the frequency that a non-astrophysical input (glitch or otherwise) from the gravitational-wave detectors would lead to a trigger of the same significance as the hypothetical signal. The 12
Figure 9: Example of metric calculation on a hypothetical event. The event is reconstructed with each of the 5 pre-trained autoencoders. Pearson and frequency domain correlation are computed on the given input and then each of the values is multiplied with a corresponding coefficient which arises from the pre-trained linear metric. The sum of all the features multiplied by their coefficients is referred to as the final metric and is used to make a decision on if the event is signal-like.
features multiplied by the corresponding coefficient and grouped by a corresponding autoencoder are shown in Fig. 10. We show each of the five data classes in this space to highlight how strongly the signals are separated and grouped in this new, learned GWAK space.
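Given the timeslide background of Sec. 3.6, mapping a final metric value to a FAR is a simple empirical count; the helper below is our own sketch.

```python
import numpy as np

def far_per_year(value, background_metrics, livetime_years):
    """Empirical FAR: how often the timeslide background is at least as
    signal-like (as negative) as the candidate's final metric value."""
    return np.sum(background_metrics <= value) / livetime_years
```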
### Evaluation on Core-Collapse Supernova
We use existing core-collapse supernovae simulations to see how our approach extends to anomalous signals which GWAK has not seen during training. We use [56] (85 \(M_{\odot}\)
Figure 10: Trained GWAK space on 5 signal classes: BBH, background, glitch, and low and high frequency Sine-gaussian; demonstrating that regions of the GWAK space correspond to different event classes
progenitor mass, SFHo equation of state). The top left of Fig. 11 shows the evolution of the GWAK axes and the Pearson correlation with time, and the top right shows the total metric value and FAR, as an example of the algorithm's "reaction" to unseen signals. Both the BBH and sine-Gaussian losses drop at the time of the signal, which indicates that the strain at that moment is more signal-like. The Pearson correlation increases, indicating a strong correlation between the two detector sites. The FAR at the moment of the event drops to a level consistent with one or fewer events per month, which means that even with strong trigger restrictions, detection of this type of event would be possible with our newly proposed algorithm.
The bottom plot in Fig. 11 (left) shows a scan of the minimum metric value over different signal-to-noise ratio ranges of the injected core-collapse supernova models. As expected, with a higher signal-to-noise ratio the total metric is lower, resulting in a lower FAR. If the trigger bandwidth allowed up to 1 false event per hour, GWAK would be able to detect core-collapse supernova events with SNR \(\sim\)22.
### Evaluation on White Noise Bursts
Furthermore, we assessed the performance of our method on white noise bursts, signals whose \(h_{+}\) and \(h_{\times}\) polarizations are independent time series of Gaussian noise, whitened over a specific frequency range and multiplied by a sigmoid envelope. The bandwidth of each injected signal is selected uniformly at random from ranges spanning 40-400 Hz and 400-1000 Hz. The duration of the signal is chosen to be 0.1 s. Theoretically, these would be the most difficult signals to detect with our algorithm, as their lack of distinctive morphology would render the signal-autoencoder features useless. However, as shown in Fig. 11, the SG autoencoders were able to generalize to the WNBs.
To evaluate the performance of our algorithm, we generated these signals with SNRs uniformly distributed between 10 and 100. The average final metric value and corresponding standard deviation for various SNR ranges are shown in Fig. 11 (right), together with lines corresponding to different false alarm rates.
The demonstration of the GWAK algorithm from strain to final metric is shown in Fig. 11. Starting with the whitened strain, we split up our data into 200 datapoint windows with a step of 5 datapoints between windows. For each window, the 10 autoencoder features are computed, along with the frequency domain correlation, and Pearson correlation, as described in Sec. 3.5. Using the learned SVM weights as shown in Fig. 8, we reduce the 12-dimensional GWAK space to 7 dimensions, and show those values at each timestep in the second panel. For the correlation features, this is simply done by multiplying the value by the corresponding weight. For the autoencoder features, the same is done, but the \(|\widetilde{H_{O}}\cdot\widetilde{H_{R}}|\) and \(|\widetilde{L_{O}}\cdot\widetilde{L_{R}}|\) values are combined into a single value via addition. Finally, all of those weighted features are summed yielding the final metric value, which is shown in the third panel, along with various false alarm rate thresholds.
The method performance on different signals at various SNR values is shown in Fig. 12. For each signal type, we generate 10,000 waveforms using the SNR prior \(U[5,50]\). We then run the full GWAK algorithm and obtain the minimum metric value achieved in each injection, as shown in Fig. 11. We then group the injections into SNR bins of width 5 and show the average metric value as well as the \(1\sigma\) range for each bin, via the solid line and filled region respectively. The final metric values corresponding to certain false alarm thresholds are also shown.

Figure 11: Strain (top), GWAK metric response (middle) and final metric response (bottom) for WNB (right) and Supernova (left). The evaluation of the GWAK axes and Pearson correlation with time is shown on the top left, and the total metric value and FAR on the top right, as an example of the algorithm's "reaction" to unseen signals. Both BBH and sine-Gaussian losses drop at the time of the signal, which indicates that the strain at that moment is more signal-like. The Pearson correlation increases, indicating a strong correlation between the two detector sides. At the moment of the event, the FAR drops to a level consistent with or below one event per year, which means that even with strong trigger restrictions, detection of that type of event would be possible with our newly proposed algorithm.

We first see that the three training classes - BBH, SG 64-512 Hz, and SG 512-1024 Hz - are the most efficiently detected, which is expected as they have dedicated autoencoder models. Moving on to the anomalous signals - WNB 40-400 Hz, WNB 400-1000 Hz, and Supernova - we see that they are not detected as readily as the training signals, but still achieve satisfactory false alarm rates of around 1-2 per month at 20-25 SNR.
We show the detection efficiency using a receiver operating characteristic (ROC) curve in Fig. 13 in order to compare it with other ML techniques. Here, we pick a fixed false alarm rate threshold of 1/year and compute what fraction of injections, for each signal at each SNR, have detection statistics below the threshold. Similar to Fig. 12, we see that the signals corresponding to training classes can be detected at lower SNRs, while the anomalous signals require slightly higher SNRs to be detected. This graph can be compared to the one presented by MLy [23]: while we achieve similar performance for the BBH and SGs, the efficiencies for WNBs and Supernovae are lower in our case. This is expected, since the MLy algorithm was trained in a supervised manner, using WNBs during the training, whereas we only used those signals for finding the linear coefficients of the final metric, not as GWAK axes.
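A sketch of this efficiency computation, under the assumption that the per-injection minimum metrics and SNRs are already available as arrays (names are our own):

```python
import numpy as np

def detection_efficiency(min_metrics, snrs, threshold, bin_width=5):
    """Fraction of injections whose minimum metric falls below a fixed
    FAR threshold (e.g. the 1/year statistic), evaluated per SNR bin."""
    edges = np.arange(snrs.min(), snrs.max() + bin_width, bin_width)
    eff = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (snrs >= lo) & (snrs < hi)
        eff.append(np.mean(min_metrics[in_bin] < threshold) if in_bin.any() else np.nan)
    return edges, np.array(eff)
```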
Figure 12: The final metric as a function of SNR for GWAK axes training signals, BBH (blue), SG 64–512 Hz (yellow), SG 512–1024 Hz (salmon), and for potential anomalies, WNB 40–400 Hz (pink), WNB 400–1000 Hz (purple), and Supernova (orange). The black lines of varied width correspond to different FARs, from a FAR of 1 per hour to 1 per year. For each of the lines, the events below that line would be detected. As expected, the signals used to train the GWAK axes are on average detected better, e.g., more events with a smaller SNR are detected for a given FAR threshold.
### Comparison to a supervised search
To quantify the loss of efficiency of the unsupervised GWAK method relative to a supervised search, we perform the following study. We reuse the pre-trained GWAK axes for a BBH search, but replace the simple linear combination in the final metric with a small, dense network. We train this network in a supervised manner, using BBH as the signal class and background timeseries as the background class. In this way, we can quantify how much signal efficiency is lost when using a general linear combination instead of overfitting on a specific signal. The results are shown in Fig. 14. We observe that BBH detection efficiency surpasses that achieved with the linear final metric. However, as anticipated, the detection efficiency for all other signals significantly decreases. We omitted the use of a smoothing window, as this configuration was determined to be the most efficient for the BBH-supervised search; accordingly, we must compare it to the BBH ROC without the application of a smoothing window, as depicted in Fig. 15.
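A minimal PyTorch sketch of such a supervised replacement for the linear final metric; the layer sizes, loss, and optimizer settings are illustrative assumptions, not the exact configuration used in the study:

```python
import torch
import torch.nn as nn

class SupervisedMetric(nn.Module):
    """Small dense network mapping the 12 GWAK features to a detection
    statistic (layer sizes are assumed for illustration)."""
    def __init__(self, n_features=12, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = SupervisedMetric()
loss_fn = nn.BCEWithLogitsLoss()  # BBH windows labelled 1, background 0
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```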
We demonstrate that, in general, enhancing the detection efficiency for a specific signal through supervised training is feasible. However, this improvement often incurs a noticeable decline in performance for other signals. Given our objective to develop an algorithm capable of detecting unknown signals, we lack the necessary information to train it in a supervised manner. Nevertheless, in follow-up works, we may consider exploring the adoption of a more sophisticated final metric function, though we must exercise caution to prevent overfitting to the signals employed in optimizing this metric.
Figure 13: Detection efficiency as a function of SNR for GWAK axes training signals, and for potential anomalies. We see that the signals corresponding to training classes can be more readily detected at lower SNRs, and the anomalous signals require slightly higher SNRs to be detected.
### Comparison of Different Smoothing Windows
In Fig. 15, we illustrate that detection efficiency, both for known and anomalous signals, is optimized when employing smoothing kernels with sizes approximately equal to the signal length. These findings affirm that smoothing within the final metric space is an effective strategy for adapting the GWAK algorithm to diverse signal lengths.
## 5 Conclusions
In this study, we utilized the Gravitational Wave Anomalous Knowledge (GWAK) method to identify anomalies in datasets acquired by ground-based GW observatories. The GWAK method relies on the notion of introducing alternative signal priors that capture some of the salient features of new physics signatures, enabling the restoration of sensitivity even when the alternative signal is incorrect. We separately trained five unsupervised autoencoders on a dataset consisting of normal background noise, glitches, and a collection of simulated signals that incorporate the physical characteristics of a potential new physics signature. We then established a 12-dimensional GWAK space, comprising two reconstruction losses (one per detector side) for each of the five autoencoders and two features representing the correlation between the detector sides, and searched regions of the GWAK space for anomalous signals. We then combined all 12 features into one final metric by multiplying each with the corresponding coefficient from Fig. 8 and summing the result.

Figure 14: Detection efficiency for BBH and other signals and anomalies obtained using the final metric trained in a supervised manner on BBH signals pre-processed by the GWAK algorithm. This demonstrates that the GWAK method can approach a supervised search when given this specific task.
Our findings indicate that the GWAK method efficiently detected anomalies in the GW datasets, in particular unmodelled sources such as core-collapse supernovae and white noise bursts. Additionally, the GWAK method could differentiate signal-like anomalies from non-astrophysical anomalous events, such as those resulting from detector glitches.
Our proposed method demonstrates promising results, detecting these sources with high accuracy and without prior knowledge of their characteristics. These findings underscore the potential value of our approach in detecting new and unexpected sources of GW signals, while simultaneously reducing the dependence on labeled training data.
Figure 15: Different smoothing windows for training signals (left column) and anomalous signals (right column).
In addition to serving as an unsupervised search, the machine-learning based approach allows for the implementation of the GWAK algorithm as a low-latency search tool. By detecting anomalies rapidly, alerts can be sent to electromagnetic telescopes for follow-up.
In future work, there are many potential avenues to explore. One is using the recreation as a denoising tool instead of just a detection statistic, allowing for rapid parameter estimation. This is especially important for electromagnetic follow-up, as the telescopes need information on the source location for detailed observation. On the detection side, one idea is to use normalizing flows to learn the high-dimensional manifolds on which signals lie and to use the probability value as a more intuitive alternative score to the engineered autoencoder frequency-domain features. There is also potential to improve the search by changing the way the 12 GWAK features are reduced to a single value. While the linear fit served as a simple and explainable method with little potential to overfit the known signals (and therefore decrease sensitivity to anomalies), it suffers the drawback that potentially useful patterns between features are not exploited. A simple example involves the features \(|\widetilde{H_{O}}\cdot\widetilde{H_{R}}|\) and \(|\widetilde{L_{O}}\cdot\widetilde{L_{R}}|\) for a single autoencoder. For a glitch event, one would expect only one of these values to increase, corresponding to the detector in which the glitch occurred, so one would expect an asymmetry between these features for a non-astrophysical event. The opposite is true for a BBH event: since it is present in both detector channels, there should be symmetry in the BBH autoencoder features. While this is just one intuitive example, more complex relations could certainly exist. Yet another area to explore is modifying the network architecture to allow for a longer signal length, generalizing the algorithm to various signal durations.
To conclude, the GWAK method displays potential as a powerful tool for detecting anomalies in GW datasets and has the potential to enhance the performance of GW anomaly detection systems.
The authors would like to thank Tino Tibaldo for the 3D figure/representation ideas. The authors would like to thank William Patrick McCormack and Jeffery Krupa for the QUAK discussions. The authors acknowledge support from the National Science Foundation with grant numbers OAC-2117997 and CSSI-1931469. This research was undertaken with the support of the LIGO computational clusters. MWC and SM also acknowledge support from the National Science Foundation with grant number PHY-2010970. EM acknowledges support from the National Science Foundation with grant number GRFP-2141064. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
## 6 Appendix
### Backgrounds training curve
For completeness, in Fig. 16 we show the training and validation losses for the background and glitch autoencoders.
Figure 16: Autoencoder training and validation losses for background classes. The training/validation losses are in bold/dashed for background (in purple) and for glitches (in green).
### Recreations
For completeness, in Figs. 17, 18, 19 and 20 we show recreation plots for the low-frequency SG, high-frequency SG, glitches, and background, respectively. For each dataset type, the corresponding autoencoder is the best at reconstructing the input, as expected.
Figure 17: Example of recreation plots obtained with all five pre-trained autoencoders on the low-frequency sine-Gaussian dataset.

Figure 18: Example of recreation plots obtained with all five pre-trained autoencoders on the high-frequency sine-Gaussian dataset. |
2309.09378 | Dynamics of Fisheries in the Azores Islands: A Network Analysis Approach | In the context of the global seafood industry, the Azores archipelago
(Portugal) plays a pivotal role due to its vast maritime domain. This study
employs complex network analysis techniques to investigate the dynamics of
Azores fisheries, using time series data converted into networks. We uncover
associations between Tunas and specific islands, consistent links among fish
classifications, and identify other pivotal nodes within the fishing network.
Remarkably, nodes with high degrees and a local clustering coefficient of one
provide crucial insights into the fishing ecosystem. This study highlights the
value of network analysis for understanding fisheries complexities and offers
insights into sustainable management and the preservation of marine ecosystems.
It also emphasizes the urgency for ongoing research and data collection to
enrich our understanding of this multifaceted domain. | Brenda Nogueira, Ana Torres, Nuno Moniz, Gui M. Menezes | 2023-09-17T20:56:02Z | http://arxiv.org/abs/2309.09378v2 | # Dynamics of Fisheries in the Azores Islands: A Network Analysis Approach
###### Abstract
In the context of the global seafood industry, the Azores archipelago (Portugal) plays a pivotal role due to its vast maritime domain. This study employs complex network analysis techniques to investigate the dynamics of Azores fisheries, using time series data converted into networks. We uncover associations between Tunas and specific islands, consistent links among fish classifications, and identify other pivotal nodes within the fishing network. Remarkably, nodes with high degrees and a local clustering coefficient of one provide crucial insights into the fishing ecosystem. This study highlights the value of network analysis for understanding fisheries complexities and offers insights into sustainable management and the preservation of marine ecosystems. It also emphasizes the urgency for ongoing research and data collection to enrich our understanding of this multifaceted domain.
Keywords:Fisheries, complex networks, sustainability
## 1 Introduction
The demand for seafood has experienced a significant surge, with approximately 179 million tons of fish produced worldwide in 2018. Out of this, 156 million tons were used for human consumption, representing an annual supply of 20.5 kg per capita [6].
The Autonomous Region of the Azores (Portugal), consisting of nine islands spread over 600 km, contributes significantly to the size of Portugal's Exclusive Economic Zone (EEZ) [4]. Despite the expansive maritime territory, the waters around the Azores pose fishing challenges due to their depth, currents, and seabed nature. For this reason, local fishing occurs near islands, banks, and seamounts less than 1,000 meters deep [9], resulting in more artisanal, multi-segmented fleets that use varied gear and target a diversity of species. Furthermore, annual landings average 11,000 tons (worth €33 million) [4], indicating the importance of fishing for local communities and the Azores' economy.
For these reasons, analyzing fisheries data can help in understanding the complex marine interactions of the area, by exploring not only temporal dynamics but also visible patterns among different contributors, with the goal of aiding decision-making regarding fisheries practices.
However, it's crucial to note that this data presents an inherent complexity, extending into univariate and high-dimensional time series analysis, which continues to grapple with limitations across diverse contexts [10]. To address this, a promising solution emerges in the form of mapping time series onto networks, as these hold the potential to encapsulate intricate dependencies among constituent processes, encompassing both immediate and delayed interplays, as well as serial dependencies [10]. Therefore, this paper applies the methodology of transforming the time series of observations into temporal networks, and then uses tools of network analysis to uncover interesting insights.
This paper is structured as follows: Section 2 provides a background of fundamental concepts. Section 3 introduces the data and outlines the data preparation process for generating time series. Section 4 describes the methodology chosen to construct the networks and presents the results and discussion of the analysis, and, finally, in Section 5, the study's findings are summarized and discussed.
## 2 Background Concepts
A network or graph, represented as \(G(V,E)\), is an ordered pair where \(V\) represents the set of nodes (or vertices) and \(E\) the set of edges (or links) between pairs of nodes belonging to \(V\). A graph is most commonly represented by an adjacency matrix, denoted \(A\), in which an entry \(A_{i,j}\) equals \(1\) if there is an edge connecting the two nodes \(i\) and \(j\), and \(0\) otherwise [10].
There are various approaches to construct a network from a time series or a set of time series. We will be focusing on using a set of time series for this transformation, which works by mapping states of the time series into nodes of the network and creating links between those nodes based on a measure of distance or similarity [10]. This process begins with the computation of the distance between all pairs of time series, resulting in a distance matrix (\(D\)).
In this case, we will be using Dynamic Time Warping (DTW) as the distance function, which aligns time series using a warping path, as distinct from lock-step measures like the Euclidean distance [1]. It optimizes the warping path to minimize the global warping cost, calculated through dynamic programming via a cumulative distance formula; for two time series \(X\) and \(Y\) (both of length \(T\)), it is defined by:

\[d_{dtw}(X,Y)=dtw(i=T,j=T)=\begin{cases}\infty&\text{if }i=0\oplus j=0\\ 0&\text{if }i=j=0\\ \|X_{i}-Y_{j}\|+\min\begin{cases}dtw(i-1,j)\\ dtw(i,j-1)\\ dtw(i-1,j-1)\end{cases}&\text{otherwise}\end{cases}\]
DTW stands out as a powerful technique for analyzing time series datasets due to its adaptability to variations and dynamic patterns, making it superior to rigid similarity measures. Its primary advantage lies in its invariance against shifting and scaling along the time axis. This unique feature has made DTW highly favored in pattern matching tasks. Notably, DTW not only provides a distance measure between two sequences but also offers insights into how these sequences are aligned with each other. In certain cases, understanding the alignment can be as informative, if not more so, than the distance itself [3].
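To make the recurrence above concrete, here is a minimal Python sketch of the cumulative-cost computation; it is an illustrative implementation rather than the optimized routine provided by the libraries used later in the paper:

```python
import numpy as np

def dtw_distance(x, y):
    """Evaluate the DTW recurrence above via dynamic programming."""
    T, S = len(x), len(y)
    D = np.full((T + 1, S + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, S + 1):
            cost = abs(x[i - 1] - y[j - 1])  # ||X_i - Y_j||
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T, S]
```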
Following the computation of these distances, the next step involves converting the distance matrix \(D\) into an adjacency matrix \(A\), that will represent the network. For this conversion, we can choose from a variety of methods:
* **k-Nearest Neighbors Network (k-NN):** each node is connected to the \(k\) other nodes with the shortest distances. This requires finding the \(k\) closest elements for each row \(i\) in \(D\)[7] (a minimal sketch of this construction follows the list below).
* \(\epsilon\)**-Nearest Neighbors Network (\(\epsilon\)-NN):** each node is connected to all the nodes whose distance is shorter than \(\epsilon\), a user-defined threshold [7].
* **Weighted Network:** is constructed by connecting all pairs of nodes and using their distances as weights. Typically, shorter distances correspond to stronger links, and the weighted adjacency matrix can be defined as \(A=1-D\) or normalized \(D_{norm}\)[7].
* **Networks with Significant Links:** connects nodes only if their distance is statistically significant [7]. For example, the significance of the Pearson correlation coefficient can be tested using the z-transformation.
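As referenced in the k-NN item above, the following is a minimal Python sketch of the k-NN adjacency construction from a distance matrix \(D\); the study itself uses the R 'ts2net' package for this step, so this is an illustrative equivalent, and the symmetrisation choice is our own assumption:

```python
import numpy as np

def knn_adjacency(D, k=2):
    """Build a k-NN adjacency matrix from a distance matrix D.
    The result is symmetrised so that the network is undirected."""
    n = D.shape[0]
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        neighbours = [j for j in np.argsort(D[i]) if j != i][:k]
        A[i, neighbours] = 1
    return np.maximum(A, A.T)
```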
## 3 Data Exploration
In this section, we introduce the fundamental datasets that underpin our study. Our primary data sources include the LOTACOR/OKEANOS-UAc daily landings dataset and the PNRD/OKEANOS-UAc inquiries database [9]. These inquiries are systematically collected by samplers during fishery landings in the Azores' main fishing harbors, offering rich insights into fishing activities.
The inquiries encompass a wealth of information, including the precise locations (island and harbor) of the landings, the common names and major species groups of the captured fish, the weight of the catch, the types of fishing gear employed and other essential vessel-related details. Each individual observation within these datasets is uniquely identified and timestamped. Spanning the years from 2010 to 2017, our data comprises a total of 30,281 observations.
### Data Description
As there was a wide variety of fish species, we decided to study the major fish groups (classifications) instead. To further understand each classification, it is important to consider that demersal fish live and feed on or near the bottom of water bodies [13], while pelagic fish live in the pelagic zone of the ocean, which
comprises the open, free waters away from the shore [12]. Additionally, there are two primary categories of demersal fish: those that are exclusively benthic and can reside on the seafloor, and those that are benthopelagic and can hover in the water column just above the seafloor [13]. Similarly, marine pelagic fish can be classified into two groups: pelagic coastal fish and oceanic pelagic fish [12].
Table 1 provides an overview of the 13 classification types used to categorize the fish in our data, as well as the total number of landings, the total weight of fish caught, and the average weight for each classification, in kilograms (kg).
The calculation of average weights involved the utilization of the most frequently encountered fish species within each group, taking into account the proportions of these individual species' weights. Our focus was on those species whose combined weight contributed to 90% of the total classification weight. We obtained the necessary weight data through the _rfishbase_ resource [2].
It's also worth noting that the weight caught per fishery is influenced, not only by the average weight of the species caught, but also by the amount of fish caught in each landing, which is influenced by the behavior of different species moving together. Certain species may have a tendency to aggregate or swim together, leading to a higher catch per landing. For instance, Tunas are known to exhibit aggregative behavior, often forming schools or loose aggregations [8].
We also analysed the methods of fishing. Table 2 provides a list of the 14 different types of fishing gear, also known as metiers, along with their descriptions and the corresponding number of landings and total weights associated to each.
Another important factor is the location. Our data comprises 9 different harbors, belonging to 5 different islands. For visualization purposes, a map with the harbors marked is provided in Fig. 1, showing the total weight of fish caught for each island (labels without background) and the number of landings registered at each harbor (labels with white background).
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline
**Classification** & **Landings Amount** & **Total Weight (kg)** & **Average Weight (kg)** \\ \hline Tunas (T) & 998 & 6131140 & 115.30 \\ \hline Continental Shelf Slope Demersals (CSSD) & 6298 & 672199 & 57.92 \\ \hline Small Pelagics (SP) & 2886 & 505364 & 2.09 \\ \hline Deep-Sea Species (DS) & 2028 & 474636 & 3.56 \\ \hline Continental Shelf Slope Benthopelagic (CSSB) & 5247 & 432526 & 4.00 \\ \hline Demersals (D) & 3085 & 215418 & 79.10 \\ \hline Mollusks (M) & 3181 & 104611 & 1.50 \\ \hline Coastal Demersals (CD) & 5270 & 72392 & 7.55 \\ \hline Large Migratory Pelagics (LMP) & 141 & 25019 & 650.0 \\ \hline Coastal Pelagics (CP) & 877 & 23561 & 13.55 \\ \hline Small Coastal Demersals (SCD) & 158 & 1605 & 0.30 \\ \hline Other Spp (OS) & 110 & 651 & NA \\ \hline Crustaceans (C) & 2 & 13 & 1.50 \\ \hline \end{tabular}
\end{table}
Table 1: Major Fish Classifications
\begin{table}
\begin{tabular}{|l|l|r|r|} \hline
**Metier** & **Description** & **Landings Amount** & **Total Weight (kg)** \\ \hline LHP-TUN & Pole-and-line for tuna species & 1002 & 6130088 \\ \hline LLS-PD & Bottom longline & 10396 & 1203759 \\ \hline PS-PPP & Purse seine nets for small pelagic fish & 2362 & 480957 \\ \hline LLD-PP & Deep-water drift bottom longline & 115 & 293457 \\ \hline LHP-PB & Bottom fish handlines & 11247 & 277624 \\ \hline LLS-DEEP & Deep-water non-drifting bottom longline & 807 & 134733 \\ \hline LHP-CEF & Handline jigging for catching squids & 3148 & 103761 \\ \hline GNS-PB & Coastal gillnets & 943 & 17616 \\ \hline LLD-GPP & Surface drifting longline for large migratory pelagics & 42 & 13595 \\ \hline PS-PB & Lifting nets for small coastal fishes & 84 & 1375 \\ \hline LHP-PBC & Pole-and-line and coastal trolling for small pelagics & 36 & 1071 \\ \hline FPO-PB & Fish traps & 80 & 1014 \\ \hline FPO-CRU & Crustacean traps & 2 & 53 \\ \hline NEI & Not Identified & 17 & 34 \\ \hline \end{tabular}
\end{table}
Table 2: Metier Description
Figure 1: Azores Harbors
### Data Preparation
We observed a data gap from January 2014 to March 2014 and opted to fill it using data from the corresponding period in 2013, as it exhibited a similar number of landings. However, given the substantial differences in weight values between 2013 and 2014, we introduced a scaling factor. This was determined by comparing total weights from April to December in both years. We then applied this factor to the 2013 data to estimate the missing weight values for 2014.
We intentionally excluded data from the harbors Angra do Heroismo and Povoação, the classification Crustaceans, and the metier FPO-CRU due to limited data availability and low landing volumes. Additionally, the classification Other Spp and the metier NEI were also omitted from our analysis as they had limited relevance. These exclusions were made to streamline our network analysis and align it more closely with our research objectives.
After completing the imputation and data cleaning steps, we proceeded to generate the time series. To construct these, we aggregated the observations by calculating the mean value for each classification, metier, and island, per month, and then normalized each series, as presented in Figure 2. Notably, we excluded harbors from this aggregation since they are closely tied to their respective islands and could introduce more complexity than desired.
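A minimal pandas sketch of this aggregation step; the column names and the toy table are our own assumptions, and min-max scaling stands in for the (unspecified) normalisation scheme:

```python
import pandas as pd

# A toy landings table standing in for the real dataset (columns assumed).
df = pd.DataFrame({
    "date": ["2010-01-03", "2010-01-20", "2010-02-11", "2010-02-25"],
    "classification": ["Tunas", "Tunas", "Mollusks", "Tunas"],
    "weight": [1200.0, 900.0, 35.0, 1100.0],
})

df["month"] = pd.to_datetime(df["date"]).dt.to_period("M")
series = (
    df.groupby(["month", "classification"])["weight"]
      .mean()                                  # mean weight per group per month
      .unstack("classification")
)
# Min-max normalisation of each series (the exact scheme is our assumption).
series = (series - series.min()) / (series.max() - series.min())
```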
Figure 2: Time Series of Islands, Metiers and Classifications
## 4 Experimental Analysis
Understanding the dynamic changes in fisheries practices over the years is a challenging and complex task. We delved into three key questions related to alterations in the structure of networks:
1. In transforming distance values to a network, which method yields a more suitable network for investigating dynamic changes?
2. Which connections undergo changes over time, and which exhibit consistent patterns, in the realms of classification-metier, classification-island, and classification-classification connections?
3. Which nodes demonstrate higher interaction levels with others in each year?
Initially, our objective was to identify the most suitable methods for transforming the distance matrix into an adjacency matrix for a dynamic study. Given our emphasis on observing variations in community structure over the years, we examined modularity values for each year. Additionally, to avoid an excessively sparse network with numerous isolated nodes, we investigated the network density. In conclusion, we observed the trade-off between these characteristics.
Next, we examined changes in networks, focusing on connections between classifications and fishery gears over the years. This allowed us to identify classifications with limited flexibility regarding fishery gears and those that were more adaptable. Additionally, we explored connections between classifications and islands to discern any migratory patterns or changes in species habitats over time. Finally, we observed connections between classifications to identify consistent patterns of fishery across species.
Finally, we investigated the nodes with the highest degree in each year, reflecting those with stronger connections to other nodes. We observed whether this pattern remained consistent over the years or underwent changes.
In conclusion, our study aimed to examine intrinsic changes in the dynamics of fisheries through a visible and flexible method.
### Methods
We decided to create multiple networks, one for each year from 2010 to 2017. This decision aligns with our goal of uncovering relationships among different factors, showing how they influence or remain unaffected by others, over time. In this case, each network will have as node either an island, a metier or a fish group and the edges between nodes signify a strong similarity.
To transform our set of time series into networks, we employed the 'ts2net' [7] library in R, with Dynamic Time Warping (DTW) as the distance function, as justified in Section 2.
### Results
This section provides an overview of the answers to the research questions proposed in this study.
**Network Construction:** In our exploration of different network construction approaches, we tested the methods described in Section 2: the \(k\)-NN network, \(\epsilon\)-NN network, weighted network, and significant-links network. We considered various parameter values, specifically different values of \(k\) (2, 3, 5, 7, and 10) and \(\epsilon\) (0.3, 0.5, 0.7, and 0.9).
Although the network with significant links initially exhibited the highest mean modularity, we ultimately selected the k-NN network with \(k=2\) as our preferred choice. This decision was made because the network with significant links suffered from a lack of connections, resulting in a highly fragmented and disconnected network with a mean density of 0.02656478, whereas \(k=2\) yields a mean density of 0.11386721 - a sparse but not fragmented network, which is more suitable for the goals of our study.
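A small Python sketch of this density/modularity trade-off check on a toy distance matrix; the paper uses igraph's walktrap algorithm in R, so the Louvain routine below stands in purely for illustration:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((20, 20))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)  # toy distance matrix standing in for the DTW distances

# 2-NN adjacency (same construction as the sketch in Section 2)
A = np.zeros_like(D, dtype=int)
for i in range(D.shape[0]):
    for j in np.argsort(D[i])[1:3]:   # skip self, take the 2 nearest
        A[i, j] = 1
A = np.maximum(A, A.T)

G = nx.from_numpy_array(A)
density = nx.density(G)
communities = nx.community.louvain_communities(G, seed=1)
modularity = nx.community.modularity(G, communities)
print(f"density={density:.3f}, modularity={modularity:.3f}")
```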
**Network Analysis:** For visualization purposes, we employed the 'igraph' package. In Fig. 3, the 2-NN networks for each year are depicted. Red edges signify new connections formed from one year to the next, while black edges represent retained connections. For better readability, triangle-shaped nodes correspond to classifications, circle-shaped nodes represent islands, and square-shaped nodes denote metiers. Nodes are color-coded to signify their communities in each year, identified using the "cluster_walktrap" function from the 'igraph' package [5]. This function identifies communities based on random walks.
Clearly, the rate at which new edges emerge between years is significantly high, demonstrating substantial change in the network over the years.
Figure 3: Networks over the years
**Class-Metier.** Notably, certain associations remain steady over the years, such as the evident links between Tunas and LHP-TUN, Mollusks and LHP-CEF, Small Pelagics and PS-PPP, as well as Deep-Sea Species and LLS-DEEP. More specific interactions involve CSS Demersals and LLS-PD, which are prominently associated in the initial years, followed by a gap in utilization and then a resurgence. Linked to Coastal Demersals, we observe GNS-PB until 2014, and PS-PPP more recently.
Conversely, several classifications, including Continental Shelf Slope (CSS) Benthopelagics, Demersals, and Large Migratory Pelagics, demonstrate a diverse use of, or no connection to, fishing gears. This lack of consistent patterns suggests a flexible approach to fishing methods. Small Coastal Demersals show varying methods over time, transitioning from LHP-PB initially to LHP-CEF more recently, highlighting adaptability in fishing practices.
**Class-Island.** Examining island connections reveals intriguing trends. Tunas exhibit strong links with Sao Miguel, Pico, Terceira, and other islands in various years. However, these connections seem to have diminished in recent years. As the connection with Tunas decreases, Large Migratory Pelagics appear to play a more pivotal role in Sao Miguel's fishing activities.
Faial displays connections to Deep-sea Species in recent years, contrasting with its past association with Coastal Pelagics. Meanwhile, Terceira exhibits connections to Deep-sea Species and CSS Benthopelagics, although only during specific years, indicating variability in fishing patterns. Santa Maria stands out for its lack of strong connections to classifications, reflecting a diverse and potentially evolving ecosystem within the region.
**Class-Class.** Furthermore, examining links between the classifications themselves reveals intriguing dynamics. Robust connections persist between Demersal-related categories and Small Pelagics over the years. On the other hand, the associations between Tunas and Coastal Pelagics seem to have diminished recently.
Furthermore, Deep-sea Species and Large Migratory Pelagics displayed a strong connection in earlier years, but this link has since waned. As for Mollusks, similar patterns to both CSS Benthopelagics and Small Coastal Demersals are observed, but during different time periods, highlighting temporal variations in their connections.
**High-Degree Nodes.** Nodes with high degrees often indicate the ecological importance of a species, the adaptability of a fishing gear, or the significance of an island within the fishing network for a specific year. Key nodes in the analysis include FPO-PB, maintaining a consistently high degree in the last three years: a degree of 5 in 2015 and 2016, and 6 in 2017. PS-PB also stands out, demonstrating significance in the last two years with a degree of 7 in 2016 and 6 in 2017. GNS-PB emerges as important in 2012 with a degree of 6 and in 2013 with a degree of 4, while LHP-TUN appears in 2014 and 2016, both with a degree of 5, alongside LLD-GPP in 2013 with a degree of 4 and in 2017 with a degree of 6.
### Discussion
Our study sheds light on a dynamic and evolving fishing network, where relationships among fish classifications, fishing methods (metiers), and geographical locations (islands) constantly shift.
Tunas emerge as a prominent contributor to the overall weight of fisheries in the Azores. However, as time has progressed, we've discerned a concerning decline in the total catch weight of these species. This decline, coupled with diminishing connections to once-consistent islands, raises pressing questions. It compels us to consider the sustainability of tuna populations and the possibility of shifts in their migration patterns. This underscores the paramount need for vigilant fisheries management in these regions.
Moreover, the vanishing connections involving various fish classifications and Faial in more recent years raise another layer of concern. They suggest a noteworthy alteration in the marine ecosystem around the island over the years.
Within our network analysis, specific nodes also stand out. Notably, FPO-PB and PS-PB emerged as pivotal nodes recently, highlighting the importance of specific fishing methods.
## 5 Conclusions
In this paper, we present a complex network analysis approach that has offered profound insights into the temporal trends detected within our time series data. By unveiling intricate relationships across various features and identifying critical nodes within the network, we've not only shed light on the changes observed over time but also acquired a more profound understanding of the complex dynamics of Azorean fisheries.
This understanding isn't just informative: network analysis emerges as a pivotal tool in real-world scenarios. In identifying challenges in fisheries management, it can aid critical decision-making processes, particularly concerning quota definition and tracking, ensuring the sustainability of marine ecosystems and the livelihoods of those dependent on them.
Our analysis represents just one aspect of fisheries research, emphasizing the ongoing need for further investigations and collaborations in this critical field. Additionally, the flexibility of 'ts2net', with its ability to explore various parameter settings, offers diverse insights into the dynamics of Azores fisheries. As a potential avenue for future research, we could explore other approaches like NetF [11], which transforms a single time series into a network using quantile and visibility graphs, extracting significant topological measures. This could provide valuable additional perspectives on the subject.
In conclusion, our study reaffirms the importance of complex network analysis for real-world data. By translating intricate time series into networks and exploring their properties, we gain valuable insights into temporal fisheries dynamics, essential to guiding us toward sustainable practices and emphasizing the urgency of continued research and data collection in this intricate and ever-changing marine ecosystem. |
2309.16258 | QonFusion -- Quantum Approaches to Gaussian Random Variables:
Applications in Stable Diffusion and Brownian Motion | In the present study, we delineate a strategy focused on non-parametric
quantum circuits for the generation of Gaussian random variables (GRVs). This
quantum-centric approach serves as a substitute for conventional pseudorandom
number generators (PRNGs), such as the \textbf{torch.rand} function in PyTorch.
The principal theme of our research is the incorporation of Quantum Random
Number Generators (QRNGs) into classical models of diffusion. Notably, our
Quantum Gaussian Random Variable Generator fulfills dual roles, facilitating
simulations in both Stable Diffusion (SD) and Brownian Motion (BM). This
diverges markedly from prevailing methods that utilize parametric quantum
circuits (PQCs), often in conjunction with variational quantum eigensolvers
(VQEs). Although conventional techniques can accurately approximate ground
states in complex systems or model elaborate probability distributions, they
require a computationally demanding optimization process to tune parameters.
Our non-parametric strategy obviates this necessity. To facilitate assimilating
our methodology into existing computational frameworks, we put forward
QonFusion, a Python library congruent with both PyTorch and PennyLane,
functioning as a bridge between classical and quantum computational paradigms.
We validate QonFusion through extensive statistical testing, including tests
which confirm the statistical equivalence of the Gaussian samples from our
quantum approach to classical counterparts within defined significance limits.
QonFusion is available at
\url{https://boltzmannentropy.github.io/qonfusion.github.io/} to reproduce all
findings here. | Shlomo Kashani | 2023-09-28T08:51:18Z | http://arxiv.org/abs/2309.16258v1 | Shlomo Kashani
###### Abstract
In the present study, we delineate a strategy focused on non-parametric quantum circuits for the generation of Gaussian random variables (GRVs). This quantum-centric approach serves as a substitute for conventional pseudorandom number generators (PRNGs), such as the torch.rand function in PyTorch. The principal theme of our research is the incorporation of Quantum Random Number Generators (QRNGs) into classical models of diffusion. Notably, our Quantum Gaussian Random Variable Generator fulfills dual roles, facilitating simulations in both Stable Diffusion (SD) and Brownian Motion (BM). This diverges markedly from prevailing methods that utilize parametric quantum circuits (PQCs), often in conjunction with variational quantum eigensolvers (VQEs). Although conventional techniques can accurately approximate ground states in complex systems or model elaborate probability distributions, they require a computationally demanding optimization process to tune parameters. Our non-parametric strategy obviates this necessity. To facilitate assimilating our methodology into existing computational frameworks, we put forward QonFusion, a Python library congruent with both PyTorch and PennyLane, functioning as a bridge between classical and quantum computational paradigms. We validate QonFusion through extensive statistical testing, including tests which confirm the statistical equivalence of the Gaussian samples from our quantum approach to classical counterparts within defined significance limits. QonFusion is available at [https://boltzmannentropy.github.io/qonfusion.github.io/](https://boltzmannentropy.github.io/qonfusion.github.io/) to reproduce all findings here.
###### Contents
* 1 Introduction
* 1.1 The Convergence of Quantum Computing and Generative AI models
* 1.2 A general overview of our pipeline
* 1.3 Diffusion Models
* 1.3.1 Stable Diffusion
* 1.3.2 Brownian Motion
* 2 The Fundamentals of Classical Pseudo-Random Number Generation
* 2.1 The Marsaglia Polar Method Explained
* 3 Quantum Random Number Generators (QRNGs)
* 3.1 Parametric Quantum Circuits (PQCs)
* 3.2 Non-Parametric Quantum Circuits
* 3.3 Generation of Uniform Random Numbers
* 3.4 Sampling vs. Measuring Expectation Values
* 3.5 Generating Uniformly Distributed Rotations
* 3.6 Generating GRV's
* 4 Results
* 4.1 Statistical Evaluation of Quantum-Generated Distributions
* 4.2 Quantum-Augmented Gaussian Noise in Forward Diffusion Processes
* 4.3 Quantum-Enhanced Gaussian Noise in Brownian Motion Simulations
* 5 Discussion
* 6 Acknowledgements
## 1 Introduction
### The Convergence of Quantum Computing and Generative AI models
Our investigations into QRNGs reveal a broad spectrum of potential applications. Chief among these is the ability of QRNGs to serve as direct replacements for traditional random number generators like torch.rand or numpy.rand, which are routinely employed in the realm of deep learning. Furthermore, QRNGs are effective in introducing Gaussian noise during the forward pass of SD models. They also offer valuable perspectives for simulating reversible Stochastic Differential Equations (SDEs), such as BM. A noteworthy discovery, discussed in Section 1.3.2, is that even a minimal deviation from a zero mean in the Gaussian Probability Density Function (PDF) during a BM simulation via QRNG results in irreversibility. This utility is especially relevant in the current scientific landscape, where deep learning intersects extensively with quantum many-body systems, particularly in high-energy physics [24]. Simultaneously, the interface between quantum computing and quantum neural networks is attracting substantial academic interest.
While these research directions might initially appear disparate, they inherently exhibit complementary attributes, particularly in the realm of generative SD models [17]. Foundational contributions from Dhariwal et al. [14] and Moghadam et al. [15] have highlighted the efficacy of diffusion models in the creation of both general-purpose and medical images. Such models present a viable substitute for the labor-intensive and economically taxing process of manual tumor annotation.
Given these insights, an imperative question arises: Is it feasible to substitute all elements of a classical Machine Learning (ML) diffusion model with their quantum equivalents? The answer, although layered, is primarily negative. As a case in point, consider the U-Net architecture [18], renowned for its original purpose in image segmentation. It has since been adapted to perform denoising functions within diffusion models. This dual-functionality architecture, comprising both an encoder and a decoder, can generate noise-highlighting masks that can be integrated into specific denoising algorithms. Yet, when one peels back the layers of the quantum realm, it quickly becomes apparent that the field is less developed than its classical counterpart.
To bridge our initial foray into QRNGs with the more nuanced world of Quantum Machine Learning (QML), it's crucial to establish the state-of-play in both the classical and quantum settings. QML, unlike its classical ML analogue, lacks well-defined quantum versions of key architectural elements such as encoders and decoders. This shortfall presents a significant barrier to the complete transition from classical to quantum frameworks in diffusion models. Consequently, our current research aims are situated in the early stages of classical diffusion models, specifically focusing on incorporating and simulating basic quantum circuits for QRNGs, as illustrated in Figure 1[18, 19]. Though this focus might appear limiting, it serves as a gateway to a broader, yet largely unexplored, landscape of opportunities.
Most of our experiments are conducted on classical computing platforms, harnessing the power of PennyLane's quantum simulator backends to serve as the computational engines for our quantum circuits. Optimised for computational efficiency, these backends are well-suited for numerical simulations in quantum computing, particularly those pertinent to SD or BM. Finally, it's important to note that our chosen default parameters are not intended to represent optimal configurations but rather to serve as a reference framework for evaluating Gaussian Random Variables (GRVs). A more detailed account of our research methodology will follow in subsequent sections of this paper.
In light of these developments, it becomes pertinent to explore how quantum computing can further enrich the field of generative models, particularly diffusion models. The following sections delve into this subject, examining the foundational principles of diffusion models and their potential for quantum augmentation.
### A general overview of our pipeline
In the hybrid quantum-classical framework we describe, as illustrated in Figure 1, we make use of **two** individual quantum circuits in a sequential manner. The first circuit, which utilises \(N\) qubits, aims to produce a uniform distribution over \(2^{N}\) unique states. These states are subsequently converted into equiangular values, which are then fed into the second, pre-determined quantum circuit. This latter circuit, containing \(M>N\) qubits, applies a Hadamard gate to each qubit and carries out rotations as determined by the angles generated in the first circuit. The output from this series of quantum operations is then used in the classical Marsaglia polar method to transform the uniform distribution into a Gaussian distribution centred at zero. This serves as an illustrative example of how QRNGs can replace classical PRNGs in models like SD or BM, particularly during the forward diffusion stage. The numerical trials are conducted on idealised quantum simulators running on classical hardware, thereby approximating the essential elements of diffusion methodologies.

Figure 1: An illustrative depiction of our proposed quantum pipeline. (1) Utilizing \(N\) qubits, an initial non-parametric (3.2) quantum circuit, founded on Hadamard transformations, engenders a uniform distribution (3.3) across \(2^{N}\) discrete values through sampled measurement outcomes, rather than expectation values (further elucidated in Sec. 3). These outcomes are subsequently transmuted into equidistant angles, which are then channeled into a static quantum circuit that neither adapts nor learns, with the purpose of inducing stochastic rotations. (2) This static assembly, encompassing \(M>N\) qubits, functions as follows: a Hadamard gate is applied to each qubit, thus creating a superposition of states. Upon activation of the useRot flag, the circuit administers a rotation (3.5) to every qubit, utilizing the **previously generated angles as the parameters for these rotations**. (3) The output from this circuit is then employed in (4) the Marsaglia polar method to transmute the uniform distribution into a zero-mean Gaussian (5). This transformation is achieved through the equation \(Z=\sqrt{-2\log U}\cos(2\pi V)\), where \(U\) and \(V\) are two uniform pseudo-random numbers (PRNs); by substituting our quantumly-generated uniform random bits for \(U\) and \(V\), we synthesize a source of quantum Gaussian random variables (7). (6) The Marsaglia polar method serves as a paradigmatic example of how quantum random bits can seamlessly supplant classical PRNs.
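To make step (1) of the pipeline concrete, the following is a minimal PennyLane sketch of the Hadamard-based uniform sampler; the qubit count, function names, and the mapping onto \([0,1]\) are illustrative choices of ours rather than the QonFusion API:

```python
import pennylane as qml

N = 4  # qubits -> 2**N discrete outcomes (a small illustrative choice)
dev = qml.device("default.qubit", wires=N, shots=1)

@qml.qnode(dev)
def uniform_bits():
    # A Hadamard on every qubit yields an equal superposition over 2**N
    # basis states, so one computational-basis measurement returns a
    # uniformly distributed bitstring.
    for w in range(N):
        qml.Hadamard(wires=w)
    return qml.sample()

def quantum_uniform():
    """Map one measured bitstring to a float in [0, 1]."""
    bits = uniform_bits()
    return int("".join(str(int(b)) for b in bits), 2) / (2**N - 1)
```

Feeding two such draws into the Marsaglia polar method (Sec. 2.1) then yields the quantum-sourced GRVs.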
As we transition from the quantum-classical hybrid framework to a more focused discussion on diffusion models, it is crucial to remember that our ultimate aim is to explore the quantum-classical interface. The subsequent section on diffusion models should be viewed as an extension of this aim, offering a detailed background against which our quantum contributions can be more fully understood.
### Diffusion Models
Our work is fundamentally anchored in the generation of pseudo-random sequences, a critical aspect of diffusion techniques that often takes up a significant portion of the computational resources. While this is a focal point of our study, it's worth noting that the applications of SD models are manifold, including, but not limited to, the realms of speech, image, and audio generation, as well as denoising techniques [1, 13, 14, 15, 16]. Given the extensive scope of these applications, a complete discussion in this article would be impractical.
Figure 2 captures a single frame from an animated sequence that visualizes the operational stages of the QRNG. The complete animation can be viewed at [https://boltzmannentropy.github.io/qonfusion.github.io/](https://boltzmannentropy.github.io/qonfusion.github.io/). This visualization serves as a practical illustration of **our central research theme: the integration of QRNGs into classical diffusion models**. While the field of diffusion models is vast and rich in applications, our focus remains on identifying areas where quantum computing, particularly QRNGs, can make a meaningful impact.
#### 1.3.1 Stable Diffusion
Building on the theme of QRNG integration, the domain of diffusion models is rich in seminal contributions. Works by Sohl-Dickstein et al. [17] and Song et al. [18, 19, 20] stand as milestones in establishing the theoretical underpinnings of this field. Their groundbreaking research has set the stage for the work by Ho et al. [15], which offers a unified, practical framework that has become the go-to standard for multiple implementations. For readers seeking a deeper theoretical dive, the studies by Song et al. [19, 20] are invaluable. On the mathematical front, Luo et al. [16] have compiled essential formulations for anyone interested in a comprehensive understanding of diffusion models. Furthermore, for an overview of the diversity of applications, one cannot overlook the work by Yang et al. [21].
#### 1.3.2 Brownian Motion
Having established the foundational principles and key contributions in the domain of SD models, it is instructive to delve into a specific, yet seminal, stochastic process that often serves as the underpinning of a different group of diffusion models. This leads us to the discussion of Brownian Motion, a process that not only has historical significance but also offers a robust mathematical framework for understanding diffusion dynamics. BM is a cornerstone of stochastic processes and is unique for being the first to incorporate continuous time and state variables, thus leaving an indelible mark on subsequent research, notably in Gaussian processes, martingales, and Markov processes. Initially brought to light by Einstein [14], BM provides a robust framework for the analytical representation of the random motion of particles in fluids, a subject further explored by Perrin [22], Chandrasekhar [15], and Langevin [13]. The process is governed by stochastic differential equations and, distinctively,
employs Gaussian-distributed stochastic variables for its diffusion dynamics. This stands in contrast to other stochastic processes, which often resort to Poisson or Bernoulli distributions [1, 2].
The reason we are testing our QRNG in BM is that BM is highly sensitive to any deviations in the mean of the Gaussian distribution, and this serves as a benchmark for our quantum generator. Our findings concerning the application of our QRNG to Brownian Motion simulations are detailed in Section 4.3, entitled _Quantum-Augmented Gaussian Fluctuations in Simulations of Brownian Motion_.
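To illustrate this sensitivity, here is a minimal NumPy sketch of a BM simulation with Gaussian increments; the drift induced by a small nonzero mean (the value 1e-3 is an arbitrary illustration) becomes readily visible when both paths are plotted:

```python
import numpy as np

def brownian_path(n_steps, dt=1.0, mu=0.0, sigma=1.0, seed=0):
    """Simulate Brownian motion with Gaussian increments
    N(mu*dt, sigma^2*dt); a nonzero mu introduces a drift term."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(mu * dt, sigma * np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(steps)))

unbiased = brownian_path(10_000)           # zero-mean increments
biased = brownian_path(10_000, mu=1e-3)    # tiny mean offset -> visible drift
```

In our experiments, swapping `rng.normal` for the quantum Gaussian source plays exactly this role, which is why BM serves as a benchmark for the zero-mean quality of the generator.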
## 2 The Fundamentals of Classical Pseudo-Random Number Generation
As a prelude to our exploration of quantum-generated random numbers, it is instructive to delineate the foundational methods employed in classical computing to produce uniform and Gaussian distributions [1]. Classical Pseudo-Random Numbers (PRNs) are generated using deterministic algorithms that offer repeatability while producing a sequence of numbers that seemingly fall within the random range of \((0,1)\)[1, 10]. The effectiveness of such generators is gauged through two primary criteria:
1. **The Length of the Period**: This measure signifies the duration before the generated sequence repeats. Period lengths below \(2^{32}\) are generally considered insufficient, and a minimum period length of \(2^{40}\) is usually recommended for robust applications. For further insights into the nuances of such sequences, refer to the works by [1, 13].

Figure 2: An illustrative schema elucidating the employment of quantum-sourced uniform random variables for the generation of Gaussian noise, specifically for the perturbation of images in a Stable Diffusion context. (1) presents a histogram of the quantum-derived uniform random variables. (2) demonstrates the Gaussian distribution fitted to the uniform variables, facilitated by the Marsaglia polar method. (3) exhibits an image in two dimensions that has been disrupted by quantum-originated Gaussian noise.
2. **Robustness in Statistical Terms**: Any generator worth its salt will have undergone meticulous statistical evaluation to ascertain its quality of 'randomness'. Software that passes these stringent tests is considered reliable.

Moving forward, we delve into the techniques used to generate numbers following a non-uniform, specifically Gaussian, distribution. The mathematical expression for the probability density function (PDF) of such a Gaussian distribution can be articulated as: \[P(x)=\exp\left(-\frac{(x-x_{0})^{2}}{2\sigma^{2}}\right)\] (1) The Central Limit Theorem comes to the fore here. The theorem posits that the summation of a large number of independent random variables tends to a Gaussian distribution, with its width inversely related to \(\sqrt{N}\). Therefore, it's not a stretch to infer that by summing \(n\) uniform random numbers, one can approximate a Gaussian distribution; the larger \(n\) is, the more faithfully the resulting distribution will mirror a Gaussian form. To customise this distribution with a particular mean \(\bar{x}\) and width \(\sigma\), the following transformation of the summation \(S\) of \(n\) uniform random variables can be employed (a minimal sketch of this summation approach appears after the list of methods below): \[x=\bar{x}+2\sigma\sqrt{3n}\left(\frac{S}{n}-\frac{1}{2}\right)\] (2) It's worth mentioning that the traditional approach of approximating a Gaussian distribution through the summation of \(n\) uniform random variables is not computationally efficient. This is because generating a single Gaussian Random Variable (GRV) [1, 2, 13] requires the prior generation of \(n\) uniform random variables within the range \((0,1)\). While classical methods for generating these uniform random variables abound, our attention is rather riveted on how to transform quantum-generated uniform random numbers into GRVs via classical computational techniques. Here, we elucidate four key methods for this purpose:
1. **The Box-Muller Technique**: This method stands out for its computational efficiency and is well-adopted in practice. It takes two independent uniform random variables as inputs and produces two independent standard GRVs as outputs [12].
2. **Marsaglia's Polar Approach**: This is essentially a variant of the Box-Muller method but avoids the use of trigonometric functions, offering a quicker alternative in certain scenarios [1].
3. **Marsaglia's Ziggurat Method**: Known for its efficiency in generating normal and exponential random variables, this method employs precomputed tables, making it especially useful for large-scale simulations [14].
4. **Inverse Cumulative Distribution Function (CDF) Transformation**: Though computationally more demanding, this method has the flexibility to handle any distribution with a known inverse CDF. It transforms a uniform random variable into a GRV through the inverse of the CDF [15].
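Before turning to the Marsaglia polar method in detail, the central-limit construction of Eq. (2) can be rendered in a few lines of Python. This is a minimal illustrative sketch, not the QonFusion implementation; the choice \(n=12\) is a conventional assumption.

```python
import random

def clt_gaussian(mean=0.0, sigma=1.0, n=12):
    """Approximate one Gaussian draw by summing n uniform variables (Eq. (2))."""
    s = sum(random.random() for _ in range(n))
    # (s/n - 1/2) has mean 0 and standard deviation 1/(2*sqrt(3n)),
    # so scaling by 2*sigma*sqrt(3n) yields standard deviation sigma.
    return mean + 2.0 * sigma * (3.0 * n) ** 0.5 * (s / n - 0.5)

samples = [clt_gaussian() for _ in range(10_000)]
m = sum(samples) / len(samples)
v = sum((x - m) ** 2 for x in samples) / len(samples)
print(m, v)  # should be close to 0 and 1, respectively
```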
### The Marsaglia Polar Method Explained
GRVs play a pivotal role in the forward phase of SD models, serving as the source of Gaussian noise introduced into images. These variables find extensive utility not only in machine learning but also in the physical sciences. A standard GRV, denoted \(Z\), has a mean \(\mu\) of zero and a variance \(\sigma^{2}\) of one. Its Probability Density Function (PDF) and Cumulative Distribution Function (CDF) are described as follows:
\[\phi(z) =\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}z^{2}\right), \tag{3}\] \[\Phi(z) =\mathbb{P}[Z<z]=\int_{-\infty}^{z}\phi(s)\,\mathrm{d}s. \tag{4}\]
The Marsaglia method, shown in Algorithm 1, commences by generating two uniform random variables, \(u_{1}\) and \(u_{2}\). These are subsequently transformed to lie within the interval \([-1,1]\), creating \(u\) and \(v\), which act as Cartesian coordinates on a two-dimensional plane. The algorithm's crux is to ascertain whether the point \((u,v)\) resides within the unit circle. This is determined by calculating \(s=u^{2}+v^{2}\) and ensuring that \(s<1\) and \(s\neq 0\). Should the point fall outside the unit circle, a new pair \(u\), \(v\) is generated. Once a valid point is identified, a scaling factor is computed from \(s\) and applied to \(u\) and \(v\) to produce \(z_{1}\) and \(z_{2}\), two normally distributed variables; the algorithm then outputs \(z_{2}\).
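For concreteness, a classical Python rendering of the procedure just described follows. It is a minimal sketch of Algorithm 1; in QonFusion, the two uniforms would be supplied by the quantum circuit rather than by random.random().

```python
import math
import random

def marsaglia_polar(mean=0.0, sigma=1.0):
    """Draw one Gaussian variate via the Marsaglia polar method."""
    while True:
        # Two uniforms on (0, 1), rescaled onto the square [-1, 1] x [-1, 1].
        u = 2.0 * random.random() - 1.0
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        # Accept only points strictly inside the unit circle, excluding the origin.
        if 0.0 < s < 1.0:
            factor = math.sqrt(-2.0 * math.log(s) / s)
            z1, z2 = u * factor, v * factor  # two independent standard normals
            return mean + sigma * z2         # the variant described here outputs z2

print([round(marsaglia_polar(), 3) for _ in range(5)])
```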
## 3 Quantum Random Number Generators (QRNGs)
While the prevailing literature often emphasises Parametric Quantum Circuits (PQCs) [11, 10] and variational quantum eigensolvers (VQEs) [12, 13], our approach diverges by adopting a non-parametric methodology, obviating the need for hybrid optimisation strategies. For completeness, we first expound upon the parametric methods.
### Parametric Quantum Circuits (PQCs)
The quantum circuitry employed for generating GRVs has garnered significant attention, particularly within the context of Quantum Circuit Born Machines (QCBMs) [10]. A recurring theme in current methodologies is the integration of quantum and classical systems. Romero and Aspuru-Guzik [11], for instance, employed a variational quantum generator within a Generative Adversarial Network (GAN), with the quantum circuit functioning as the generator and a classical neural network acting as the discriminator. Similarly, works by Gili et al. [11] and Liu and Wang [14] have further enriched the landscape of quantum-classical hybrid systems in the context of probability distribution learning and optimisation.
### Non-Parametric Quantum Circuits
Our methodology, depicted in Figure 1, centres around non-parametric quantum circuits [1, 12, 13, 14]. It specifically aims at transforming uniform distributions to Gaussian noise, with the details elaborated upon subsequently. The quantum-enhanced implementation of our algorithm, delineated in Code snippet 1, is modelled on Algorithm 1.
### Generation of Uniform Random Numbers
Figure 3 portrays a quantum circuit designed for generating random rotations. These rotations, functioning as non-trainable parameters, are subsequently utilised in a separate ansatz to generate uniform distributions. The quantum noise emanating from these circuits is employed in image corruption tasks and the simulation of diffusion processes akin to Brownian motion.
### Sampling vs. Measuring Expectation Values
In quantum computing, **sampling and calculating expectation values** are distinct yet interconnected concepts. Much like expectation values, sampling plays a crucial role in quantum algorithms and simulations. While qml.expval() computes expectation values without collapsing the wavefunction, qml.sample() performs a measurement that results in wavefunction collapse, yielding a specific eigenvalue of the measured operator [1].
**Definition 1**.: (Expectation Values) The expectation value of an operator \(\hat{O}\) for a state \(|\Psi\rangle\) is the average value of the observable corresponding to \(\hat{O}\), given by
\[\langle\hat{O}\rangle=\langle\Psi|\hat{O}|\Psi\rangle. \tag{5}\]
**Example 3.1**.: For a qubit in the state \(|\Psi\rangle=\alpha|\uparrow\rangle+\beta|\downarrow\rangle\), the expectation value of the Pauli-Z operator can be calculated as follows:
\[\langle\hat{\sigma}_{z}\rangle =\langle\Psi|\hat{\sigma}_{z}|\Psi\rangle, \tag{6}\] \[=\left(\alpha^{*}\langle\uparrow|+\beta^{*}\langle\downarrow|\right)\hat{\sigma}_{z}\left(\alpha|\uparrow\rangle+\beta|\downarrow\rangle\right),\] (7) \[=\alpha^{*}\alpha\langle\uparrow|\,\hat{\sigma}_{z}|\uparrow\rangle+\beta^{*}\beta\langle\downarrow|\,\hat{\sigma}_{z}|\downarrow\rangle,\] (8) \[=|\alpha|^{2}\cdot 1+|\beta|^{2}\cdot(-1),\] (9) \[=|\alpha|^{2}-|\beta|^{2}, \tag{10}\] where the cross terms vanish in (8) because \(\langle\uparrow|\downarrow\rangle=0\).
Figure 3: A Simplified Circuit Ansatz for Generating Random Rotations: (a) Circuit Ansatz visualisation, (b) Corresponding Python code snippet.
**Example 3.2**.: Consider the state
\[\left|\psi\right\rangle=\frac{3}{7}|00\rangle+\frac{6}{7}|01\rangle+\frac{2}{7}|10\rangle, \tag{11}\]
and the operator \(I\otimes\sigma_{z}\), representing the \(\sigma_{z}\) operator acting on qubit B. The expectation value of \(\sigma_{z}\) for qubit B can be calculated as:
\[\left\langle\sigma_{z}\right\rangle_{B} =\left\langle\psi\right|\left(I\otimes\sigma_{z}\right)\left|\psi\right\rangle \tag{12}\] \[=\left(\frac{3}{7}\right)^{2}\cdot 1+\left(\frac{6}{7}\right)^{2} \cdot(-1)+\left(\frac{2}{7}\right)^{2}\cdot 1\] (13) \[=\frac{9}{49}-\frac{36}{49}+\frac{4}{49}\] (14) \[=\frac{-23}{49} \tag{15}\]
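This arithmetic is easily verified numerically; the following minimal NumPy sketch reproduces Eq. (15).

```python
import numpy as np

# |psi> = 3/7 |00> + 6/7 |01> + 2/7 |10>, in the basis order (|00>, |01>, |10>, |11>).
psi = np.array([3 / 7, 6 / 7, 2 / 7, 0.0])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
op = np.kron(np.eye(2), sigma_z)  # I (x) sigma_z: sigma_z acts on qubit B

expval = psi.conj() @ op @ psi
print(expval, -23 / 49)  # both approximately -0.4694
```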
**Definition 2**.: (Sampling) Sampling in the context of a quantum state \(\left|\Psi\right\rangle\) refers to the process of performing a measurement on the state in a specific basis, such as the Pauli-Z basis. The outcome is probabilistic, and the state collapses to one of the eigenstates of the measured operator. Mathematically, if \(|\phi_{i}\rangle\) are the eigenstates of an operator \(\hat{O}\) with eigenvalues \(o_{i}\), then sampling \(|\Psi\rangle\) with respect to \(\hat{O}\) yields \(o_{i}\) with probability \(|\langle\phi_{i}|\Psi\rangle|^{2}\).
**Example 3.3**.: Consider a qubit in the state \(|\Psi\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle+|\downarrow\rangle)\). The probabilities for each outcome when sampling this state in the Pauli-Z basis can be calculated as follows:
\[P(+1) =\left|\langle\uparrow|\Psi\rangle\right|^{2} \tag{16}\] \[=\left|\frac{1}{\sqrt{2}}\right|^{2}\] (17) \[=\frac{1}{2},\] (18) \[P(-1) =|\langle\downarrow|\Psi\rangle|^{2}\] (19) \[=\left|\frac{1}{\sqrt{2}}\right|^{2}\] (20) \[=\frac{1}{2}. \tag{21}\]
Therefore, sampling would yield \(+1\) with a probability of \(0.5\) and \(-1\) with a probability of \(0.5\).
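The contrast between sampling and expectation values, elaborated in the following paragraphs, can be made concrete with a small PennyLane sketch for the state \(|\Psi\rangle\) of Example 3.3; the shot count of 10 is an illustrative assumption.

```python
import pennylane as qml

dev_exact = qml.device("default.qubit", wires=1)            # analytic mode
dev_shots = qml.device("default.qubit", wires=1, shots=10)  # finite-shot mode

@qml.qnode(dev_exact)
def expectation():
    qml.Hadamard(wires=0)             # prepares (|0> + |1>)/sqrt(2)
    return qml.expval(qml.PauliZ(0))  # analytic <Z> = 0, no collapse

@qml.qnode(dev_shots)
def samples():
    qml.Hadamard(wires=0)
    return qml.sample(qml.PauliZ(0))  # ten collapsed outcomes, each +1 or -1

print(expectation())  # 0.0
print(samples())      # e.g. [ 1 -1 -1  1  1 -1  1 -1 -1  1]
```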
**Example 3.4**.: For a two-qubit state
\[|\psi\rangle=\frac{1}{\sqrt{3}}(|00\rangle+|01\rangle+|10\rangle), \tag{22}\]
sampling the first qubit in the Pauli-Z basis would yield the following probabilities:
\[P(+1) =\left|\frac{1}{\sqrt{3}}\right|^{2}+\left|\frac{1}{\sqrt{3}} \right|^{2} \tag{23}\] \[=\frac{1}{3}+\frac{1}{3}\] (24) \[=\frac{2}{3},\] (25) \[P(-1) =\left|\frac{1}{\sqrt{3}}\right|^{2}\] (26) \[=\frac{1}{3}. \tag{27}\]
Hence, \(+1\) would be obtained with a probability of \(\frac{2}{3}\) and \(-1\) with a probability of \(\frac{1}{3}\).
We are **concerned with sampling here**. The primary distinction between qml.sample(qml.PauliZ(i)) and qml.expval(qml.PauliZ(i)) resides in the nature of the result they yield [BIS\({}^{+}\)18]. The former returns a sampled measurement outcome, while the latter procures the expectation value. qml.sample() executes a measurement on the qubit in the Pauli-Z basis, causing the collapse of the wavefunction and **randomly returning one of the eigenvalues \(\pm 1\)** (equivalently, a bit 0 or 1 after relabelling) according to the probabilities inherent to the qubit state. On the other hand, qml.expval() computes the expectation value \(\langle Z\rangle\) of the Pauli-Z operator on the qubit, **yielding the average value** we would anticipate measuring, without inducing the collapse of the wavefunction. In essence, qml.sample() provides a random measurement sample, while qml.expval() offers the expected average value of a measurement (a value between \(-1\) and \(1\)). **Sampling induces the collapse of the state, while expectation values permit further quantum processing.** The Python method toBinaryIndex converts the sampling outcomes to a binary index value: it takes a list of qubit measurements (as bits) as input and returns an integer representing the binary index.
\[\text{value}=\sum_{i=0}^{n-1}2^{i}\times\text{sample}_{i} \tag{28}\]
Figure 4: Plots of random rotation angles generated by a quantum circuit. The circuit applies a Hadamard gate to each qubit, creating a superposition of states. Each state is then measured in the Pauli-Z basis, resulting in a series of 0s and 1s (binary outcomes). These binary outcomes are converted to a binary index value, which is then normalised to a value between 0 and 1 by dividing by the total number of possible states (the space span). Each plot shows the distribution of these generated values for different numbers of qubits. The limited range of qubit string bits is due to the finite number of binary outcomes that can be generated by a given number of qubits.
The binary index is then normalised to a value between 0 and 1 by dividing it by the total number of possible states, effectively spanning the state space.
\[\text{Normalised Value}=\frac{\text{value}}{2^{n}-1} \tag{29}\]
The distribution of these normalised values is plotted in Figure 4 for varying numbers of qubits. The limited range of qubit string bits is a consequence of the finite number of binary outcomes that can be generated by a given number of qubits.
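A minimal Python sketch of this conversion, consistent with Eqs. (28) and (29), is shown below; the function bodies are illustrative, and the QonFusion implementation may differ in detail.

```python
def to_binary_index(samples):
    """Eq. (28): interpret a list of 0/1 measurement outcomes as a binary number."""
    return sum((2 ** i) * s for i, s in enumerate(samples))

def normalise(value, n_qubits):
    """Eq. (29): map the index onto [0, 1] by the span of the state space."""
    return value / (2 ** n_qubits - 1)

bits = [1, 0, 1]                 # e.g. measurement outcomes for three qubits
idx = to_binary_index(bits)      # 1*1 + 0*2 + 1*4 = 5
print(idx, normalise(idx, 3))    # 5 0.7142857142857143
```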
### Generating Uniformly Distributed Rotations
As previously highlighted, a central premise of our manuscript is the cascading of two quantum circuits, wherein the first circuit supplies the second with random rotations, thereby introducing an additional layer of stochasticity. Quantum rotations serve as the cornerstone for myriad quantum manipulations and are essential for altering quantum states. Within this article, we briefly examine the mathematical formulations of quantum rotations, specifically concentrating on rotations about the \(x,y,\) and \(z\) axes, symbolised as \(R_{x},R_{y},\) and \(R_{z}\) respectively.
**Definition 3**.: (Quantum Rotation Operators) The quantum rotation operators \(R_{x}\), \(R_{y}\), and \(R_{z}\) are
Figure 5: Plots of random rotation angles generated by a quantum circuit. The circuit employs a Hadamard gate on each qubit to create a superposition of quantum states. Following state preparation, each qubit state is measured in the Pauli-Z basis, yielding a series of binary outcomes, represented as 0s and 1s. These outcomes are then converted into a single binary index value, which is normalized to a value between 0 and 1 by dividing by the total number of possible states. This normalized value is subsequently scaled to a rotation angle between 0 and \(2\pi\) radians. Each plot illustrates the distribution of these generated angles for varying numbers of qubits, depicted in polar coordinates with the angle represented by the position on the circle. The limited range of angles is a consequence of the finite number of binary outcomes that can be generated by a given number of qubits.
defined as:
\[R_{x}(\theta)=e^{-i\theta X/2} =\cos\left(\frac{\theta}{2}\right)I-i\sin\left(\frac{\theta}{2} \right)X, \tag{30}\] \[R_{y}(\theta)=e^{-i\theta Y/2} =\cos\left(\frac{\theta}{2}\right)I-i\sin\left(\frac{\theta}{2} \right)Y,\] (31) \[R_{z}(\theta)=e^{-i\theta Z/2} =\cos\left(\frac{\theta}{2}\right)I-i\sin\left(\frac{\theta}{2} \right)Z, \tag{32}\]
where \(X\), \(Y\), and \(Z\) are the Pauli matrices, \(I\) is the identity matrix, and \(\theta\) is the angle of rotation.
**Example 3.5**.: The matrix representations of the quantum rotation operators \(R_{x}\), \(R_{y}\), and \(R_{z}\) are given by:
\[R_{x}(\theta) =\left[\begin{array}{cc}\cos\frac{\theta}{2}&-i\sin\frac{ \theta}{2}\\ -i\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{array}\right], \tag{33}\] \[R_{y}(\theta) =\left[\begin{array}{cc}\cos\frac{\theta}{2}&-\sin\frac{ \theta}{2}\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{array}\right],\] (34) \[R_{z}(\theta) =\left[\begin{array}{cc}e^{-i\frac{\theta}{2}}&0\\ 0&e^{i\frac{\theta}{2}}\end{array}\right]. \tag{35}\]
**Example 3.6**.: We consider a three-qubit quantum circuit where each qubit undergoes a Hadamard transformation followed by a general rotation characterized by three parameters: \(\theta,\phi,\lambda\).
1. **Initial State:** The initial state of the three qubits is prepared by another quantum circuit consisting solely of Hadamard gates. Mathematically, this is represented as: \[|\psi_{\rm initial}\rangle=(H\otimes H\otimes H)|0\rangle^{\otimes 3}.\] (36)
2. **Total Operation:** The total unitary operation consists of the Hadamard gates followed by general rotations on each qubit: \[U_{\rm total}=(U(\theta_{1},\phi_{1},\lambda_{1})\otimes U(\theta_{2},\phi_{2},\lambda_{2})\otimes U(\theta_{3},\phi_{3},\lambda_{3}))\cdot(H\otimes H\otimes H).\] (37)
3. **Final State:** The final state of the system is then: \[|\psi_{\rm final}\rangle=U_{\rm total}|0\rangle^{\otimes 3}=(U(\theta_{1},\phi_{1},\lambda_{1})\otimes U(\theta_{2},\phi_{2},\lambda_{2})\otimes U(\theta_{3},\phi_{3},\lambda_{3}))\,|\psi_{\rm initial}\rangle.\] (38)
Let's consider a numerical example for the system, using the following rotation angles for each qubit: \(\theta_{1}=\frac{\pi}{4},\ \phi_{1}=\frac{\pi}{2},\ \lambda_{1}=\pi\); \(\theta_{2}=\frac{\pi}{3},\ \phi_{2}=\frac{\pi}{4},\ \lambda_{2}=\frac{\pi}{2}\); \(\theta_{3}=\frac{\pi}{6},\ \phi_{3}=\frac{\pi}{3},\ \lambda_{3}=\frac{\pi}{4}\). The Hadamard gate \(H\) is represented as:
\[H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix} \tag{39}\]
And a general rotation \(U(\theta,\phi,\lambda)\) is given by:
\[U(\theta,\phi,\lambda)=\begin{pmatrix}\cos\left(\frac{\theta}{2}\right)&-e^{ i\lambda}\sin\left(\frac{\theta}{2}\right)\\ e^{i\phi}\sin\left(\frac{\theta}{2}\right)&e^{i(\phi+\lambda)}\cos\left(\frac{ \theta}{2}\right)\end{pmatrix} \tag{40}\]
First, we'll find the initial state \(|\psi_{\rm initial}\rangle\) after applying the Hadamard gates:
\[|\psi_{\rm initial}\rangle=\frac{1}{\sqrt{8}}(|000\rangle+|001\rangle+|010 \rangle+|011\rangle+|100\rangle+|101\rangle+|110\rangle+|111\rangle) \tag{41}\]
Next, we'll find the unitary operation \(U_{\text{total}}\) for the Hadamard and rotation gates:
\[U_{\text{total}}=(U(\theta_{1},\phi_{1},\lambda_{1})\otimes U(\theta_{2},\phi_{2 },\lambda_{2})\otimes U(\theta_{3},\phi_{3},\lambda_{3}))\times(H\otimes H\otimes H) \tag{42}\]
Finally, we'll find the final state \(|\psi_{\text{final}}\rangle\):
\[|\psi_{\text{final}}\rangle=U_{\text{total}}|0\rangle^{\otimes 3} \tag{43}\]
The evaluation of \(|\psi_{\text{final}}\rangle\) is executed solely via numerical software, notably the PennyLane quantum simulation library.
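A minimal PennyLane sketch of this computation is given below. It assumes that qml.U3(theta, phi, delta) realises the matrix of Eq. (40) with \(\delta=\lambda\), which matches PennyLane's documented convention.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

# (theta, phi, lambda) for qubits 0, 1, 2, as in the numerical example above.
angles = [
    (np.pi / 4, np.pi / 2, np.pi),
    (np.pi / 3, np.pi / 4, np.pi / 2),
    (np.pi / 6, np.pi / 3, np.pi / 4),
]

@qml.qnode(dev)
def circuit():
    for w in range(3):
        qml.Hadamard(wires=w)  # prepares |psi_initial> = (H (x) H (x) H)|000>
    for w, (theta, phi, lam) in enumerate(angles):
        qml.U3(theta, phi, lam, wires=w)  # general rotation U(theta, phi, lambda)
    return qml.state()

psi_final = circuit()
print(np.round(psi_final, 4))
```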
### Generating GRV's
In the present investigation, we solely employ the Marsaglia polar algorithm [21] (Algorithm 1) as a mechanism for generating normally distributed random numbers. While the extant literature presents a variety of enhancements and alternatives to the Marsaglia polar method, such as the Ziggurat and inverse-CDF approaches outlined above, we have chosen not to explore these avenues, given the empirical efficacy of the method under consideration. It is crucial to underscore that while various studies propose potential refinements, the selection of an optimal approach is intrinsically tied to the specific requirements of the application in focus. The prospect of harnessing the uncertainty of quantum measurements for generative modelling presents an intriguing research direction [17, 18]. The premise posits that it is feasible to iteratively superimpose tiny amounts of quantum noise (Figure 6), denoted \(\epsilon\), onto any image \(X\) over \(t\) timesteps, thereby transmuting \(X\) into a pure Gaussian noise sample \(T\). In a reciprocal manner, provided that \(t\) is adequately large, the noise-laden sample \(T\) can be reverted to the original, noise-free image \(X\) through the systematic elimination of the superimposed noise.
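To make the forward process concrete, here is a minimal classical sketch of the iterative noising step; the linear schedule and step count are illustrative assumptions, and in QonFusion the Gaussian draws would come from the quantum generator instead of NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # an illustrative linear noise schedule

def forward_diffuse(x, t):
    """Apply t noising steps: x_k = sqrt(1 - beta_k) * x_{k-1} + sqrt(beta_k) * eps."""
    for k in range(t):
        eps = rng.normal(size=x.shape)  # classical here; quantum-sourced in QonFusion
        x = np.sqrt(1.0 - betas[k]) * x + np.sqrt(betas[k]) * eps
    return x

image = rng.uniform(size=(32, 32))       # toy stand-in for an image X
noised = forward_diffuse(image, t=1000)  # approaches a pure Gaussian sample T
print(noised.mean(), noised.std())       # close to 0 and 1 for large t
```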
In our software stack QonFusion, the QuantumRandomGenerator class is designed to generate a uniform random distribution utilizing the quantum circuits discussed above. The initialization of the class sets up a quantum device and defines the number of qubits, the number of layers, and the quantum circuit. An option to use rotations is also provided, which can be turned on or off with the useRot flag.

Figure 6: This figure depicts the probability density function (PDF) of a standard normal (Gaussian) distribution, generated from 2000 samples of a quantum circuit. The red dashed lines mark positions of one, two, and three standard deviations away from the mean, in both the positive and negative directions [17]. The labels ‘-1\(\sigma\)’, ‘+1\(\sigma\)’, ‘-2\(\sigma\)’, ‘+2\(\sigma\)’, ‘-3\(\sigma\)’, ‘+3\(\sigma\)’ represent points one, two, and three standard deviations away from the mean, respectively. This visualisation assists in understanding the empirical rule for a standard normal distribution, also known as the 68-95-99.7 rule.

If the useRot flag is set, the generator employs the QuantumRandomRotationGenerator to produce rotation angles that are subsequently fed into the second quantum circuit. This circuit applies Hadamard gates to each qubit, creating a superposition of states, and then applies the rotations (if the useRot flag is set). Each qubit is then measured in the Pauli-Z basis. The measurements produce a binary string, which is converted to an index using the toBinaryIndex method. This method iterates over the binary outcomes, treating each binary value as a digit in a binary number and summing the corresponding powers of 2 to produce a decimal number. The Q_uniform_rnd method runs the process, generating a random number according to a uniform distribution. If the isIndex flag is set, it returns the binary index directly. Otherwise, it normalizes the index to a value between 0 and 1 by dividing by the total number of possible states (the space span), effectively producing a continuous uniform random number. This process exemplifies the potential of quantum computing in generating truly random numbers and highlights the flexibility of quantum circuits in constructing complex probability distributions.
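A condensed sketch of this generator (omitting the optional rotation stage) is shown below. It follows the description above but is not the verbatim QonFusion code; the eigenvalue-to-bit mapping is an implementation assumption, and PennyLane return shapes vary slightly across versions.

```python
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits, shots=1)

@qml.qnode(dev)
def sample_bits():
    for w in range(n_qubits):
        qml.Hadamard(wires=w)  # uniform superposition over all basis states
    return [qml.sample(qml.PauliZ(w)) for w in range(n_qubits)]

def to_binary_index(eigenvalues):
    # Map PauliZ eigenvalues (+1 -> bit 0, -1 -> bit 1) and sum powers of two.
    bits = [(1 - int(e)) // 2 for e in eigenvalues]
    return sum(bit * 2 ** i for i, bit in enumerate(bits))

def q_uniform_rnd():
    index = to_binary_index(sample_bits())
    return index / (2 ** n_qubits - 1)  # normalise onto [0, 1], as in Eq. (29)

print([round(q_uniform_rnd(), 3) for _ in range(5)])
```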
## 4 Results
To test our quantum generative modeling techniques, we ran experiments both on a simulator using classical computing resources and on actual quantum hardware from IBM with 20 qubits. Running on the simulator allows us to validate the theoretical performance of the quantum circuits before real-world execution. We then benchmarked the same experiments on IBM's quantum computer to compare the practical results and quantify the error rates. While the simulator produces idealized noise-free outputs, the real hardware introduces errors from effects like decoherence and gate imperfections. By conducting experiments under both simulated and real conditions, we can verify the validity of our quantum approach and characterize the degree of deviation on current noisy devices. Analyzing the discrepancies between these results is crucial for determining the readiness of near-term quantum computers for practical generative modeling. Our evaluation provides an empirical demonstration of running quantum generation algorithms and contrasts the accuracy achieved on simulators versus real quantum chips.
### Statistical Evaluation of Quantum-Generated Distributions
Our investigation confirms that quantum circuits are capable of generating both continuous uniform and Gaussian distributions. When employed as an alternative to classical random number generators, quantum noise effectively corrupts images for diffusion processes. The integrity of the quantum-generated Gaussian distribution is corroborated through a series of statistical tests [11, 12, 13], including a statistical permutation test, as summarized in Table 1.
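The tests in Table 1 can be reproduced in outline with standard scientific Python. The following is a minimal sketch using placeholder samples; the MMD kernel width and the histogram binning for the KL divergence are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
quantum = rng.normal(size=2000)    # stand-in for the quantum-generated samples
classical = rng.normal(size=2000)  # classical Gaussian reference samples

# Two-sample Kolmogorov-Smirnov test.
ks_stat, ks_p = stats.ks_2samp(quantum, classical)

# Gaussian-kernel MMD estimate between the two sample sets.
def mmd_rbf(x, y, gamma=1.0):
    xx = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2).mean()
    yy = np.exp(-gamma * (y[:, None] - y[None, :]) ** 2).mean()
    xy = np.exp(-gamma * (x[:, None] - y[None, :]) ** 2).mean()
    return xx + yy - 2.0 * xy

# Histogram-based estimate of D_KL(quantum || classical).
bins = np.linspace(-4, 4, 50)
p, _ = np.histogram(quantum, bins=bins, density=True)
q, _ = np.histogram(classical, bins=bins, density=True)
mask = (p > 0) & (q > 0)
kl = np.sum(p[mask] * np.log(p[mask] / q[mask]) * np.diff(bins)[mask])

print(ks_stat, ks_p, mmd_rbf(quantum, classical), kl)
```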
### Quantum-Augmented Gaussian Noise in Forward Diffusion Processes
In this study, we conducted experiments with a variable number of qubits (4, 5, 6), different PennyLane devices (e.g., qml.device("default.qubit")), and varying numbers of shots. In PennyLane, as with many quantum simulators, the number of shots dictates the number of times the circuit is executed.
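In code, the shot count is fixed when the device is constructed; a minimal example (the qubit and shot counts here are illustrative):

```python
import pennylane as qml

# Every sampling QNode bound to this device executes the circuit 100 times
# per call, yielding 100 measurement outcomes.
dev = qml.device("default.qubit", wires=5, shots=100)
```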
### Quantum-Enhanced Gaussian Noise in Brownian Motion Simulations
This section illustrates the utility of our Quantum Gaussian generator in simulating Brownian motion, which serves as a quality metric for our QRNG. In conventional Brownian motion, any deviation of the Gaussian distribution's mean from zero disrupts the motion's fidelity to true Brownian behaviour (Theorem 1), as we prove below. Our QRNG, however, accurately encapsulates the fundamental characteristics of Brownian motion.
**Definition 4**.: Standard Brownian motion \(W(t)\) is a stochastic process that satisfies the following properties:
1. \(W(0)=0\)
2. \(W(t)\) has independent increments.
3. \(W(t)-W(s)\sim\mathcal{N}(0,t-s)\) for \(0\leq s<t\).
4. \(W(t)\) is continuous in \(t\).
**Theorem 1**.: Any deviation of the Gaussian distribution's mean from zero disrupts the motion's fidelity to true Brownian behavior.
Proof.: Consider a Gaussian distribution with mean \(\mu\neq 0\) and variance \(\sigma^{2}\). If this distribution is used for the increments of Brownian motion, the increment \(W(t)-W(s)\) is distributed as \(\mathcal{N}(\mu,t-s)\). We examine each of the properties of standard Brownian motion to see whether it still holds:
| **Test** | **Value** |
| --- | --- |
| KS Statistic | \(0.052\) |
| KS P-Value | \(0.124\) |
| MMD | \(0.001\) |
| KL Divergence | \(0.030\) |
| Permutation test | \(0.718\) |
Table 1: The quantum random Gaussian generator underwent a battery of statistical tests, juxtaposing the quantum-generated samples with classical Gaussian samples. The Kolmogorov-Smirnov (KS) test yielded a statistic of \(S=0.052\) and a \(p\)-value of \(p=0.124\). Given that the \(p\)-value exceeds the conventional threshold of \(0.05\), the null hypothesis, which posits that the quantum and classical samples are drawn from the same continuous distribution, could not be rejected. This suggests that the quantum samples do not significantly deviate from their classical Gaussian counterparts. The Maximum Mean Discrepancy (MMD), a measure that quantifies the difference between the mean embeddings of two distributions in a Reproducing Kernel Hilbert Space (RKHS), recorded a value of \(MMD=0.001\). This minimal MMD value indicates an inconsequential difference between the mean embeddings of the quantum and classical Gaussian distributions in the RKHS. The Kullback-Leibler (KL) divergence, which measures how much the quantum distribution diverges from the classical Gaussian distribution, was \(D_{KL}=0.030\). This modest KL divergence value corroborates that the quantum distribution closely approximates the classical Gaussian distribution. Finally, a statistical permutation test was conducted, with high values suggesting that the quantum and classical samples are indistinguishable. These results are consistent with the findings from the KS test and MMD, further reinforcing the congruence between the quantum and classical Gaussian distributions.
Figure 7: This figure illuminates the role of Gaussian noise in forward diffusion algorithms. These algorithms convert images into pure Gaussian noise through the sequential addition of small noise perturbations. While the reverse operation, which utilizes a U-Net neural architecture to remove the noise, is not examined in this study, it plays a crucial role in reverting the image to its original, noise-free state. The efficacy of Stable Diffusion (SD) models is largely dependent on the successful integration of Gaussian noise, an aspect that could be further optimized by employing quantum random number generation (QRNG).
1. \(W(0)=0\) still holds.
2. \(W(t)\) would still have independent increments.
3. \(W(t)-W(s)\) would now be \(\mathcal{N}(\mu,t-s)\), which violates the standard definition requiring a zero mean.
4. \(W(t)\) would still be continuous in \(t\).
The violation occurs at the third property, thus proving that any deviation from a zero mean disrupts the fidelity to true Brownian motion.
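The effect proved above is easy to visualise numerically. The sketch below uses a classical generator as a stand-in for the quantum one; the step count, time step, and drift value are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_path(n_steps, dt=1e-3, mu=0.0):
    """Simulate W(t) from increments N(mu, dt); mu != 0 violates property 3."""
    increments = rng.normal(loc=mu, scale=np.sqrt(dt), size=n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])  # enforces W(0) = 0

faithful = brownian_path(10_000)         # zero-mean increments: true Brownian motion
biased = brownian_path(10_000, mu=0.01)  # biased generator: drift accumulates
print(faithful[-1], biased[-1])          # the biased path ends near 10_000 * 0.01
```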
## 5 Discussion
This study presents an innovative yet straightforward strategy for generating Gaussian random variables through the utilisation of non-parametric quantum circuits. Empirical analyses substantiate a close concordance between quantum and classical Gaussian distributions, thereby affirming the efficacy of the quantum approach. Further experiments underscore the applicability of quantum-generated Gaussian noise in diffusion-centric methods such as SD and Brownian Motion.
Although the approach still necessitates classical post-processing, the replacement of pseudo-random numbers with quantum-generated random bits marks a noteworthy advancement towards more robust generative models. This hybrid methodology dispenses with the laborious task of parametric optimisation within quantum circuits. Nevertheless, the full realisation of quantum generative modelling continues to pose considerable challenges, particularly in the scaling of quantum convolutional layers. Despite these hurdles, the present study lays the groundwork for an auspicious avenue of exploration.
Figure 8: This figure showcases the efficacy of our Quantum Gaussian generator in simulating Brownian motion. The plot serves as a quality metric for our Quantum Random Number Generator (QRNG). Unlike conventional Brownian motion, where any deviation from a zero mean in the Gaussian distribution disrupts the fidelity to true Brownian behavior, our QRNG maintains this fidelity. The plot illustrates that the generated Brownian motion closely adheres to the theoretical expectations, thereby validating the utility of quantum-enhanced Gaussian noise in stochastic simulations.
Acknowledgements.
List of Figures
* 1 An illustrative Depiction of our Proposed Quantum Pipeline.
* 2 Quantum-Generated Gaussian Noise in Stable Diffusion Processes
* 3 A Simplified Circuit Ansatz for Generating Random Rotations
* 4 Plots of Random Rotation Angles Generated by a Quantum Circuit
* 5 Plots of Quantum Sampling Measurement Results on a Unit Circle
* 6 Plots of 2000 Gaussian samples from a quantum circuit
* 7 Quantum-Enhanced Gaussian Noise in Forward Diffusion Models
* 8 Quantum-Enhanced Gaussian Noise in Brownian Motion Simulations for 9 Qubits
List of Tables
* 1 Statistical tests
|
2309.04884 | RecAD: Towards A Unified Library for Recommender Attack and Defense | In recent years, recommender systems have become a ubiquitous part of our
daily lives, while they suffer from a high risk of being attacked due to the
growing commercial and social values. Despite significant research progress in
recommender attack and defense, there is a lack of a widely-recognized
benchmarking standard in the field, leading to unfair performance comparison
and limited credibility of experiments. To address this, we propose RecAD, a
unified library aiming at establishing an open benchmark for recommender attack
and defense. RecAD takes an initial step to set up a unified benchmarking
pipeline for reproducible research by integrating diverse datasets, standard
source codes, hyper-parameter settings, running logs, attack knowledge, attack
budget, and evaluation results. The benchmark is designed to be comprehensive
and sustainable, covering both attack, defense, and evaluation tasks, enabling
more researchers to easily follow and contribute to this promising field. RecAD
will drive more solid and reproducible research on recommender systems attack
and defense, reduce the redundant efforts of researchers, and ultimately
increase the credibility and practical value of recommender attack and defense.
The project is released at https://github.com/gusye1234/recad. | Changsheng Wang, Jianbai Ye, Wenjie Wang, Chongming Gao, Fuli Feng, Xiangnan He | 2023-09-09T22:23:05Z | http://arxiv.org/abs/2309.04884v1 | # RecAD: Towards A Unified Library for Recommender Attack and Defense
###### Abstract.
In recent years, recommender systems have become a ubiquitous part of our daily lives, while they suffer from a high risk of being attacked due to the growing commercial and social values. Despite significant research progress in recommender attack and defense, there is a lack of a widely-recognized benchmarking standard in the field, leading to unfair performance comparison and limited credibility of experiments. To address this, we propose RecAD, a unified library aiming at establishing an open benchmark for recommender attack and defense. RecAD takes an initial step to set up a unified benchmarking pipeline for reproducible research by integrating diverse datasets, standard source codes, hyper-parameter settings, running logs, attack knowledge, attack budget, and evaluation results. The benchmark is designed to be comprehensive and sustainable, covering both attack, defense, and evaluation tasks, enabling more researchers to easily follow and contribute to this promising field. RecAD will drive more solid and reproducible research on recommender systems attack and defense, reduce the redundant efforts of researchers, and ultimately increase the credibility and practical value of recommender attack and defense. The project is released at [https://github.com/gusye1234/recad](https://github.com/gusye1234/recad).
Recommender Systems; Shilling Attack and Defense; Benchmark
Footnote †: The first two authors contributed equally to this research.
Footnote †: Corresponding author.
ensuring that the recommendation model uses as much real data as possible (Krizhevsky et al., 2017). Currently, mainstream defense models can be divided into three types according to whether labels for fake users are available (Beng et al., 2017; Chen et al., 2018; Wang et al., 2018). As new attack methods emerge, defense models are constantly evolving to keep up with these threats. Therefore, staying up-to-date with the latest attack and defense methods is essential for maintaining the security of recommender systems.
With the continuous emergence of new attack and defense algorithms in the field of recommender system security, several research challenges deserve attention. Firstly, while many articles provide details about their experiments, there is often a lack of standardization in dataset processing methods, which can lead to unfair comparisons. Secondly, there is a lack of unified settings for attack experiments: different works usually adopt different experimental settings, making it difficult to compare and evaluate models, so establishing a standardized protocol for similar attack settings is critical. Additionally, many works lack public code, which creates redundant effort and difficulties for subsequent researchers trying to advance the field. To address these challenges, researchers should strive to provide clear and standardized descriptions of dataset processing methods and unified settings for attack experiments, and make their code publicly available to facilitate replication and extension of their work. These efforts can help promote the development of recommender attack and defense research and contribute to more robust and effective recommender system security solutions.
To address the aforementioned challenges, we have initiated a project to develop a unified framework for Recommender Attack and Defense, named RecAD. RecAD aims to improve the reproducibility of existing models and simplify the development process for new recommender attack and defense models. Our benchmarking library design is innovative and effective, revealing several advantages compared to earlier attempts.
* **Unified library framework.** RecAD is implemented using Pytorch1, one of the most popular deep learning frameworks. The library is composed of three core modules, namely the data module, model module, and evaluation module. The library supports a vast array of options provided by each module, and a straightforward configuration ensures that users can promptly complete algorithm reproduction and comparison. The seamless interface integration of the three core modules also means that new algorithms can be incorporated with minimal adjustment, allowing for continuous development and extension within our framework in the future. Footnote 1: [https://pytorch.org/](https://pytorch.org/).
* **Comprehensive benchmark models and datasets.** RecAD provides support not only for replacing individual models but also for integrating a wide range of research issues. From generating fake attack data to defending against existing data and injecting data into victim models, RecAD covers the entire spectrum of shilling attack and defense research. It provides an array of choices for all models and datasets, guaranteeing an ample assortment of combinations for researchers to utilize. This allows them to execute, compare, and assess the entire procedure, relying on lucid instructions and configurations. RecAD is highly adaptable and scalable, with original dataset copies that can be effortlessly transformed into a practical form using the provided preprocessing tools or scripts. Additionally, we are continuously expanding our library with additional datasets and methods to better serve the needs of the community and researchers.
* **Extensive and standard evaluation protocols.** RecAD offers evaluation methods from two perspectives: attack evaluation and defense evaluation. Researchers interested in continuing the offensive direction or those focusing on the defensive direction can use the corresponding evaluation methods. Additionally, it provides standard evaluation techniques for assessing the effectiveness of defense models, encapsulating the entire evaluation process within a singular module enables RecAD to more readily accommodate more evaluation techniques, thus enhancing its adaptability and versatility.
* **Openness and high integration of models.** Openness is crucial for promoting transparency, collaboration, and reproducibility in computer science research. RecAD adopts a highly integrated approach, simplifying the relationships between modules as much as possible and making the corresponding parameters publicly available at each module. This ensures that subsequent researchers who use our framework to add new models only need to make the corresponding modules public, allowing other researchers to quickly and efficiently reproduce the work and ensure the openness of the field in the future.
* **The generalization of attacker's knowledge.** The attacker's knowledge level directly impacts the effectiveness of the attack: a high degree of accessible knowledge about the recommender system allows an attacker to craft adversarial examples that evade the model's defenses. RecAD can adapt white-box attacks to gray-box settings and customize the proportion of data accessible to the attacker in gray-box attacks (Krizhevsky et al., 2017), promoting fair comparison between a wide range of attackers.
## 2. Related Work
### Overview of Shilling Attack and Defense
In the past two decades, researchers have conducted experiments to demonstrate the feasibility of attacking real-world recommender systems, such as YouTube, Google Search, Amazon, and Yelp. These experiments have shown that it is possible to manipulate recommendation systems in practice, resulting in an increasing focus on this field from both the academic community and industry. To promote its development, researchers have typically focused on either shilling attacks or defense mechanisms. With the advancements in deep learning, the field has seen a notable increase in the effectiveness of these methods.
### Shilling attack
The objective of a shilling attack is to interfere with the recommendation strategy of the victim recommender system through a series of measures (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018). The ultimate goal is to enhance the exposure of a specific target item among all users after the recommender model is trained. To achieve this objective, attackers often inject fake users into the historical interaction data, or training matrix, of the recommender system. If these fake users are not detected and removed, they are fed into the recommender model during training, thus disrupting the system's recommendation strategy. As a result, the key challenge of a shilling attack is to construct the interaction behaviors of the fake users. The constructed interaction behaviors can generally be classified into three categories:
* **Heuristic attacks.** Heuristic attacks select items to create fake profiles based on subjective inference and existing knowledge. The goal is to strengthen the connection between fake users and real users while evading defense methods and enhancing the exposure of the target item (Fang et al., 2017; Liu et al., 2018). Existing methods include the Random Attack (Krishnan et al., 2017), Average Attack (Krishnan et al., 2017), Bandwagon Attack, and Segment Attack (Bandwagon Attack et al., 2017). The Random Attack is a low-knowledge method that selects filler items at random, while the Average Attack assigns each filler item its average rating and therefore requires more knowledge; in an Average Attack, the target item is given the highest rating to implement a push attack. The Segment Attack selects items of the same category as the target item and maximizes their ratings, aiming to create a stronger correlation between the fake profiles and the real users interested in that segment, so that the attack is more effective.
* **Gradient attacks.** Gradient attacks relax the discrete interaction space into a continuous one so that the objective function can be optimized by gradients to achieve the optimal attack effect. For instance, Li et al. (Li et al., 2018; Li et al., 2019) developed poisoning attacks optimized for matrix factorization-based recommender systems, while Yang et al. (Yang et al., 2019) developed poisoning attacks optimized for co-visitation rule-based recommender systems. There are also gradient attack methods based on next-item recommendation (Yang et al., 2019) and graphs (Liu et al., 2019). However, all gradient attacks require knowledge of the type of the victim recommender system to carry out model-specific optimization, which limits their generalization. Moreover, to achieve bi-level optimization (Li et al., 2019), adjusting interactions directly according to gradients requires relaxing the discrete interactions into a continuous space; during re-discretization, information is lost, leading to sub-optimal results and a lack of robustness.
* **Neural attacks.** Neural attacks, primarily inspired by deep learning (Li et al., 2019), generate realistic profiles that have a significant impact on recommender systems by optimizing the parameters of neural networks to maximize the objective function. WGAN (Bengio et al., 2017) draws on the Wasserstein distance, which has shown better empirical performance than the original GAN (Krizhevsky et al., 2014), and can make fake user behavior emulate real user behavior. AIA (Zhang et al., 2019) revisited the bi-level optimization problem of the surrogate model and proposed time- and resource-efficient solutions. AUSH (Li et al., 2019) and Legup (Li et al., 2019) mitigate the randomness caused by noise-based generation in common models by building templates from known knowledge, resulting in fake profiles that are harder to detect. When the attacker's knowledge is limited to a black box, researchers use RL attacks (Krishnan et al., 2017; Li et al., 2019; Li et al., 2019), with the attacker adjusting its behavior based on feedback from spy users in the victim model. Neural attacks generally show better performance on real datasets than gradient and heuristic attacks.
In addition to the challenges associated with constructing effective shilling attacks, another emerging issue is the **knowledge of the attacker**(Fang et al., 2017). In today's world, data security and privacy are increasingly important to both users and companies. This makes it increasingly challenging for attackers to obtain the necessary user data to construct effective attack models. As a result, researchers have begun to consider the attacker's knowledge as a key constraint for the attack model. The attacker's knowledge can be classified into three categories: _white box_, _grey box_, and _black box_. In a white box attack, the attacker has complete knowledge of the target recommender model, which includes all the data of the victim model used for training and the network structure and parameters of the victim model. In a grey box attack, the attacker can only access part of the training set of the target model and has no knowledge of the victim model. In a black box attack, only some spy users are allowed as attack feedback.
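As a library-agnostic illustration of how such knowledge constraints can be operationalised, the sketch below sub-samples the victim's training interactions for a grey-box attacker. This is a generic sketch, not RecAD's actual API; the function name and toy data are hypothetical.

```python
import numpy as np

def grey_box_view(interactions, fraction, seed=0):
    """Expose only a fraction of the victim's (user, item, rating) interactions."""
    rng = np.random.default_rng(seed)
    n_visible = int(fraction * len(interactions))
    visible = rng.choice(len(interactions), size=n_visible, replace=False)
    return interactions[visible]

train = np.array([(0, 3, 5.0), (1, 7, 4.0), (2, 3, 1.0), (0, 9, 3.0)])
attacker_data = grey_box_view(train, fraction=0.5)  # 50% grey-box knowledge
print(attacker_data)
```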
In addition to shilling attacks, there are other types of attacks, such as attacks that modify the interaction histories of real users (Yang et al., 2019) or attacks on federated-learning-based recommender models (Zhao et al., 2019; Liu et al., 2019; Li et al., 2019; Yang et al., 2019). The former is not very effective because real recommender platforms adopt multiple privacy protection mechanisms, such as email and mobile phone hardware binding; such methods are therefore easily detected and defended by the platform and are insufficient for a large-scale attack. The latter is still in the theoretical research stage, the proposed models remain basic, and such methods have not yet been deployed by companies. The evaluation criteria for these two attack types therefore still need to be explored by more researchers.
### Defense
A defense model can be viewed as a checkpoint responsible for detecting possible fake users in the data before it is sent to the recommender model. The defense model eliminates fake users to ensure that the recommendation results are not interfered with by attackers to the greatest extent possible. Some defense models attempt to find the law of data distribution from all the data or obtain the probability of the corresponding label through probability methods to predict and classify. Currently, the defense direction can be classified into three categories:
* **Supervised defense models,** which need to be pre-labeled with genuine and fake data. The goal of the model is to learn the relationship between the input and output variables so that it can make predictions on new data. The learning process involves minimizing the difference between the predicted output and the true output for each example in the training data; in other words, the model is trained to approximate the mapping from inputs to outputs. In the recommender defense literature, supervised methods emerged in the initial exploration of this field, such as CoDetector (Han et al., 2017), DegreeSAD (Krishnan et al., 2017), and BayesDetector (Krishnan et al., 2017).
* **Semi-supervised defense models,** as explored in (Krishnan et al., 2017; Krishnan et al., 2017), aim to retain the purpose and accuracy of supervised methods while requiring only a minimal amount of labeled fake data. This is because attackers typically inject only a small number of fake users, leading to an inherent imbalance between genuine and fake training samples that makes fully supervised labeling impractical.
* **Unsupervised defense models,** which have been intensively investigated in recent years, including traditional machine learning models such as probabilistic models (Beng et al., 2016), statistical anomaly detection (Beng et al., 2016), PCA (Krishnan et al., 2017), SVM (Krishnan et al., 2018), and K-means (Krishnan et al., 2018). More recently, network models have been used for detection, such as Graph Embedding (Krishnan et al., 2019), Sequential GANs (Wang et al., 2019), Recurrent Neural Network (Kang et al., 2019), and Dual-input CNN (Krishnan et al., 2019).
In addition to the model-based detection introduced above to realize the defense of the recommender platform, some scholars have also trained the recommender model with adversarial data (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2017; Krishnan et al., 2018) so that the recommender model generalizes better when facing fake data.
### Benchmarking for Recommender Attack and Defense
Despite the recent growth in the field of RS security, different studies have employed different data sets, evaluation methods, and knowledge constraints, resulting in significant fairness issues when comparing different models. This has had a negative impact on the steady development of the field. Although some works have attempted to address these issues in the past, there is still a need for a comprehensive and unified library to solve the current dilemma.
For instance, in AUSH (Krishnan et al., 2017), the authors provided code that integrated multiple attack models, but the workflow was inefficient and required a significant amount of time for subsequent researchers to understand the code structure. Additionally, the code was not friendly to adding new models under the same framework and focused more on the study of attack models. Moreover, it only provided a limited set of datasets and did not include the data processing method, making it difficult to test the model on a public dataset. In SDLib\({}^{2}\), some defense models and attack models were provided, but the attack models were outdated and did not cover the entire process from attack generation to defense detection and injection into the recommender model. Furthermore, the code was written in an obsolete language version. Our framework overcomes the limitations of previous methods by abstracting each component into relatively independent modules, ensuring the unity and extensibility of the model. This allows for better maintenance and development of the framework in the future.
Footnote 2: [https://github.com/Coder-Yu/SDLib](https://github.com/Coder-Yu/SDLib).
Footnote 3: [https://grouplens.org/datasets/movielens/1m/](https://grouplens.org/datasets/movielens/1m/).
## 3. The Library-RecAD
The overall framework of RecAD is illustrated in Figure 1. At the bottom, our library maintains a flat structure for the default hyper-parameters globally, and the core components are built upon it with automated parameter loading (see Section 4). Our library abstracts the core modules at three levels: data, model, and workflow. In the following, we briefly present the designs of these three core modules.
### Data Module
The data module serves as the fundamental part of the entire library, as it provides essential runtime information such as batches and indicators of scale. It takes charge of dataset loading, batch generation, and fake data manipulation.
#### 3.1.1. Dataset Loading
To create an actively-contributed benchmark, it is important to make the addition of new datasets as easy as possible. Therefore, we have designed the data module to keep the required dataset formats simple and flexible. At present, our library only requires the human-readable CSV format with specific column names to load datasets into explicit or implicit interactions. This design decision allows users to easily add their own datasets to the library without having to modify the codebase. Our library already supports multiple datasets (as shown in Table 1), and we also provide auxiliary functions to convert datasets from other well-known recommender frameworks, such as RecBole (Wang et al., 2019). This provides further flexibility for users to utilize the datasets that they are familiar with.

\begin{table}
\begin{tabular}{l|c c c c} \hline
Dataset & \#Users & \#Items & \#Interactions & Density \\ \hline \hline
MovieLens-1m\({}^{*}\) & 5,950 & 3,702 & 567,533 & 0.257\% \\
Yelp\({}^{*}\) & 54,632 & 34,474 & 1,334,942 & 0.070\% \\
Amazon-Game\({}^{*}\) & 3,179 & 5,600 & 38,596 & 0.216\% \\
Book-Crossing & 105,284 & 340,557 & 1,149,780 & 0.003\% \\
Last.FM & 1,892 & 17,632 & 92,834 & 0.278\% \\
Epinions & 116,260 & 41,269 & 188,478 & 0.004\% \\
Gowalla & 107,092 & 1,280,969 & 6,442,892 & 0.005\% \\ \hline
\multicolumn{5}{l}{\({}^{*}\) denotes datasets used in the experiments; only high-frequency users} \\
\multicolumn{5}{l}{and items (at least 10 interactions) are kept.} \\
\end{tabular}
\end{table}
Table 1. Collected data in our library.

Figure 1. The overall framework of RecAD.
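To make the expected input concrete, the following is a minimal loading sketch; the CSV column names (_user_id_, _item_id_, _rating_) and the helper name are assumptions for illustration, not RecAD's actual required schema.

```python
# Minimal sketch of CSV dataset loading; column names are assumptions.
import pandas as pd

def load_interactions(path, implicit=True):
    df = pd.read_csv(path)
    if implicit:
        # Binarize ratings: any observed interaction becomes a positive signal.
        df["rating"] = (df["rating"] > 0).astype(int)
    return df

# interactions = load_interactions("ml-1m.csv")
```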
#### 3.1.2. Batch Generation
Our library prioritizes seamless integration between datasets, models, and workflows, which presents challenges for batch generation. To address this, we design a flexible and generic interface (_generate_batch_). The interface allows the caller to provide runtime configuration parameters (_e.g._, pairwise sampling; binarizing the ratings) and dispatches itself to the corresponding behavior. This design reduces the workloads on developers who are attempting to adapt their data and allows them to focus on providing as much runtime information as possible.
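A minimal sketch of how such a dispatching interface could look is given below; the signature and keyword arguments are illustrative assumptions in the spirit of _generate_batch_, not the actual RecAD implementation.

```python
# Sketch of a generic, dispatching batch interface; assumes interactions
# are an integer array of (user, item, rating) rows.
import numpy as np

def generate_batch(interactions, batch_size=256, pairwise=False, binarize=False):
    users, items = interactions[:, 0], interactions[:, 1]
    ratings = interactions[:, 2].astype(float)
    if binarize:
        ratings = (ratings > 0).astype(float)   # implicit feedback
    order = np.random.permutation(len(users))
    for start in range(0, len(users), batch_size):
        idx = order[start:start + batch_size]
        if pairwise:
            # Simplified pairwise sampling: one random "negative" per positive
            # (collisions with true positives are ignored in this sketch).
            negatives = np.random.randint(items.max() + 1, size=len(idx))
            yield users[idx], items[idx], negatives
        else:
            yield users[idx], items[idx], ratings[idx]
```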
#### 3.1.3. Fake Data Manipulation
In our library, we recognize the importance of addressing the manipulation of fake data during runtime. Specifically, we must account for both the injection of fake data from attacker models and the filtering of fake data by defense models. We address this challenge with unified interfaces named _inject_data_ and _filter_data_, respectively. These interfaces are called by the attacker and defense models to manipulate the dataset.
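The following sketch illustrates the spirit of these two interfaces, assuming interactions are stored as a (user, item, rating) integer array; it is not the actual RecAD code.

```python
# Sketch of the unified data-manipulation interfaces.
import numpy as np

def inject_data(real, fake):
    """Called on behalf of an attacker model: append fake interactions."""
    return np.vstack([real, fake])

def filter_data(data, fake_user_ids):
    """Called on behalf of a defense model: drop interactions of flagged users."""
    keep = np.array([uid not in fake_user_ids for uid in data[:, 0]])
    return data[keep]
```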
### Model Module
The model implementation is the most versatile part of the library, and we offer maximum flexibility to accommodate different approaches. To account for the similarities and differences between models, we introduce a general base model and its successors: the victim, attacker, and defense models. Figure 3 presents the models that have been implemented.
#### 3.2.1. Base Model
We don't provide framework-level abstractions for model optimization. Instead, the models are responsible for their own single-epoch training and evaluation, which can be implemented through a set of auxiliary functions provided by the library. This design choice is aimed at reducing the complexity of the framework and enabling the integration of a wide range of models, without requiring modification of the framework-level abstractions for each individual model. To facilitate this, we use unified interfaces (_train_step_, _test_step_) that enable the callers to initiate the training or evaluation process of the models.
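A minimal sketch of this base contract is shown below; only the interface names (_from_config_, _train_step_, _test_step_) come from the text, while the class layout is an assumption.

```python
# Minimal sketch of the base-model contract.
from abc import ABC, abstractmethod

class BaseModel(ABC):
    @classmethod
    def from_config(cls, **kwargs):
        """Instantiate a subclass from defaults updated by keyword arguments."""
        return cls(**kwargs)

    @abstractmethod
    def train_step(self, batch):
        """Run one epoch of training on the given batch."""

    @abstractmethod
    def test_step(self, batch):
        """Evaluate on the given batch and return metrics."""
```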
#### 3.2.2. Victim Model
Victim models are recommender models, and the library provides a unified interface for training and testing them. This makes it easy to integrate any victim model into the library without the need for modification of the core framework.
#### 3.2.3. Attacker Model
In our library, the training of the attacker model shares the same interface with victim models (_i.e. train_step_). After training, the attacker model generates the fake data through a unified interface (_generate_fake_) and then forwards the contaminated data to the next module. Since the full set of the dataset is not necessarily exposed (_e.g._ Gray box attacking in Figure 2), _generate_fake_ should explicitly receive the target dataset as a parameter.
#### 3.2.4. Defense Model
The defense model is trained on the attacked data through the same training interface. The objective of the model is to output a filtered dataset with fake data removed. Our library summarizes a unified interface _generate_filter_ to wrap the implementation details of each defense model.
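The sketch below illustrates the attacker and defense specializations; the toy bodies (random profile generation, a naive max-rating filter) are placeholder assumptions and do not correspond to any concrete RecAD model.

```python
# Illustrative sketch only: a toy attacker and a toy defense implementing
# generate_fake / generate_filter over (user, item, rating) rows.
import numpy as np

class ToyAttacker:
    def __init__(self, n_fake_users=50, n_items=1000, profile_size=20):
        self.n_fake_users, self.n_items, self.profile_size = n_fake_users, n_items, profile_size

    def generate_fake(self, exposed_data):
        """Receives only the exposed portion of the dataset (attack knowledge)."""
        base_uid = int(exposed_data[:, 0].max()) + 1
        rows = []
        for u in range(self.n_fake_users):
            for item in np.random.choice(self.n_items, self.profile_size, replace=False):
                rows.append((base_uid + u, item, 5))   # push the maximum rating
        return np.array(rows)

class ToyDefense:
    def generate_filter(self, data):
        """Drop users whose entire profile sits at the maximum rating."""
        flagged = {uid for uid in np.unique(data[:, 0])
                   if data[data[:, 0] == uid][:, 2].min() == 5}
        keep = np.array([uid not in flagged for uid in data[:, 0]])
        return data[keep]
```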
### Workflow Module
This module is the corresponding abstraction of different attack knowledge (Figure 2). The workflow module holds the instantiations of the data module and model module, controlling the exposure of data and the interaction of modules. It also contains the boilerplate code for the training loop and evaluation callbacks (_e.g._, early stop; report after training).
#### 3.3.1. Data Exposure
The data exposure level for different models varies depending on the attack knowledge settings and the running stages (as shown in Figure 2). For instance, the attacker model may be exposed separately to full, partial, or zero training data. Similarly, the victim model may be trained on clean data initially and later re-trained on the contaminated data during the attack process. The workflow module in our library is responsible for constructing the appropriate data flow according to the attack knowledge and ensuring that no accidental data leakage occurs. This way, our library provides a flexible and secure environment for implementing and testing various attack and defense models under different settings.

Figure 3. The models that are supported by RecAD.

Figure 2. Component workflow under different attack knowledge.
#### 3.3.2. Module Interaction
The interactions between modules vary between attacks. In a white-box attack, the model has direct access to all the training data, whereas, in a black-box attack, the model receives feedback from the victim without any access to the training data. For workflows (Zhou et al., 2018; Wang et al., 2019) where no defense model is involved, the fake data generated by the attacker model flows directly into the victim's training without filtering. The workflow module arranges the dependencies of modules and prevents any inappropriate interactions between them.
#### 3.3.3. Training & Evaluation
In order to better control data exposure and module interaction, we give the workflow module the responsibility for launching the training and evaluation of the contained models. The workflow module contains the boilerplate code that wraps the training loop outside the models' _train_step_. Also, we design a hooking mechanism to provide flexibility for models to set up their evaluation callbacks. This allows models to define their own evaluation metrics and evaluate performance at different stages of the training process.
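A possible shape for such a hooking mechanism is sketched below; the event names and method signatures are assumptions for illustration, not the actual RecAD API.

```python
# Sketch of a hooking mechanism for evaluation callbacks.
class WorkflowHooks:
    def __init__(self):
        self._hooks = {}

    def register(self, event, fn):
        self._hooks.setdefault(event, []).append(fn)

    def fire(self, event, **context):
        for fn in self._hooks.get(event, []):
            fn(**context)

# A model can register its own metric at the stage it cares about, e.g.:
# hooks.register("epoch_end", lambda model, epoch: print(epoch))
```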
## 4. Usage Guideline of the Library
In the following two parts, we first show the typical usage to instantiate the existing modules of our library, then detail the steps to extend our library with a new implementation.
### Module Instantiations
Attacking a recommender system often involves using multiple datasets and machine learning models, which makes the training and testing process more complex than for regular recommender systems. Our library simplifies this process by exposing the necessary modules to users and providing a unified interface called _from_config_ for instantiating them (Figure 4). Two kinds of parameters may be needed by _from_config_: hyper-parameters and runtime parameters.
#### 4.1.1. Hyper-parameters
Our library employs a hash table to store all default hyper-parameters of the modules together and offers global access across programs. While instantiating, our library automatically loads default parameters on the fly from the hash table and updates them with the keyword arguments passed by the user. The decoupling of default hyper-parameters from the actual module implementation facilitates a quick overview of configurable parameters for the user.
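The following sketch illustrates the idea of a single global table of defaults overridden by user keyword arguments; the module names and parameter keys are invented for illustration.

```python
# Sketch of globally stored default hyper-parameters with override-on-load.
DEFAULTS = {
    "mf_victim": {"embedding_dim": 64, "lr": 1e-3, "epochs": 50},
    "toy_attacker": {"n_fake_users": 50, "profile_size": 20},
}

def resolve_params(name, **overrides):
    params = dict(DEFAULTS[name])                  # load defaults on the fly
    unknown = set(overrides) - set(params)
    if unknown:                                    # argument sanity checking
        raise KeyError(f"unknown hyper-parameters: {unknown}")
    params.update(overrides)
    return params

# resolve_params("mf_victim", lr=5e-4)
# -> {'embedding_dim': 64, 'lr': 0.0005, 'epochs': 50}
```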
#### 4.1.2. Runtime Parameters
Runtime parameters are parameters that are not settled before runtime. For example, the model module in Figure 4 normally needs the numbers of users and items to create the embeddings when instantiating. Due to data injection by the attacker model or data filtering by the defense model, the actual numbers of users and items are not known before runtime. But the dependency between the model and data module is clear, and it is burdensome to ask the user to manually pass the required instances in the program. Hence, we implement lazy instantiation (Figure 5) to make runtime parameters transparent at the user level. The module won't actually instantiate at the time the user calls _from_config_ if the needed runtime parameters are not passed. Instead, the workflow will sort out the dependencies between modules and automatically fill in the required runtime parameters to complete the instantiation. This decouples the instantiation of modules from the availability of runtime parameters, making the library more flexible and adaptable to different scenarios.
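A minimal sketch of lazy instantiation is shown below, assuming hypothetical _Lazy_ and _MFVictim_ helpers; the actual RecAD mechanics may differ.

```python
# Sketch of lazy instantiation: instantiation is deferred until the
# workflow can supply the runtime parameters.
class Lazy:
    def __init__(self, cls, **known):
        self.cls, self.known = cls, known

    def materialize(self, **runtime):
        # The workflow resolves dependencies and supplies runtime parameters.
        return self.cls(**self.known, **runtime)

class MFVictim:
    def __init__(self, embedding_dim, n_users, n_items):
        self.shape = (n_users + n_items, embedding_dim)

pending = Lazy(MFVictim, embedding_dim=64)               # user-level call
model = pending.materialize(n_users=100, n_items=500)    # done by the workflow
```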
### Module Extension
In our library, we provide base classes for all the core modules: _BaseData_, _BaseModel_, and _BaseWorkflow_. We require that any extended module be a subclass of the corresponding base class so that the necessary abstract interfaces can be called properly.
#### 4.2.1. General Module
Two abstract methods must be implemented for all the modules:
* _from_config_: users pass arguments to this method to instantiate a new module. Our library has already implemented argument sanity checking and the overwriting of default hyper-parameters in the parent class. A new module should assign its default hyper-parameters in this method.
* _info_describe_: modules interact through this method. The method should return a hash table with the named variables that this module can expose publicly.

Figure 4. A code snippet of module instantiations of RecAD.
#### 4.2.2. Core Modules
The core modules have specialized interfaces that need to be implemented in addition. We have discussed most of the below interfaces in Section 3.
* **Data Module**: The most important interface for this module is _generate_batch_. The interface should take the caller's keyword arguments as the input, and return the correct batches of the dataset for later training or testing.
* **Model Module**: Right now, three kinds of models are considered: victim model, attacker model, and defense model. They are all required to be implemented with two interfaces: _train_step_ and _test_step_ to perform one-epoch training or testing. Besides, for the attacker model and defense model, _generate_fake_ and _generate_filter_ need to be implemented, respectively.
* **Workflow Module**: An interface named _execute_ should be implemented for users to explicitly launch the whole workflow. Inside the interface, the implementor should correctly instantiate and arrange the modules; a minimal end-to-end sketch follows this list.
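As referenced above, the following is a minimal end-to-end sketch of how an _execute_ implementation could arrange the modules; the steps follow the workflow described in Section 3.3, while the signatures and bodies are assumptions.

```python
# End-to-end sketch of an execute() flow; module objects are assumed to
# implement the interfaces named in Section 3 (train_step, test_step,
# generate_fake, generate_filter).
import numpy as np

def execute(data, victim, attacker, defense=None, epochs=10):
    for _ in range(epochs):
        victim.train_step(data)                       # 1. train on clean data
    fake = attacker.generate_fake(data)               # 2. attacker crafts fake users
    poisoned = np.vstack([data, fake])                # 3. inject contaminated data
    if defense is not None:
        poisoned = defense.generate_filter(poisoned)  # 4. optional filtering
    for _ in range(epochs):
        victim.train_step(poisoned)                   # 5. re-train under attack
    return victim.test_step(poisoned)                 # 6. report final metrics
```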
## 5. Experiments
This section showcases the application of RecAD by implementing various representative attackers and detection models. Through a comparison of the outcomes produced by these models, valuable insights can be derived.
### Comparison of Attackers
We illustrate the performance of all attackers on three recommendation datasets in Table 2. The goal of all attackers is to make the target items obtain higher rankings, _i.e._, larger HR@k.
The two gray box methods, AIA and AUSH, exhibit the best performances across all metrics and datasets, which attests to the efficacy of neural network-based approaches. In contrast, the performance of Legup is less consistent. For instance, Legup displays optimal performance with respect to HR@10 in Amazon, whereas it experiences a decrease in rank, falling to the middle-lower range, with respect to HR@20, HR@50, and HR@100. Additionally, in Yelp, Legup performs inadequately across all metrics, and its weak robustness is further illustrated in Figure 6. The Legup model has been observed to exhibit unstable performance, which can be attributed to its training methodology that involves the simultaneous use of three distinct models. This approach has resulted in the same training instability issues that are commonly associated with GANs. Specifically, the use of multiple models in training can lead to a lack of consistency in the learned representations across the different models. This, in turn, can create conflicts in the optimization process and cause the model's performance to become highly dependent on the initialization and training procedures.
The heuristic method RandomAttacker exhibits the poorest performance across all metrics and datasets, even when compared to a situation in which no attacker is utilized. In other words, RandomAttacker not only fails to enhance the ranking of target items but also results in a lowered ranking for those items. Due to the highly randomized nature of heuristic attacks, the resulting attack target can be skewed by the random effects, resulting in a greater impact of inserting fake users on vulnerable users.
In addition, other methods, including SegmentAttack, BandwagonAttack, AverageAttack, and WGAN, also occasionally result in a poorer ranking for the targeted items. Consequently, there remains substantial room for the development of effective attacker methods in recommender systems. Currently, the existing methods of attack are characterized by significant limitations, such as their capacity to target only specific structures of recommendation algorithms or their limited ability to transfer attacks to models in other domains.
### Comparison of Defenders
We choose three supervised methods (DegreeSAD, CoDetector, and BayesDetector), one semi-supervised method (SemiSAD), and two unsupervised methods (PCASelectUser and FAP) to act as defenders, tasked with protecting the victim model from five attacker models. The goal of these experiments is to evaluate several defense methods using our framework process.
\begin{table}
\begin{tabular}{l|l|cccc|cccc|cccc} \hline \hline
 & & \multicolumn{4}{c|}{ML-1M} & \multicolumn{4}{c|}{Yelp} & \multicolumn{4}{c}{Amazon} \\
Attack Method & Attack Knowledge & HR@10 & HR@20 & HR@50 & HR@100 & HR@10 & HR@20 & HR@50 & HR@100 & HR@10 & HR@20 & HR@50 & HR@100 \\ \hline
No Attacker & None & 0.0050 & 0.0109 & 0.0297 & 0.0656 & 0.0114 & 0.0190 & 0.0375 & 0.0630 & 0.0000 & 0.0000 & 0.0003 & 0.0016 \\ \hline
RandomAttacker & White Box & 0.0050 & 0.0082 & 0.0228 & 0.0457 & 0.0078 & 0.0112 & 0.0214 & 0.0362 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\
SegmentAttack & White Box & 0.0069 & 0.0123 & 0.0288 & 0.0630 & 0.0057 & 0.0083 & 0.0153 & 0.0258 & 0.0397 & 0.0520 & 0.0675 & 0.0832 \\
BandwagonAttack & White Box & 0.0059 & 0.0119 & 0.0267 & 0.0592 & 0.0066 & 0.0114 & 0.0257 & 0.0431 & 0.0050 & 0.0205 & 0.0523 & 0.0854 \\
AverageAttack & White Box & 0.0016 & 0.0044 & 0.0167 & 0.0400 & 0.0053 & 0.0090 & 0.0169 & 0.0284 & 0.0085 & 0.0170 & 0.0463 & 0.0914 \\
WGAN & White Box & 0.0023 & 0.0060 & 0.0149 & 0.0340 & 0.0143 & 0.0177 & 0.0254 & 0.0344 & 0.1646 & 0.1788 & 0.2043 & 0.2226 \\
AIA & Gray Box 20\% data & 0.0078 & 0.0180 & 0.0459 & 0.1007 & 0.0187 & 0.0273 & 0.0465 & 0.0686 & 0.0441 & 0.0873 & 0.4278 & 0.4839 \\
AUSH & Gray Box 20\% data & 0.0071 & 0.0151 & 0.0434 & 0.0945 & 0.0135 & 0.0217 & 0.0393 & 0.0617 & 0.0583 & 0.1170 & 0.4392 & 0.4805 \\
Legup & Gray Box 20\% data & 0.0094 & 0.0130 & 0.0283 & 0.0471 & 0.0068 & 0.0099 & 0.0162 & 0.0242 & 0.1847 & 0.2015 & 0.2286 & 0.2566 \\ \hline \hline
\end{tabular}
\end{table}
Table 2. Overall attack performance on three recommendation datasets.
We present three evaluation metrics for the predicted label results, namely F1-score, Recall, and Precision. To evaluate performance, we split the data into two categories: True Data and Fake Data. True Data refers to the original real data used to train the recommender system, while Fake Data refers to the fake data generated by attackers. We provide the three evaluation metrics for each category separately, instead of treating them as a whole, because an effective detector should not only successfully identify fake data but also avoid misclassifying real data. Hence, we hope that the values of the three metrics for both types of data are as high as possible, indicating that the detector models defend well along both dimensions.
Based on the data presented in Table 3, it can be observed that although the three supervised methods may not exhibit the highest performance, they demonstrate consistent performance against various attacks. Conversely, the semi-supervised method is not effective in defending against attacks due to the requirement of more data for training and evaluation, which is restricted by the attack budget in our approach. Consequently, the semi-supervised method misclassifies both real and fake data. Among the unsupervised methods, FAP shows promising results for certain attacks and outperforms other defense methods, but still displays certain limitations in some metrics.
### Robustness of Attackers Encountering Detection
For illustration, we visualize the performance comparison before and after detection by PCASelectUser. Due to space limitations, we only present the results in the ML-1M dataset.
From Figure 6, we can observe that the performance of all attack methods varies after detection. Among the heuristic methods, Bandwagon exhibits a notable difference in performance before and after detection: after detection, there is a marked decrease in all four HR metrics. The potential reason is that Bandwagon selects popular items as the fake users' preferences; this pattern is relatively easy to identify, making the generated data easier to detect. Among the neural methods, Legup demonstrates a similar phenomenon with an even more significant performance difference before and after detection. On the HR@10 metric, Legup outperforms all other attacker methods before detection; however, it has the worst performance after detection. On HR@20, HR@50, and HR@100, it remains the worst after detection. One possible reason for this is that Legup's optimization objective is more complex and it includes a greater number of modules compared to other attacker methods. Both of these attackers show poor robustness before and after detection, while the other methods exhibit relatively high robustness after detection.

\begin{table}
\begin{tabular}{c|c|ccc|ccc|ccc|ccc|ccc} \hline \hline
 & & \multicolumn{3}{c|}{AIA} & \multicolumn{3}{c|}{Legup} & \multicolumn{3}{c|}{WGAN} & \multicolumn{3}{c|}{RandomAttacker} & \multicolumn{3}{c}{SegmentAttacker} \\
 & & \multicolumn{3}{c|}{Gray Box 20\% data} & \multicolumn{3}{c|}{Gray Box 20\% data} & \multicolumn{3}{c|}{Gray Box 20\% data} & \multicolumn{3}{c|}{White Box} & \multicolumn{3}{c}{White Box} \\
Detect Method & Data Label & Precision & Recall & F1-score & Precision & Recall & F1-score & Precision & Recall & F1-score & Precision & Recall & F1-score & Precision & Recall & F1-score \\ \hline
DegreeSAD & True Data & 0.782 & 0.845 & 0.812 & 0.782 & 0.841 & 0.810 & 0.782 & 0.840 & 0.810 & 0.780 & 0.843 & 0.810 & 0.781 & 0.840 & 0.810 \\
 & Fake Data & 0.720 & 0.630 & 0.672 & 0.717 & 0.632 & 0.672 & 0.716 & 0.632 & 0.671 & 0.718 & 0.627 & 0.669 & 0.715 & 0.631 & 0.670 \\
CoDetector & True Data & 0.898 & 0.861 & 0.879 & 0.887 & 0.885 & 0.886 & 0.897 & 0.873 & 0.885 & 0.908 & 0.877 & 0.892 & 0.904 & 0.880 & 0.892 \\
 & Fake Data & 0.796 & 0.846 & 0.820 & 0.840 & 0.843 & 0.841 & 0.811 & 0.844 & 0.827 & 0.809 & 0.854 & 0.831 & 0.823 & 0.857 & 0.840 \\
BayesDetector & True Data & 0.943 & 0.946 & 0.945 & 0.945 & 0.945 & 0.936 & 0.943 & 0.940 & 0.944 & 0.936 & 0.940 & 0.938 & 0.943 & 0.940 \\
 & Fake Data & 0.915 & 0.910 & 0.912 & 0.914 & 0.913 & 0.913 & 0.909 & 0.899 & 0.904 & 0.896 & 0.908 & 0.902 & 0.909 & 0.902 & 0.905 \\
SemiSAD & True Data & 0.895 & 1.000 & 0.945 & 0.911 & 1.000 & 0.954 & 0.921 & 1.000 & 0.959 & 0.903 & 1.000 & 0.949 & 0.892 & 1.000 & 0.943 \\
 & Fake Data & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\
PCASelectUser & True Data & 0.953 & 0.985 & 0.969 & 0.954 & 0.986 & 0.970 & 0.954 & 0.986 & 0.970 & 0.952 & 0.983 & 0.967 & 0.952 & 0.983 & 0.967 \\
 & Fake Data & 0.100 & 0.034 & 0.050 & 0.170 & 0.057 & 0.086 & 0.170 & 0.057 & 0.086 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\
FAP & True Data & 0.963 & 0.992 & 0.977 & 0.970 & 1.000 & 0.985 & 0.970 & 1.000 & 0.985 & 0.872 & 0.296 & 0.442 & 0.953 & 0.658 & 0.728 \\
 & Fake Data & 0.526 & 0.184 & 0.272 & 1.000 & 0.325 & 0.491 & 1.000 & 0.343 & 0.511 & 0.920 & 0.647 & 0.967 & 0.968 & 0.969 & 0.961 \\ \hline \hline
\end{tabular}
\end{table}
Table 3. Defense performance against five representative shilling attackers.

Figure 6. Performances of attackers before and after detection by PCASelectUser in the ML-1M dataset.
Counterintuitively, the results of AverageAttack and WGAN demonstrate an inverse effect: the targeted items rank higher after detection, _i.e._, the detection process helps the attackers achieve their purposes. There are two potential explanations. The first possibility is that this method generates users that are virtually indistinguishable from real ones, rendering detection modules theoretically unable to identify them. The second explanation is that the mechanism by which the method generates fake users was not taken into account by the detection module, allowing it to evade detection by this detection method.
### Comparison of Defense Evaluations
In light of the results presented in Figure 6 and Table 3, we observe that relying solely on either injection-based or label-prediction evaluation for assessing the performance of defense models may not be adequate. For instance, in the case of the WGAN method, the injection-based evaluation indicates that the exposure rate of the target item is even higher after defense than under the direct attack, while the label-prediction evaluation suggests that the current defense approach is effective at separating true from fake data. Thus, we urge future researchers in this field to use both evaluation methods to ensure the practical effectiveness of defense models. Our framework supports both evaluation processes, eliminating the need for researchers to repeat work.
### Effect of Knowledge of the Attackers
To investigate the impact of the scale of the models' knowledge, _i.e._, the quantity of training data available to attacker models, we visualize the performance of two neural models (AIA and Legup) as the amount of training data varies. The results are shown in Figure 7. We observe that the performance of both models increases as the amount of data increases, albeit with a slight fluctuation for AIA at the 50% point. This suggests providing the attack model with more knowledge. However, as we explore novel attack algorithms, we must also place constraints on the knowledge assumed by these algorithms, since there is a trade-off between the scale of a model's knowledge and the practicality of the attack. Striking the right balance between these two factors is key to maximizing the potential impact of new attack algorithms while keeping their knowledge requirements realistic; doing so requires a comprehensive understanding of the interplay between knowledge scale and attack effectiveness, and a willingness to explore new frontiers of research and development.
## 6. Conclusion and Future Work
Recommender systems have gained significant attention in recent years. However, the effectiveness and security of these systems have also become major concerns, as attackers may attempt to manipulate the recommendations for their own benefit. To promote research in this important field, we introduce RecAD, a new recommender library that provides a variety of benchmark datasets, evaluation settings, attackers, and defense models. By using RecAD, researchers can simulate a range of real-world scenarios and evaluate the robustness of different recommender systems against a variety of potential attacks.
In addition to advancing attacks and defenses on traditional models, we also acknowledge the transformative impact of large language models in the field of recommender systems (Bang et al., 2015; Liu et al., 2016; Wang et al., 2017). Despite their powerful generative capabilities, these models are also susceptible to various attacks (Zhu et al., 2017). Therefore, our future research will also focus on the development of attack and defense mechanisms specifically tailored to large language model-based recommendations. Toward this aim, we call upon researchers to collaborate and establish recommender system attack and defense methods that better align with the evolving needs of the field, enhancing the security and robustness of these models.
|
2310.18316 | Cognitive modeling and learning with sparse binary hypervectors | Following the general theoretical framework of VSA (Vector Symbolic
Architecture), a cognitive model with the use of sparse binary hypervectors is
proposed. In addition, learning algorithms are introduced to bootstrap the
model from incoming data stream, with much improved transparency and
efficiency. Mimicking human cognitive process, the training can be performed
online while inference is in session. Word-level embedding is re-visited with
such hypervectors, and further applications in the field of NLP (Natural
Language Processing) are explored. | Zhonghao Yang | 2023-09-16T01:58:51Z | http://arxiv.org/abs/2310.18316v1 | # Cognitive Modeling and Learning
###### Abstract
Following the general theoretical framework of VSA (Vector Symbolic Architecture), a cognitive model based on sparse binary hypervectors is proposed. In addition, learning algorithms are introduced to learn / bootstrap the model from an incoming data stream, with much improved transparency and efficiency. Mimicking the human cognitive process, the model's training can be performed online while inference is in session. Word-level embedding is re-visited with such hypervectors, and further applications in the field of NLP (Natural Language Processing) are explored.
Artificial Intelligence \(\cdot\) Vector Symbolic Architecture \(\cdot\) VSA \(\cdot\) Hyper-dimensional Computing
## 1 Introduction
Deep neural networks are AI models that have been used extensively in recent years with growing popularity. However, they suffer from several notable deficiencies rooted in their original design, which academia and industry have spent millions of dollars trying to address, with varying degrees of success.
**Model transparency / explainability**: deep neural networks are generally considered black boxes whose inner workings are hard to discern, let alone adjust when mismatches occur. There do exist, primarily in academia, efforts to render such models more transparent and their decision-making more equitable; however, there is no well-accepted solution so far for general deep neural networks.
Depending on the real-world applications in question, especially when the consequence of failure is trivial, the lack of transparency can be a non-issue. For example, nobody will throw away his phone if the image search on the phone missed one photo of a bunny. However, for mission-critical applications such as autonomous driving, industrial assembly line, power grid / energy infrastructure controller, we need to clearly understand how the model operates, and quantify possible failure modes, if any.
The argument also applies to applications in various business domains. For example, an AI model taking actions against a social media post (or a potential malicious app) will need a clear justification on why the decision was made, for upcoming review or potential legal challenge.
Note that the expected justifications can either be part of the model output, in a similar fashion to the _train of thought_ in the current ChatGPT service\({}^{1}\), whereas the underlying model remains opaque, or be produced directly from the internal state of the model. While the former approach can certainly help, this publication will focus on the latter.
Footnote 1: [https://chat.openai.com](https://chat.openai.com)
Modern neural networks are often plagued by hallucination, in which the model produces unexpected output (inappropriate, plainly absurd, or, even worse, harmful), a manifestation that we have insufficient understanding of, and control over, the underlying model. Again, the industry is struggling to find an effective solution, as hallucination seems to be an inherent drawback of deep neural networks.
**Cost**: this generally refers to the computation cost, loosely related to the raw power consumption, the CPUs / GPUs needed (especially during training), and the labor cost.
A typical training run for ChatGPT-class models costs millions of dollars, several months of continuous GPU hours, and many highly-trained engineers following a sophisticated process, making modern deep neural networks a monopoly / privilege of companies with deep pockets. In contrast, this publication aims at solutions that can reduce the overall cost significantly, even by orders of magnitude, making AI democratization a viable effort.
**Efficiency**: this is closely related to cost. If the training and deployment of AI models are expensive, everything has to be optimized around cost considerations. Reduced cost can lead to greater efficiency in terms of organizational processes.
Furthermore, on the representation level, state-of-the-art deep neural networks contain millions or even billions of floating-point weights. Moving this huge number of weights between external storage, memory, CPUs and GPUs places a great burden on the underlying infrastructure, such as high-bandwidth and low-latency interconnects. Highly efficient solutions can, in principle, dramatically reduce the requirements for supporting systems and infrastructure.
On the algorithm level, modern deep neural networks typically perform training via back propagation, which gradually tweaks millions of weights by repeatedly iterating over training sets. With Internet-scale training sets, updating such a huge set of weights can be extremely bulky and laborious, even with the latest GPUs and large on-board memory and caches. Any improvement in the learning algorithm can have a huge impact on the overall efficiency of AI models.
Mobile devices (and Internet-of-Things devices) have limited capacity in terms of storage and computing, and are severely power-constrained, which makes the deployment of deep neural networks quite challenging. Again, highly efficient AI models (and learning algorithms) that require significantly less storage and computation are highly desirable for deployment on these devices.
Initially inspired by the human brain and its unique cognitive capabilities, VSA (Vector Symbolic Architecture, Gayler [2003]) or HDC (Hyper-Dimensional Computing) Kanerva [2009] is another school of AI models. While it shares a few traits with connectionist models such as neural networks, it is better (and more properly) characterized as a symbolic approach in the heated debate between symbolic and connectionist AI. The functioning units (hypervectors in VSA models) collectively represent unique meanings (or symbols), and it is the algebraic operations between these hypervectors that model the interactions of real-world entities.
Unlike the well-known mantra of the "curse of dimensionality" in the machine learning community, where high-dimensional spaces are typically frowned upon, VSA takes advantage of the high dimensionality of hypervectors, dubbed the "blessing of dimensionality". The high-dimensional space offers quite unique mathematical and topological properties that enabled and empowered the breakthrough outlined here. Kanerva [2009] is highly recommended for interested readers.
VSA has witnessed growing popularity in academia and some industrial settings in recent years. However, its usage is mostly exploratory, due to a few critical questions left unaddressed:
* On the theoretical level, what construction of the model offers the most benefit? What unique and appealing capability does this particular embodiment provide?
* What do the overall learning algorithms (or learners) look like?
* Compared with deep neural networks, what is the use case that demonstrates the clearest benefit and advantage?
## 2 Cognitive Principles
### Sparse binary hypervectors
Unlike the fore-mentioned VSA, which is mostly a theoretical framework accommodating a wide range of configuration choices, this publication focuses on the use of sparse binary hypervectors.
Define **sparsity** \(s=M/N\) as the fraction of the ON-bit count \(M\) over the dimension \(N\) of a binary vector.
Being sparse implies the count of ON bits is relatively small, thus looking sparse among \(N\) dimensions. Sparse binary hypervectors are merely binary vectors with large dimension \(N\) and low sparsity \(s\), typically \(N\gg 1000\) and \(s\ll 0.01\).
In the following context, \(N=2^{16}=65536\) and \(s=1/256\) (and thus \(M=256\)) are used. All such hypervectors form the space \(\mathbb{C}\). Obviously the general principles about hypervectors apply to other configurations of \(N\) and \(s\).
FIG.1 offers an intuitive glimpse of two random hypervectors in \(\mathbb{C}\), one by red dots and another by blue. Each set (of a single hypervector) of \(M=256\) ON bits is spread among the \(16\times 16\) cells, where each cell contains \(16\times 16\) bit positions, totalling \(N=65536\). If you examine closely, there is a single fat dot in black, marking the single (and very unlikely) overlap between them. For random hypervectors, the similarity is minimal and any overlap is extremely unlikely.
Within this space of \(\mathbb{C}\), **inner product**\(\langle A,B\rangle\) is introduced. For any \(A\), \(\|A\|^{2}=\langle A,A\rangle=M\): all vectors have the same length of \(\sqrt{M}\).
The _alignment_ between their vector forms in \(\mathbb{C}\) indicates the correlation, that is, how similar two hypervectors are. For any \(A\) and \(B\), their cosine angle (between the corresponding vectors \(\vec{OA}\) and \(\vec{OB}\)) is simply \(\frac{\langle A,B\rangle}{M}\).
### Overlap and semantic similarity
Define **overlap** for two hypervectors \(A\) and \(B\) (from \(\mathbb{C}\)) as the count of ON bits where \(A_{i}=B_{i}=1\), for \(i\in[0,N)\). This is equivalent to the inner product \(\langle A,B\rangle\), also bitwise AND operations for binary hypervectors.
\[O(A,B)=\sum^{N}A_{i}B_{i}=\langle A,B\rangle \tag{1}\]
Define **Hamming distance**\(H(A,B)\) for two hypervectors \(A\) and \(B\) (from \(\mathbb{C}\)), which is equivalent to bitwise XOR operations for binary hypervectors.
Figure 1: ON bits for two random hypervectors from \(\mathbb{C}\)
With identical hypervectors having an overlap of \(Ns=M\) as one extreme, and a pair of completely random hypervectors having an overlap of \(Ns^{2}=Ms\) as the other, the ratio of \(1/s\) can be thought of as a signal-to-noise ratio (SNR) for those readers with an electrical engineering background: the smaller \(s\) becomes, the larger the disparity between these two extremes. The desirable high SNR is another strong reason to support the use of sparse hypervectors.
For arbitrary hypervector \(A\) and \(B\) in the space of \(\mathbb{C}\), we have
\[2\times O(A,B)+H(A,B)=2M \tag{2}\]
Proof.: Suppose \(A^{*}\) is the set of ON positions (indices) in \(A\), and \(B^{*}\) is the set of ON positions (indices) in \(B\).
We have
\[|A^{*}\cup B^{*}| =|A^{*}|+|B^{*}|-|A^{*}\cap B^{*}|\] \[|A^{*}\cup B^{*}| =|A^{*}\cap B^{*}|+|A^{*}\setminus B^{*}|+|B^{*}\setminus A^{*}|\]
where \(|A^{*}|\) denotes the cardinality of the set \(A^{*}\).
Noting that \(|A^{*}\setminus B^{*}|+|B^{*}\setminus A^{*}|=H(A,B)\), \(|A^{*}\cap B^{*}|=O(A,B)\), and \(|A^{*}|=|B^{*}|=M\), we thus have our proof.
To sum up, Hamming distance (as a distance measure) is inversely related to overlap (as a similarity measure): they are really two sides of the same coin, revealing the same inherent traits of hypervectors in the space of \(\mathbb{C}\).
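The relationship in Eq.(2) is easy to verify numerically; the following toy sketch (with a small \(N\) for readability) represents binary hypervectors as Python integers.

```python
# Numeric check of Eq.(2), 2*O(A,B) + H(A,B) = 2M; N and M are toy-sized.
import random

N, M = 64, 8

def random_hv():
    v = 0
    for bit in random.sample(range(N), M):   # exactly M distinct ON bits
        v |= 1 << bit
    return v

A, B = random_hv(), random_hv()
O = bin(A & B).count("1")    # overlap: bitwise AND
H = bin(A ^ B).count("1")    # Hamming distance: bitwise XOR
assert 2 * O + H == 2 * M
```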
### Segmented hypervectors
Segmented hypervectors form a subset within \(\mathbb{C}\), where the total \(N\) dimensions are divided into \(M=Ns\) continuous segments: within each segment (of dimension \(1/s\)), there is one and only one ON bit.
For the case of \(N=65536\) and \(s=1/256\), every \(256\) dimensions form a segment, which contains one and only one ON bit. Altogether there are \(M=256\) segments, and thus \(256\) ON bits. The space of all segmented hypervectors is denoted as \(\mathbb{C}^{\prime}\): \(\mathbb{C}^{\prime}\subset\mathbb{C}\).
This space of \(\mathbb{C}^{\prime}\) has \((1/s)^{Ns}\) points, which is significantly smaller than \(\mathbb{C}\). However, for all practical applications, it's still huge and seemingly unlimited, which can be verified by interested readers.
Hypervectors with \(N=16\) and \(s=1/4\) are demonstrated here:
\[A =\ 0011\ 0000\ 1001\ 0000\in\mathbb{C}\ (\text{however }A\notin\mathbb{C}^{\prime})\] \[C =\ 0010\ 1000\ 0001\ 0001\ =(2,0,3,3)\ \in\mathbb{C}^{\prime}\subset\mathbb{C}\] \[D =\ 0010\ 0100\ 0001\ 0100\ =(2,1,3,1)\ \in\mathbb{C}^{\prime}\subset \mathbb{C}\] \[O(C,D) =|0010\ 0000\ 0001\ 0000\ |=2\] \[H(C,D) =|0000\ 1100\ 0000\ 0101\ |=4\]
Also notice that hypervectors from \(\mathbb{C}^{\prime}\) can be represented much more compactly by their offsets within the segments. Alternatively, a hypervector from \(\mathbb{C}^{\prime}\) can be imagined as a slot machine with \(M=Ns\) spinning wheels, where each spinning wheel has \(1/s\) slots/teeth.
The representation of hypervectors in \(\mathbb{C}^{\prime}\) suits modern computer architecture nicely. First of all, the hypervector itself is sparse and only non-zero bits need to be stored. Secondly, the segmented structure implies we only need to store local offsets with a small dynamic range of \(1/s\). Finally, binary values are used instead of real-valued neural network weights. Overall, one of our hypervectors (\(N=65536\)) takes \(256\) bytes (which reflects exactly the entropy of \(2048\) bits), the same storage budget as a 64-dimensional floating-point vector; a typical vector in neural networks is much wider than 64 dimensions (on the order of thousands). In addition, we don't need floating-point operations, which can be orders of magnitude slower and more power-hungry than simple bit flipping.
From now on, we limit our future discussion within the space of \(\mathbb{C}^{\prime}\), unless stated otherwise.
A random segmented hypervector can be generated by randomly picking offsets for each segments. As part of the "blessing of high dimensionality", any pair of random hypervectors in \(\mathbb{C}^{\prime}\) is maximally dissimilar (almost no overlap) by construction, also known as near-orthogonal in mathematical jargon.
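In the offset representation, generating a random segmented hypervector and measuring overlap reduce to a few lines; the sketch below uses the paper's configuration of \(N=65536\), \(s=1/256\).

```python
# Segmented hypervectors stored as per-segment offsets:
# 256 segments of 256 slots each (N = 65536, s = 1/256).
import numpy as np

SEGMENTS, SEG_SIZE = 256, 256

def random_segmented():
    return np.random.randint(SEG_SIZE, size=SEGMENTS)

def overlap(a, b):
    # Two segmented vectors share an ON bit exactly where offsets coincide.
    return int(np.sum(a == b))

A, B = random_segmented(), random_segmented()
print(overlap(A, A))   # 256 (= M, identical vectors)
print(overlap(A, B))   # ~1  (= Ns^2, near-orthogonal by construction)
```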
For cognitive entities ranging from concrete objects (such as people and physical objects) to abstract concepts (such as ideas, alphabets, words, and novels), and everything in between, we boldly hypothesize that all of them can be modeled with sparse hypervectors from \(\mathbb{C}^{\prime}\). It is the algebraic interactions that we outline next that mirror the interactions between cognitive entities in a world model.
### Bundle
Define **bundle** for \(K\) random codes \(C_{k}\) (from \(\mathbb{C}^{\prime}\)) with normalized weights \(w_{k}\) (\(\sum_{K}w_{k}=1\)) as
\[B=(w_{0}\cdot C_{0})\oplus(w_{1}\cdot C_{1})\oplus...\oplus(w_{K-1}\cdot C_{K-1}) \tag{3}\]
where the ON offset at each segment \(B_{i}\) (\(0\leq i<M\)) is probabilistically determined by reusing the offset from \(C_{k,i}\) with probability \(w_{k}\).
The resultant \(B\) remains in \(\mathbb{C}^{\prime}\), and is similar to any of its operands \(C_{k}\), due to construction:
\[O(B,C_{k})=\langle B,C_{k}\rangle\approx w_{k}Ns \tag{4}\]
Think of this as a unique lossy compression scheme: \(B\) maximally retains the original segment-wise offsets from the codes \(C_{k}\), proportional to their weights \(w_{k}\).
For a special case where \(w_{k}=1/K\) for all \(k\), **bundle** notation can be simplified as
\[B=C_{0}\oplus C_{1}...\oplus C_{K-1} \tag{5}\]
and consequently
\[O(C_{k},B)=\langle C_{k},B\rangle\approx\frac{1}{K}Ns \tag{6}\]
but keep in mind this is only a special case, and the weights of \(1/K\) are implicit.
The **bundle** operation is commutative:
\[A\oplus B=B\oplus A\]
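A minimal sketch of the probabilistic bundle of Eq.(3) follows, with hypervectors stored as per-segment offsets; the helper names are illustrative.

```python
# Sketch of the weighted bundle: each segment of the result reuses the
# offset of operand C_k with probability w_k.
import numpy as np

def bundle(codes, weights):
    codes = np.asarray(codes)                     # shape (K, SEGMENTS)
    weights = np.asarray(weights, dtype=float)    # must sum to 1
    donors = np.random.choice(len(weights), size=codes.shape[1], p=weights)
    return codes[donors, np.arange(codes.shape[1])]

# With equal weights 1/K, O(B, C_k) ≈ SEGMENTS / K for every operand, and
# two conformants built from the same operands overlap by ≈ SEGMENTS / K.
```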
#### 2.4.1 Conformants
Let's take a closer look at the bundle operation Eq.(3): it's non-deterministic in the sense that random generators with different seeds will produce different results by picking segments differently.
All possible results form a subspace \(\tilde{B}\), where each and every member equally conforms to Eq.(4): the resulting hypervectors are thus called **conformants**.
Encoding the same set of \(\{C_{k}\}\) repeatedly can therefore result in distinct conformants \(B^{(0)}\), \(B^{(1)}\) in \(\tilde{B}\).
Obviously
\[O(B^{(0)},C_{k})=O(B^{(1)},C_{k})\approx\frac{1}{K}Ns\]
However, in addition we will have
\[O(B^{(0)},B^{(1)})\approx\frac{1}{K}Ns \tag{7}\]
Proof.: From any input \(C_{k}\) (fix \(k\) for now), \(B^{(0)}\) will take \(Ns/K\) ON bits, and \(B^{(1)}\) will independently take another set of \(Ns/K\) ON bits. On average, the overlap between \(B^{(0)}\) and \(B^{(1)}\) due to this fixed member \(C_{k}\) is \(Ns/K^{2}\).
Repeating this for every member \(C_{k}\), we approximately have
\[O(B^{(0)},B^{(1)})\approx\sum_{K}Ns/K^{2}=Ns/K\]
Overall, we think the very existence of conformants for bundle operation is an important feature instead of a design flaw, as it exposes another degree of freedom, thanks to the blessing of dimensionality.
#### 2.4.2 Online bundling
Beyond representing individual hypervectors, the **online learner** \(L^{(k)}\) for a data stream of incoming \(C_{k}\) is defined as:
\[L^{(0)} =C_{0}\] \[L^{(1)} =\sum_{k=0,\oplus}^{1}C_{k}=\frac{1}{2}C_{0}\oplus\frac{1}{2}C_{1} =\frac{1}{2}L^{(0)}\oplus\frac{1}{2}C_{1}\] \[L^{(2)} =\sum_{k=0,\oplus}^{2}C_{k}=\frac{1}{3}C_{0}\oplus\frac{1}{3}C_{1 }\oplus\frac{1}{3}C_{2}=\frac{2}{3}L^{(1)}\oplus\frac{1}{3}C_{2} \tag{8}\] \[...\] \[L^{(k)} =\sum_{k=0,\oplus}^{k}C_{k}=\frac{1}{k+1}C_{0}\oplus\frac{1}{k+1} C_{1}\oplus...\oplus\frac{1}{k+1}C_{k}=\frac{k}{k+1}L^{(k-1)}\oplus\frac{1}{k+1}C_{k}\]
Imagine the learner at an arbitrary time \(k\): \(L^{(k)}=\frac{k}{k+1}L^{(k-1)}\oplus\frac{1}{k+1}C_{k}\); we simply nudge the existing learner \(L^{(k-1)}\) towards the incoming \(C_{k}\), with a learning rate of \(\frac{1}{k+1}\). This small-step tweaking ensures the updated learner \(L^{(k)}\) keeps almost equal similarity and distance (per Eq.(2)) to all its experiences \(\{C_{k}\}\) so far: the learner \(L^{(k)}\) is effectively a running averaging operator over all its experiences.
In practice, the decreasing learning rate \(\frac{1}{k+1}\) can be interpreted as the fraction of segments that needs updating. As time goes by, the work needs to be done by the learner (from new data point) becomes less and less: it's quite hard to dramatically change a well-experienced learner, in agreement with our daily cognitive experience.
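In the offset representation, the online learner update amounts to overwriting a decreasing fraction of segments; a minimal sketch (with illustrative names) is given below.

```python
# Sketch of the online learner update of Eq.(8): at step k, roughly a
# 1/(k+1) fraction of segments is overwritten by the incoming code.
import numpy as np

def learner_update(L, C, k):
    rate = 1.0 / (k + 1)                        # decreasing learning rate
    mask = np.random.random(L.shape) < rate     # segments to nudge towards C
    out = L.copy()
    out[mask] = C[mask]
    return out
```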
### Bind
Define **bind** operation of \(K\) codes \(C_{k}\) (from \(\mathbb{C}^{\prime}\)) as
\[B=C_{0}\otimes C_{1}\otimes...\otimes C_{K-1} \tag{9}\]
with the offset at segment \(i\) as
\[B_{i}=(\sum_{k}C_{k,i})\mod M \tag{10}\]
where \(C_{k,i}\) is the offset from \(i\)th segment of code \(C_{k}\).
The resulting \(B\) is maximally dissimilar to all its operands \(C_{k}\):
\[O(B,C_{k})=\langle B,C_{k}\rangle\approx Ns^{2}=1\]
The **bind** operation is obviously commutative and associative:
\[A\otimes B=B\otimes A \tag{11}\] \[(A\otimes B)\otimes C=A\otimes(B\otimes C)\]
Notably, **bind** preserves **overlap** and **Hamming** distance:
\[O(A,B)=O(A\otimes P,B\otimes P) \tag{12}\] \[H(A,B)=H(A\otimes P,B\otimes P)\]
Lastly, **bind** operation distributes over **bundle**:
\[P\otimes(A\oplus B)=P\otimes A\oplus P\otimes B, \tag{13}\]
which is similar to arithmetic counterparts of \(*\) and \(+\).
#### 2.5.1 Unit and inverse vector
Define a **unit vector** \(I\) (from \(\mathbb{C}^{\prime}\)) such that for any code \(C\):
\[C=C\otimes I \tag{14}\]
The unit vector \(I\) is simply the hypervector where every segment has its ON bit at offset \(0\). For example, when \(s=1/4\), the unit vector is \(1000\ 1000\ 1000\ 1000\), or conveniently recorded as \((0,0,0,0)\).
Define **inverse vector**\(C^{-1}\) for a code \(C\), such that
\[C^{-1}\otimes C=I \tag{15}\]
For any code \(C\), the unique inverse \(C^{-1}\) exists.
Actually, with the definition of **unit vector** (and **inverse vector** above), this forms an **algebraic ring** with bind and bundle operations.
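In the offset representation, bind is segment-wise modular addition, which makes the unit and inverse vectors immediate. The sketch below assumes the modulus equals the segment size \(1/s\) (256 here), which coincides with \(M\) in the paper's configuration; this assumption is made explicit in the comments.

```python
# Bind, unit, and inverse over segment offsets. Assumption: the modulus is
# the segment size 1/s = 256, which coincides with M for N = 65536, s = 1/256.
import numpy as np

SEG_SIZE, SEGMENTS = 256, 256

def bind(a, b):
    return (a + b) % SEG_SIZE      # segment-wise modular addition

def inverse(a):
    return (-a) % SEG_SIZE         # bind(a, inverse(a)) == unit

unit = np.zeros(SEGMENTS, dtype=int)   # all offsets at 0: the unit vector I

a = np.random.randint(SEG_SIZE, size=SEGMENTS)
b = np.random.randint(SEG_SIZE, size=SEGMENTS)
assert np.array_equal(bind(a, unit), a)              # Eq.(14)
assert np.array_equal(bind(a, inverse(a)), unit)     # Eq.(15)
assert np.array_equal(bind(a, b), bind(b, a))        # commutativity
```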
### Analogical reasoning
Analogical reasoning is perhaps the most fascinating feature of VSA and hypervectors.
Define **release** operation as
\[A\odot B=A\otimes B^{-1} \tag{16}\]
It releases / unbinds previously bound code: \((A\otimes B)\odot B=A\).
We have:
\[Q=(P_{1}\otimes A\oplus P_{2}\otimes B)\odot P_{1}\approx A+noise\]
In practice, a near-neighbor search with \(Q\) will retrieve \(A\), as \(O(Q,A)\approx\frac{1}{2}M\), which should be clearly dominant among all recorded patterns.
Similarly
\[(P_{1}\otimes A\oplus P_{2}\otimes B)\odot P_{2}\approx B+noise\] \[(P_{1}\otimes A\oplus P_{2}\otimes B)\odot A\approx P_{1}+noise\] \[(P_{1}\otimes A\oplus P_{2}\otimes B)\odot B\approx P_{2}+noise\]
if \(A\), \(B\), \(P_{1}\), \(P_{2}\) are all random hypervectors.
We will use Pentti Kanerva's _dollar of Mexico_ (Kanerva [2009]) as a concrete example here. Suppose we want to encode the following tabular knowledge:
* Mexico has the country code MEX, the capital Mexico City, and the currency peso;
* The United States has the country code USA, the capital Washington DC, and the currency dollar;
which can be encoded as
\[C_{mexico} =P_{code}\otimes C_{mex}\oplus P_{capital}\otimes C_{mexicoCity} \oplus P_{currency}\otimes C_{peso}\] \[C_{us} =P_{code}\otimes C_{usa}\oplus P_{capital}\otimes C_{dc}\oplus P_ {currency}\otimes C_{dollar}\]
where all elements are random hypervectors.
Retrieving an individual filler given a known role/attribute is straightforward:
**capital of Mexico**:
\[C_{mexico}\odot P_{capital}\approx C_{mexicoCity}+noise\]
**currency of United States**:
\[C_{us}\odot P_{currency}\approx C_{dollar}+noise\]
It's also possible to retrieve the role/attribute given a known filler: **what's the role of peso**:
\[C_{mexico}\odot C_{peso}\approx P_{currency}+noise\]
**what does "USA" stand for?**
\[C_{us}\odot C_{usa}\approx P_{code}+noise\]
Even more interestingly, without knowing the attribute for a given filler, analogical reasoning can be performed. **what's the dollar of Mexico**:
\[C_{dollar}\otimes C_{mexico}\odot C_{us}\approx C_{peso}+noise\]
Similarly, **what's the counterpart in Mexico as DC in US**:
\[C_{dc}\otimes C_{mexico}\odot C_{us}\approx C_{mexicoCity}+noise\]
**What is to Mexico as the label USA is to the United States**:
\[C_{usa}\otimes C_{mexico}\odot C_{us}\approx C_{mex}+noise\]
**Knowledge transfer** without decoupling first: if we construct
\[C^{\prime}_{us}=C_{mexico}\otimes(C_{usa}\odot C_{mex}\oplus C_{dc}\odot C_{mexicoCity}\oplus C_{dollar}\odot C_{peso})\]
then
\[C^{\prime}_{us}\odot P_{code} \approx C_{usa}\] \[C^{\prime}_{us}\odot P_{capital} \approx C_{dc}\] \[C^{\prime}_{us}\odot P_{currency} \approx C_{dollar}\]
The proofs will be left for interested readers.
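For readers who prefer to experiment, the following toy sketch replays the example end to end with offset-based hypervectors; the names mirror the text, and the nearest-neighbor search is a brute-force scan over a small codebook.

```python
# Toy replay of "the dollar of Mexico" with offset-based hypervectors.
import numpy as np

SEG_SIZE, SEGMENTS = 256, 256
rand = lambda: np.random.randint(SEG_SIZE, size=SEGMENTS)
bind = lambda a, b: (a + b) % SEG_SIZE
release = lambda a, b: (a - b) % SEG_SIZE          # bind with the inverse

def bundle(codes):
    codes = np.asarray(codes)
    pick = np.random.randint(codes.shape[0], size=SEGMENTS)
    return codes[pick, np.arange(SEGMENTS)]

book = {name: rand() for name in
        ["mex", "mexico_city", "peso", "usa", "dc", "dollar",
         "P_code", "P_capital", "P_currency"]}
mexico = bundle([bind(book["P_code"], book["mex"]),
                 bind(book["P_capital"], book["mexico_city"]),
                 bind(book["P_currency"], book["peso"])])
us = bundle([bind(book["P_code"], book["usa"]),
             bind(book["P_capital"], book["dc"]),
             bind(book["P_currency"], book["dollar"])])

def nearest(probe):
    # Brute-force near-neighbor search over the codebook by overlap.
    return max(book, key=lambda name: int(np.sum(book[name] == probe)))

print(nearest(release(mexico, book["P_capital"])))        # -> mexico_city
print(nearest(release(bind(book["dollar"], mexico), us))) # -> peso
```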
### Discussions
Kanerva (2009) kick-started the concept of hyper-dimensional computing with the use of algebraic operations upon hypervectors. This line of research dates back to his seminal work (Kanerva (1988)) in the 1980s.
Kleyko and Osipov (2020) and Kenny Schlegel (2020) provide excellent surveys of VSA over the past 30 years. In general, and not coincidentally, sparsity plays a critical role in terms of memory capacity. In addition, Kleyko et al. (2018) discusses the choice of sparsity for hypervectors with experimental evidence: it turns out sparse hypervectors can achieve desirable performance, with neural plausibility.
A separate earlier publication, Laiho et al. (2015), shares ideas with this publication. However, we present original and critical steps forward, such as the formalization of segmented hypervectors, the novel bundle and bind operations, and the online bundling learner, along with additional real-world application discussions.
In addition to the theoretical foundation for a new cognitive model as presented in this section, three design principles for intelligent systems seem to emerge:
1. Cognitive entities can be modeled by sparse binary hypervectors, for example, from \(\mathbb{C}^{\prime}\);
2. Overlap (of sparse binary hypervectors) is a good measurement of semantic similarity. Furthermore, overlap and Hamming distance reflect the same inherent traits among cognitive entities;
3. Compositional structures need to be encoded in the same high-dimensional space recursively (\(\mathbb{C}^{\prime}\), for example), which will be discussed in the next section;
The proposed cognitive model, together with these design principles, can be realized in software systems as well as a number of hardware architectures, possibly with different materials and substrates.
## 3 Compositional structures
### Nearly orthogonal sets
Define a _nearly orthogonal set_ (NOS) as a set of codes \(\{A_{i}\}\) (all from \(\mathbb{C}^{\prime}\)) such that, for any members \(A_{i}\) and \(A_{j}\) (\(i\neq j\)):
\[\langle A_{i},A_{i}\rangle =O(A_{i},A_{i})=Ns \tag{17}\] \[\langle A_{i},A_{j}\rangle =O(A_{i},A_{j})=Ns^{2}\approx 0\]
The word "nearly" refers to the fact that the cross inner product is only _approximately_ zero, with \(Ns^{2}\) being the inherent noise.
Occasionally (but equivalently) we use _relative overlap_ for a nearly orthogonal set.
\[RO(A_{i},A_{i}) =\langle A_{i},A_{i}\rangle/M=1 \tag{18}\] \[RO(A_{i},A_{j}) =\langle A_{i},A_{j}\rangle/M=s\approx 0\]
### Sets
A _set_ (as denoted by \(\{C_{k}\}\)) is formed implicitly from a _nearly orthogonal set_, with no meaningful ordering.
The composite code \(S\) for the whole set \(\{C_{k}\}\) can be constructed as:
\[S=C_{0}\oplus C_{1}\oplus...\oplus C_{K-1}=\sum_{K,\oplus}C_{k} \tag{19}\]
The near-neighbor search with a probe of \(S\) will eventually yield all members \(C_{k}\), as
\[O(S,C_{k})\approx Ns/K \tag{20}\]
for all possible \(k\). The set cardinality \(K\) can be recovered by counting all recovered codes.
\(S\) is the centroid for the whole cluster of \(\{C_{k}\}\), as geometrically \(S\) has approximately equal distance to all the cluster members \(C_{k}\), thanks to Eq.(2). \(S\) can be also considered as a summary (or a compressed version) for the whole set of codes \(\{C_{k}\}\).
### Sequences
A _sequence_ (as denoted by \([C_{k}]\)) is formed from a _nearly orthogonal set_, with enforced ordering.
The sequences can be encoded similarly, with the additional \(P_{k}\) as the positional markers:
\[S=C_{0}\otimes P_{0}\oplus C_{1}\otimes P_{1}\oplus...\oplus C_{K-1}\otimes P _{K-1}=\sum_{K,\oplus}(C_{k}\otimes P_{k}) \tag{21}\]
As we explained before, \(S\) is similar to each position-bound member \(C_{k}\otimes P_{k}\). A near-neighbor search with a probe of \(S\odot P_{k}\) (or equivalently \(S\otimes P_{k}^{-1}\)) can recover the member at a particular position \(k\), since
\[O(S,C_{k}\otimes P_{k})=O(S\odot P_{k},C_{k})\approx Ns/K \tag{22}\]
For further simplification, we use \(P_{k}=P_{step}^{k}\), where \(P_{step}\) is a well-known positional marker, in this case,
\[S=C_{0}\otimes P_{step}^{0}\oplus C_{1}\otimes P_{step}^{1}\oplus...\oplus C_{ K-1}\otimes P_{step}^{K-1}=\sum_{K,\oplus}(C_{k}\otimes P_{step}^{k}) \tag{23}\]
The \(k\)th element can be recovered, as
\[O(S,C_{k}\otimes P_{step}^{k})=O(S\odot P_{step}^{k},C_{k})\approx Ns/K \tag{24}\]
The near-neighbor search (with a probe of \(S\odot P_{step}^{k}\)) can retrieve the \(k\)th member \(C_{k}\): we progress with the positional marker, incrementing \(k\), until no similar code can be found.
This may remind some readers of _positional encoding_, as introduced in transformer architecture (Vaswani et al. [2017]): our encoding and retrieval scheme seems to be much cleaner.
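A toy sketch of sequence encoding and positional retrieval follows; probing past the end of the sequence (here \(k=4\)) yields only noise-level scores, which serves as the stop condition.

```python
# Toy sketch of Eq.(23): encode a 4-element sequence, then decode by
# releasing successive powers of the step marker P_step.
import numpy as np

SEG_SIZE, SEGMENTS = 256, 256
rand = lambda: np.random.randint(SEG_SIZE, size=SEGMENTS)

items = [rand() for _ in range(4)]
p_step = rand()
# C_k bound to P_step^k: the k-fold bind is k * p_step under offset addition.
bound = np.asarray([(c + k * p_step) % SEG_SIZE for k, c in enumerate(items)])
pick = np.random.randint(len(items), size=SEGMENTS)       # equal-weight bundle
S = bound[pick, np.arange(SEGMENTS)]

for k in range(5):                                        # one step past the end
    probe = (S - k * p_step) % SEG_SIZE                   # S released by P_step^k
    scores = [int(np.sum(probe == c)) for c in items]
    print(k, scores)   # position k: items[k] scores ~64; all others ~1
```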
We cannot emphasize enough the importance of an efficient near-neighbor search module for this model. In this context, the module will be specifically tuned for sparse binary hypervectors.
### A Probabilistic Interpretation
Assume we have two bundling operations, based on the same _nearly orthogonal set_\(\{P_{k}\}\):
\[\begin{split} A&=\sum_{K,\oplus}\alpha_{k}P_{k},\ \sum_{K}\alpha_{k}=1\\ B&=\sum_{K,\oplus}\beta_{k}P_{k},\ \sum_{K}\beta_{k}=1 \end{split} \tag{25}\]
then
\[\begin{split}\langle A,B\rangle&=\sum_{K}\alpha_{k}\beta_{k}\langle P_{k},P_{k}\rangle+\sum_{i\neq j}\alpha_{i}\beta_{j}\langle P_{i},P_{j}\rangle\\ &=(\sum_{K}\alpha_{k}\beta_{k})Ns+(\sum_{K}\alpha_{k}\sum_{K}\beta_{k}-\sum_{K}\alpha_{k}\beta_{k})Ns^{2}\\ &=(\sum_{K}\alpha_{k}\beta_{k})Ns+(1-\sum_{K}\alpha_{k}\beta_{k})Ns^{2}\\ &=Ns(1-s)\sum_{K}\alpha_{k}\beta_{k}+Ns^{2}\end{split} \tag{26}\]
The similarity between two bundled hypervectors is dominated by coefficients from orthogonal terms, especially when sparsity is low.
Define **frame inner product** as the inner product of the coefficients, under the shared frame of NOS \(\{P_{k}\}\):
\[\langle A,B\rangle^{*}=\sum_{K}\alpha_{k}\beta_{k}=\frac{\langle A,B\rangle}{Ns (1-s)}-\frac{s}{1-s} \tag{27}\]
Imagine if the set \(\{P_{k}\}\) were completely orthogonal: \(\langle P_{k},P_{k}\rangle=Ns\) for any \(k\in[0,K)\), and \(\langle P_{i},P_{j}\rangle=0\) for any \(i\neq j\); the computation of the frame inner product would then be straightforward and uninteresting:
\[\langle A,B\rangle^{*}=\frac{\langle A,B\rangle}{Ns}\]
The last term \(\frac{s}{1-s}\) can be considered as an inherent system bias when sparsity \(s\) is present.
When we use \(\beta_{k}=1\) and \(\beta_{i}=0\) for all \(i\neq k\), \(B\) becomes the probe \(P_{k}\), and
\[\alpha_{k}=\frac{\langle A,P_{k}\rangle}{Ns(1-s)}-\frac{s}{1-s}=p(P_{k}|A,\{P _{k}\}) \tag{28}\]
\(\alpha_{k}\) (\(0\leq\alpha_{k}<1\)) is actually the empirical probability of member \(P_{k}\) within the experience recorded by \(A\).
Suppose we have an online learner \(L^{(t)}=\sum_{t,\oplus}P_{t}\), where each \(P_{t}\) is picked from a fixed _NOS_\(\{P_{k}\}\); duplication is not only possible but frequent. Eventually the learner will contain the empirical probabilities associated with each and every member \(P_{k}\). Effectively, the learner itself can be considered a probability mass function, or a probability profile.
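The following sketch illustrates this probabilistic reading: a learner built from a stream of draws over a fixed NOS recovers the draw frequencies through relative overlap. The frequency-weighted thinning used here for \(\oplus\), and all parameter values, are stand-ins chosen for illustration; Eq.(28) additionally removes the small \(s/(1-s)\) sparsity bias, which the raw relative overlap below ignores.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 10_000, 200

def random_code():
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, M, replace=False)] = True
    return v

K = 4
P = [random_code() for _ in range(K)]          # the fixed NOS {P_k}
true_p = np.array([0.5, 0.25, 0.15, 0.10])     # hidden draw probabilities

# The stream: each step adds one more draw into the learner.  As a stand-in
# for the weighted bundle, count how often each bit is switched on, then
# thin to M bits with probability proportional to the counts, so frequent
# members keep proportionally more of their bits.
counts = np.zeros(N)
for _ in range(5_000):
    counts += P[rng.choice(K, p=true_p)]
L = np.zeros(N, dtype=bool)
L[rng.choice(N, M, replace=False, p=counts / counts.sum())] = True

# Probing with each P_k recovers the draw frequencies via relative overlap.
est = [np.count_nonzero(L & P[k]) / M for k in range(K)]
print(np.round(est, 2))    # ~ [0.50, 0.25, 0.15, 0.10] up to sampling noise
```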
In the opposite direction, given a probability mass function \(w_{k}\), it is trivial to encode:
\[C=\sum_{K,\oplus}w_{k}P_{k} \tag{29}\]
We can draw a nice analogy to the Fourier transform, which decomposes an arbitrary function into a set of coefficients over sinusoidal functions of different frequencies, phases and amplitudes. Similarly, the snapshot of an online learner corresponds to a point in the \(K\)-dimensional feature space spanned by the NOS: in reality, the unknown \(K\) can be in the neighborhood of thousands, but typically \(K\ll N\).
## 4 Applications
### Word-level embedding
The idea of word-level embedding has a long history in NLP (Natural Language Processing). Proposed in the late 1990s, the idea is to model the semantic meaning of each word (for example, from English) as a long vector (with dimensionality typically in the hundreds), which is almost always real-valued. Modern NLP systems typically use dimensions in the thousands.
The distributional hypothesis claims that the semantics of a word is determined by its neighbors (or contexts). With a large training corpus (for example, Wikipedia or even the whole Internet), any words that occur in similar contexts will have similar word embeddings, and presumably similar semantic meanings. Despite certain idiosyncrasies, this hypothesis, for the most part, inspired projects such as Word2Vec (Mikolov et al. (2013); Mikolov et al. (2013)) and GloVe (Pennington et al. (2014)), which laid the foundations for the recent deep-learning-based NLP resurgence in applications such as text summarization, machine translation and chatbots.
Acquisition of word-level embeddings is traditionally done by neural networks. The weights are adjusted by back-propagation in such a way that they better predict the neighboring context words. The training itself is self-supervised in the sense that the training corpus provides all necessary "labels" for learning. Since the training is still non-trivial with Internet-scale data sets, most companies simply use pre-trained embeddings and focus on their own downstream applications.
We believe the algebraic operations of **bind** and **bundling** in \(\mathbb{C}^{\prime}\) and the online learner can deliver significant benefits with improved transparency and efficiency, which inspired us to revisit this problem.
We use a half window size of \(2\) around a center word, giving the window \(c_{-2},c_{-1},w,c_{1},c_{2}\). The window size can be trivially expanded when needed.
For this occurrence \(t\) of word \(w\) in the training corpus, an observation hypervector is produced:
\[C_{w}^{(t)}=C_{-2}\otimes P_{step}^{-2}\oplus C_{-1}\otimes P_{step}^{-1} \oplus C_{w}\oplus C_{1}\otimes P_{step}\oplus C_{2}\otimes P_{step}^{2} \tag{30}\]
where \(C_{-2}\), \(C_{-1}\), \(C_{1}\), \(C_{2}\) are the codes for context words \(c_{-2}\), \(c_{-1}\), \(c_{1}\) and \(c_{2}\), and \(C_{w}\) the code for the center word \(w\). \(P_{step}\) is a well-known step marker.
A document with \(W\) words will produce \(W\) such observation hypervectors, one for each center word \(w\).
The learner for a fixed center word \(w\) follows:
\[L_{w}^{(T)}=\sum_{T,\oplus}C_{w}^{(t)} \tag{31}\]
We maintain one learner \(L_{w}\) for each word \(w\) throughout the training, fed by the observation hypervectors targeted at that word \(w\). After one pass through all training documents, \(L_{w}^{(T)}\) reflects the summary of all occurrences where the word \(w\) was encountered up to time \(T\). In particular, all its contexts are compressed (albeit lossily) and recorded, and this serves as the embedding for the word \(w\).
Unlike back-propagation, we use each occurrence (of a center word \(w\)) exactly once, and there is no need to store the data set for this purpose. The cost savings of this streaming fashion can be huge.
Once produced, the context can be recovered. For example, a near-neighbor search with a probe of \(L_{w}^{(T)}\oslash P_{step}\) will produce the words that most likely appear immediately after the center word \(w\). From this perspective, prediction of the next word is merely a by-product of recovering the compressed contexts.
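A toy version of this streaming learner is sketched below. For brevity it keeps only the context terms of Eq.(30), accumulates raw bit counts instead of a tuned \(\oplus\), realizes \(P_{step}\) as a fixed random permutation, and thins the counts back to a sparse code before probing; all of these are simplifying assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
N, M = 10_000, 200

def random_code():
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, M, replace=False)] = True
    return v

perm = rng.permutation(N)              # stand-in for the step marker P_step
inv = np.argsort(perm)

def mark(v, k):                        # bind with P_step^k; negative k = inverse
    for _ in range(abs(k)):
        v = v[perm] if k > 0 else v[inv]
    return v

corpus = "the cat sat on the mat the cat ate the rat".split()
code = defaultdict(random_code)        # one random code per word, on demand
learner = defaultdict(lambda: np.zeros(N))   # accumulated counts per L_w

# Streaming pass: each occurrence contributes one observation (cf. eq. 30,
# center term omitted for brevity) and is then discarded.
for t in range(2, len(corpus) - 2):
    for off in (-2, -1, 1, 2):
        learner[corpus[t]] += mark(code[corpus[t + off]], off)

# Recover the likely next word of 'cat': thin the counts back to a sparse
# code, unbind one step of P_step, and near-neighbor search the vocabulary.
L = np.zeros(N, dtype=bool)
L[np.argsort(learner["cat"])[-M:]] = True
probe = mark(L, -1)                    # the probe L_w unbound by P_step
score = {w: int(np.count_nonzero(probe & code[w])) for w in set(corpus)}
print(max(score, key=score.get))       # 'ate', the observed successor
```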
### Discussions
The proposed training also offers greater clarity and diagnostic insight into the underlying model, enabling trustworthy AI systems that can function reliably in mission-critical applications. With a clear understanding of how the model learns and operates, there should be no hallucinations or surprises.
In addition, the updated semantic code for each word \(w\) can be used immediately for inference. From an architectural point of view, training and inference are unified, and the training burden is essentially amortized into day-to-day inferences. The simplification can obviously bring significant cost reduction and efficiency boosts, but most interestingly, this closely mirrors our own cognitive ability and experience: human infants learn to speak while babbling.
Unlike the framing of learning as optimization in neural networks, the learning proposed here is better framed as unsupervised clustering, where similar observations form their own clusters. However, supervised learning is also possible under this model, and it is being actively investigated by the author.
Heralding a dramatic departure from traditional wisdom, this example offers a glimpse of the _disruption_ this novel cognitive model (and learner) can bring. We believe the heavy lifting still lies in re-thinking and re-architecting critical pieces of the downstream pipelines, for example, the popular Transformer architecture (Vaswani et al. (2017)), which is notorious for its opacity and heavy cost. If we can, for example, augment the Transformer architecture with such sparse binary hypervectors, dramatic change can happen to the whole landscape of NLP overnight.
## 5 Summary
In essence, we proposed an embodiment of the VSA model with the use of sparse binary hypervectors, which features:
* much-needed transparency, viable for a wider range of trustworthy and reliable AI applications;
* greatly reduced cost and improved flexibility;
* high efficiency in terms of storage and computation, suitable for mobile and edge deployments.
While exploring the cognitive model with sparse binary hypervectors, three general principles are hypothesized for a truly intelligent system that can potentially match human cognitive capabilities.
Novel learning algorithms are also developed. In particular, the learning algorithms operate in a streaming fashion: new data, as they become available, are used to update the model and are in principle never needed at a later time. The algorithms are also online in the sense that the updated model is immediately available for inference as well.
It is my humble hope that this publication can shed some light on current AI endeavors with a new perspective on transparency and efficiency, and that more compelling business cases can be found for trustworthy AI models.
We've developed a high-performance Python library for the manipulation of these sparse binary hypervectors, which is in the process of being packaged and released. Please contact the author if interested.
**The ideas discussed here have been filed as a pending patent. While academic redistribution, explorations and improvements are welcome, please contact the author for commercial use.**
|
2309.13320 | GlotScript: A Resource and Tool for Low Resource Writing System
Identification | We present GlotScript, an open resource and tool for low resource writing
system identification. GlotScript-R is a resource that provides the attested
writing systems for more than 7,000 languages. It is compiled by aggregating
information from existing writing system resources. GlotScript-T is a writing
system identification tool that covers all 161 Unicode 15.0 scripts. For an
input text, it returns its script distribution where scripts are identified by
ISO 15924 codes. We also present two use cases for GlotScript. First, we
demonstrate that GlotScript can help clean multilingual corpora such as mC4
and OSCAR. Second, we analyze the tokenization of a number of language models
such as GPT-4 using GlotScript and provide insights on the coverage of low
resource scripts and languages by each language model. We hope that GlotScript
will become a useful resource for work on low resource languages in the NLP
community. GlotScript-R and GlotScript-T are available at
https://github.com/cisnlp/GlotScript. | Amir Hossein Kargaran, François Yvon, Hinrich Schütze | 2023-09-23T09:35:55Z | http://arxiv.org/abs/2309.13320v2 | # GlotScript: A Resource and Tool for Low Resource Writing System Identification
###### Abstract
We present GlotScript, an open resource and tool for low resource writing system identification. GlotScript-R is a resource that provides the attested writing systems for more than 7,000 languages. It is compiled by aggregating information from existing writing system resources. GlotScript-T is a writing system identification tool that covers all 161 Unicode 15.0 scripts. For an input text, it returns its script distribution where scripts are identified by ISO 15924 codes. We also present two use cases for GlotScript. First, we demonstrate that GlotScript supports cleaning multilingual corpora such as mC4 and OSCAR. Second, we analyze the tokenization of a number of language models such as GPT-4 using GlotScript and provide insights on the coverage of low resource scripts and languages by each language model. We hope that GlotScript will become a useful resource for work on low resource languages in the NLP community. GlotScript-R and GlotScript-T are available at [https://github.com/cisnlp/GlotScript](https://github.com/cisnlp/GlotScript).
**Keywords**: multilingual, low resource, natural language processing
## 1 Introduction
We are interested in automatically identifying the writing system or script a given text is written in. We will refer to this automatic identification of scripts as _script identification_. When doing research on and developing technology for low resource languages, script identification is useful. For example, when compiling a corpus for a low resource language, script identification can serve as part of quality control: texts written in scripts not used for the language can be excluded. Similarly, when training the tokenizer of a language model for low resource languages, an analysis of the learned token vocabulary allows us to see how well a script is represented, an indication of how well languages written in that script are represented.
In such low resource scenarios, language identification is an alternative to script identification: language identification can also be used for quality control and for the analysis of language model vocabularies. However, language identification for low resource languages is prone to high error rates Kreutzer et al. (2022); Caswell et al. (2020). Many low resource languages are poorly identified by existing tools, due to data scarcity and high variability in orthography, genres and domains. By contrast, script identification can be performed with much higher accuracy, and it is therefore a useful functionality in the absence of reliable language identification for many low resource languages.
In this paper, we present GlotScript, a resource and tool for low resource identification of writing systems, i.e., low resource script identification.
Our contributions are as follows. (i) We compile and organize GlotScript-R, a comprehensive resource for script identification, offering attested writing systems for each language variety. We make this resource available to the community. (ii) We publish GlotScript-T, a tool for script identification with coverage of all 161 scripts in Unicode 15.0. It provides the script distribution for an input text. Scripts are identified by ISO 15924 codes. To the best of our knowledge, no such tool is currently available. (iii) We demonstrate the benefits of GlotScript-T and GlotScript-R for corpus cleaning: we show that the quality of existing low resource corpora can be improved using script identification. (iv) We analyze the tokenization of large language models (LLMs) - including GPT-4, Falcon and Llama2 - using GlotScript-T. The analysis gives valuable insights into LLM coverage (or lack of coverage) of low resource languages.
Background and related work
### Script identification
The Stops library (Andrews et al., 2022), part of the NLLB project (NLLB Team et al., 2022), is capable of detecting the script of a given text in 38 scripts based on ISO 15924. It uses the Unicode blocks defined for each script.
Acs (2019) gathered Unicode data block ranges and mapped them to 18 macro Unicodes. For instance, they categorized ranges like "Basic Latin", "C1 Controls and Latin-1 Supplement", "Latin Extended Additional", "Latin Extended-A", and "Latin Extended-B" into the Latin script. These ranges were then employed with regular expressions to identify the script of an input text.
These methods do not cover all of the 161 scripts that Unicode 15.0 defines. Additionally, since they use entire blocks,1 the result may not be entirely accurate. For example, the range from U+0000 to U+007F is part of the "Basic Latin" block. However, within this range, there are some common characters that do not belong to a specific script and can be used universally, such as the left square bracket (U+005B). Compared to blocks, using a more granular approach, in particular a per-character approach,2 can be beneficial.
Footnote 1: [https://unicode.org/Public/15.0.0/ucd/Blocks.txt](https://unicode.org/Public/15.0.0/ucd/Blocks.txt)
Footnote 2: [https://unicode.org/Public/15.0.0/ucd/Scripts.txt](https://unicode.org/Public/15.0.0/ucd/Scripts.txt)
Python has the built-in library unicodedata. It allows working with the Unicode Database. The command unicodedata.name(char) can be used to obtain the name of a character. This command only functions for a single character. However, the character's name does not always include the name of its script. Even if the name of the character contains information about its script, there is no direct and consistent correspondence of that information to the codes of the ISO 15924 standard.
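The gap is easy to demonstrate with a few characters; note in particular the last example below, whose name carries no script information at all (the characters chosen here are arbitrary illustrations):

```python
import unicodedata

# Character names sometimes reveal the script, sometimes not, and they
# never follow the ISO 15924 standard directly.
for ch in "Aக۵[":
    print(f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"))
# U+0041 LATIN CAPITAL LETTER A
# U+0B95 TAMIL LETTER KA
# U+06F5 EXTENDED ARABIC-INDIC DIGIT FIVE
# U+005B LEFT SQUARE BRACKET   <- a name with no script information at all
```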
### Language resources
Many existing resources that provide information about the world's languages such as Ethnologue (Eberhard et al., 2023), Glottolog (Hammarstrom et al., 2023) and WALS (World Atlas of Language Structures) (Dryer and Haspelmath, 2013) do not include information about writing systems.
Our work is most closely related to the work of van Esch et al. (2022). Apart from the fact that van Esch et al. (2022) do not provide script identification software for use cases like corpus cleaning, our approach differs in methodology. van Esch et al. (2022) also aim to establish an extensive metadata repository including writing systems and speaker details. They cover more than 2,800 languages. But their methodology heavily focuses on analyzing online texts from sources such as Wikipedia, Jw.org, Crubadan (Scannell, 2007) and PanLex (Kamholz et al., 2014). They then extend their analysis to projects like Unilex,3 CorpusCrawler (Brawer, 2017), Bible.is and the LTI corpus for LangID (Brown, 2014). Relying on texts to determine the correct script for a language may not be a robust method, as texts collected online can be noisy or may lack accurate labels. We further discuss van Esch et al. (2022)'s work and its limitations in §3.3, including its tendency to include the Latin script for languages where romanization is not widely used.
Footnote 3: [https://github.com/unicode-org/unilex](https://github.com/unicode-org/unilex)
Footnote 4: [https://software.sil.org/fonts/](https://software.sil.org/fonts/)
### Applications
**Corpus Cleaning.** One of the uses of script identification is in corpus cleaning. ImaniGooghari et al. (2023, 2023) detect the script for each sentence and treat each language-script as a separate entity. They exclude all corpora for which the language-scripts are found to be incorrect or noisy; for example, when there is a mismatch between language and script, the corpus is removed.
Kreutzer et al. (2022) reported in their manual audit on multilingual datasets that languages written in scripts other than their correct one, or languages with non-linguistic material, are good indicators of a corpus being of low quality.
**Analysis of Pre-trained Models.**Acs (2019) studies the mBERT (Devlin et al., 2019) tokenizer vocabulary. They compile unicode ranges into 18 categories and use these ranges with regex to detect the script of vocabulary tokens. Similar to Acs (2019), van Esch et al. (2022) analyzes the vocabulary coverage of three models: mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020) and mT5 (Xue et al., 2021).
**Fonts and Keyboards.** One application of studying writing systems is the development of Unicode fonts, such as SIL Fonts,4 or keyboards for devices such as Keyman.5
## 3 GlotScript-R
We now describe GlotScript-R, a resource providing writing system metadata for more than 7,000 language varieties. We individuate languages based on ISO 639.6
Footnote 6: [https://iso639-3.sil.org/](https://iso639-3.sil.org/)
### Source selection
We conducted an exhaustive search to identify potential sources of information about writing systems that we could use for our goals in creating GlotScript. We focused on sources known for collaborative contributions or recognized for their reliability across a wide range of languages. We can summarize the result of this search as follows.
(i) **LREC_2800 metadata**. van Esch et al. (2022) compiled a database containing information on the writing systems of more than 2,800 languages under the CC BY-SA 4.0 license, permitting modification and redistribution.7
Footnote 7: [https://github.com/google-research/url-nlp](https://github.com/google-research/url-nlp)
(ii) **Wikipedia metadata**. Wikipedia hosts pages for each language's ISO 639 code and some of these pages include details about the language's writing system. It also contains information about writing systems that have not yet been incorporated into Unicode. However, not all pages contain metadata for writing systems. This dataset is also available under the CC BY-SA 4.0 license, permitting modification and redistribution.8
Footnote 8: [https://en.wikipedia.org/wiki/ISO_639](https://en.wikipedia.org/wiki/ISO_639):{ISO639 code}
(iii) **ScriptSource metadata.** Developed by SIL, ScriptSource is a dynamic collaborative website serving as a reference for writing systems. It gives information on which languages use which script. This dataset is available under the CC BY-SA 3.0 license, permitting modification and redistribution.9
Footnote 9: [https://scriptsource.org/scr/](https://scriptsource.org/scr/){ISO15924}
Footnote 10: [https://github.com/simris/langtags](https://github.com/simris/langtags)
(iv) **LangTag metadata.** The WSTech team of SIL offers writing system metadata for language varieties, presented in JSON format. This dataset is available under the MIT License, permitting modification and redistribution.10
Footnote 10: [https://github.com/simris/langtags](https://github.com/simris/langtags)
(v) **Other sources.** We came across additional sources during our search, but they have limited coverage of languages. For example, the IANA language subtag registry provides script metadata for 134 languages.11 Other sources are consulted by sources (i) - (iv), for example, Omniglot12 in LREC_2800 and Unicode CLDR13 in LangTag.
Footnote 11: [https://iana.org/assignments/language-subtag-registry](https://iana.org/assignments/language-subtag-registry)
Footnote 12: [https://www.omniglot.com/writing/langalph.htm](https://www.omniglot.com/writing/langalph.htm)
Note that none of these sources cover all languages, and there is a potential for some languages to have incorrect scripts listed (see SS3.3). To address this, we incorporate all four sources (i) - (iv) in GlotScript-R; this allows us to give preference to script identification decisions that several sources agree on. We gathered the Wikipedia and ScriptSource data - which are not accessible in tabular format - by crawling.
### Preprocessing
There is a total of 8030 unique three-letter ISO 639 codes that at least one of the sources covers. Some of these codes are no longer used and were replaced with new ones. For instance, tsf (Southwestern Tamang) was merged into taj (Eastern Tamang). Some languages have two codes representing them and both are still in use. One code is used for terminology applications, and the other is used for bibliographic applications, e.g., msa and may for Malay. The most used version of the ISO 639 code set in the NLP community is ISO 639-3; however, not all three-letter codes are part of this subset. For instance, ber (Berber languages) is part of the code sets of 639-2 and 639-5, but not part of 639-3. To handle this, we include all three-letter ISO codes, not just those from ISO 639-3. For each language, we also specify the other equivalent codes.
The number of three-letter ISO 639 codes covered is 2836 for LREC_2800, 1726 for Wikipedia, 7875 for ScriptSource and 7901 for LangTag.
### Agreement
We assess agreement between two metadata sources using Jaccard similarity:
\[J(A,B)=\frac{|A\cap B|}{|A\cup B|}\]
where \(A\) and \(B\) are sets of scripts given for an ISO code by the two sources in question. Since the Wikipedia data is of a smaller size and does not represent writing systems in a uniform format (a format such as ISO 15924), we use Wikipedia as a secondary source of information when merging datasets, especially in cases where there is no agreement.
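For concreteness, a direct implementation of this agreement measure is shown below; the two script sets are hypothetical entries, not quoted from any of the sources.

```python
def jaccard(a: set, b: set) -> float:
    """J(A, B) = |A ∩ B| / |A ∪ B| for two sets of ISO 15924 codes."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical entries for one ISO 639 code from two metadata sources:
source_1 = {"Arab"}                 # e.g., only the Arabic script listed
source_2 = {"Arab", "Latn"}         # e.g., a romanized variant also listed
print(jaccard(source_1, source_2))  # 0.5 -> partial agreement (0 < J < 1)
```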
We present the results for each pair of LangTag, ScriptSource, and LREC_2800 in Table 1. LangTag and ScriptSource completely agree (\(J=1.0\)) for 96% of ISO codes. This is not surprising given that both sources are from SIL. However, some disagreements still exist. Additionally, it appears that LangTag aligns more closely with LREC_2800, as it shares a greater number of ISO 639 codes, fewer partial agreements (\(0<J<1\)) and no disagreements (\(J=0\)).
To understand the discrepancies between different metadata sources, we conducted a manual analysis, and observed the following trends.
(i) **Rare or historic scripts.** ScriptSource and LangTag metadata tend to include rare and historic scripts. For instance, in the case of Turkish (tur), alongside the primary Latin script, these sources also list Arabic, Greek and Cyrillic. In contrast, LREC_2800 exclusively lists Latin, the current official script.
(ii) **Romanized versions.** LREC_2800 often introduces a Latin version for a language, even if it is rarely used. For instance, there is a Latin entry for fas (Farsi) in LREC_2800, despite it not being the official script and not widely used, even in social networks. Scriptsource and langtag only give Arabic.
(iv) **Partial information.** There are instances where each source only partially supports certain scripts. For example, for aat (Arvanitika Albanian), ScriptSource and LangTag give the Greek script while LREC_2800 gives the Latin script. However, resolving this conflict is complicated by the language's endangered status and the disagreement among aat speakers about using the Greek vs. the Latin script.
(v) **Errors.** There are instances where it is clear that a language is highly unlikely to be written in a particular script. One such case is var (Huarijio) in LREC_2800, which is indicated to be written in Devanagari. In our judgement, this is an error since var is a Uto-Aztecan language of northwestern Mexico and Devanagari is only used for South Asian languages.
### Compilation
We now explain the process of compiling GlotScript-R by combining different metadata sources. As will be apparent from our discussion, creating a reliable writing system resource is not straightforward.
Two of our desiderata are usefulness for NLP and accuracy.
As far as usefulness for NLP is concerned, if we accepted all scripts that any of the sources lists for a language, then we would include errors and scripts that in practical NLP contexts are very unlikely to be relevant. The most important instance of this is that some sources give Latin as a valid script in many cases where its use is extremely rare. Including Latin for such languages would be harmful for use cases such as corpus quality control. For example, a non-Farsi subcorpus written in Latin cannot be excluded using script identification if we allow Latin as a standard script for Farsi.
On the other hand, the desideratum of accuracy demands that we do not simply adopt a criterion of perfect agreement of the four sources. Such a heuristic would exclude important language metadata that might be useful to the NLP community.
To allow users of GlotScript-T to trade off usefulness against accuracy, we define two metadata categories: MAIN and AUXILIARY. The MAIN metadata give the primary scripts based on consensus among the metadata sources. The AUXILIARY metadata give secondary scripts, those that are only specified as admissible by a single source.
Given the 96% complete agreement between LangTag and ScriptSource, we prioritize resolving disagreements between these two sources using information from Wikipedia and LREC_2800. We merge LangTag and ScriptSource as one consolidated group named SIL, which is the aggregation of LangTag and ScriptSource if they match or if the discrepancies can be resolved based on additional resources. Only those discrepancies that cannot be resolved this way are collected in SIL2-aux.
As a result of this consolidation, we now have three metadata sources: SIL, LREC_2800 and Wikipedia. Given a language \(l\) identified by an ISO 639 code, we categorize a script for \(l\) as MAIN if it is supported by at least two of the three sources (e.g., the MAIN metadata specify Kpel as one of the admissible scripts for kpe (Kpelle) since SIL and Wikipedia agree on it even though LREC_2800 does not) or if only one of the three sources provides
\begin{table}
\begin{tabular}{l|c|c|c|c}
**Pair** & \(|\mathcal{L}|\) & **CA** & **PA** & **NA** \\ \hline (LangTag, LREC\_2800) & 2814 & 2385 & 404 & 25 \\ (ScriptSource, LREC\_2800) & 2811 & 2372 & 414 & 25 \\ (LangTag, ScriptSource) & 7858 & 7567 & 287 & 4 \\ \end{tabular}
\end{table}
Table 1: Agreement counts for each pair of sources. CA: complete agreement (\(J=1.0\)), PA: partial agreement (\(0<J<1\)), NA: no agreement (\(J=0\)), \(|\mathcal{L}|\): number of common ISO 639 codes.
information about admissible scripts for \(l\).
(i) In cases of partial information, such as for aat (Arvanitika Albanian), where both LREC_2800 and Wikipedia agree on Latin, and both Wikipedia and SIL agree on Greek, we include both Latin and Greek in MAIN.
(ii) If only one metadata source agrees on a script and not the other, the script is placed in the auxiliary category specific to that source. Wikiaux, LREC2800-aux, and SIL-aux are used for Wikipedia, LREC_2800, and SIL, respectively. SIL2-aux is exclusively used for discrepancies between ScriptSource and LangTag.
## 4 GlotScript-T
We now describe GlotScript-T, an open-source Python tool that identifies the writing systems of input text. It supports 161 Unicode scripts, identified as ISO 15924 codes. GlotScript-T is the first tool to provide labels based on ISO 15924 with this level of coverage. Figure 1 gives an example of how to use GlotScript-T.
### Development
We first sorted Unicode ranges into different script categories, based on the Unicode Character Database.14 We then matched these ranges with ISO 15924 code names from Wikipedia.15
Footnote 14: [https://unicode.org/Public/15.0.0/ucd/Scripts.txt](https://unicode.org/Public/15.0.0/ucd/Scripts.txt)
Footnote 15: [https://en.wikipedia.org/wiki/ISO_15924](https://en.wikipedia.org/wiki/ISO_15924)
For an input text, GlotScript-T identifies the unicode range of each character, maps it to an ISO 15924 code and then calculates the percentage of each script. GlotScript-T returns the main script (the one that the most characters belong to) and detailed information on the distribution of scripts.
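A much-simplified sketch of this per-character procedure is shown below; the four ranges are illustrative toys, not the tool's actual tables, which are compiled from the full Unicode 15.0 Scripts.txt data.

```python
from collections import Counter

# Toy per-character table standing in for GlotScript-T's compiled data;
# these ranges are illustrative and deliberately incomplete.
RANGES = [
    (0x0041, 0x024F, "Latn"),
    (0x0370, 0x03FF, "Grek"),
    (0x0400, 0x04FF, "Cyrl"),
    (0x0600, 0x06FF, "Arab"),
]

def char_script(ch: str) -> str:
    cp = ord(ch)
    for lo, hi, code in RANGES:
        if lo <= cp <= hi:
            return code
    return "Zyyy"       # undetermined: punctuation, digits, spaces, ...

def script_distribution(text: str):
    counts = Counter(char_script(c) for c in text if not c.isspace())
    total = sum(counts.values())
    dist = {k: round(v / total, 3) for k, v in counts.items()}
    main = max(dist, key=dist.get)
    return main, dist[main], dist

print(script_distribution("Привет hello"))
# ('Cyrl', 0.545, {'Cyrl': 0.545, 'Latn': 0.455})
```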
#### 4.1.1 Special codes.
1. **Zzzz.** This code is used for unknown Unicode ranges. We also classify the replacement character (U+FFFD) as Zzzz.
2. **Zinh.** This code is assigned to a character that inherits its script from the previous character. For example, the zero width joiner character (U+200D) is used for joining characters. It does not belong to any script, but rather inherits its script code from the immediately preceding character.
3. **Zyyy.** This is the ISO 15924 code for undetermined script. This script code covers characters like punctuation, symbols, mathematical notation and musical notation that are used across many different scripts.
#### 4.1.2 Efficiency
We randomly generate a test set of 1 million sentences, each with a length of 100, using characters from different Unicode ranges. The walltime of processing this test set with GlotScript-T on a single core of an Intel Xeon E7-8857 3GHz CPU is 80.790 seconds, i.e., about \(8\times 10^{-5}\) seconds per sentence.
## 5 Experimental setup
We present experiments for two tasks to demonstrate the usefulness of GlotScript-T and GlotScript-R.
(i) **Corpus quality assessment.** We investigate multilingual datasets by determining if a text assigned by the corpus metadata to a particular language is written in a script that is admissible for the language. If this is not the case for a particular text, it indicates mislabeling; it most likely belongs to another language or is noise. This part of our experiments highlights the benefits of a script identification tool for creating high-quality low resource corpora.
(ii) **Multilingual models.** We quantify the presence of each script within the vocabulary of popular multilingual language models, focusing on large multilingual language models. We evaluate the level of representation of each script, which sheds light on the quality of representation of languages written in that script.

Figure 1: How to use GlotScript-T: three examples. GlotScript-T returns a tuple consisting of the main script, the percentage of characters in the main script and detailed information on the distribution of scripts.
### Corpus quality assessment
GlotScript-R lists for each language \(l\), identified by an ISO 639 code, the scripts that are commonly used for \(l\). Recall that, as shown in Figure 1, the function \(\mathit{sp}(s)\) provided by GlotScript-T predicts the percentage of each script in an input and identifies the main script.
Let \(s\) be an input sentence from a corpus that is assigned the ISO 639 code \(l\) by the corpus metadata. If the predicted main script for \(s\) - i.e., \(\mathit{sp}(s)\) - is one of the admissible scripts (according to GlotScript-R) for \(l\), we call this a match; otherwise, we call it a mismatch. In case we find a mismatch for \(s\), we evaluate this as an error. We refer to this heuristic as the _script mismatch rule_. We determine for each sentence of the corpus whether it is a match or a mismatch and then compute the proportion of errors.
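A compact sketch of the rule follows; the admissible-script entries, the two-script main-script predictor and the two example sentences are all illustrative stand-ins, not data quoted from GlotScript-R or GlotScript-T.

```python
# Illustrative admissible-script entries (not quoted from GlotScript-R)
ADMISSIBLE = {"ell": {"Grek"}, "cym": {"Latn"}}

def main_script(text: str) -> str:
    """Toy two-script stand-in for GlotScript-T's main-script prediction."""
    greek = sum(0x0370 <= ord(c) <= 0x03FF for c in text)
    latin = sum(0x0041 <= ord(c) <= 0x024F for c in text)
    return "Grek" if greek > latin else "Latn"

corpus = [("cym", "Mae hen wlad fy nhadau"),   # Welsh, correctly in Latin
          ("cym", "Καλημέρα σε όλους")]        # Greek text mislabeled cym

# Script mismatch rule: error when the predicted main script is not
# admissible for the language label attached by the corpus metadata.
errors = sum(main_script(s) not in ADMISSIBLE[lang] for lang, s in corpus)
print(f"error rate: {errors / len(corpus):.2f}")   # 0.50
```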
#### 5.1.1 Evaluation corpora
We select two popular corpora for their multilinguality and the inclusion of lower resource languages.
(i) Multilingual C4 (mC4) (Xue et al., 2021) is a document-level dataset used for training the mT5 language model. It uses CLD3 (Botha et al., 2017; Salcianu et al., 2018) language identification (LID). CLD3 supports 107 languages. Accordingly, C4 provides monolingual text in 107 languages.
(ii) OSCAR-2201 (Ortiz Suarez et al., 2019; Abadji et al., 2021) is a set of monolingual corpora for 151 languages. It is deduplicated and uses the fastText (Joulin et al., 2017) FT176 LID16 on a line-by-line level.
Footnote 16: [https://fasttext.cc/docs/en/language-identification.html](https://fasttext.cc/docs/en/language-identification.html)
Both corpora are sourced from CommonCrawl. Kreutzer et al. (2022) performed a manual audit on a maximum of 100 sentences per language for these two corpora.
#### 5.1.2 Setup
We load both datasets using the Hugging Face API. Each row of the dataset is split by \(\backslash\)n (which we consider to be the sentence delimiter) and deduplicated.
For both corpora, we randomly select 1000 sentences per language. We exclude languages for which there are fewer than 1000 sentences available, resulting in a coverage of 118 languages from OSCAR-2201. For example, for dsb, diq, and eml, there is only one sentence each available in OSCAR-2201, so we exclude these languages. We map the language identifiers provided by the corpus metadata to three-letter ISO 639 codes.
We apply GlotScript-T to the 1000-sentence subsets per language and obtain the main script for each sentence. We apply the script mismatch rule to identify the sentence as correct or incorrect. However, if the corpus metadata specify the script in addition to the language (e.g., bg-Latin), then we only consider the script given as a candidate script for that sentence by the metadata.
### Multilingual models
We analyze the representation of common writing systems in state of the art pretrained models. Most of these models are claimed to be multilingual. We approach this analysis employing the following two methods.
(i) Following (van Esch et al., 2022; Acs, 2019), we examine the writing systems present in the vocabulary of each model's tokenizer.
(ii) We tokenize parallel corpora of UDHR17 using each model's tokenizer. For each writing system, we then measure the number of tokens generated and the percentage of unknown tokens (UNK) generated.
Footnote 17: [http://unicode.org/udhr/d/](http://unicode.org/udhr/d/)
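A sketch of this measurement using the Hugging Face transformers API is given below; the model name and the sample sentence are placeholders standing in for the actual loop over every model and every UDHR translation.

```python
from transformers import AutoTokenizer

def token_stats(tokenizer, text: str):
    """Token count and UNK percentage for one UDHR translation."""
    tokens = tokenizer.tokenize(text)
    unk = tokenizer.unk_token
    n_unk = sum(t == unk for t in tokens) if unk is not None else 0
    return len(tokens), 100 * n_unk / max(len(tokens), 1)

# A publicly available multilingual tokenizer as an example.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
text = "All human beings are born free and equal in dignity and rights."
print(token_stats(tok, text))    # (token count, UNK percentage)
```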
#### 5.2.1 Model selection.
We select ten state-of-the-art models for their multilingual capabilities or for their frequent use: GPT-4 (OpenAI, 2023), Falcon (Penedo et al., 2023), Llama 2 (Touvron et al., 2023), BLOOM (Scao et al., 2022), Glot500 (ImaniGooghari et al., 2023, 2023), XLM-R (Conneau et al., 2020), mBERT, BERT (Devlin et al., 2019), mT5 (Xue et al., 2021) and NLLB (NLLB Team et al., 2022).
#### 5.2.2 Udhr
UDHR consists of more than 500 translations of the Universal Declaration of Human Rights, each containing 30 short articles. We remove all translations that are incomplete (fewer than 89 sentences) or noisy (e.g., lines consisting of the single English word "missing"). We ensure that all 30 articles are available in a translation and that it has a valid ISO 639-3 code (not undetermined). In cases where multiple versions are available for a pair of ISO
639-3 and ISO 15924, we make a random selection. This procedure selects a subset of UDHR that covers 396 different language-scripts.
## 6 Results and analysis
### Corpus quality assessment
Table 2 shows the top five and bottom five languages (in terms of inferred accuracy of their metadata) for each corpus, along with correct and incorrect scripts. The scripts highlighted in green show the correct scripts based on GlotScript-R MAIN. Yellow represents the scripts that were returned for AUXILIARY. Notice that quite a few languages have Latin as an AUXILIARY script, based on the LREC_2800 metadata. The ACC column displays the accuracy of the correct script based on MAIN.
For the 118 selected languages in OSCAR, we obtain an average script accuracy of 0.947. For the 107 languages in mC4, the average score is 0.917. These averages are high, indicating favorable quality overall. However, when examining the bottom five languages with the lowest correct-script scores, the average drops to 0.823 for OSCAR and 0.566 for mC4.
Based on our audit of common errors in the OSCAR corpus, we can confirm that incorrect Latin sentences are either written in English or are related to website content, such as website functionalities (comment and search sections), URLs and dates. This confirms that including more scripts, especially all the romanized versions, in the writing metadata without them being in popular use would hamper our ability to identify incorrect sentences in low resource corpora. This is why we decided that when merging different datasets, if a script is not approved by the majority of sources, it will be kept in AUXILIARY (see §3.4). We also noticed that most sentences with script mismatches are short. We therefore run another set of experiments, this time using a length-based filter that keeps either 70% (ACC70 column) or 50% (ACC50 column) of the longest sentences.
For the bottom languages of OSCAR in Table 2, it is evident that length filtering proves to be effective. Notably, for amh (Amharic), the accuracy improves by 0.118 when retaining only the 50% longest sentences. However, this is not the case for the bottom languages of mC4, particularly for cym (Welsh) and snd (Sindhi), where the accuracy worsens. Additionally, the correct scripts for these two languages are not the most frequent in their respective corpora. This suggests that the mistakes are not merely short incorrect sentences, but rather lengthy paragraphs in the wrong language. In the case of Welsh, upon closer inspection, it becomes apparent that the incorrectly identified Greek-script sentences are actually written in ell (Modern Greek). We also observed suspicious patterns in the Latin portion of this data, but it contained many correctly written sentences in cym (Welsh). For Sindhi, the data contains numerous extensive paragraphs in English, and we also suspect a mix of ara (Arabic) and fas (Farsi) in the Arabic-script part.
The infrequent instances of incorrect writing systems in OSCAR may indicate the effectiveness of line-level LID filtering. These results suggest the need for further research on LID. Additionally, we recommend that in newly published LID and corpora, along with the language code, a script code should be assigned to each sentence as part of the metadata. This practice significantly facilitates error prevention.
### Multilingual models
#### 6.2.1 Tokenizer vocabulary
We use GlotScript-T to analyze the token vocabulary of each language model and determine each token's script. Figure 2 gives the percentage distribution of each script for each tokenizer's vocabulary.
We find the following.
1. The Cyrillic representation in the BLOOM tokenizer is relatively scarce compared to other models.
2. The BERT tokenizer supports not only Latin scripts but also recognizes Hani, Arabic, Cyrillic and some tokens in an additional 12 scripts.
3. Glot500 encompasses the highest number of scripts, totaling 88. Following that, mT5 supports 66 scripts. However, a significant portion of these scripts in both models has limited presence.
4. Llama2's second most prominent script is Cyrillic.
5. Falcon's second most prominent script is Hani.
6. The GPT-4 tokenizer vocabulary includes representations for 18 scripts, albeit not very comprehensively compared to its coverage of Latin.
7. In all tokenizer models combined, a total of 92 scripts has some presence.
#### 6.2.2 UDHR tokenization
We employ the specific tokenizer associated with each model for the selected UDHR translations. Subsequently, we generate a plot illustrating the token count required by each model to tokenize the UDHR translation. Since not all model tokenizers operate at the byte level, this may result in the generation of unknown (UNK) tokens. We only consider tokenizer-translation pairs where fewer than 5% unknown tokens are produced. Figure 3 displays both the token count used by each tokenizer (left) and the percentage of unknown tokens (right). Rather than coloring the plot data based on language labels, we choose to use script categories for color representation.
GPT-4, in addition to being trained on English, was also trained on some other languages. For instance, it is capable of translating between English and sin (Sinhala). In tasks such as text generation, the number of generated tokens is particularly important. For example, for the English UDHR translation, the GPT-4 tokenizer produces 1983 tokens. However, for the Sinhala UDHR translation, it generates 20,071 tokens, nearly 10 times more. As the pricing of OpenAI APIs is also based on the number of tokens, this demonstrates that generation of Sinhala is very expensive using GPT-4 in comparison with English.
## 7 Conclusion
We publish GlotScript-R, an extensive resource covering writing systems for over 7,000 languages, including thousands of typically overlooked tail (i.e., long-tail) languages. We open-source GlotScript-T, a script identification tool that supports all 161 scripts in Unicode 15.0. It provides the script distribution within a given text, using ISO 15924 labels. This work is the first to create a highly efficient tool for script identification and provide labels based on ISO 15924 with this level of coverage.

Figure 2: The percentage of each script in the vocabulary of model tokenizers. Scripts with a presence of more than 1% in each tokenizer are text-labeled in the figure.
We apply GlotScript-R and GlotScript-T to the task of corpus quality assessment. Our findings indicate that these two components work together effectively to improve the quality of existing low resource corpora. Furthermore, we investigate the tokenizers of large language models like GPT-4. This analysis enables us to assess how well a script is represented, serving as an indicator of the representation quality of languages written in that script.
In the future, we aim to expand the writing system resources and offer a better categorization of writing systems for GlotScript-R, such as "live", "rare", "historic", "romanization present", "romanization in use". We also want to include more metadata.
|
2309.04890 | Flattened photon beams, an obsolete feature in modern linear
accelerators | Background: With the advent of Intensity Modulated Radiotherapy (IMRT) and
recently, Volumetric Modulated Arc Therapy (VMAT), treatment planning using
Flattening Filter Free (FFF) beams can meet all of the energy requirements in
radiation therapy clinics. Manufacturers of linear accelerators no longer need
to install a flattening filter (FF) in gantry head. This study aims to provide
evidence of the superiority of FFF to FF through both dosimetric measurements
and clinical treatment plans. Materials and Methods: A 50x50x50cm3 water
phantom was created in the RayStation treatment planning system (TPS) for
dosimetry comparisons. Flat beam profiles were generated using FFF beam through
an optimization process for 10x10 to 30x30cm2 field sizes. Next, a comparison
of treatment plans was made using 21 Head and Neck and 14 Lung/Mediastinum
treatment sites using 6MV and 6MV-FFF beams. Results: Using FFF beams, profiles
with flatness and symmetry identical to or better than those of the flattened
beams were produced. At the very edge of the optimized plans for FFF beams,
horns had the highest gamma index deviation <1.5% of the normalized dose. For
clinical plans evaluated, most of the mean doses to organs-at-risk (OAR) volumes
receiving 5% to 30% of the prescription dose were reduced with FFF beams.
Conclusion: These results indicate the feasibility of delivering flat beams
with FFF quality and producing treatment plans with equal or higher qualities
in PTV coverage while achieving better sparing of OAR which will allow
escalation of target dose if desired. Plus, removing FF will simplify the
gantry head and reduces quality assurance and machine maintenance efforts. | E. Ishmael Parsai, Elahheh Salari, Diana Shvydka, Jui Wan | 2023-09-09T22:57:21Z | http://arxiv.org/abs/2309.04890v1 | # Flattened photon beams, an obsolete feature in modern linear accelerators
###### Abstract
_Background_: With the advent of Intensity Modulated Radiotherapy (IMRT) and, more recently, Volumetric Modulated Arc Therapy (VMAT), treatment planning using Flattening Filter Free (FFF) beams can meet all of the energy requirements in radiation therapy clinics. Manufacturers of linear accelerators no longer need to install a flattening filter (FF) in the gantry head. This study aims to provide evidence of the superiority of FFF over FF through both dosimetric measurements and clinical treatment plans. _Materials and Methods_: A 50x50x50 cm\({}^{3}\) water phantom was created in the RayStation treatment planning system (TPS) for dosimetry comparisons. Flat beam profiles were generated using FFF beams through an optimization process for 10x10 to 30x30 cm\({}^{2}\) field sizes. Next, a comparison of treatment plans was made using 21 Head and Neck and 14 Lung/Mediastinum treatment sites using 6MV and 6MV-FFF beams. _Results_: Using FFF beams, profiles with flatness and symmetry identical to or better than those of the flattened beams were produced. At the very edge of the optimized plans for FFF beams, horns had the highest gamma index deviation, <1.5% of the normalized dose. For the clinical plans evaluated, most of the mean doses to organs-at-risk (OAR) volumes receiving 5% to 30% of the prescription dose were reduced with FFF beams. _Conclusion_: These results indicate the feasibility of delivering flat beams of FF quality and producing treatment plans of equal or higher quality in PTV coverage while achieving better sparing of OAR, which will allow escalation of the target dose if desired. In addition, removing the FF will simplify the gantry head and reduce quality assurance and machine maintenance efforts.
_Keywords:_ flattening filter free, linac, VMAT, sliding window.
## Introduction
A flattening filter (FF) is designed to produce a uniform dose distribution at a certain depth in a homogeneous phantom, usually water. However, having a flat beam is not desirable for complex treatment plans. Therefore, beam-modifying devices such as compensators, wedges, and dynamic multileaf collimators (MLC) are used to shape beams. Over the past two decades, modern linear accelerators have been equipped with a flattening filter-free (FFF) feature, and a wealth of literature has demonstrated the advantages of FFF beams. Aside from the dosimetric advantages that are the subject of this manuscript, these include, but are not limited to, the ability to produce treatment plans with sharper dose fall-off resulting in lower dose to normal structures in the vicinity of a target volume, and decreased radiation from head scatter and outside the treatment field, since the FF is identified as the most significant source of scatter radiation in the gantry head (1-6). Moreover, removing the FF from the beam path results in a higher dose rate (1400 MU/min and 2400 MU/min for 6 MV FFF and 10 MV FFF beams, respectively), leading to a shorter delivery time [(7)]. This decreased beam-on time is especially important for patients receiving Stereotactic Body Radiotherapy (SBRT) with gating, resulting in acceptable acute toxicity profiles and promising local control [(7), (8)]. These advantages have been employed for numerous sites, including lung, liver, and brain [(10, 11, 12, 13)] treatments. As a result, FFF beams are widely used in SBRT and also in stereotactic radiosurgery (SRS) techniques, where a smaller number of fractions with a higher dose per fraction is prescribed. In one study [(2)], the use of 6 MV-FFF beams was compared to 6 MV in plans produced for SRS treatments, with improved conformity and better sparing of nearby critical structures, while reducing the beam-on time by roughly 43%. Furthermore, the removal of the FF allows a much simpler configuration of the linac gantry, which eliminates quality assurance of the filter and reduces the expense of building (from the manufacturers' point of view) and purchasing (from the clinical consumers' point of view) the machine.
A feature of the non-flat beam is that it presents its highest intensity at the beam center, in contrast to the FF beam, where a higher intensity is typically observed near the edges of the field, known as horns. Using the MLCs through the sliding window technique in a treatment planning software package, one has the ability to shape the beam fluence distribution across the field and deliver a desired dose distribution [5]. The majority of modern linear accelerators manufactured at the time of this writing, however, still provide flattened beams in addition to FFF photon beams. This study aims to provide evidence that, through inverse planning with VMAT delivery, a flattened beam is no longer needed, and the FF should be completely removed from the linac's head, thus reducing its complexity and, to some degree, the cost of manufacturing.
## Materials and Methods
### Using non-flat photon beams to deliver a flat beam
For this study, Edge and TrueBeam linacs (Varian Medical Systems, Palo Alto, CA) with 6 MV-FFF, 6 MV, 10 MV-FFF, and 10 MV beams were utilized. Energies used were 10 MV flattened and 10 MV FFF from the TrueBeam, and 6 MV flattened and 6 MV FFF from the Edge linac. The TrueBeam linac is equipped with a conventional 120-leaf MLC (60 pairs), with the central 20 cm having 5 mm leaf width and the outer 20 cm having 10 mm leaf width, with a maximum leaf speed of 2.5 cm/s. The Edge linac, on the other hand, is equipped with 120 HD MLC leaves, with the central 8 cm having 2.5 mm leaf width and the outer 14 cm having 5 mm leaf width, providing a maximum IMRT field size of 32 cm \(\times\) 22 cm. Flat beam profiles were generated for the 6 MV-FFF energy using inverse planning with the sliding window technique and compared with profiles from the 6 MV beam. For this purpose, a 50\(\times\)50\(\times\)50 cm\({}^{3}\) water phantom was created in the RayStation (Ver. 8) (RaySearch Laboratories AB, Stockholm, Sweden) treatment planning system (TPS) [14, 15]. Then beams with open square field sizes of 10\(\times\)10, 20\(\times\)20, and 30\(\times\)30 cm\({}^{2}\) were defined by jaws, with MLCs tracking the jaws, at 100 cm SAD for both linacs at gantry angle 0°. The main optimization criterion for the inverse plans was that of uniform dose to a plane with a thickness of 0.1 cm and area equal to the corresponding field size at 10 cm depth from the surface of the water.
For normalization purposes, the center of each plane was prescribed to receive 1 Gy. The optimization parameter "uniform dose" was utilized to guide the TPS to achieve the set goals by the MLCs sliding movement within the fields. After successfully producing uniform dose distribution on the plane, the "line dose" tool in RayStation TPS was used to get crossline and inline profiles [16, 17]. To obtain the beam profile, a line can be drawn across any of the regions of interest by this tool. In this case, profiles for different field sizes across the central axis and vertical to the sagittal plane of the water phantom at 10 cm depth from the surface of the water were gathered for data analysis. Then flatness was calculated based on equation 1:
\[Flatness=\frac{(D_{\text{max}}-D_{\text{min}})}{(D_{\text{max}}+D_{\text{min}}) }\times 100 \tag{1}\]
Where \(D_{\text{max}}\) and \(D_{\text{min}}\) are the maximum and minimum doses along the profile within the central 80% of the field.
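A direct implementation of Equation 1 on a sampled profile might look as follows; the toy profile and field width here are illustrative, not measured data.

```python
import numpy as np

def flatness(dose: np.ndarray, x: np.ndarray, field_width: float) -> float:
    """Equation 1, evaluated over the central 80% of the field."""
    central = np.abs(x) <= 0.4 * field_width
    d_max, d_min = dose[central].max(), dose[central].min()
    return 100 * (d_max - d_min) / (d_max + d_min)

# Toy profile: ~1% ripple inside a 10 cm field, zero dose outside.
x = np.linspace(-7, 7, 561)                          # off-axis position (cm)
dose = (np.abs(x) < 5) * (1 + 0.01 * np.cos(2 * x))  # illustrative only
print(f"flatness = {flatness(dose, x, 10.0):.2f} %")  # ~1.00 %
```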
The gamma index was also calculated with 3%/3mm criteria, using in-house code written in Python3 (Python Software Foundation), to compare the profiles generated by non-flat beams with those generated using flat beams.
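The in-house code is not reproduced here; the sketch below shows a simplified brute-force global 1D gamma calculation of the same 3%/3mm flavor, applied to a toy reference/evaluated profile pair (all names and profiles are illustrative).

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=0.3):
    """Brute-force global 1D gamma index; dd is the dose criterion as a
    fraction of the normalization dose, dta the distance criterion in cm."""
    d_norm = d_ref.max()
    gam = np.empty_like(d_ref)
    for i in range(len(x_ref)):
        dist2 = ((x_eval - x_ref[i]) / dta) ** 2
        dose2 = ((d_eval - d_ref[i]) / (dd * d_norm)) ** 2
        gam[i] = np.sqrt((dist2 + dose2).min())
    return gam

x = np.linspace(-8, 8, 321)
ref = np.exp(-(x / 5) ** 8)             # toy flattened-like profile
ev = ref * (1 + 0.01 * np.sin(3 * x))   # evaluated profile, ~1% deviations
g = gamma_1d(x, ref, x, ev)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f} %")   # 100.0 %
```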
### Clinical treatment plans comparison (6 MV vs 6 MV FFF)
This comparison aims to verify the feasibility of creating identical or even higher-quality plans with FFF beams. For this purpose, 21 Head and Neck (H&N) patients and 14 Lung/Mediastinum patients who were previously treated with 6 MV photon beams were selected. New plans with 6 MV FFF photon beams were generated for comparison. All new completed plans with 6 MV FFF beams achieved a similar percentage coverage of at least one planning target volume (PTV) level. Ethical approval was obtained for this research from the Internal Review Board (IRB) of the University of Toledo (UT-300579) on April 2nd, 2020.
The simultaneous integrated boost technique was used for both Lung/Mediastinum and H&N treatment plans. Lung/Mediastinum plans have one to three PTVs with different dose levels (30 Gy to 60 Gy) delivered in 10 fractions. For the H&N cases with a total of three targets, prescription doses of 54 to 66 Gy in 30 fractions were used, or plans were only designed for one target with a prescription of 36 Gy or 40 Gy in 10 fractions. Depending on the size of the target, 2 or 4 arcs were used for both H&N and Lung/Mediastinum plans. Most objectives and constraints used for plan optimization remained unchanged, only a few extra objectives were defined to meet the demand for the equivalent coverage of PTVs. Average differences between plans with non-flat beams and with flat beams for maximum doses, mean doses, and volumes receiving 5%, 10%, 20%, and 30% of the prescription dose for organs-at-risk (OAR) were selected to evaluate the results.
The objective in choosing low-dose-level irradiation to OAR for investigation came from the knowledge that the greatest advantage of the non-flat beam over the conventional flat beam would be a fast dose fall-off beyond the target. Consequently, less dose contribution to normal tissues should be observed in the results. RadCalc™ (Ver. 6.4) (LifeLine Software, Inc, LAP Group) was used as an independent monitor unit verification calculation to confirm the accuracy of dose calculations in the TPS.
## Results
### Using non-flat photon beams to deliver a flat beam
Crossline and inline profiles for both FFF beams, and flat beams overlaid on top of each other are shown in figures 1 to 5. Each line profile was extracted from the RayStation TPS in Microsoft Excel (Ver. 2016) format datasheet, which will allow obtaining point dose values along the line. The gamma index line is shown in each graph of figures 1 to 5 and is multiplied by 20 for clarity.
Equation 1 was utilized to calculate the flatness of all profiles, with results presented in table 1. Due to jaw opening limits on the Edge machine, 30\(\times\)30 cm\({}^{2}\) fields were not generated for either the 10 MV FFF or the 10 MV beam.
The results from the initial part of this research already indicated that it is highly feasible to deliver a flat beam with a non-flat beam.
Also, the dose distributions of the 6 MV FFF and 6 MV beams, as illustrated in figures 6, 7 & 8, indicate a sharp dose fall-off of FFF beams beyond the target(s).
All doses calculated for both 6 MV FFF and 6 MV with RadCalc were within \(\pm\)2% of the doses calculated by the TPS. Moreover, for each treatment site, the average delivery time of 6 MV FFF was compared with the average delivery time of 6 MV, as shown in figure 9. The maximum dose rate was used to achieve the fastest delivery time for each plan in the TPS. As shown in figure 10, more monitor units were needed to generate a uniform dose distribution in the PTV region using non-flat beams; however, this did not lead to longer treatment times for the FFF energies.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**a)** & **Brainstem** & **Spinal Cord** & **Esophags** & **Larynx** & **Left Parotid** & **Right Parotid** & **Trachea** \\ \hline
**V5\% (cc)** & 0.00 & 0.22 & 0.00 & -0.13 & 0.00 & 0.00 & 0.00 \\ \hline
**V10\% (cc)** & -0.14 & 0.00 & 0.00 & 0.00 & -0.46 & 0.81 & 0.00 \\ \hline
**V20\% (cc)** & -0.29 & -0.24 & 0.00 & 0.00 & -0.23 & 0.00 & 0.00 \\ \hline
**V30\% (cc)** & -0.16 & -1.09 & -0.17 & -0.16 & -0.71 & 0.26 & -0.20 \\ \hline
**Mean Dose (Gy)** & 0.00 & -0.34 & -0.36 & -0.55 & 2.00 & 2.25 & -16.50 \\ \hline
**Max Dose (Gy)** & -35.24 & -39.71 & -13.33 & -21.70 & -3.71 & -3.44 & 7.75 \\ \hline
**b)** & _Spinal Cord_ & _Esophagus_ & _Heart_ & _Lungs_ & _Trachea \& Carina_ \\ \hline
**V5\% (cc)** & -0.94 & -0.53 & -6.65 & -33.67 & -0.77 & \\ \hline
**V10\% (cc)** & -0.18 & 0.42 & -8.00 & -33.53 & -1.45 & \\ \hline
**V20\% (cc)** & -0.68 & -0.50 & -7.58 & -32.52 & -1.55 & \\ \hline
**V30\% (cc)** & -1.84 & -0.86 & -9.27 & -30.72 & -1.43 & \\ \hline
**Mean Dose (Gy)** & -35.22 & -34.36 & -24.00 & -11.43 & -74.36 & \\ \hline
**Max Dose (Gy)** & -19.71 & -38.14 & -216.21 & -8.00 & 6.93 & \\ \hline \end{tabular}
\end{table}
Table 2: Differences in average maximum doses (Gy), mean doses (Gy), and volumes receiving 5%, 10%, 20%, and 30% of the prescription dose between 6 MV and 6 MV FFF for each organ at risk. a) Head & Neck cases; b) Lung/Mediastinum cases. The abbreviation FFF refers to flattening filter free.
Figure 8: Example of the sagittal-view dose distribution in a Lung/Mediastinum cancer treatment plan. A) 6 MV FFF and B) 6 MV; dose level 40 Gy.
Figure 10: Total number of MUs for Lung/Mediastinum and H&N treatment plans while the maximum available dose rate on the linac was utilized. The whisker chart shows the distribution of data into quartiles; the line and X within the box show the median and mean values, respectively. Dots outside the box are outliers. The Y-axis is the total MU.
Figure 7: Example of the coronal-view dose distribution in Head and Neck cancer treatment plans. A) 6 MV FFF and B) 6 MV; dose level 36 Gy.
Figure 9: Average delivery time [s] for Lung/Mediastinum and H&N treatment plans while the maximum available dose rate on the linac was utilized. The whisker chart shows the distribution of data into quartiles; the line and X within the box show the median and mean values, respectively. Dots outside the box are outliers. The Y-axis is the average delivery time (seconds).
## Discussion
Using a commissioned RayStation TPS on Varian linacs, the FFF beams were optimized through a sliding window inverse planning process to confirm the ability to deliver a flat beam with various field sizes. Large field sizes (10x10, 20x20, and 30x30 cm\({}^{2}\)) were chosen for this study to prove the concept that a TPS can generate flat profiles with FFF beams, thus mimicking the effect of a flattened beam. As shown in table 1, for beam profiles computed through the TPS, the flatness of dose was superior for both FFF beams in both the inline and crossline directions in contrast to the flat beams. As evident from figures 1 to 5, minor deviations in the dose profiles were observed at the field edges, where instead of the gradual decrease of the flattened beam profiles the optimized FFF beam plans resulted in sharper edge drop-offs and slight "horn" features. The highest dose deviation in those regions never exceeded 5% of the normalized dose. Gamma passing rates, also shown in figures 1 to 5 (note the multiplication factor of 20 used for clarity), likewise point to excellent agreement between the profiles except for a few points coinciding with the "horn" locations. Small percentage differences in flatness between flat beams and FFF beams within a specific square plane demonstrate the feasibility of using a non-flat beam to generate a flat dose distribution with the sliding window technique. Our result is in good agreement with the findings of Potter _et al._, which demonstrated that producing a modulated flat beam using an FFF beam is practicable [4].
A set of H&N and Lung/Mediastinum plans was used to illustrate the capabilities of FFF beams in achieving both superior dose conformity to the target and faster dose fall-off outside the target volumes. For most of the H&N plans and all Lung/Mediastinum plans, the volumes of each OAR adjacent to the targets receiving low doses were reduced in the FFF-beam-based plans, as shown in table 2. Also, from figures 6, 7 & 8, it is obvious that the FFF beams achieved uniformity within the region of the target(s) as good as or better than conventional flat beams.
Furthermore, when FFF beams were utilized, mean doses for OARs decreased, maximum doses increased slightly in the high dose level target, and the maximum doses of OARs declined. Mean doses of both parotids slightly increased for non-flat beams since some parts of these organs were in the PTV region. Similar trends, with much more significant dose reduction in OARs, were found for Lung/Mediastinum treatment plans. The only observed exception was in the trachea, which was a part of the PTVs for one patient; some hot spots were included in those areas, resulting in a higher maximum dose. Several studies were conducted to compare FFF vs FF beams for different treatment sites [10, 11, 12, 13]. Our study had similar outcomes, in line with these other findings.
Figures 6 and 7 show the dose distribution for one example of each group of treated sites.
These results indicate that it is feasible to deliver a flat beam with FFF quality and produce treatment plans with escalated total doses while sparing OARs. Although non-flat beams might generate higher maximum doses (hot spots) in the whole plan, an increase of less than 3% of the maximum dose should not cause any additional biological complications. Trading a very small escalation of a maximum point dose for preventing OARs from receiving an extra low dose seems a good compromise. Some increase in delivered MUs for FFF plans is another trade-off in achieving higher quality complex plans, as shown in figure 10. Salari _et al._[19] have also recently shown that the decrease in off-axis ratio, a characteristic of FFF beams, results in an MU increase to generate the same uniform PTV coverage, which is in good agreement with Cashmore's result [20]. Similarly, we can conclude that the rapid dose fall-off of FFF beams generally requires more MUs to produce the same PTV coverage as flat beams.
As shown in figure 9, unlike with other TPS [4, 21], no significant difference was observed in the delivery time of flat vs non-flat beams. This is due to RayStation's optimization algorithm, which is capable of providing similar delivery times for both flat and non-flat beams by adjusting other variables such as dose rate and gantry speed to deliver a specific amount of MU in the VMAT technique. This also shows that the delivery time hinges entirely on the ability of the optimization algorithm in the VMAT technique, where gantry speed and dose rate are two additional variables compared to the IMRT technique.
## Conclusions
Using a commissioned RayStation TPS on Varian linacs, the FFF beams were optimized through a sliding window inverse planning process to confirm their capability to deliver flat beams with various field sizes from 10x10 to 30x30 cm\({}^{2}\). The study also demonstrated the superiority of FFF beams over flat beams by comparing the dosimetric characteristics of the beam sets, and also by comparing clinical treatment plans. With identical coverage of the PTVs, lower doses to OARs were achieved with FFF beams in the plans presented for H&N and mediastinum. As a result, the complete removal of the flattening filter from the gantry head of modern linear accelerators is possible and recommended, as it eliminates additional quality assurance for filtered beams while lowering the added complexity in electronics and expenses at the time of manufacturing.
## Acknowledgment
_None._
_Conflict of Interests:_ None to report.
_Ethical consideration:_ Ethical approval for the research was obtained from the University of Toledo (UT-300579).
_Funding:_ None to report.
_Author contribution:_ All authors conceived and designed the study, analysed the collected data, and contributed to writing the paper. _E.I. Parsai_: Conceived and designed the analysis, collected the data, contributed data, performed the analysis, and wrote the paper. _E. Salari_: Conceived and designed the analysis, collected the data, contributed data, performed the analysis, and wrote the paper. _D. Shuydka_: Conceived and designed the analysis, wrote the paper. _J. Wan_: Conceived and designed the analysis.
|
2309.05556 | Itinerant ferromagnetism in transition metal dichalcogenides moiré
superlattices | Moir\'e materials are artificial crystals formed at van der Waals
heterojunctions that have emerged as a highly tunable platform to realize much
of the rich quantum physics of electrons in atomic scale solids, also providing
opportunities to discover new quantum phases of matter. Here we use finite-size
exact diagonalization methods to explore the physics of single-band itinerant
electron ferromagnetism in semiconductor moir\'e materials. We predict where
ferromagnetism is likely to occur in triangular-lattice moir\'e systems, and
where it is likely to yield the highest Curie temperatures. | Pawel Potasz, Nicolás Morales-Durán, Nai Chao Hu, Allan H. MacDonald | 2023-09-11T15:44:12Z | http://arxiv.org/abs/2309.05556v3 | # Itinerant ferromagnetism in transition metal dichalcogenides moire superlattices
###### Abstract
Moire materials are artificial crystals formed at van der Waals heterojunctions that have emerged as a highly tunable platform that is able to realize much of the rich quantum physics of electrons in atomic scale solids, and in several cases even new quantum phases of matter. Here we use finite-size exact diagonalization methods to explore the physics of single-band itinerant electron ferromagnetism in semiconductor moire materials. We predict where ferromagnetism is likely to occur in triangular-lattice moire systems, and where it is likely to yield the highest Curie temperatures.
## I Introduction
Moire materials have already been established as hosts of Mott [1; 2; 3] and topological insulators [4], a rich variety of magnetic states [5; 6; 7; 8], and recently even fractional Chern insulators [9; 10]. Moire materials also provide an alternative platform for studies of itinerant electron ferromagnetism. Ferromagnets are many-electron ground states that break time-reversal but not translational symmetry, have finite macroscopic magnetization, and are more common in metals than in insulators. Ferromagnetic metals exhibit a rich variety of interesting hysteretic magneto-resistive effects that lie at the heart of spintronics [11] and are valuable for technology. Theoretical studies of metallic ferromagnetism in the context of simple one-band Hubbard models [12; 13; 14; 15; 16; 17; 18], although rarely physically realistic, have nevertheless helped provide an understanding of the necessary conditions to stabilize such ground states in crystalline materials. The moire material case, in which isolated bands are common, offers the opportunity to compare theories of single-band itinerant electron ferromagnetism directly with experiment.
In this article we use exact diagonalization methods (ED) to explore metallic ferromagnetism in the single-band triangular lattice moire materials realized in transition-metal dichalcogenide (TMD) heterobilayers [19; 20; 21; 22] such as WSe\({}_{2}\)/MoSe\({}_{2}\) and WSe\({}_{2}\)/WS\({}_{2}\). We predict where ferromagnetism is most likely to occur and where ferromagnetic transition temperatures are maximized. The restriction of our study to the case in which a single band is partially occupied and well separated from other bands [23] is motivated by a technical consideration, namely the need to restrict the dimensions of the many-electron Hilbert spaces studied to manageable sizes [24]. Metallic ferromagnetism is interesting in both single-band and multi-band systems. In the multi-band case, local moments from one subset of bands that supply local Hund's magnetism can combine with large spin-stiffnesses supplied by another set of bands to validate simple mean-field descriptions, for example using density functional theory for true atomic scale materials. In contrast, single-band systems are often more difficult to understand, requiring non-perturbative approaches such as the one we take here. Although it seems likely that the highest ferromagnetic transition temperatures that can be realized in moire systems are in multi-band systems [25], we nevertheless anticipate that scientific progress can be achieved by comparisons between theory and experiment across a broad range of band filling factors and band widths in the single isolated-band regime.
Our paper is organized as follows. In Section II we specify the model that we study - a triangular lattice moire material model with the Hilbert space truncated to the lowest energy moire band and interaction matrix elements calculated exactly. In Section III we present our numerical results. We examine three different ferromagnetism indicators that are available from finite-size ED calculations: i) ground state spin quantum numbers, ii) magnon energy estimates from the total-momentum dependence of the low-energy many-body excitation spectrum, and iii) Lanczos spin-susceptibility calculations. All are consistent with the notion that ferromagnetism occurs when the band filling factor of the lowest energy hole miniband is around \(\nu\sim 3/4\). We estimate that Curie temperatures can reach \(T\sim 10\) K. Finally, in Section IV we summarize and discuss our findings, estimating conditions for which the single-band model is realistic. We conclude that the single-band approximation is not applicable at \(\nu\sim 3/4\) in the TMD moire materials studied experimentally to date, but that it can be realized by choosing systems with the strongest possible moire potentials and maximizing background screening of the Coulomb interaction.
## II Finite size moire material model
In this paper we will focus on transition metal dichalcogenide heterobilayer moire materials [19] in which the topmost valence miniband is energetically isolated, so that holes only populate this band upon doping. Because we are interested mainly in understanding where ferromagnetism has a substantial ordering temperature, we focus on the range of twist angles for which the topmost band is relatively dispersive. The single-particle part of the continuum model Hamiltonian describing these systems is [19]
\[H_{0}=-\frac{\hbar^{2}}{2m^{*}}\mathbf{k}^{2}+\Delta(\mathbf{r}), \tag{1}\] \[\Delta(\mathbf{r})= 2V_{m}\sum_{j=1,3,5}\cos(\mathbf{b}_{j}\cdot\mathbf{r}+\psi). \tag{2}\]
where the \(\mathbf{b}_{j}\) are members of the first shell of moire reciprocal lattice vectors and \(m^{*}\), \(V_{\rm m}\) and \(\psi\) are heterojunction specific parameters. The specific calculations we report on below take effective mass \(m^{*}=0.35\,m_{0}\), where \(m_{0}\) is the rest mass of the electron, moire modulation strength \(V_{\rm m}=25\) meV, and moire potential shape parameter [20]\(\psi=-94^{\circ}\). These numerical values correspond to \(\mathrm{WSe}_{2}/\mathrm{MoSe}_{2}\) heterobilayer moires [19]. It is known [26; 27] that strain relaxation of the moire pattern strengthens the moire modulation potential, an effect that can be incorporated approximately simply by increasing the value of \(V_{\rm m}\). For this reason we take a slightly larger value for the moire modulation than the one reported for the unstrained bilayer [19]. (Approximate scaling relations relating our results to those at larger values of \(V_{\rm m}\) are explained in the discussion section.)
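As a rough illustration of how minibands like those in Fig. 1 arise, the sketch below diagonalizes the continuum Hamiltonian of Eqs. (1)-(2) in a plane-wave basis. The moiré period \(a_{M}\approx a_{0}/\theta\) (with \(a_{0}\approx 3.3\) Å), the plane-wave cutoff, and the overall sign convention of the hole-picture potential are assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

# Minimal plane-wave diagonalization of the moire continuum model, Eqs. (1)-(2).
# Parameters follow the text (m* = 0.35 m0, V_m = 25 meV, psi = -94 deg); the
# moire period a_M ~ a0/theta (a0 ~ 3.3 Angstrom), the plane-wave cutoff, and
# the hole-picture sign convention are assumptions of this sketch.
HB2_2M0 = 3.81                      # hbar^2 / (2 m0) in eV * Angstrom^2
mstar, Vm, psi = 0.35, 25.0, np.deg2rad(-94.0)
theta = np.deg2rad(3.0)
aM = 3.3 / theta                    # moire period in Angstrom
b = 4 * np.pi / (np.sqrt(3) * aM)   # first-shell reciprocal vector length

# first shell b_1 ... b_6 of the moire reciprocal lattice
bj = np.array([[b * np.cos(np.pi * j / 3), b * np.sin(np.pi * j / 3)]
               for j in range(6)])

# plane-wave basis G = n1*b_1 + n2*b_2 within a cutoff
nmax = 4
G = np.array([n1 * bj[0] + n2 * bj[1]
              for n1 in range(-nmax, nmax + 1)
              for n2 in range(-nmax, nmax + 1)])

def moire_bands(k, nbands=4):
    """Lowest moire band energies (meV) at momentum k (hole picture)."""
    nG = len(G)
    H = np.zeros((nG, nG), dtype=complex)
    H[np.diag_indices(nG)] = HB2_2M0 / mstar * np.sum((k + G) ** 2, axis=1) * 1e3
    for a in range(nG):
        for c in range(nG):
            d = G[a] - G[c]
            for j in (0, 2, 4):     # b_1, b_3, b_5 and their negatives
                if np.allclose(d, bj[j], atol=1e-9):
                    H[a, c] += Vm * np.exp(1j * psi)
                elif np.allclose(d, -bj[j], atol=1e-9):
                    H[a, c] += Vm * np.exp(-1j * psi)
    return np.sort(np.linalg.eigvalsh(H))[:nbands]

print(moire_bands(np.zeros(2)))     # lowest minibands at the gamma point
```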
Figs. 1(a) and (b) illustrate the implied moire band structures and densities-of-states. The density-of-states maximum occurs at the energy of a saddle-point van Hove singularity (VHS) at band filling \(\nu\approx 3/4\), where \(\nu=\frac{N}{2M}\) with \(N\) the number of valence band holes in the system (we call them particles from now on), and \(M\) the number of moire unit cells. We will find that ferromagnetism occurs when the van Hove singularity is close to the Fermi level of the competing paramagnetic state. The position of the van Hove singularity (VHS) shifts slightly to larger band filling factors \(\nu\) with increasing twist angle. The VHS is manifested in finite size calculations with \(M\) unit cells by a bunching of discrete states in a small energy interval. In Figs. 1(c) and (d) we show the discrete single-particle spectra of \(M=16\) and \(M=36\) meshes, respectively. When momentum space is discrete, the thermodynamic limit VHS results in a set of closely spaced discrete energies slightly below \(E=15\) meV. When these states are occupied only by majority spins and all other states are doubly occupied, the filling factor is \(\nu=0.72\) for the \(M=16\) and \(\nu=0.74\) for the \(M=36\) system sizes, respectively. Note that single particle states at general momenta in the Brillouin-zone interior are six fold degenerate simply due to triangular lattice rotational symmetries; this property is responsible for the bunching near \(E=5.0\) meV for \(M=16\) and near \(E=2.5\), \(E=7.0\), \(E=9.0\) meV for \(M=36\). (\(\gamma\) point (\(\mathbf{k}=0\)) states are non-degenerate and Brillouin zone corner states are doubly-degenerate - the degeneracy between \(K\) and \(K^{\prime}\) points.) As is commonly recognized, the bunching of single-particle energy levels has an impact on finite-size many-body results, and limits the types of conclusions that can be reached. We will consider a variety of different finite size geometries, each with a corresponding discretization of the moire Brillouin zone. In order to correctly capture the VHS physics we seek meshes that neither underrepresent nor overrepresent the associated high density of states close to \(\nu=0.75\). In the appendix we discuss how we choose finite-size geometries for the calculations discussed in the main text.
The full Hamiltonian is obtained by projecting the two-particle Coulomb interaction term to the topmost valence band shown in Fig. 1(a):
\[H=\sum_{\mathbf{k},\sigma}\epsilon_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\sigma}c_{\mathbf{k}\sigma}+ \frac{1}{2}\sum_{\begin{subarray}{c}i,j,k,l\\ \sigma,\sigma^{\prime}\end{subarray}}V^{\sigma,\sigma^{\prime}}_{i,j,k,l}\ c^{ \dagger}_{\mathbf{k}_{i}\sigma}c^{\dagger}_{\mathbf{k}_{j}\sigma^{\prime}}c_{\mathbf{k}_ {k}\sigma^{\prime}}c_{\mathbf{k}_{l}\sigma}, \tag{3}\]
where \(c^{\dagger}_{\mathbf{k}\sigma}\) (\(c_{\mathbf{k}\sigma}\)) creates (annihilates) a particle with momentum \(\mathbf{k}\) and spin \(\sigma\), \(\epsilon_{\mathbf{k}}\) are band energies, and the Coulomb matrix elements are given by
\[V^{\sigma,\sigma^{\prime}}_{i,j,k,l}=\frac{1}{A}\sum_{\begin{subarray}{c}\bm {G}_{i},\mathbf{G}_{j}\\ \mathbf{G}_{k},\mathbf{G}_{l}\end{subarray}}^{{}^{\prime}}\left(z^{*}_{\mathbf{k}_{i},\bm {G}_{i}}z^{*}_{\mathbf{k}_{j},\mathbf{G}_{j}}z_{\mathbf{k}_{k},\mathbf{G}_{k}}z_{\mathbf{k}_{l},\bm {G}_{l}}\right)\frac{2\pi e^{2}}{\epsilon\,q}, \tag{4}\]
with \(z_{\mathbf{k},\mathbf{G}}\) eigenstate coefficients obtained from diagonalization of Hamiltonian \(H_{0}\) given by Eq. (1) in a basis of plane waves \(\mathbf{G}\). In Eq. (4) \(A\) is the moire unit cell area; momentum conservation implies that matrix elements are
Figure 1: (a) Particle-hole transformed (hole-picture) band-structure of moiré TMD heterobilayers at twist angle \(\theta=3.0\), moiré modulation strength \(V_{\rm m}=25\) meV and shape parameter \(\psi=-94^{\circ}\). Note that the lowest energy hole miniband is partially occupied and isolated from the remote bands. (b) Density of states (DOS) of the lowest energy hole miniband vs. band filling \(\nu\) for twist angles \(\theta=2.5\), \(\theta=3.0\), \(\theta=3.5\). The inset indicates the discrete momenta of a \(M=16\) unit cell finite-size system within a color scale band contour plot for the \(\theta=3.0\) case. These bands have a van Hove singularity at energy \(E_{\rm VH}\approx 15\) meV and band filling factor \(\nu_{\rm VH}\approx 0.75\) in the thermodynamic limit, \(M\to\infty\). (c) and (d): The discrete energies of the (c) \(M=16\) and (d) \(M=36\) finite size systems discussed in the text.
non-zero only if \(\mathbf{k}_{i}+\mathbf{k}_{j}=\mathbf{k}_{k}+\mathbf{k}_{l}\) modulo a moire reciprocal lattice vector, the prime on the sum over the **G**'s implies that \(\mathbf{k}_{i}+\mathbf{G}_{i}+\mathbf{k}_{j}+\mathbf{G}_{j}=\mathbf{k}_{k}+\mathbf{G}_{k}+\mathbf{k}_{l} +\mathbf{G}_{l}\), and \(q=|\mathbf{q}|=|\mathbf{k}_{i}+\mathbf{G}_{i}-\mathbf{k}_{k}-\mathbf{G}_{k}|\) is the momentum transfer. As we have shown previously [28], by working in a Wannier representation the matrix elements can be reexpressed in terms of a single large parameter, the on-site Coulomb interaction \(U_{0}\), and a series of smaller parameters including non-local exchange, interaction assisted hopping, and longer range local interactions. The strength of interactions depends on the value used for the effective dielectric constant \(\epsilon\), which represents screening by the three-dimensional dielectric environment of the moire system. We return to this issue in the discussion section.
The physics of ferromagnetism is often viewed qualitatively as a competition between band energies, which favor states with minimal spin-polarization, and interaction energies, which favor spin-polarized states because many-electron wavefunctions must vanish when electrons with parallel spins approach each other, thereby avoiding strong repulsive interactions. The gain in interaction energy per unit cell is often referred to as the Stoner energy \(I\). In Fig. 2 we compare finite size kinetic energies for single-Slater-determinant (SD) states with maximal and minimal spin-polarization in triangular lattice moire materials, \(\Delta E_{\rm kin}=E_{\rm kin}^{\rm min}(S_{\rm max}^{z})-E_{\rm kin}^{\rm min} (S_{\rm min}^{z})\), where the superscripts 'min' emphasize that the occupation numbers are chosen to minimize the kinetic energy subject to the spin-polarization constraint. The energy difference per moire period reaches its maximum when the band is half-filled because this is the filling factor with the maximum possible spin-polarization per moire cell. The kinetic energy cost increases with twist angle \(\theta\) because of increasing band widths, see the appendix. Note that the kinetic energy cost of spin-alignment is, for the most part, reasonably well approximated at relatively small system sizes, and that the kinetic energy cost is very small for large band filling factors because of the VHS near the top of the first hole miniband. This is the filling factor regime where itinerant ferromagnetism might be expected.
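The kinetic-energy cost entering \(\Delta E_{\rm kin}\) can be illustrated with a few lines of code. The sketch below uses a nearest-neighbour triangular-lattice tight-binding dispersion as a stand-in for the continuum band energies used in the paper, and minimizes the kinetic energy at fixed spin polarization by filling the discrete mesh levels.

```python
import numpy as np
from itertools import product

# Kinetic-energy cost of full spin polarization on a finite k-mesh.  A
# nearest-neighbour triangular-lattice dispersion (hopping t = 1) stands in
# for the continuum band energies used in the paper; mesh size and t are
# illustrative choices.
def mesh_energies(N1, N2, t=1.0):
    eps = []
    for m1, m2 in product(range(N1), range(N2)):
        k1, k2 = 2 * np.pi * m1 / N1, 2 * np.pi * m2 / N2
        # neighbours along a1, a2 and a1 - a2 on the triangular lattice
        eps.append(-2 * t * (np.cos(k1) + np.cos(k2) + np.cos(k1 - k2)))
    return np.sort(np.array(eps))

def dE_kin(N, N1=4, N2=4):
    """E_kin(S_z max) - E_kin(S_z min), per unit cell, for N particles."""
    eps, M = mesh_energies(N1, N2), N1 * N2
    e_min = 2 * np.sum(eps[:N // 2]) + (N % 2) * eps[N // 2]
    if N <= M:
        e_max = np.sum(eps[:N])                    # all particles in one spin species
    else:
        e_max = np.sum(eps) + np.sum(eps[:N - M])  # majority band completely full
    return (e_max - e_min) / M

for N in (8, 16, 23):
    print(f"N = {N:2d}: dE_kin per cell = {dE_kin(N):+.4f} t")
```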
## III Exact diagonalization results
We will discuss three different indicators for ferromagnetism that are available from finite-size calculations. First of all we consider the total spin quantum number of the finite-size many-electron ground state. The absence of spin-orbit coupling in our model allows a ferromagnet to be defined as a system in which the ground state total spin quantum number \(S\) is extensive. We find that maximal spin-polarization is common in finite-size systems at band filling factors larger than about \(3/4\), and conclude that ferromagnetism will occur through much of this filling factor range. In the following subsections we estimate the temperature to which ferromagnetism survives in two different ways: i) by extracting magnon-energies from the momentum dependence of the many-body excitation spectrum and ii) by extracting finite temperature Stoner energies \(I\) from the temperature-dependent spin-susceptibilities calculated using finite-temperature Lanczos methods.
Figure 3: The ground state total spin \(S\) as a function of filling factor \(\nu\) from exact diagonalization calculations for a system with \(M=N_{1}\times N_{2}=16\) unit cells (\(N_{1}\) and \(N_{2}\) are defined in the Appendix). (a) Spin polarization map: Total spin as a function of filling factor \(\nu\) and dielectric constant \(\epsilon\) for twist angle \(\theta=2.5\) and moire potential strength \(V_{\rm m}=11\) meV. A horizontal blue line labels the metal-insulator transition at half-filling [29]. (b) Comparison of the ground state spin polarization of the moire continuum Hamiltonian and the corresponding on-site Hubbard model for dielectric constant \(\epsilon^{-1}=0.04\), twist angle \(\theta=3.0\), moiré strength \(V_{\rm m}=25\) meV and moiré shape \(\psi=-94^{\circ}\). A dashed line indicates the position of the van Hove singularity for finite size mesh.
### Ground State Spin
We first assess the tendency toward ferromagnetism by comparing ground state energies in different total spin \(S\) sectors. Typical results are summarized in Fig. 3(a), where we plot ground state spin quantum numbers vs \(\nu\) and the interaction strength parameter \(\epsilon^{-1}\). Large ground state spins appear in several different regimes in this plot. First of all they appear at small band filling factors and weak interactions. We view ferromagnetism in this regime as an artifact of the symmetry-related momentum-space shell degeneracy of the finite-size mesh used to produce these results, which we have illustrated in Fig. 1. Secondly, ferromagnetism is seen near half-filling of the band at large interaction strengths. The ground state at \(\nu=1/2\) for this range of interaction parameters is an interaction-induced insulator (the blue line in Fig. 3(a) labels a metal-insulator transition), but the ground state is ferromagnetic rather than antiferromagnetic because spatially indirect exchange interactions (\(\propto\epsilon^{-1}\)) exceed antiferromagnetic superexchange interactions (\(\propto\epsilon\)). The property that Mott insulators are sometimes ferromagnetic in moire materials has been discussed previously [28]. Our main interest here is in the very robust ferromagnetic states that appear near band filling \(\nu=3/4\), where the ground state is metallic. In Fig. 3(b) we plot the ground state spin _vs._\(\nu\) in the moderate interaction strength regime, where non-local exchange is unimportant, demonstrating that its value is unchanged when the interaction model is truncated to include only the on-site Hubbard-like Coulomb interaction term. In the appendix we show that the magnetic competition in the insulating state at \(\nu=1/2\) is shifted in favor of antiferromagnetism with increasing twist angle, as expected since larger band widths imply stronger superexchange interactions.
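For readers unfamiliar with this type of diagnostic, the following toy exact-diagonalization sketch infers the ground-state total spin of a small triangular Hubbard cluster from its \(S_{z}\)-resolved ground energies: by spin-rotation invariance, \(E_{\rm min}(S_{z}=s)\) equals the global ground energy exactly when \(|s|\leq S\). The four-site cluster, hopping \(t=1\), and \(U=8\) are illustrative stand-ins for the paper's momentum-space ED of the projected continuum model.

```python
import numpy as np
from itertools import combinations

# Toy exact diagonalization of a one-band Hubbard model on a four-site
# triangular cluster (two corner-sharing triangles).  Printing the S_z-sector
# ground energies reveals the ground-state total spin S.
L = 4
edges = [(0, 1), (1, 2), (2, 0), (1, 3), (2, 3)]

def states(n):
    return [sum(1 << i for i in occ) for occ in combinations(range(L), n)]

def hop(mask, i, j):
    """Apply c_i^dag c_j to one spin species; return (fermion sign, new mask)."""
    if not (mask >> j) & 1 or (mask >> i) & 1:
        return 0, mask
    m = mask ^ (1 << j)
    sgn = (-1) ** bin(m & ((1 << j) - 1)).count("1")   # operators crossed by c_j
    sgn *= (-1) ** bin(m & ((1 << i) - 1)).count("1")  # ... and by c_i^dag
    return sgn, m | (1 << i)

def sector_ground_energy(nup, ndn, t=1.0, U=8.0):
    idx = {(u, d): a for a, (u, d) in
           enumerate((u, d) for u in states(nup) for d in states(ndn))}
    H = np.zeros((len(idx), len(idx)))
    for (u, d), a in idx.items():
        H[a, a] = U * bin(u & d).count("1")            # on-site repulsion
        for i, j in edges:
            for p, q in ((i, j), (j, i)):              # both hopping directions
                s, u2 = hop(u, p, q)
                if s:
                    H[idx[(u2, d)], a] += -t * s
                s, d2 = hop(d, p, q)
                if s:
                    H[idx[(u, d2)], a] += -t * s
    return np.linalg.eigvalsh(H)[0]

N = 3                                                  # three electrons
for nup in range(max(0, N - L), min(L, N) + 1):
    E = sector_ground_energy(nup, N - nup)
    print(f"S_z = {(2 * nup - N) / 2:+.1f}: E_min = {E:.4f}")
```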
In Fig. 4 we analyze the competition between ferromagnetism and paramagnetism by partitioning the total energy into four different contributions: kinetic energy \(E_{\rm kin}\), Hartree energy \(E_{\rm H}\), Fock (exchange) energy \(E_{\rm exch}\), and correlation energy \(E_{\rm corr}\). Convergence to the thermodynamic limit is easily obtained for the first three terms, whereas the fourth part, the correlation energy, must be estimated from finite-size calculations and extrapolated to the thermodynamic limit. For the purposes of the qualitative point that we wish to make in this paragraph, we define the sum of the first three terms as the expectation value of the full Hamiltonian in the single Slater determinant (SD) state constructed by occupying the lowest energy single-particle states for a given spin-polarization. We define the mean-field interaction energy difference \(\Delta E_{\rm HF}=\Delta E_{\rm H}+\Delta E_{\rm exch}\) between maximally and minimally spin polarized SD states by subtracting the kinetic energy contribution to the energy difference:
\[\Delta E_{\rm HF}=\Delta E_{\rm SD}-\Delta E_{\rm kin}. \tag{5}\]
Note that \(\Delta E_{\rm HF}\) accounts for the fact that the shape of the charge distribution within the unit cell is different in spin-polarized and unpolarized states, an effect that is absent in the Hubbard model. Because of this effect the lowest energy SD state is not always the one constructed from the lowest energy single-particle states. In the Appendix we show results for \(\Delta E_{\rm exch}\) obtained from multi-band self-consistent Hartree-Fock calculations. These energies have larger negative values because of the additional band-mixing degrees-of-freedom that are optimized.
The correlation energy is defined as the difference between the ED ground state energy and the lowest energy SD ground state energy in a given spin sector, with the kinetic energy contributions subtracted
\[E_{\rm corr}(S_{\rm max})= E_{\rm tot}(S_{\rm max})-E_{\rm tot}^{kin}(S_{\rm max}) \tag{6}\] \[- (E_{\rm SD}(S_{\rm max})-E_{\rm SD}^{kin}(S_{\rm max})).\]
The correlation energy difference is
\[\Delta E_{\rm corr}=E_{\rm corr}(S_{\rm max})-E_{\rm corr}(S_{\rm min}). \tag{7}\]
With the above definitions, the total energy difference is
\[\Delta E_{\rm tot}=\Delta E_{\rm SD}+\Delta E_{\rm corr}. \tag{8}\]
In Fig. 4 we see that mean-field interaction energies \(\Delta E_{\rm HF}\) strongly favor spin-polarized states, and that the degree to which interactions favor spin-polarized states is strongly reduced when correlations are included. For the parameters of this calculation, increasing the strength of interactions actually does not substantially increase the degree to which interactions favor spin-polarization. This is precisely the problem in estimating where ferromagnetism occurs; once correlations are strong, electrons avoid each other well even if they have the same spin, and even in metallic states. Ferromagnetism is most likely when one subset of states has a high density-of-states so that it is easily polarized, and the remaining states are strongly dispersive so that correlations are suppressed. Conditions favorable for ferromagnetism are regularly achieved in multi-band systems, like the paradigmatic late \(3d\) transition metals. In single-band systems somewhat less favorable conditions can be achieved by having a sharp maximum in the density-of-states. For 2D materials, maxima always appear at saddle points in the band structure. It follows that single-band ferromagnetism in 2D moire materials is most likely when the Fermi level of the paramagnetic state is close to a saddle point in the band structure.
A typical result for the competition in total energy between fully spin polarized and depolarized states is summarized in Fig. 5 where we see that ferromagnetism is most likely near \(\nu=3/4\) as expected. The Hartree-Fock theory results for the weaker of the two interaction strengths considered tell a cautionary tale about finite-size effects since they predict ferromagnetism for \(M=16\) finite-size systems and paramagnetism for \(M=441\) finite-size systems; the \(M=16\) mesh overstates the van Hove singularity, see Fig. 2. In a vicinity of half-filling
ferromagnetism is predicted for \(M=441\), but the energy of the SD state with \(S=S_{\rm min}\) is not the lowest one here; instead a state with broken translation symmetry, the three-sublattice Néel state, is expected to have lower energy and to compete with FM; both of these states have indeed been observed in experiment [1; 6]. For stronger interactions, ferromagnetism is predicted in a vicinity of \(\nu=0.75\) for both meshes. In the following sections, we focus on estimates of transition temperatures for ferromagnetism around this particular filling, indicated by a black dashed line in Fig. 3.
### Magnon Energies
In metallic ferromagnets with large splitting between majority spin and minority spin quasiparticle energies, the ordering temperature is typically limited by collective thermal fluctuations. The Curie temperature then scales with the energies of the magnon modes, just as it does in insulating magnets. In Fig. 6(a) we show the spin-flip excitation spectrum of a typical maximally spin-polarized state near \(\nu=3/4\). We associate the 16 lowest energy excitations (one for each momentum) with magnon modes and the higher-energy excitations with unbound quasiparticle spin-flip excitations. We see that the magnon energies are several times smaller than the quasiparticle spin-splitting energy. In Fig. 6(b) we show the twist angle dependence of the highest magnon energy, which grows with the band width, suggesting that spin-stiffness is supplied mainly by band dispersion.
Since we neglect spin-orbit interactions, our two-dimensional model is spin-rotationally invariant and its critical temperature therefore vanishes. We defer to a separate study the issue of engineering strong spin-orbit interactions in TMD triangular lattice moire materials in order to suppress long-wavelength thermal fluctuations. Fig. 6(b) suggests that ferromagnetic critical temperatures approaching 100 K could be achievable at large twist angles for sufficiently strong spin-orbit interactions. However, it is important to realize that the single-band approximation could fail at large twist angles. We return to this point again in the discussion section.
### Finite Temperature Lanczos Method
One of the interesting aspects of moire materials physics from a fundamental point of view is that the regime in which the temperature is comparable to or larger than the band width is experimentally accessible. In the following paragraphs we address the temperature dependence of magnetic properties over this wide energy interval.
For the evaluation of thermodynamic properties in the canonical ensemble, we need to calculate thermal expectation values of relevant operators \(A\):
\[\langle A\rangle=\frac{\sum_{n=1}^{N_{st}}\langle n|e^{-\beta H}A|n\rangle}{ \sum_{n=1}^{N_{st}}\langle n|e^{-\beta H}|n\rangle}, \tag{9}\]
where \(\beta=1/k_{B}T\) with \(k_{B}\) the Boltzmann constant, the partition function \(Z=\sum_{n=1}^{N_{st}}\langle n|e^{-\beta H}|n\rangle\), and \(|n\rangle\) is summed over orthonormal basis states. The exponential increase of \(N_{st}\) with system size places severe limits on the direct application of these fundamental formulas.
Figure 4: Exchange energy and correlation energy difference between maximally \(S_{\rm max}\) and minimally \(S_{\rm min}\) spin polarized states, normalized per moiré unit cell. These plots are based on finite-size ED calculations for \(M=16\) and on non-self-consistent Hartree-Fock (single Slater determinant, SD) calculations for \(M=441\). A dashed line indicates the position of the van Hove singularity. (a) and (c) are for interaction strength \(\epsilon^{-1}=0.04\), and (b) and (d) for interaction strength \(\epsilon^{-1}=0.1\). These plots are for twist angle \(\theta=3.0\), moiré modulation strength \(V_{\rm m}=25\) meV, and potential shape \(\psi=-94^{\circ}\).
Figure 5: The total energy difference between maximally \(S_{\rm max}\) and minimally \(S_{\rm min}\) spin polarized states \(\Delta E_{\rm tot}=E_{\rm tot}(S_{\rm max})-E_{\rm tot}(S_{\rm min})\) per moiré unit cell for (a) \(\epsilon^{-1}=0.04\) and for (b) \(\epsilon^{-1}=0.1\). \(\Delta E_{\rm tot}\) for \(M=16\) is obtained from ED calculations and for \(M=441\) from the exchange energy and the extrapolated correlation energy from ED. These results were obtained with model parameters \(\theta=3.0\), \(V_{\rm m}=25\) meV and \(\psi=-94^{\circ}\).
The problem can be avoided if an appropriate statistical average of the full Hilbert space is generated. In the finite temperature Lanczos method (FTLM) [30] one starts with the high temperature expansion:
\[\langle A\rangle_{\beta\to 0}=Z^{-1}\sum_{n=1}^{N_{st}}\sum_{k=0}^{\infty} \frac{(-\beta)^{k}}{k!}\langle n|H^{k}A|n\rangle, \tag{10}\]
where
\[Z=\sum_{n=1}^{N_{st}}\sum_{k=0}^{\infty}\frac{(-\beta)^{k}}{k!} \langle n|H^{k}|n\rangle. \tag{11}\]
The Lanczos algorithm is an iterative method for finding extreme eigenvalues of a large matrix, in which expectations of high powers of the Hamiltonian naturally appear. During the Lanczos iteration steps, a set of orthogonal basis vectors is generated (a Krylov space), spanning a finite-size space that contains approximations to the eigenvectors corresponding to extreme eigenvalues of the full Hilbert space, with accuracy controlled by the number of iteration steps. In the Lanczos method the Hamiltonian is diagonalized in this Krylov space, obtaining Lanczos eigenvectors \(|l\rangle\) and the associated Lanczos energy eigenvalues \(\epsilon_{l}\). When the number of Lanczos steps \(N_{L}\geq k\) one can write
\[\langle n|H^{k}A|n\rangle \approx \sum_{l=0}^{N_{L}}\langle n|H^{k}|l(n)\rangle\langle l(n)|A|n\rangle= \tag{12}\] \[\sum_{l=0}^{N_{L}}(\epsilon_{l(n)})^{k}\langle n|l(n)\rangle \langle l(n)|A|n\rangle\]
and
\[\langle n|H^{k}|n\rangle\approx\sum_{l=0}^{N_{L}}(\epsilon_{l(n) })^{k}|\langle l(n)|n\rangle|^{2}. \tag{13}\]
\(N_{L}\) is a parameter of the approximation that needs to be large enough to reach accurate extremal energy eigenvalues; for the calculations we present below we take \(N_{L}=150\). Inserting Eq. (12) and Eq. (13) into Eq. (10) and Eq. (11) and replacing the sum over all orthonormal basis states by a much smaller sum over \(N_{\rm R}\) random Lanczos seed states, in analogy to Monte Carlo methods, yields
\[\langle A\rangle\approx Z^{-1}\frac{N_{\rm st}}{N_{\rm R}}\sum_{ \nu\in N_{\rm R}}\sum_{l}^{N_{\rm L}}e^{-\beta\epsilon_{l(\nu)}}\langle l( \nu)|A|\nu\rangle\langle\nu|l(\nu)\rangle, \tag{14}\]
where the partition function is
\[Z\approx\frac{N_{st}}{N_{\rm R}}\sum_{\nu}^{N_{\rm R}}\sum_{l}^{ N_{L}}e^{-\beta\epsilon_{l(\nu)}}|\langle l(\nu)|\nu\rangle|^{2}. \tag{15}\]
The exponential-size Hilbert space of the Hamiltonian is thereby approximated by its spectral representation in a Krylov space spanned by the \(N_{\rm L}\) Lanczos vectors starting from each random vector. The chosen random vectors \(|\nu\rangle\) should ideally be mutually orthogonal, but for practical purposes this is not really necessary since two vectors with random components in a large dimensional space are always nearly orthogonal.
In general, calculations using this approach are less sensitive to finite size effects as temperature increases, and most sensitive to finite size at \(T=0\). This property is related to the fact that at \(T=0\) both static and dynamical quantities are calculated from one eigenstate only, and the selection of this state can be dependent on the size and on the shape of the finite-size system. \(T>0\) introduces thermodynamic averaging over a larger number of eigenstates and this directly reduces finite-size effects for static quantities. Calculational efficiency can be improved by taking symmetries into account, so that \(N_{st}\) corresponds to the number of states with a given symmetry.
In our view, the finite temperature Lanczos method (FTLM) is ideally suited to exploring the high-temperature physics that is observable for the first time in moire materials. In this work we focus on calculations of the spin magnetic susceptibility \(\chi=\beta\langle S_{z}^{2}\rangle\) where
\[\langle S_{z}^{2}\rangle=\frac{\sum_{n}\exp{(-\beta\epsilon_{n})}S_{z}(n)^{2} }{\sum_{n}\exp{(-\beta\epsilon_{n})}}. \tag{16}\]
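Equation (16) can be evaluated directly whenever the full spectrum is available. The toy spectrum below (an \(S=1\) multiplet below a singlet; all numbers invented for illustration) shows \(\chi/\beta\) interpolating between the full-spectrum average at high \(T\) and the ground-multiplet value \(S(S+1)/3\) at low \(T\).

```python
import numpy as np

# Direct evaluation of Eq. (16) for a toy spectrum: an S = 1 multiplet (three
# S_z levels at E = 0) below a singlet at E = 1 meV.  All numbers are invented
# for illustration.  As T -> 0, chi/beta -> <S_z^2> over the ground multiplet,
# i.e. S(S+1)/3 = 2/3; at high T it tends to the full-spectrum average 1/2.
levels = [(0.0, -1), (0.0, 0), (0.0, 1), (1.0, 0)]    # (E_n in meV, S_z(n))
kB = 0.08617                                          # meV / K

def chi_over_beta(T):
    w = np.array([np.exp(-E / (kB * T)) for E, _ in levels])
    sz2 = np.array([sz ** 2 for _, sz in levels])
    return (w * sz2).sum() / w.sum()

for T in (0.5, 5.0, 50.0):
    print(f"T = {T:5.1f} K: chi/beta = {chi_over_beta(T):.3f}")
```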
Because \([H,S_{z}]=0\), the Lanczos method can be applied to each \(S_{z}\) sector separately. The FTLM formula for the susceptibility is
\[\chi=Z^{-1}\sum_{s}\frac{N_{\rm st}(s)}{N_{\rm R}(s)}\sum_{\nu=1 }^{N_{\rm R}(s)}\sum_{l=1}^{N_{L}}e^{-\beta\epsilon_{l(\nu)}}|\langle l(\nu)| \nu\rangle|^{2}s^{2}, \tag{17}\]
Figure 6: Spin-flip excitation spectrum of a \(M=N_{\rm x}\times N_{\rm y}=16\) fully polarized ground state. (a) Energy spectrum for total spin \(S=S_{\rm max}-1\) for the system with \(N_{\rm h}=9\) holes (\(N=23\), \(\nu=0.73\)), corresponding to the filling factor \(\nu\) indicated by a dashed line in Fig. 3, \(\epsilon^{-1}=0.04\), \(\psi=-94^{\circ}\). The 16 lowest energy excitations can be associated with magnon collective modes, and the higher energy excitations with unbound spin-flip particle-hole excitations. \(\Delta E\) indicates the width of the magnon spectrum, which scales with the transition temperature. \(N_{\rm x}\) (\(N_{\rm y}\)) is the number of unit cells along two directions determined by real space lattice vectors \({\bf a}_{1}\) and \({\bf a}_{2}\) on a triangular lattice, \(K_{\rm x}(K_{\rm y})\) are total momenta along two directions determined by reciprocal space lattice vectors \({\bf b}_{1}\) and \({\bf b}_{2}\). (b) The width of magnon spectrum \(\Delta E\) as a function of a twist angle \(\theta\) for the moiré superlattice and its corresponding Hubbard model.
where \(s\) is the \(S_{z}\) value for the subspace. We find that the most accurate results are obtained for \(N_{\rm R}(s)\) chosen such that the ratio between the Hilbert subspace size and the number of vectors is kept constant.
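A minimal implementation of the FTLM estimator of Eqs. (14)-(15) is sketched below. To keep it self-contained, a dense random symmetric matrix stands in for one \(S_{z}\) sector of the Hamiltonian and the observable is the identity, so that the exact partition function provides a benchmark; in a real calculation \(H\) would be the sparse sector Hamiltonian of Eq. (3).

```python
import numpy as np

# Sketch of the finite-temperature Lanczos estimator, Eqs. (14)-(15):
# Z ~ (N_st/N_R) sum_{nu,l} exp(-beta*eps_l) |<l|nu>|^2 over N_R random seeds
# and up to N_L Lanczos vectors per seed.  A random symmetric matrix stands
# in for one S_z sector of the Hamiltonian here.
rng = np.random.default_rng(0)
Nst = 400
H = rng.standard_normal((Nst, Nst))
H = (H + H.T) / 2

def lanczos(v, NL=150):
    """Ritz values eps_l and overlaps <l|v> from an NL-step Lanczos run."""
    v = v / np.linalg.norm(v)
    a, b, V = [], [], [v]
    for _ in range(NL):
        w = H @ V[-1]
        a.append(V[-1] @ w)
        for u in V:                     # full reorthogonalization for stability
            w -= (u @ w) * u
        nb = np.linalg.norm(w)
        if nb < 1e-12:
            break
        b.append(nb)
        V.append(w / nb)
    T = np.diag(a) + np.diag(b[:len(a) - 1], 1) + np.diag(b[:len(a) - 1], -1)
    eps, S = np.linalg.eigh(T)
    return eps, S[0, :]                 # seed is the first Krylov basis vector

def Z_ftlm(beta, NR=20, NL=150):
    acc = 0.0
    for _ in range(NR):
        eps, ov = lanczos(rng.standard_normal(Nst), NL)
        acc += np.sum(np.exp(-beta * eps) * ov ** 2)
    return Nst / NR * acc

for beta in (0.1, 0.5, 1.0):
    Z_exact = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))
    print(f"beta = {beta}: Z_ftlm / Z_exact = {Z_ftlm(beta) / Z_exact:.3f}")
```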
The accuracy of FTLM finite-size calculations is assessed in Fig. 7 by comparing \(\chi\) calculated by performing the full sum over all states with the FTLM sum. The three plots in Fig. 7 show (a) \(\chi/\beta\) as a function of inverse temperature \(\beta\), (b) the susceptibility \(\chi(T)\), and (c) the inverse susceptibility \(\chi^{-1}(T)\) as a function of temperature \(T\). The \(\beta\to 0\) (\(T\rightarrow\infty\)) and \(\beta\rightarrow\infty\) (\(T\to 0\)) limits of \(\chi/\beta\) can be calculated analytically by averaging \(S_{z}^{2}\) over the full Hilbert space (\(\chi/\beta\to M\nu(1-\nu)/2\) for \(\beta\to 0\)) and over the ground state spin multiplet
Figure 7: Comparison of FTLM and exact susceptibility calculations for \(M=16\) moiré unit cells, \(N=4\) electrons, and different numbers of random vectors \(N_{\rm R}\). (a) \(\chi/\beta\) as a function of inverse temperature \(\beta\). The inset shows the low energy many-body spectrum with total spin indicated by color. (b) Susceptibility \(\chi(T)\) and (c) inverse susceptibility \(\chi^{-1}(T)\) as a function of temperature \(T\). The blue line in (c) is a linear fit used to estimate a transition temperature \(T_{\rm C}\). The number of Lanczos steps is taken to be \(N_{\rm L}=150\). The size of the Hilbert space for \(S_{z}=0\) and fixed total momentum \({\bf K}\) is weakly momentum dependent and around 900. \(N_{\rm R}=20,10,5\) means that for \(S_{z}=0\) we take \(N_{\rm R}=20\), for \(S_{z}=\pm 1\) we take \(N_{\rm R}=10\), and for all other subspaces \(N_{\rm R}=5\). If a single number is given, we take the same \(N_{\rm R}\) for all subspaces. The parameters used for this calculation are: dielectric constant \(\epsilon^{-1}=0.04\), twist angle \(\theta=3.0\), \(V_{\rm m}=25\) meV, and \(\psi=-94^{\circ}\).
(\(\chi/\beta\to S(S+1)/3\) for \(\beta\rightarrow\infty\)) respectively. For the test case (\(M=16\) and \(N=4\)) illustrated in Fig. 7, the susceptibility can be calculated exactly from the full many-body spectrum because the Hilbert space dimension for a given \(S_{z}\) subspace does not exceed 1000. The exact result is indicated by a red line in Fig. 7, and compared with FTLM estimates based on different numbers of random vectors \(N_{\rm R}\). All lines overlap for temperatures \(T>20\) K, demonstrating the high accuracy of the method in the high temperature limit. The ground state of the system in this case has \(S=0\) (see the inset), which leads to vanishing susceptibility in (b) and divergence of the inverse susceptibility in (c) as \(T\to 0\). The susceptibility reaches a maximum at around \(T=4\) K. The blue line in (c) is a high-temperature linear fit that extrapolates to a finite value for \(T=0\), consistent with a paramagnetic state.
The FTLM estimates have the advantage that they can be drawn from larger Hilbert spaces. In Fig. 8 we show a typical result obtained for \(N=23\) particles (\(N_{\rm h}=9\) holes) in the \(M=16\) case, in the regime of filling factors where ferromagnetism is expected on the basis of the many-body ground state calculations. In this case the many-body ground state has non-zero total spin \(S=9/2\). The exact value of \(\chi/M\beta\) normalized per moire unit cell in the \(\beta\rightarrow\infty\) limit is therefore 0.515, as indicated by a black arrow in Fig. 8(a). (The \(\beta\to 0\) limit 0.10433, which is independent of interactions, is also indicated by a black arrow.) We see that the FTLM method gives accurate results in both limits, irrespective of \(N_{\rm R}\) at \(T\rightarrow\infty\) and for \(N_{\rm R}\geq 10\) at \(T=0\). Generally speaking, the ratio between \(N_{\rm R}\) and the dimension of a given Hilbert subspace is a good accuracy indicator. The increase in \(\chi\) at intermediate temperatures relative to the high-temperature limit shows that on average interactions lower the energies of states with larger \(S_{z}\) relative to those with smaller \(S_{z}\). The linear fit to the inverse susceptibility shown in Fig. 8(c) estimates the Curie temperature \(T_{\rm C}\approx 9\) K for this case, and the estimate is not strongly affected by \(N_{\rm R}\) in reasonable ranges. The inset shows the finite size Stoner parameter \(I\), which has the expected linear-in-\(T\) dependence up to around \(T\approx 12\) K.
Having established the efficacy of the FTLM, we now employ it to study trends in ferromagnetism in triangular lattice moire materials. In the SM, in Figs. 10(b) and (d), we compare inverse susceptibility results for two other twist angles with the same moire modulation potential. Extrapolating from high temperatures, where finite-size effects are less severe, we see that the susceptibility at higher temperatures decreases with twist angle. We attribute this decrease to an increase in bandwidth, which decreases the Pauli susceptibility of non-interacting electrons. At the same time, the high-temperature estimate of the Curie temperature at which the susceptibility diverges (the inverse susceptibility vanishes) increases with twist angle. We attribute this increase also to increasing bandwidth, which increases magnon energies by increasing the kinetic energy cost of spatial modulation of the magnetization.
## IV Discussion
We have used three different indicators available from finite-size exact-diagonalization calculations to address the physics of itinerant ferromagnetism in single-band triangular lattice moire materials: i) ground state spin quantum numbers, ii) magnon excitation energies, and iii) temperature dependent spin-susceptibilities. All indicate that ferromagnetism is common at hole band filling factors near \(\nu=3/4\) at temperatures up to \(\sim 10\) K. Our calculations were performed for particular values of the moire modulation strength and shape parameters. These are however expected to be strongly dependent on the specific heterojunction at which the moire pattern is formed, and in particular on strain relaxations at those heterojunctions, which will tend to increase modulation strengths [31; 27]. When \(V_{m}\rightarrow\lambda V_{m}\), twist angle \(\theta\rightarrow\sqrt{\lambda}\theta\), and dielectric screening parameter \(\epsilon\rightarrow\epsilon/\sqrt{\lambda}\), the three terms in the continuum model Hamiltonian (interaction, moire potential, and kinetic energy) all increase by a factor of \(\lambda\). Since the properties of interest here are relatively insensitive to the interaction strength parameter within reasonable ranges, it follows that the properties of systems with stronger moire potentials can be read off from our results by increasing temperature scales and twist angles. In particular the larger energy scales increase the temperatures at which ferromagnetism can occur.
It is interesting to compare TMD triangular lattice moire materials with graphene multilayer moire materials that also support ferromagnetic states. In the latter case, it is known that because of topological obstructions inherited from the individual layer Dirac cones [32; 33; 34], a faithful representation of the flat moire minibands requires multi-band tight-binding [35] models, for which the exact diagonalization approach is not practical. In the TMD moire material case, however, the lowest energy moire bands have Wannier functions that are similar to harmonic oscillator ground states centered on moire potential extrema [19]. Although we do not approximate the interaction matrix elements in our one-band model, we have verified that all properties related to ferromagnetism are similar to those of simple triangular lattice Hubbard models.
It is also interesting to compare TMD triangular lattice materials with rhombohedral graphene multilayers [36; 37; 38; 39; 40; 41; 42; 43; 44], a class of two-dimensional materials in which metallic ferromagnetism has been discovered recently. These graphene multilayer systems are like TMD moire materials in that they have peaks in their densities of states, related in that case to Lifshitz transitions of distorted Dirac cones, but they do not have minibands and are not approximated by Hubbard models. The magnetism that appears in these systems is consistent with the notion that the key to ferromagnetism is a sharp density-of-states peak in a low-density-of-states background.
The exact diagonalization method we have employed is most suitable when the many-electron Hilbert space can be truncated to a single moire miniband. The small parameter which controls the applicability of this approximation is the ratio of the largest interaction scale, the on-site Hubbard interaction \(U_{0}\), to the sub-band separation. As explained in Ref. [19] these can be estimated by making a harmonic approximation for the moire potential. We find that \(U_{0}\sim\mathrm{Ry}^{3/4}(\mathrm{zV_{m}})^{1/4}(\mathrm{a_{B}/a_{M}})^{1/2}\), where \(z=6\) is the triangular lattice coordination number and Ry and \(a_{B}\) are the host 2D semiconductor Rydberg energy scale \(\sim 0.3\) eV and Bohr radius length scale \(\sim 1\) nm. Similarly the subband separation \(\hbar\omega\sim\mathrm{Ry}^{1/2}(\mathrm{zV_{m}})^{1/2}\mathrm{a_{B}/a_{M}}\). It follows that
\[\frac{U_{0}}{\hbar\omega}\sim(\mathrm{Ry}/\mathrm{zV_{m}})^{1/4}(\mathrm{a_{M} /a_{B}})^{1/2}. \tag{18}\]
Truncation to the lowest moire band is justified at all band filling factors \(\nu\in(0,1)\) when the right hand side of Eq. 18 is smaller than \(\sim 1\). Most systems [45; 46] that have been studied to date do not satisfy this criterion. Since continuum model approximations are valid only for \(a_{M}\gtrsim a_{B}\), it follows that single-band ferromagnetism will occur only when the first factor on the right side of Eq. 18 is made small, for example by increasing the dielectric screening environment of the moire system to decrease Ry, or by choosing a system with a particularly large value of \(V_{m}\). From the exponentially localized Wannier functions obtained for the topmost valence band used in our calculations, we get, for \(\theta=3.0\), \(U_{0}\epsilon\approx 1121\) meV [28] and \(\hbar\omega\approx 58.5\) meV. For \(\epsilon^{-1}=0.1\) this gives \(U_{0}/\hbar\omega>1\), while for \(\epsilon^{-1}=0.04\) it gives \(U_{0}/\hbar\omega<1\). Thus, in the weaker interaction strength limit the single-band approximation is justified. This suggests that our predictions are relevant for systems with sufficiently close nearby gates.
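The numbers quoted above can be checked directly; the short script below reproduces the comparison of \(U_{0}\) with \(\hbar\omega\) for the two interaction strengths, using only the values stated in the text.

```python
# Direct check of the single-band criterion quoted above: U0*eps ~ 1121 meV
# and hbar*omega ~ 58.5 meV at theta = 3.0 (both values taken from the text).
U0_times_eps, hbar_omega = 1121.0, 58.5   # meV
for inv_eps in (0.04, 0.1):
    U0 = U0_times_eps * inv_eps
    print(f"eps^-1 = {inv_eps}: U0 = {U0:6.1f} meV, U0/(hbar*omega) = {U0 / hbar_omega:.2f}")
# eps^-1 = 0.04 gives ~0.77 < 1 (single-band truncation justified);
# eps^-1 = 0.1 gives ~1.92 > 1 (band mixing expected).
```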
We note that Coulomb repulsion will increase the energy of the lowest energy hole miniband, as it is filled, by more than it increases the energies of states in higher energy moire minibands. For this reason the regime of parameter space in which occupation of higher energy minibands can be neglected decreases as band filling factor increases. When correlations are included, the ground state at hole filling factor \(\nu=1/2\) is often an insulator. When its lowest energy hole-charged excitation is dominantly in a higher hole miniband, the insulator is referred to as a charge transfer insulator [45; 46]. Since single-band ferromagnetism is most likely near band-filling factor \(\nu=3/4\), the present single-band study is never relevant when the ground state of the half-filled band is a charge transfer insulator, which already involves higher energy subbands in an essential way. If systems could be realized in which the sign of \(V_{m}\) is reversed (or equivalently \(\psi\rightarrow\psi+180^{\circ}\)), ferromagnetism would be expected for minibands that are less than half-filled. For the standard sign of \(V_{m}\) however, any ferromagnetism that occurs when the interaction parameter that is the subject of Eq. 18 is large must be of multi-band character. We leave the analysis of this situation for a future study, for it requires a different approach.
The authors acknowledge helpful interactions with L. Fu and Y. Zhang. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award # DE-SC0022106. PP acknowledges support from the Polish National Science Centre based on Decision No. 2021/41/B/ST3/03322. We acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing high-performance computer resources.
|
2309.13334 | Neighborly partitions, hypergraphs and Gordon's identities | We prove a family of partition identities which is "dual" to the family of
Andrews-Gordon's identities. These identities are inspired by a correspondence
between a special type of partitions and "hypergraphs" and their proof uses
combinatorial commutative algebra. | Pooneh Afsharijoo, Hussein Mourtada | 2023-09-23T10:41:25Z | http://arxiv.org/abs/2309.13334v1 | # Neighborly partitions, hypergraphs and Gordon's identities
###### Abstract.
We prove a family of partition identities which is "dual" to the family of Andrews-Gordon's identities. These identities are inspired by a correspondence between a special type of partitions and "hypergraphs" and their proof uses combinatorial commutative algebra.
Keywords: Andrews-Gordon identities; integer partitions; graded algebras; hypergraphs; Hilbert series. Mathematics Subject Classification 2010: 11P84, 11P81, 05A17, 05A19, 05C31, 13F55, 13D40.
## 1. Introduction
The Andrews-Gordon identities (Andrews 1974 [4]) are the \(q-\)series identities which state that for the integers \(r\) and \(i\) satisfying \(r\geq 2,\ 1\leq i\leq r,\) we have
\[\sum_{n_{1},n_{2},\ldots n_{r-1}\geq 0}\frac{q^{N_{1}^{2}+N_{2}^{2}+\cdots+N_{r- 1}^{2}+N_{i}+N_{i+1}+\cdots+N_{r-1}}}{(q)_{n_{1}}(q)_{n_{2}}\ldots(q)_{n_{r-1}} }=\frac{\prod\limits_{n\geq 1,\ n\equiv 0,\pm i(mod.2r+1)}(1-q^{n})}{\prod \limits_{n\geq 1}(1-q^{n})}. \tag{1}\]
where \(q\) is a variable and \(N_{j}=n_{j}+n_{j+1}+\cdots+n_{r-1}\) for all \(1\leq j\leq r-1\) and \((q)_{n}=(1-q)(1-q^{2})\cdots(1-q^{n})\). In the literature, the right member of the identity (1) is written with the obvious simplification made; we write it in this way to emphasize the numerator, which plays an important role in our paper. The Andrews-Gordon identities are generalizations of the famous Rogers-Ramanujan identities, which we obtain if we put \(r=2\) and \(i=1,2\).
There is a combinatorial (versus analytic) version of the identity (1) which is stated in terms of integer partitions. Recall that an integer _partition_ (of length \(\ell\)) of a positive integer \(n\) is a sequence \(\lambda=(\lambda_{1}\geq\cdots\geq\lambda_{\ell})\) of positive integers \(\lambda_{i},\) for \(1\leq i\leq\ell,\) such that
\[\lambda_{1}+\cdots+\lambda_{\ell}=n.\]
The integers \(\lambda_{i}\) are called the _parts of the partition \(\lambda.\)_
The combinatorial version of the identities (1) states the following (see Theorem 1 in [7]):
**Theorem**.: _(Gordon's identities). Given integers \(r\geq 2\) and \(1\leq i\leq r,\) let \(B_{r,i}(n)\) denote the number of partitions of \(n\) of the form \((b_{1},\ldots,b_{s})\), where \(b_{j}-b_{j+r-1}\geq 2\)
and at most \(i-1\) of the integers \(b_{j}\) are equal to \(1\). Let \(A_{r,i}(n)\) denote the number of partitions of \(n\) into parts \(\not\equiv 0,\pm i\)\((\text{mod}.2r+1)\). Then \(A_{r,i}(n)=B_{r,i}(n)\) for all integers \(n\)._
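The theorem is easy to test numerically. The brute-force sketch below enumerates partitions and verifies \(A_{r,i}(n)=B_{r,i}(n)\) for small \(n\); the ranges of \(r\) and \(n\) are arbitrary illustrative choices.

```python
# Brute-force check of Gordon's identities, A_{r,i}(n) = B_{r,i}(n).  The
# ranges r <= 3 and n < 15 are arbitrary illustrative choices.
def partitions(n, maxpart=None):
    """Yield all partitions of n as non-increasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def B(n, r, i):
    """Partitions with b_j - b_{j+r-1} >= 2 and at most i-1 parts equal to 1."""
    return sum(1 for p in partitions(n)
               if all(p[j] - p[j + r - 1] >= 2 for j in range(len(p) - r + 1))
               and p.count(1) <= i - 1)

def A(n, r, i):
    """Partitions into parts not congruent to 0, +-i modulo 2r+1."""
    ok = lambda x: x % (2 * r + 1) not in (0, i, 2 * r + 1 - i)
    return sum(1 for p in partitions(n) if all(ok(x) for x in p))

for r in (2, 3):
    for i in range(1, r + 1):
        assert all(A(n, r, i) == B(n, r, i) for n in range(1, 15))
print("A_{r,i}(n) = B_{r,i}(n) verified for r <= 3 and n < 15")
```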
Besides combinatorics and number theory, these identities have also appeared in representation theory [9] and in Algebraic Geometry and Commutative Algebra [6, 1, 3, 2, 12].
In this paper, we will prove a family of identities which is in some sense (to be clarified in a moment) dual to Gordon's identities. For that we will introduce the notions of neighborly partitions and their signature. These notions generalize the notions with the same names which were introduced in [11] to prove identities dual to those of Rogers-Ramanujan.
We begin by introducing neighborly partitions. Recall that for an integer partition \(\lambda\), the _multiplicity_\(m_{\lambda}(\lambda_{j})\) of a part \(\lambda_{j}\) of \(\lambda\) is the number of occurrences of \(\lambda_{j}\) in \(\lambda\); for example if \(\lambda\) is the partition \(3+2+2+1+1+1\) of \(10\), then the multiplicities of \(3,2\) and \(1\) are respectively \(1,2\) and \(3\).
**Definition 1.1**.: _(Neighborly partitions) Let \(\lambda=(\lambda_{1}\geq\cdots\geq\lambda_{m})\) be an integer partition and let \(i\) and \(r\) be integers such that \(1\leq i\leq r\). We say that \(\lambda\) is an \((r,i)\)-neighborly partition if it satisfies the following conditions:_
1. _For each part_ \(\lambda_{j}\neq 1\) _we have_ \(1\leq m_{\lambda}(\lambda_{j})\leq r\)_, and_ \(0\leq m_{\lambda}(1)\leq i.\)__
2. _If_ \(m_{\lambda}(1)=i,\) _then for all parts_ \(\lambda_{j}\neq 1\) _of_ \(\lambda\) _there exists a sub-partition_ \(B_{j}=(\lambda_{k}\geq\cdots\geq\lambda_{k+r-1})\) _of length_ \(r\) _of_ \(\lambda\) _containing_ \(\lambda_{j}\) _in which_ \(\lambda_{k}-\lambda_{k+r-1}\leq 1.\)__
3. _If_ \(m_{\lambda}(1)<i,\) _then for all parts_ \(\lambda_{j}\) _of_ \(\lambda\) _there exists a sub-partition_ \(B_{j}=(\lambda_{k}\geq\cdots\geq\lambda_{k+r-1})\) _of length_ \(r\) _of_ \(\lambda\) _containing_ \(\lambda_{j}\) _in which_ \(\lambda_{k}-\lambda_{k+r-1}\leq 1.\)__
We denote the set of \((r,i)\)-neighborly partitions by \(\mathcal{N}_{r,i},\) and by \(\mathcal{N}_{r,i}(n)\) the subset consisting of the \((r,i)\)-neighborly partitions of \(n.\)
**Remark 1.1**.: _Note that each sub-partition of type \(B_{j}\) of a \((r,i)\)-neighborly partition \(\lambda\) is of the form:_
\[(\underbrace{(\ell+1),\cdots,(\ell+1)}_{\text{(r-s) times}},\underbrace{ \ell,\cdots,\ell}_{\text{s-times}}),\]
_for some \(1\leq s\leq r\) and \(\ell\geq 1.\) In particular, \(\lambda_{j}\in\{\ell,\ell+1\}\) has many neighbors; these are the parts \(\lambda_{k}\) satisfying \(\mid\lambda_{k}-\lambda_{j}\mid\leq 1.\) This explains the name neighborly. One remarks that the defining conditions of neighborly partitions are of opposite nature to the defining conditions of the partitions counted by \(B_{r,i}\) that appear in Gordon's identities._
**Example 1**.: _The integer \(5\) has the following partitions:_
\[5=4+1=3+2=3+1+1=2+2+1=2+1+1+1=1+1+1+1+1.\]
_For \(r=3\) we have:_
\[\mathcal{N}_{3,3}(5)=\{2+2+1,2+1+1+1\},\ \mathcal{N}_{3,2}(5)=\{2+2+1\},\ \mathcal{N}_{3,1}(5)=\emptyset.\]
To a neighborly partition \(\lambda\), we will associate a hypergraph \(\mathcal{H}_{\lambda}\) and a signature which is a number that we define in terms of \(\mathcal{H}_{\lambda}\).
Recall that the notion of "hypergraph" is a generalization of the notion of "graph" where edges may join more than two vertices. More precisely, a _hypergraph_\(\mathcal{H}\) is a pair \((V(\mathcal{H}),E(\mathcal{H}))\) where \(V(\mathcal{H})\) is a set of elements called the vertices of \(\mathcal{H}\) and \(E(\mathcal{H})\) is a set of subsets of \(V(\mathcal{H})\) called the edges of \(\mathcal{H}\).
One can represent a hypergraph \(\mathcal{H}\) graphically by a _Parallel Aggregated Ordered Hypergraph (PAOH for short)_; a _PAOH_ represents the vertices of \(\mathcal{H}\) by parallel horizontal rows, and the edges of \(\mathcal{H}\) by vertical lines in which a point represents a vertex of the edge (see Example 2). A hypergraph is called _\(k\)-uniform_ if each edge contains exactly \(k\) vertices. Thus, a 2-uniform hypergraph is a graph.
The _degree_, \(\deg(v)\), of a vertex \(v\) in a hypergraph \(\mathcal{H}\) is the number of edges of \(\mathcal{H}\) containing \(v.\) If \(\deg(v)=0\) then we say that \(v\) is an _isolated vertex_ of \(\mathcal{H}.\) A hypergraph \(\mathcal{H}\) is _simple_ if there is no edge of \(\mathcal{H}\) which contains another edge.
A _vertex induced sub-hypergraph_\(\mathcal{L}\) of \(\mathcal{H}\) is a hypergraph whose set of vertices \(V(\mathcal{L})\) is a subset of \(V(\mathcal{H})\) and its edges are the edges of \(\mathcal{H}\) whose vertices are in \(V(\mathcal{L}).\)
An _edge induced sub-hypergraph_\(\mathcal{L}\) of \(\mathcal{H}\) is a hypergraph whose edge set is a subset of \(E(\mathcal{H})\) and whose vertex set is the union of the vertices of its edges.
**Example 2**.: _Consider a hypergraph \(\mathcal{H}\) with \(V(\mathcal{H})=\{v_{1},\cdots,v_{6}\}\) and \(E(\mathcal{H})=\{(v_{1},v_{2},v_{3},v_{4}),(v_{1},v_{3}),(v_{2},v_{5})\}.\) Then its PAOH is represented as follows:_
_and we have:_
* \(\deg(v_{1})=\deg(v_{2})=\deg(v_{3})=2,\ \deg(v_{4})=\deg(v_{5})=1\) _and_ \(v_{6}\) _is an isolated vertex of_ \(\mathcal{H}.\)__
* _The hypergraph_ \(\mathcal{H}\) _is not simple since_ \((v_{1},v_{3})\subset(v_{1},v_{2},v_{3},v_{4}).\)__
* _The hypergraph_ \(\mathcal{L}_{1}\) _with_ \(V(\mathcal{L}_{1})=\{v_{1},v_{2},v_{3},v_{5},v_{6}\}\) _and_ \(E(\mathcal{L}_{1})=\{(v_{1},v_{3}),(v_{2},v_{5})\}\) _is a vertex induced sub-hypergraph of_ \(\mathcal{H}.\)__
* _The hypergraph_ \(\mathcal{L}_{2}\) _with_ \(E(\mathcal{L}_{2})=\{(v_{1},v_{3}),(v_{2},v_{5})\}\) _and_ \(V(\mathcal{L}_{2})=\{v_{1},v_{2},v_{3},v_{5}\}\) _is an edge induced sub-hypergraph of_ \(\mathcal{H}.\)__
To each \((r,i)\)-neighborly partition \(\lambda\) we associate a hypergraph \(\mathcal{H}_{\lambda}\) as follows:
The set of vertices of \(\mathcal{H}_{\lambda}\) is in bijection with the parts of \(\lambda\):
\[V(\mathcal{H}_{\lambda})=\{x_{j,k}|\ j\ \text{is a part of $\lambda$ and $1\leq k\leq m_{\lambda}(j)$}\}.\]
The set of edges of \(\mathcal{H}_{\lambda}\) is in bijection with the set of all sub-partitions of type \(B_{j}\) of \(\lambda\) (see Remark 1.1), so we have:
* if \(m_{\lambda}(1)<i,E(\mathcal{H}_{\lambda})=\{(x_{\ell,1},\cdots,x_{\ell,s},x_{( \ell+1),1},\cdots,x_{(\ell+1),(r-s)})|\ \text{for all}\ x_{j,k}\in V(\mathcal{H}_{ \lambda})\ \text{and}\ 1\leq s\leq r\}\).
* If \(m_{\lambda}(1)=i\) then we add to the above set of edges the edge \((x_{1,1},\cdots,x_{1,i})\).
Note that if \(\ell\) is a part of \(\lambda\) with \(m(\ell)=r\) then the edge associated to the sub-partition \((\underbrace{\ell,\cdots,\ell}_{\text{r-times}})\) of \(\lambda\) is \((x_{\ell,1},\cdots,x_{\ell,r})\). This is the case \(s=r\).
Note also that if \(i=r\) or if \(1\leq i<r\) and \(0\leq m_{\lambda}(1)<i\) then \(\mathcal{H}_{\lambda}\) is a \(r-\)uniform hypergraph.
**Example 3**.: _To the partition \(\lambda=2+1+1+1\) of \(\mathcal{N}_{3,3}(5)\) we associate a hypergraph \(\mathcal{H}_{\lambda}\) whose vertex set and edge set are:_
\[V(\mathcal{H}_{\lambda})=\{x_{1,1},x_{1,2},x_{1,3},x_{2,1}\},E(\mathcal{H}_{ \lambda})=\{(x_{1,1},x_{1,2},x_{1,3}),(x_{1,1},x_{1,2},x_{2,1})\}.\]
_The PAOH representation of \(\lambda\) is as follows:_
For a hypergraph \(\mathcal{H}\), we denote by \(Sub_{v}(\mathcal{H})\) (respectively by \(Sub_{e}(\mathcal{H})\)) the set of **vertex induced sub-hypergraphs** (respectively **edge induced sub-hypergraphs**) of \(\mathcal{H}\) **without isolated vertices**. We will also denote by \(|E(\mathcal{H})|\) the number of edges of \(\mathcal{H}\).
**Definition 1.2**.: _Let \(\lambda\in\mathcal{N}_{r,i}\). We define the signature of \(\lambda\) as follows:_
\[\delta(\lambda)=\sum_{\begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H}_{ \lambda})\\ V(\mathcal{L})=V(\mathcal{H}_{\lambda})\end{subarray}}(-1)^{|E(\mathcal{L})|}.\]
Let us now denote by \(\mathcal{R}_{r,i}(n)\) the set of integer partitions of \(n\) whose parts are distinct and congruent to \(0\), \(-i\) or \(i\) modulo \(2r+1\), and let us write \(\ell(\lambda)\) for the length (number of parts) of a partition \(\lambda\). The main result of this paper is the following theorem (see Theorem 5.1):
**Theorem 1.2**.: _Let \(1\leq i\leq r\) be integers. Then:_
\[\sum_{\lambda\in\mathcal{N}_{r,i}}\delta(\lambda)q^{|\lambda|}=\sum_{n\in \mathbb{N}}\Big{(}\sum_{\lambda\in\mathcal{R}_{r,i}(n)}(-1)^{\ell(\lambda)} \Big{)}q^{n}=\prod_{j\equiv 0,\pm i[2r+1]}(1-q^{j}),\]
_where \(|\lambda|\) is the sum of the parts of \(\lambda\)._
This is equivalent to the following theorem (see Theorem 5.2):
**Theorem 1.3**.: _Let \(1\leq i\leq r\) be integers. Then:_
\[\sum_{\lambda\in\mathcal{N}_{r,i}(n)}\delta(\lambda)=\sum_{\lambda\in\mathcal{R} _{r,i}(n)}(-1)^{\ell(\lambda)}.\]
**Example 4**.: _For the partitions of the integer \(7\) we have:_
\[\mathcal{N}_{3,3}(7)=\{\underbrace{3+2+2}_{\alpha},\underbrace{2+2+2+1}_{ \beta},\underbrace{2+2+1+1+1}_{\gamma}\},\ \mathcal{R}_{3,3}(7)=\{7,4+3\}.\]
_We have:_
\[\sum_{\lambda\in\mathcal{R}_{3,3}(7)}(-1)^{\ell(\lambda)}=(-1)^{\ell(7)}+(-1) ^{\ell(4+3)}=(-1)^{1}+(-1)^{2}=0.\]
_The hypergraphs associated to the partitions \(\alpha,\beta\) and \(\gamma\) are, from left to right, as follows:_
_The unique edge induced sub-hypergraph of \(\mathcal{H}_{\alpha}\) with the same vertex set as \(\mathcal{H}_{\alpha}\) and without isolated vertices is \(\mathcal{H}_{\alpha}\) itself. Since it has just one edge, \(\delta(\alpha)=(-1)^{1}=-1.\)_
_Similarly, the unique hypergraph in \(Sub_{e}(\mathcal{H}_{\beta})\) with the vertex set \(V(\mathcal{H}_{\beta})\) is \(\mathcal{H}_{\beta}\) itself. Since it has two edges, we have \(\delta(\beta)=(-1)^{2}=1.\)_
_But the set \(Sub_{e}(\mathcal{H}_{\gamma})\) contains two hypergraphs with vertex set equal to \(V(\mathcal{H}_{\gamma})\): the hypergraph \(\mathcal{H}_{\gamma}\) itself and the following hypergraph, which has two edges:_
_Thus, \(\delta(\gamma)=(-1)^{3}+(-1)^{2}=0\) and therefore:_
\[\sum_{\lambda\in\mathcal{N}_{r,i}(7)}\delta(\lambda)=\delta(\alpha)+\delta( \beta)+\delta(\gamma)=-1+1+0=0,\]
_which is equal to \(\sum_{\lambda\in\mathcal{R}_{r,i}(n)}(-1)^{\ell(\lambda)}\) and the theorem holds._
The main theorem greatly generalizes the main results of [11]. It is worth noticing that our proof not only uses the Andrews-Gordon identities: our identities are actually equivalent to them, which means that a direct proof of our theorem would also give another proof of the Andrews-Gordon identities. This program was very recently pursued in the case of the Rogers-Ramanujan identities by O'Hara and Stanton [13].
The proof of our main results follows the organization of the paper: in the second section, we introduce an infinite hypergraph \(\mathcal{H}_{r,i}^{\infty}\) and we express the left member of the identity in Theorem 1.2 via a counting series \(S(\textbf{v},y)\) of some finite sub-hypergraphs of \(\mathcal{H}_{r,i}^{\infty}.\) In section three, with a simple hypergraph \(\mathcal{H},\) we associate a monomial ideal \(\mathcal{I}(\mathcal{H})\) in a weighted polynomial ring \(A\) whose variables are in bijection with the vertices of \(\mathcal{H};\) we then consider a kind of Hilbert series \(H_{\mathcal{H}}\) of this ideal, namely the generating series of the monomials in the quotient of \(A\) by \(\mathcal{I}(\mathcal{H});\) this latter series is expressed in the same section via \(S(\textbf{v},y).\) We consider a specialization of \(H_{\mathcal{H}}\) which we link in section four to the left member of the identity in Theorem 1.2, and in section five to the right member of the same identity, using Gordon's identities.
## 2. \((r,i)\)-Neighborly partitions and hypergraphs
In this section, we introduce an infinite hypergraph \(\mathcal{H}_{r,i}^{\infty}\) and we express the left member of the identity in Theorem 1.2 via a counting series \(S(\textbf{v},y)\) of some finite sub-hypergraphs of \(\mathcal{H}_{r,i}^{\infty},\) see Lemma 2.1.
For \(i,r\in\mathbb{N},1\leq i\leq r,\) consider the infinite hypergraph \(\mathcal{H}_{r,i}^{\infty}\) with:
* \(V(\mathcal{H}_{r,i}^{\infty})=\{x_{1,1},\cdots,x_{1,i}\}\cup\{x_{j,k}|\ 1\leq k\leq r,\ j\in\mathbb{N}^{*}\setminus\{1\}\}.\)
* \(E(\mathcal{H}_{r,i}^{\infty})=\{(x_{1,1},\cdots,x_{1,i}),(x_{\ell,1},\cdots,x _{\ell,s},x_{(\ell+1),1},\cdots,x_{(\ell+1),(r-s)})\},\) where \(\ell\in\mathbb{N}^{*}.\) If \(\ell=1\) then \(1\leq s\leq i-1,\) otherwise \(1\leq s\leq r.\)
The _PAOH_ representation of \(\mathcal{H}_{r,r}^{\infty}\) is given in figure 1.
Note that the set of **finite** sub-hypergraphs in \(Sub_{v}(\mathcal{H}_{r,i}^{\infty})\) is in bijection with the set of integer partitions \(\lambda\in\mathcal{N}_{r,i}.\)
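This bijection makes the objects above amenable to direct computation. The following is a minimal Python (3.8+) sketch of ours, purely for illustration (all function names are our own): it builds \(\mathcal{H}_{\lambda}\) from the edge description of \(\mathcal{H}_{r,i}^{\infty}\), decides membership in \(\mathcal{N}_{r,i}\) by checking for isolated vertices, and computes the signature \(\delta(\lambda)\) of Definition 1.2 by brute force over edge-induced sub-hypergraphs.

```python
from collections import Counter
from itertools import combinations

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hypergraph(lam, r, i):
    """(V, E) of H_lambda, or None if the multiplicity bounds fail."""
    mult = Counter(lam)
    if mult[1] > i or any(mult[j] > r for j in mult if j != 1):
        return None
    V = {(j, k) for j in mult for k in range(1, mult[j] + 1)}
    E = []
    if mult[1] == i:                      # the edge (x_{1,1}, ..., x_{1,i})
        E.append(tuple((1, k) for k in range(1, i + 1)))
    for ell in sorted(mult):              # edges inherited from H^infty_{r,i}
        s_max = i - 1 if ell == 1 else r
        for s in range(1, s_max + 1):
            e = tuple((ell, k) for k in range(1, s + 1)) \
              + tuple((ell + 1, k) for k in range(1, r - s + 1))
            if set(e) <= V:               # keep only the edges induced on V
                E.append(e)
    return V, E

def signature(lam, r, i):
    """delta(lam) if lam is (r,i)-neighborly, otherwise None."""
    H = hypergraph(lam, r, i)
    if H is None:
        return None
    V, E = H
    if any(all(v not in e for e in E) for v in V):
        return None                       # isolated vertex: not neighborly
    return sum((-1) ** k
               for k in range(1, len(E) + 1)
               for sub in combinations(E, k)
               if set().union(*map(set, sub)) == V)

def gordon_side(n, r, i):
    """Sum of (-1)^length over partitions of n into distinct parts
    congruent to 0 or +-i modulo 2r+1, i.e. over R_{r,i}(n)."""
    m = 2 * r + 1
    ok = {0, i % m, (-i) % m}
    return sum((-1) ** len(lam) for lam in partitions(n)
               if len(set(lam)) == len(lam) and all(p % m in ok for p in lam))

# Example 1: the sets N_{3,i}(5) for i = 3, 2, 1.
for i in (3, 2, 1):
    print(i, [lam for lam in partitions(5) if signature(lam, 3, i) is not None])

# Theorem 1.3 for r = 3 and small n.
for n in range(1, 11):
    for i in (1, 2, 3):
        lhs = sum(d for lam in partitions(n)
                  if (d := signature(lam, 3, i)) is not None)
        assert lhs == gordon_side(n, 3, i)
```

For \(r=3\) the printed lists agree with Example 1, and at \(n=7\), \(i=3\) the individual signatures are \(\delta(3+2+2)=-1\), \(\delta(2+2+2+1)=1\) and \(\delta(2+2+1+1+1)=0\), matching Example 4.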
**Definition 2.1**.: _Let \(\mathcal{H}\) be a simple hypergraph with a countable vertex set \(V(\mathcal{H})=\{v_{h}|\ h\in I\}.\) We define the following multivariable series in \(\textbf{v}:=(v_{h})_{h\in I}\):_
\[S(\textbf{v},y)=\sum_{\begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H}) \\ \text{finite}\end{subarray}}(\prod_{v_{h}\in V(\mathcal{L})}v_{h})\ y^{|E( \mathcal{L})|},\]
_where we assume that the coefficient of \(y^{0}\) is equal to \(1\)._
We have the following lemma, which gives an expression for the left-hand side series in Theorem 5.1 in terms of the series just defined above.
**Lemma 2.1**.: _Let \(\textbf{v}=(x_{j,k})_{x_{j,k}\in V(\mathcal{H}^{\infty}_{r,i})}\) and denote by \(S^{w}_{r,i}(q,y)\) the series obtained from \(S(\textbf{v},y)\) by replacing each \(x_{j,k}\) by \(q^{j}\). Then we have:_
\[S^{w}_{r,i}(q,-1)=\sum_{\lambda\in\mathcal{N}_{r,i}}\delta(\lambda)q^{|\lambda |}.\]
Proof.: By definition of \(S(\mathbf{v},y)\) we have:
\[S(\mathbf{v},y)=\sum_{\begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H}_{r,i }^{\infty})\\ \text{finite}\end{subarray}}(\prod_{x_{j,k}\in V(\mathcal{L})}x_{j,k})\ y^{|E( \mathcal{L})|}\]
\[=\sum_{\begin{subarray}{c}V\subset V(\mathcal{H}_{r,i}^{\infty})\\ \text{Finite subset}\end{subarray}}\Big{(}\sum_{\begin{subarray}{c}\mathcal{L} \in Sub_{e}(\mathcal{H}_{r,i}^{\infty})\\ V(\mathcal{L})=V\end{subarray}}(\prod_{x_{j,k}\in V}x_{j,k})\ y^{|E(\mathcal{L} )|}\Big{)}\]
\[=\sum_{\begin{subarray}{c}V\subset V(\mathcal{H}_{r,i}^{\infty})\\ \text{Finite subset}\end{subarray}}(\prod_{x_{j,k}\in V}x_{j,k})\Big{(}\sum_{ \begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H}_{r,i}^{\infty})\\ V(\mathcal{L})=V\end{subarray}}y^{|E(\mathcal{L})|}\Big{)}.\]
On the one hand, as we mentioned before, for each finite sub-hypergraph \(\mathcal{L}^{\prime}\in Sub_{v}(\mathcal{H}_{r,i}^{\infty})\) with a finite \(V(\mathcal{L}^{\prime})=V\subset V(\mathcal{H}_{r,i}^{\infty})\), there exists a unique partition \(\lambda\in\mathcal{N}_{r,i}\) such that \(\mathcal{L}^{\prime}=\mathcal{H}_{\lambda}\). Thus any sub-hypergraph \(\mathcal{L}\in Sub_{e}(\mathcal{H}_{r,i}^{\infty})\) with \(V(\mathcal{L})=V\) is actually an induced edge sub-hypergraph of \(\mathcal{L}^{\prime}=\mathcal{H}_{\lambda}\) with \(V(\mathcal{L})=V.\) So we have:
\[S(\mathbf{v},y)=\sum_{\lambda\in\mathcal{N}_{r,i}}(\prod_{x_{j,k}\in V( \mathcal{H}_{\lambda})}x_{j,k})\Big{(}\sum_{\begin{subarray}{c}\mathcal{L} \in Sub_{e}(\mathcal{H}_{\lambda})\\ V(\mathcal{L})=V(\mathcal{H}_{\lambda})\end{subarray}}(y)^{|E(\mathcal{L})|} \Big{)}.\]
On the other hand, remember that the parts of \(\lambda\) are in bijection with the vertices of \(\mathcal{H}_{\lambda}.\) By definition, \(x_{j,k}\) is a vertex of \(\mathcal{H}_{\lambda}\) if and only if \(j\) is a part of \(\lambda\) repeated at least \(k\) times. This means that if we replace each \(x_{j,k}\) by \(q^{j}\) then \(\prod_{x_{j,k}\in V}x_{j,k}=q^{|\lambda|}\) and therefore:
\[S_{r,i}^{w}(q,-1)=\sum_{\lambda\in\mathcal{N}_{r,i}}q^{|\lambda|}\Big{(}\sum_ {\begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H}_{\lambda})\\ V(\mathcal{L})=V(\mathcal{H}_{\lambda})\end{subarray}}(-1)^{|E(\mathcal{L})|} \Big{)}=\sum_{\lambda\in\mathcal{N}_{r,i}}\delta(\lambda)q^{|\lambda|}.\]
## 3. Multigraded Hilbert series of an edge ideal of a hypergraph
Let \(\mathcal{H}\) be a simple hypergraph with a countable vertex set \(V(\mathcal{H})=\{v_{h}|\ h\in I\}.\) In this section, with such \(\mathcal{H}\), we associate a monomial ideal \(\mathcal{I}(\mathcal{H})\) in a weighted polynomial ring \(A\) whose variables are in bijection with the vertices of \(\mathcal{H};\) we then consider a kind of Hilbert series \(H_{\mathcal{H}}\) of this ideal, this is the generating series of the monomials in the quotient of \(A\) by \(\mathcal{I}(\mathcal{H});\) this latter series is expressed in Proposition 3.1 via \(S(\mathbf{v},y).\)
Let \(\mathbb{K}\) be a field of characteristic zero. Consider the polynomial ring \(A=\mathbb{K}[v_{h}|\ h\in I].\) To each edge \(e=(v_{h_{1}},\cdots,v_{h_{\ell}})\) of \(\mathcal{H}\) we can associate a monomial \(m_{e}=v_{h_{1}}\cdots v_{h_{\ell}}\in A.\)
_The edge ideal \(\mathcal{I}(\mathcal{H})\)_ of \(\mathcal{H}\) is a square-free monomial ideal in \(A\) whose generators are obtained from the edges of \(\mathcal{H}\). i.e.,
\[\mathcal{I}(\mathcal{H})=\langle m_{e_{h}}|\ e_{h}\in E(\mathcal{H})\rangle.\]
**Definition 3.1**.: _Let \(\mathcal{H}\) be a simple hypergraph with a countable vertex set \(V(\mathcal{H})=\{v_{h}|\ h\in I\}\). The Hilbert series \(H_{\mathcal{H}}=H_{A/\mathcal{I}(\mathcal{H})}\) of \(\mathcal{H}\) is the following multivariable series in \(\textbf{v}:=(v_{h})_{h\in I}\):_
\[H_{\mathcal{H}}(\textbf{v})=H_{A/\mathcal{I}(\mathcal{H})}(\textbf{v})=\sum_{ \begin{subarray}{c}m\in A\setminus\mathcal{I}(\mathcal{H})\\ \text{monomial}\end{subarray}}m\]
_which is a series whose variables are the vertices of \(\mathcal{H}\)._
**Proposition 3.1**.: _Let \(\mathcal{H}\) be a simple hypergraph with a countable vertex set \(V(\mathcal{H})=\{v_{h}|\ h\in I\}\). Then:_
\[H_{\mathcal{H}}(\textbf{v})=\frac{S(\textbf{v},-1)}{\prod_{v_{h}\in V}(1-v_{ h})}.\]
Proof.: Consider the hypergraph \(\mathcal{L}\) with \(V(\mathcal{L})=V(\mathcal{H})=V\) and \(E(\mathcal{L})=\emptyset.\) We have:
\[H_{\mathcal{L}}(\textbf{v})=H_{A}(\textbf{v})=\sum_{\begin{subarray}{c}m\in A \\ \text{monomial}\end{subarray}}m=\sum_{i_{h}\in\mathbb{N}}(\prod_{v_{h}\in V}v_{ h}^{i_{h}})=\prod_{v_{h}\in V}(\sum_{i_{h}\in\mathbb{N}}v_{h}^{i_{h}})=\frac{1}{ \prod_{v_{h}\in V}(1-v_{h})}.\]
In order to compute \(H_{\mathcal{H}}(\textbf{v})\) we have to consider all monomials of \(A\) which are not in the monomial ideal \(\mathcal{I}(\mathcal{H})=\langle m_{e_{h}}|\ e_{h}\in E(\mathcal{H})\rangle\). Note that a monomial \(m^{\prime}\in A\) is in the ideal \(\mathcal{I}(\mathcal{H})\) if and only if it is a multiple of **at least** one generator of \(\mathcal{I}(\mathcal{H})\). i.e.,
\[m^{\prime}\in\mathcal{I}(\mathcal{H})\iff\exists e_{h}\in E(\mathcal{H}),\ \exists m\in A,\ m^{\prime}=m_{e_{h}}.m.\]
So there could exist **several different generators** \(m_{e_{1}},\cdots,m_{e_{h}}\) of \(\mathcal{I}(\mathcal{H})\) such that \(m^{\prime}\) is a multiple of each of these generators. Thus \(m^{\prime}\) is also a multiple of the least common multiple of any subset of \(m_{e_{1}},\cdots,m_{e_{h}}\).
Therefore, if we denote by \(\operatorname{lcm}(m_{e_{1}},\cdots,m_{e_{h}})\) the least common multiple of
\(m_{e_{1}},\cdots,m_{e_{h}}\) then, in order to remove all monomials of \(\mathcal{I}(\mathcal{H})\) from \(A\)**only once** we have to:
* Add all monomials of \(A\) (which is equivalent to compute \(H_{A}(\textbf{v})=H_{\mathcal{L}}(\textbf{v})\)).
* remove once the monomials of the form \(m_{e}.m\) for some \(e\in E(\mathcal{H})\) and \(m\in A\), i.e., \[-\sum_{\begin{subarray}{c}m\in A\\ e\in E(\mathcal{H})\end{subarray}}m_{e}.m=-H_{A}(\textbf{v})\sum_{e\in E( \mathcal{H})}m_{e}\]
* Since in the previous step we removed the monomials of the form \(\operatorname{lcm}(m_{e_{1}},m_{e_{2}}).m\) twice, we have to add them back once, which means adding the following series:
\[\sum_{\begin{subarray}{c}m\in A\\ \{e_{1},e_{2}\}\subset E(\mathcal{H})\end{subarray}}\operatorname{lcm}(m_{e_{1}},m_{e_{2}}).m=H_{A}(\mathbf{v})\sum_{\{e_{1},e_{2}\}\subset E(\mathcal{H})} \operatorname{lcm}(m_{e_{1}},m_{e_{2}}).\]
* Once again, since in the previous step we have added the monomials of the form \(\operatorname{lcm}(m_{e_{1}},m_{e_{2}},m_{e_{3}}).m\) twice, we now need to remove them once and so on.
Thus:
\[H_{\mathcal{H}}(\mathbf{v})=H_{A}(\mathbf{v})\Big{(}1-\sum_{e\in E(\mathcal{H })}m_{e}+\sum_{\{e_{1},e_{2}\}\subset E(\mathcal{H})}\operatorname{lcm}(m_{e_ {1}},m_{e_{2}})-\sum_{\{e_{1},e_{2},e_{3}\}\subset E(\mathcal{H})}\operatorname{lcm}(m_{e_{1} },m_{e_{2}},m_{e_{3}})+\cdots+(-1)^{k}\sum_{\{e_{1},\cdots,e_{k}\}\subset E(\mathcal{H})}\operatorname{lcm }(m_{e_{1}},\cdots,m_{e_{k}})+\cdots\Big{)}\]
If we denote by \(T_{k}=(-1)^{k}\sum_{\{e_{1},\cdots,e_{k}\}\subset E(\mathcal{H})} \operatorname{lcm}(m_{e_{1}},\cdots,m_{e_{k}})\) then we have:
\[H_{\mathcal{H}}(\mathbf{v})=H_{A}(\mathbf{v})(1+T_{1}+\cdots+T_{k}+\cdots).\]
Note that choosing \(k\) edges \(\{e_{1},\cdots,e_{k}\}\subset E(\mathcal{H})\) is equivalent to considering an edge induced sub-hypergraph \(\mathcal{L}\) of \(\mathcal{H}\) with \(|E(\mathcal{L})|=k\) whose edge set is \(E(\mathcal{L})=\{e_{1},\cdots,e_{k}\}\subset E(\mathcal{H})\) and whose vertex set is the union of the vertices of its edges. Thus \(\operatorname{lcm}(m_{e_{1}},\cdots,m_{e_{k}})\) is equal to the product of the vertices of \(\mathcal{L}\) and therefore:
\[T_{k}=\sum_{\begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H})\\ |E(\mathcal{L})|=k\end{subarray}}(-1)^{|E(\mathcal{L})|}(\prod_{v_{\ell}\in V( \mathcal{L})}v_{\ell}),\]
and
\[H_{\mathcal{H}}(\mathbf{v}) =H_{A}(\mathbf{v})\Big{(}1+\sum_{k\in\mathbb{N}^{*}}\sum_{ \begin{subarray}{c}\mathcal{L}\in Sub_{e}(\mathcal{H})\\ |E(\mathcal{L})|=k\end{subarray}}(-1)^{|E(\mathcal{L})|}(\prod_{v_{\ell}\in V( \mathcal{L})}v_{\ell})\Big{)}\] \[=H_{A}(\mathbf{v})\Big{(}1+\sum_{\begin{subarray}{c}\mathcal{L} \in Sub_{e}(\mathcal{H})\\ \text{finite}\end{subarray}}(-1)^{|E(\mathcal{L})|}(\prod_{v_{\ell}\in V (\mathcal{L})}v_{\ell})\Big{)}\] \[=H_{A}(\mathbf{v})\ S(\mathbf{v},-1)\]
\[= \frac{S(\mathbf{v},-1)}{\prod_{v_{h}\in V}(1-v_{h})}.\]
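To illustrate Proposition 3.1 in the smallest possible case (an example of ours), let \(\mathcal{H}\) be the graph with \(V(\mathcal{H})=\{v_{1},v_{2}\}\) and the single edge \((v_{1},v_{2})\), so that \(\mathcal{I}(\mathcal{H})=\langle v_{1}v_{2}\rangle.\) The monomials outside \(\mathcal{I}(\mathcal{H})\) are exactly the pure powers \(v_{1}^{a}\) and \(v_{2}^{b}\), whence

\[H_{\mathcal{H}}(\textbf{v})=\frac{1}{1-v_{1}}+\frac{1}{1-v_{2}}-1=\frac{1-v_{1}v_{2}}{(1-v_{1})(1-v_{2})},\]

in agreement with the proposition, since the only element of \(Sub_{e}(\mathcal{H})\) is \(\mathcal{H}\) itself and thus \(S(\textbf{v},-1)=1+(-1)^{1}v_{1}v_{2}=1-v_{1}v_{2}.\)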
## 4. Hilbert-Poincare series and signature of \((r,i)\)-Neighborly partitions
For the integers \(1\leq i\leq r\) consider the \(\mathbb{K}\)-algebra:
\[\mathcal{P}_{r,i}=\mathbb{K}[x_{j,k}|\ x_{j,k}\in V(\mathcal{H}_{r,i}^{\infty})] /\mathcal{I}(\mathcal{H}_{r,i}^{\infty}),\]
which is graded by giving the weight \(j\) to \(x_{j,k}.\) This means that there exist finite-dimensional vector spaces \(\mathcal{P}_{(r,i),n},\) each generated by the monomials of weight \(n\) which do not belong to the ideal \(\mathcal{I}(\mathcal{H}_{r,i}^{\infty}),\) and such that \(\mathcal{P}_{r,i}=\oplus_{n\in\mathbb{N}}\mathcal{P}_{(r,i),n}.\) By definition, the _Hilbert-Poincare series_ of \(\mathcal{P}_{r,i}\) is given by:
\[HP_{\mathcal{P}_{r,i}}(q)=\sum_{n\in\mathbb{N}}\dim_{\mathbb{K}}(\mathcal{P}_{ (r,i),n})q^{n}.\]
Using Proposition 3.1 and Lemma 2.1 we can see the relation between this \(q\)-series and the signature of the neighborly partitions in the following proposition:
**Proposition 4.1**.: _For the integers \(1\leq i\leq r\) we have:_
\[HP_{\mathcal{P}_{r,i}}(q)=\frac{\sum_{\lambda\in\mathcal{N}_{r,i}}\delta( \lambda)q^{|\lambda|}}{(1-q)^{i}\prod_{j\in\mathbb{N}\setminus\{0,1\}}(1-q^{j })^{r}}.\]
Proof.: Note that the monomials of weight \(n\) in \(\mathbb{K}[x_{j,k}|\ x_{j,k}\in V(\mathcal{H}_{r,i}^{\infty})]\) are those whose sum of the first indices is equal to \(n,\) i.e., a monomial \(m=x_{j_{1},k_{1}}\cdots x_{j_{\ell},k_{\ell}}\) is of weight \(n\) if \(\sum_{s=1}^{\ell}j_{s}=n.\) This means that if we replace \(x_{j_{s},k_{s}}\) by \(q^{j_{s}}\) in \(m\) then we obtain \(q^{n}.\) Thus, by definition of the Hilbert series of a hypergraph and the Hilbert-Poincare series of a graded algebra, if we denote by \(H_{\mathcal{H}_{r,i}}^{w}(q)\) the series obtained from the Hilbert series of \(\mathcal{H}_{r,i}^{\infty}\) by replacing each \(x_{j,k}\) by \(q^{j}\), then we have:
\[HP_{\mathcal{P}_{r,i}}(q)=H_{\mathcal{H}_{r,i}}^{w}(q).\]
Applying Proposition 3.1 and Lemma 2.1 on \(H_{\mathcal{H}_{r,i}}^{w}(q)\) we obtain:
\[HP_{\mathcal{P}_{r,i}}(q)=\frac{S_{r,i}^{w}(q,-1)}{(1-q)^{i}\prod_{j\in \mathbb{N}\setminus\{0,1\}}(1-q^{j})^{r}}=\frac{\sum_{\lambda\in\mathcal{N}_{ r,i}}\delta(\lambda)q^{|\lambda|}}{(1-q)^{i}\prod_{j\in\mathbb{N}\setminus\{0,1\}}(1-q^{j })^{r}}.\]
## 5. Gordon's identities and \((r,i)\)-Neighborly partitions
In this section we give the relation between the partitions appearing in Gordon's identities and the \((r,i)\)-neighborly partitions. Using this relation, we prove our main theorem.
To do so, we need to use the _polarization_ of monomial ideals. This is an operation which allows us to obtain a square-free monomial ideal from a non-square-free one by turning the monomial \(x_{\ell}^{s}\in\mathbb{K}[x_{j}|\ j\in B]\) into the square-free monomial \(x_{\ell,1}\cdots x_{\ell,s}\in\mathbb{K}[x_{j,k}|\ j\in B,k\in C].\) There is a close relation between the Hilbert-Poincare series of the quotient ring of an ideal and that of the quotient ring of its polarization (see Corollary 1.6.3 of [8], see also [14] and [10]).
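For instance (an illustration of ours), the polarization of \(x_{1}^{2}x_{2}\) is \(x_{1,1}x_{1,2}x_{2,1}\); in particular, the generator \(x_{1}^{i}\) of the ideal \(J_{r,i}\) defined below polarizes to \(x_{1,1}\cdots x_{1,i}\), which is precisely the monomial attached to the edge \((x_{1,1},\cdots,x_{1,i})\) of \(\mathcal{H}_{r,i}^{\infty}\), and the generator \(x_{\ell}^{s}x_{\ell+1}^{r-s}\) polarizes to the monomial of the edge \((x_{\ell,1},\cdots,x_{\ell,s},x_{(\ell+1),1},\cdots,x_{(\ell+1),(r-s)})\).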
Denote by \(J_{r,i}\) the following monomial ideal:
\[J_{r,i}=\langle x_{1}^{i},x_{\ell}^{s}x_{\ell+1}^{r-s}|\ 1\leq s\leq r\text{ and } \ell\geq 1\rangle\subset\mathbb{K}[x_{j}|\ j\geq 1].\]
Note that \(\mathcal{I}(\mathcal{H}_{r,i}^{\infty})\) is the polarization of the ideal \(J_{r,i}\). In [6] (see also [5]) Bruschek, Mourtada and Schepers proved that if we give \(x_{j}\) the weight \(j\), then the Hilbert-Poincare series of the graded algebra \(\mathbb{K}[x_{j}|\ j\geq 1]/J_{r,i}\) is equal to the generating series of the partitions appearing in Gordon's identities, which are counted by \(B_{r,i}(n).\) We use this result and the relation between \(\mathcal{I}(\mathcal{H}_{r,i}^{\infty})\) and \(J_{r,i}\) to prove our main results:
**Theorem 5.1**.: _Let \(1\leq i\leq r\) be integers. We have_
\[\sum_{\lambda\in\mathcal{N}_{r,i}}\delta(\lambda)q^{|\lambda|}=\sum_{n\in \mathbb{N}}\Big{(}\sum_{\lambda\in\mathcal{R}_{r,i}(n)}(-1)^{\ell(\lambda)} \Big{)}q^{n}=\prod_{j\equiv 0,\pm i[2r+1]}(1-q^{j}),\]
_where \(|\lambda|\) is the sum of the parts of \(\lambda.\)_
Proof.: On the one hand, since \(\mathcal{I}(\mathcal{H}_{r,i}^{\infty})\) is the polarization of the ideal \(J_{r,i},\) we have the following relation between the Hilbert-Poincare series of the quotient rings of these ideals (See Corollary 1.6.3 of [8], see also [14] and [10]):
\[HP_{\mathcal{P}_{r,i}}(q)=\frac{HP_{\frac{\mathbb{K}[x_{j}|\ j\geq 1]}{J_{r,i}}} (q)}{(1-q)^{i-1}\prod_{j\in\mathbb{N}\setminus\{0,1\}}(1-q^{j})^{r-1}}\]
On the other hand, since the Hilbert-Poincare series of \(\mathbb{K}[x_{j}|\ j\geq 1]/J_{r,i}\) is equal to the generating series of the partitions appearing in Gordon's identities, we have:
\[HP_{\frac{\mathbb{K}[x_{j}|\ j\geq 1]}{J_{r,i}}}(q)=\sum_{n\in\mathbb{N}}B_{r,i} (n)q^{n}=\sum_{n\in\mathbb{N}}A_{r,i}(n)q^{n}=\frac{\prod_{j\equiv 0,\pm i[2r+1]}(1 -q^{j})}{\prod_{j\in\mathbb{N}}(1-q^{j})}.\]
and thus:
\[HP_{\mathcal{P}_{r,i}}(q)=\frac{\prod_{j\equiv 0,\pm i[2r+1]}(1-q^{j})}{(1-q)^{ i}\prod_{j\in\mathbb{N}\setminus\{0,1\}}(1-q^{j})^{r}}.\]
By comparing this formula with the formula in Proposition 4.1 we obtain:
\[\sum_{\lambda\in\mathcal{N}_{r,i}}\delta(\lambda)q^{|\lambda|}=\prod_{j\equiv 0,\pm i[2r+1]}(1-q^{j}).\]
The right-hand side of the equation above is the (signed) generating series \(\sum_{n\in\mathbb{N}}\big{(}\sum_{\lambda\in\mathcal{R}_{r,i}(n)}(-1)^{\ell( \lambda)}\big{)}q^{n}\) of the sets \(\mathcal{R}_{r,i}(n),\) as the proof of Theorem 5.2 below makes explicit.
As we mentioned in the introduction, Theorem 5.1 is equivalent to the following theorem:
**Theorem 5.2**.: _Let \(1\leq i\leq r\) be integers. Then:_
\[\sum_{\lambda\in\mathcal{N}_{r,i}(n)}\delta(\lambda)=\sum_{\lambda\in \mathcal{R}_{r,i}(n)}(-1)^{\ell(\lambda)}.\]
Proof.: Fix an integer \(n\in\mathbb{N}\). In order to prove this theorem we prove that the coefficients of \(q^{n}\) on both sides of the equation in Theorem 5.1 are equal.
On the one hand, we have:
\[\sum_{\lambda\in\mathcal{N}_{r,i}}\delta(\lambda)q^{|\lambda|}=\sum_{n\in \mathbb{N}}(\sum_{\lambda\in\mathcal{N}_{r,i}(n)}\delta(\lambda))q^{n}.\]
On the other hand, in order to find the coefficient of \(q^{n}\) in \(\prod_{j\equiv 0,\pm i[2r+1]}(1-q^{j})\), we take a partition \(\lambda=(\lambda_{1},\cdots,\lambda_{\ell})\in\mathcal{R}_{r,i}(n)\) of length \(\ell\). Thus, for each \(1\leq j\leq\ell\) we know that \(\lambda_{j}\equiv 0,\pm i[2r+1]\) with \(\sum_{j=1}^{\ell}\lambda_{j}=n\) and we have:
\[1.(-q^{\lambda_{1}})(-q^{\lambda_{2}})\cdots(-q^{\lambda_{\ell}})=(-1)^{\ell }q^{\sum_{j=1}^{\ell}\lambda_{j}}=(-1)^{\ell}q^{n}.\]
This proves that the coefficient of \(q^{n}\) in the right-hand side of the equation in Theorem 5.1 is equal to:
\[\sum_{\lambda\in\mathcal{R}_{r,i}(n)}(-1)^{\ell(\lambda)}.\]
Thus:
\[\sum_{\lambda\in\mathcal{N}_{r,i}(n)}\delta(\lambda)=\sum_{\lambda\in \mathcal{R}_{r,i}(n)}(-1)^{\ell(\lambda)}.\]
|
2308.16603 | Weighted approximation for limsup sets | Theorems of Khintchine, Groshev, Jarn\'ik, and Besicovitch in Diophantine
approximation are fundamental results on the metric properties of $\Psi$-well
approximable sets. These foundational results have since been generalised to
the framework of weighted Diophantine approximation for systems of real linear
forms (matrices). In this article, we prove analogues of these weighted results
in a range of settings including the $p$-adics (Theorems 7 and 8), complex
numbers (Theorems 9 and 10), quaternions (Theorems 11 and 12), and formal power
series (Theorems 13 and 14). We also consider approximation by uniformly
distributed sequences. Under some assumptions on the approximation functions,
we prove a 0-1 dichotomy law (Theorem 15). We obtain divergence results for any
approximation function under some natural restrictions on the discrepancy
(Theorems 16, 17, and 19).
The key tools in proving the main parts of these results are the weighted
ubiquitous systems and weighted mass transference principle introduced recently
by Kleinbock and Wang [Adv. Math. 428 (2023), Paper No. 109154], and Wang and
Wu [Math. Ann. 381 (2021), no. 1-2, 243--317] respectively. | Gerardo González Robert, Mumtaz Hussain, Nikita Shulga, Benjamin Ward | 2023-08-31T10:04:00Z | http://arxiv.org/abs/2308.16603v2 | # Weighted approximation for limsup sets
###### Abstract.
Theorems of Khintchine, Groshev, Jarnik, and Besicovitch in Diophantine approximation are fundamental results on the metric properties of \(\Psi\)-well approximable sets. These foundational results have since been generalised to the framework of weighted Diophantine approximation for systems of real linear forms (matrices). In this article, we prove analogues of these weighted results in a range of settings including the \(p\)-adics (Theorems 7 and 8), complex numbers (Theorems 9 and 10), quaternions (Theorems 11 and 12), and formal power series (Theorems 13 and 14). We also consider approximation by uniformly distributed sequences. Under some assumptions on the approximation functions, we prove a 0-1 dichotomy law (Theorem 15). We obtain divergence results for any approximation function under some natural restrictions on the discrepancy (Theorems 16, 17, and 19).
The key tools in proving the main parts of these results are the weighted ubiquitous systems and weighted mass transference principle introduced recently by Kleinbock and Wang [Adv. Math. 428 (2023), Paper No. 109154], and Wang & Wu [Math. Ann. 381 (2021), no. 1-2, 243-317] respectively.
This research is supported by the ARC Discovery Project 200100994.
4.2 Proof of Theorem 7
* 4.3 Proof of Theorem 8
* 5 Complex approximation
* 5.1 A Minkowski-type theorem
* 5.2 Proof of Theorem 9
* 5.3 Proof of Theorem 10
* 6 Quaternion approximation
* 6.1 Ubiquity for quaternions
* 6.2 Proof of Theorem 11
* 6.3 Proof of Theorem 12
* 7 Formal power series approximation
* 7.1 Proof of Theorem 13
* 7.2 Proof of Theorem 14
* 8 Uniformly distributed sequences
* 8.1 The ubiquity property
* 8.2 Proof of Theorem 15
* 8.3 Proof of Theorem 16
* 8.4 Proof of Theorem 17
* 8.5 Proof of Theorem 19 and 20
* 8.6 Proof of Proposition 8
## 1. Introduction
Let \(n,m\geq 1\) be integers and \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of multivariable approximation functions of the form \(\psi_{i}:\mathbb{N}^{m}\to\mathbb{R}_{+}\) with
\[\psi_{i}(\mathbf{q})\to 0\quad\text{ as }\|\mathbf{q}\|:=\max(|q_{1}|,\ldots,|q_{m}|) \to\infty.\]
Let
\[W_{n,m}(\Psi):=\left\{X\in[0,1]^{m\times n}:\begin{array}{l}|\mathbf{q}X_{i} +p_{i}|<\psi_{i}(\mathbf{q})\quad 1\leq i\leq n,\\ \text{ for i. m. }(\mathbf{p},\mathbf{q})\in\mathbb{Z}^{n}\times\mathbb{Z}^{m} \setminus\{\mathbf{0}\}\end{array}\right\}.\]
Here and in what follows "i. m." stands for "infinitely many". \(X=(x_{i,j})_{1\leq i\leq m,1\leq j\leq n}\) is an \(m\times n\) matrix with entries \(x_{i,j}\in[0,1]\), and \(X_{i}\) denotes the \(i\)th column vector of \(X\). So
\[\mathbf{q}X+\mathbf{p}=\left(\begin{array}{c}\mathbf{q}X_{1}+p_{1}\\ \vdots\\ \mathbf{q}X_{n}+p_{n}\end{array}\right)=\left(\begin{array}{c}q_{1}x_{1,1} +\cdots+q_{m}x_{m,1}+p_{1}\\ \vdots\\ q_{1}x_{1,n}+\cdots+q_{m}x_{m,n}+p_{n}\end{array}\right).\]
Generally, a Khintchine-Groshev type theorem tells us the \(nm\)-dimensional Lebesgue measure of \(W_{n,m}(\Psi)\), which is either zero or full depending upon the convergence or divergence of a certain series respectively. Naturally, the series is dependent upon the nature of the approximation functions.
There are many variations of Khintchine-Groshev type theorems; to highlight a few we recall the following definitions. When \(\psi_{1}=\cdots=\psi_{n}\) we say the approximation function \(\Psi\) is _non-weighted_ and simply denote it by \(\psi\); otherwise, it is called _weighted_. If the approximation functions \(\psi_{i}\) are of the form \(\phi_{i}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) with \(\psi_{i}(\mathbf{a})=\phi_{i}(\|\mathbf{a}\|)\) for each \(1\leq i\leq n\) we say the approximation function is _univariable_. A generalisation of the univariable setting is obtained when the sup norm is replaced by the quasi-norm \(\|\cdot\|_{v}=\max_{1\leq i\leq m}|\cdot|^{1/v_{i}}\) associated with a vector \(v=(v_{1},\ldots,v_{m})\), where each \(v_{i}>0\) and \(\sum v_{i}=m\). If each approximation function is of the form \(\psi_{i}(\mathbf{a})=\phi_{i}(\|\mathbf{a}\|_{v})\) we follow [6] and say \(\Psi\) has _property P_. If each approximation function \(\psi_{i}\) is monotonic decreasing we say \(\Psi\) is monotonic decreasing. In the case of multivariable approximation, this means
\[\psi_{i}(\mathbf{a})<\psi_{i}(\mathbf{b})\quad\forall\ \|\mathbf{b}\|<\| \mathbf{a}\|.\]
The following result, due to Kleinbock and Wang [57, Theorem 2.7] provides the most modern version of the Khintchine-Groshev theorem in the weighted monotonic univariable setting. For the sake of brevity, we have stated this result for the setting satisfying property \(P\), but it should be noted that Kleinbock & Wang's result was more general still, see [57] for more details.
**Theorem 1** ([57]).: _Let \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of monotonic univariable approximation functions satisfying property P. Then_
\[\mu_{m\times n}^{\mathbb{R}}\left(W_{n,m}(\Psi)\right)=\left\{\begin{array}[] {ll}0&\mathrm{if}\quad\sum\limits_{r=1}^{\infty}r^{m-1}\prod_{i=1}^{n}\psi_{i }(r)<\infty,\\ 1&\mathrm{if}\quad\sum\limits_{r=1}^{\infty}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r)= \infty.\end{array}\right.\]
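For instance, if \(\psi_{i}(r)=r^{-\tau_{i}}\) with each \(\tau_{i}>0\), the series in Theorem 1 becomes \(\sum_{r}r^{m-1-\sum_{i}\tau_{i}}\), so \(W_{n,m}(\Psi)\) is null when \(\sum_{i}\tau_{i}>m\) and full when \(\sum_{i}\tau_{i}\leq m\).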
Here and throughout we denote the \(m\times n\)-dimensional Lebesgue measure by \(\mu_{m\times n}^{\mathbb{R}}\). Below we briefly highlight the results preceding this. These include but are not limited to
* \(n=m=1\), \(\psi\) monotonic, Khintchine [49].
* \(n=m=1\), \(\psi\) non-monotonic, conjectured by Duffin and Schaeffer (1941) and proven by Maynard and Koukoulopoulos [58].
* \(n>1,m=1\), \(\psi\) is monotonic, Khintchine [50].
* \(nm\geq 1\), \(\psi\) is univariable, proven by Groshev [37]. In fact, the original theorem carried a stronger assumption on the approximating function \(\psi\), namely that \(r^{\max(1,m-1)}\psi(r)^{n}\) be monotonic. The proof that this assumption is unnecessary is due to [9].
* \(nm>1\), \(\psi\) is univariable, non-monotonic, by Beresnevich & Velani [13].
* \(n\geq 1\), \(m>1\), for univariable non-monotonic weighted \(\Psi\), by Hussain & Yusupova [45].
* In slightly different settings, for small linear forms, the analogue of Theorem 1 was established by Fischler, Hussain, Kristensen, & Levesley [35].
Similar results for the more general _inhomogeneous approximation_ are known but, as far as the authors are aware, the case of weighted multivariable inhomogeneous approximation remains unexplored.
When the \(n\)-tuple of approximation functions \(\Psi\) decreases sufficiently fast such that \(\mu_{m\times n}^{\mathbb{R}}\left(W_{n,m}(\Psi)\right)=0\) we desire a more delicate notion of "size". The Hausdorff measure and dimension provide us with such a tool. The following Hausdorff measure and dimension results are known for the set \(W_{n,m}(\Psi)\).
**Theorem 2** ([76, Theorem 10.2]).: _Let \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of approximation functions of the form_
\[\psi_{i}(q)=q^{-\tau_{i}}\,,\quad(1\leq i\leq n)\]
_for \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) with_
\[\sum\limits_{i=1}^{n}\tau_{i}>m.\]
_Then_
\[\dim_{\rm H}W_{n,m}(\Psi)=\min\limits_{1\leq k\leq n}\left\{n(m-1)+\frac{n+m+ \sum\limits_{j:\tau_{j}<\tau_{k}}(\tau_{k}-\tau_{j})}{1+\tau_{k}}\right\}.\]
It should be noted that the above result was proven for a more general range of non-increasing approximation functions, with the \(\tau_{i}\)'s in the dimension result being replaced by limit points of vectors related to \(\psi_{i}\). For example, in the univariable case \(\tau\) is replaced by the lower order at infinity of \(\psi\), that is, \(\tau=\liminf_{q\to\infty}\frac{-\log\psi(q)}{\log q}\).
As with the result of Kleinbock and Wang, various results preceded this. We highlight a few below. For brevity, we stick to considering approximation functions of the form considered in the above theorem.
* \(n=m=1\) was independently proven by Jarnik and Besicovitch.
* for all linear forms but for the non-weighted case, it was proven by Bovey and Dodson in [15]. This result was further generalised to the lower order settings by Dodson in [23].
* for \(m=1\) and \(n\geq 1\) it was proven in the weighted case by Rynne [69]. In fact, the result of Rynne was more general, as he restricted the rational denominators to an infinite subset of integers. This result was further generalised to dual settings by Dickinson and Rynne [70].
The aim of this paper is to prove analogues of the above two theorems in a variety of settings. In order to prove Lebesgue dichotomy statements (analogues of Theorem 1) in a variety of settings we use the recently developed notion of weighted ubiquitous systems. The notion of weighted ubiquity was introduced by Wang and Wu in [76] and was developed further by Kleinbock and Wang in [57]. In the paper [76], the authors established a very powerful weighted mass transference principle that, under a weighted ubiquity assumption, allows for Hausdorff measure/dimension results in the weighted settings. We refer the reader to [4, 5] for a comprehensive survey of the mass transference principle.
In the following section, we recall this theory. In the subsequent sections, along with some other things, we apply this framework to a variety of settings including Diophantine approximation in \(p\)-adics, formal power series, complex numbers, quaternions, and uniform distribution theory.
## 2. Toolbox
This section consists of a range of tools used to determine metric properties of \(\limsup\) sets. The first subsection recalls Dirichlet's and Minkowski's theorems on linear forms. The second subsection provides the generalised setup of weighted ubiquitous systems into which our applications fall. The third subsection gives the techniques required to determine ambient measure results on \(\limsup\) sets. The last subsection recalls the definition of Hausdorff measure and dimension and gives the tools required to prove Hausdorff measure and dimension results on \(\limsup\) sets.
### Dirichlet and Minkowski's theorems
A basic problem of Diophantine approximation is to approximate a given real number by rational numbers to a certain degree. For example, it is obvious that for a given \(\alpha\in\mathbb{R}\) and any \(q\in\mathbb{N}\), there is some integer \(p\) such that
\[|q\,\alpha-p|<\frac{1}{2}.\]
A classical result by Dirichlet improves this observation. Namely, given \(\alpha\in\mathbb{R}\), for each real number \(Q>1\) there is some \((q,p)\in\mathbb{Z}^{2}\) such that
\[1\leq q\leq Q\quad\text{ and }\quad|q\alpha-p|<\frac{1}{Q}.\]
As a consequence, there are infinitely many pairs \((q,p)\in\mathbb{N}\times\mathbb{Z}\) satisfying
\[|q\alpha-p|<\frac{1}{q}.\]
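For instance, for \(\alpha=\sqrt{2}\) the continued fraction convergents yield the pairs \((q,p)=(2,3),(5,7),(12,17),\ldots\), and indeed \(|5\sqrt{2}-7|\approx 0.0711<\frac{1}{5}\).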
Dirichlet's result can be proven using the pigeonhole principle. Furthermore, the argument can be easily extended to linear forms (see, for example, [71, Chapter II, Theorem 1E]).
**Theorem 3** (Dirichlet, 1842).: _Let \(m,n\in\mathbb{N}\) and \(A\in\mathbb{R}^{m\times n}\) be given. For any \(Q>1\), there exists a non-zero \(\mathbf{q}=(q_{1},\ldots,q_{m})\in\mathbb{Z}^{m}\) and some \(\mathbf{p}=(p_{1},\ldots,p_{n})\in\mathbb{Z}^{n}\) such that_
\[|\mathbf{q}A_{i}-p_{i}| <Q^{-1}\quad(1\leq i\leq n),\] \[|q_{j}| \leq Q^{\frac{n}{m}}\quad(1\leq j\leq m).\]
Take \(m,n\in\mathbb{N}\). Define the function \(\Psi:\mathbb{N}\to\mathbb{R}_{+}\) by \(\Psi(r)=r^{-m/n}\). For every \(\mathbf{q}=(q_{1},\ldots,q_{m})\in\mathbb{Z}^{m}\), write \(\|\mathbf{q}\|=\max\{|q_{1}|,\ldots,|q_{m}|\}\). By Theorem 3, for every matrix \(A\in\mathbb{R}^{m\times n}\) there are infinitely many vectors \(0\neq\mathbf{q}\in\mathbb{Z}^{m}\) and \(\mathbf{p}\in\mathbb{Z}^{n}\) such that
\[|\mathbf{q}A_{i}-p_{i}|<\Psi(\|\mathbf{q}\|)\quad(1\leq i\leq n). \tag{2.1}\]
Dirichlet's theorem and its consequences, despite their foundational character, are not strong enough for many applications including the ones we study in this article. This is because we might need to replace the function \(\Psi\) in (2.1) by positive and non-increasing functions \(\Psi_{i}\) in each coordinate axis \(i\in\{1,\ldots,n\}\). Minkowski's theorem for linear forms (Theorem 4 below) is a key tool towards this goal.
Recall that a _lattice_\(\Lambda\) on \(\mathbb{R}^{n}\), \(n\in\mathbb{N}\), is a set of the form
\[\Lambda=\{a_{1}\mathbf{v}_{1}+\ldots+a_{n}\mathbf{v}_{n}:a_{1},\ldots,a_{n}\in \mathbb{Z}\},\]
where \(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\in\mathbb{R}^{n}\) are \(n\) given linearly independent row vectors. The determinant of \(\Lambda\), denoted \(\det(\Lambda)\), is the determinant of the matrix formed by \(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\).
**Theorem 4** (Minkowski, 1896).: _Let \(N\in\mathbb{N}\), \(\Lambda\subseteq\mathbb{R}^{N}\) a lattice of determinant \(d\), and \(A\in\mathbb{R}^{N\times N}\). If the positive real numbers \(c_{1},\ldots,c_{N}\) satisfy_
\[\det(\Lambda)|\det(A)|\leq c_{1}\cdots c_{N},\]
_then, for \(j_{0}\in\{1,\ldots,N\}\), there is some non-zero \(\mathbf{u}\in\Lambda\) such that_
\[\left|\mathbf{u}A_{j}\right|\leq c_{j}\quad(1\leq j\leq j_{0}),\] \[\left|\mathbf{u}A_{j}\right|<c_{j}\quad(j_{0}+1\leq j\leq N).\]
A proof of Theorem 4 can be found in [18, Chapter III, Theorem III]. Our formulation is slightly different from the one in the reference, but the proof can be easily adapted to our case.
In relation to Diophantine approximation, Theorem 4 allows us to deduce that for a vector \(v=(v_{1},\ldots,v_{m})\) with \(\sum_{i=1}^{m}v_{i}=m\) and for any \(A\in\mathbb{R}^{m\times n}\), the system
\[|\mathbf{q}A_{i}-p_{i}|<\psi_{i}(\|\mathbf{q}\|_{v})\quad(1\leq i\leq n)\,,\]
has solutions for infinitely many integer vectors \((\mathbf{p},\mathbf{q})\in\mathbb{Z}^{n}\times(\mathbb{Z}^{m}\setminus\{0\})\) provided the \(n\)-tuple of monotonic decreasing approximation functions satisfies
\[\prod_{i=1}^{n}\psi_{i}(r)=r^{-1}\quad\text{ and }\quad\psi_{i}(r)<\tfrac{1}{2} \quad(1\leq i\leq n)\]
for all \(r\in\mathbb{R}_{+}\).
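As a quick numerical illustration of this deduction in the case \(m=1\), \(n=2\) (a brute-force sketch of ours; the function name, the chosen exponents, and the search bound are purely illustrative), one can search for such a solution directly:

```python
import math

def weighted_approx(alpha, tau, Q):
    """Search for 1 <= q <= Q with |q*alpha_i - p_i| < Q**(-tau_i) for all i.

    This is the case m = 1, n = len(alpha); Theorem 4 guarantees a solution
    of the corresponding non-strict system whenever tau_1 + ... + tau_n <= 1."""
    for q in range(1, int(Q) + 1):
        if all(abs(q * a - round(q * a)) < Q ** (-t) for a, t in zip(alpha, tau)):
            return q, tuple(round(q * a) for a in alpha)
    return None

# Weighted exponents (0.7, 0.3) summing to 1, so that the product of the
# approximation functions equals Q^{-1}:
print(weighted_approx((math.sqrt(2), math.sqrt(3)), (0.7, 0.3), 10**4))
```

The weights skew the quality of approximation towards the first coordinate while keeping the product \(\prod_{i}\psi_{i}\) of the size required above.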
### Weighted Ubiquitous systems
In this section we give the definition of local ubiquity for rectangles as given in [57]. This definition is a generalisation of ubiquity for rectangles as found in [76]. The notion of an "ubiquitous system" for balls was introduced by Dodson, Rynne, and Vickers [26], and was then generalised to the abstract metric space setting in [9].
Fix an integer \(n\geq 1\), and for each \(1\leq i\leq n\) let \((\Omega_{i},|\cdot|_{i},\mu_{i})\) be a bounded locally compact metric space, where \(|\cdot|_{i}\) denotes a metric on \(\Omega_{i}\), and \(\mu_{i}\) denotes a Borel probability measure over \(\Omega_{i}\). Further, we assume that the measure \(\mu_{i}\) is a \(\delta_{i}\)-Ahlfors regular probability measure. That is, there exist constants \(0<c_{1}<c_{2}<\infty\) such that for any ball \(B_{i}(x_{i},r)\) with centre \(x_{i}\in\Omega_{i}\) and radius \(0<r<r_{0}\) for some \(r_{0}\in\mathbb{R}_{+}\) we have that
\[c_{1}r^{\delta_{i}}\leq\mu_{i}(B_{i}(x_{i},r))\leq c_{2}r^{\delta_{i}}\,.\]
Consider the product space \((\Omega,\|\cdot\|,\mu)\), where
\[\Omega=\prod_{i=1}^{n}\Omega_{i},\quad\mu=\prod_{i=1}^{n}\mu_{i},\quad\|\cdot \|=\max_{1\leq i\leq n}|\cdot|_{i}\]
are defined in the usual way. For any \(x\in\Omega\) and \(r\in\mathbb{R}_{+}\) define the open ball
\[B(x,r)=\left\{y\in\Omega:\max_{1\leq i\leq n}|x_{i}-y_{i}|_{i}<r\right\}= \prod_{i=1}^{n}B_{i}(x_{i},r),\]
where \(B_{i}\) are the open \(r\)-balls associated with the \(i^{\text{th}}\) metric space \(\Omega_{i}\). Let \(J\) be a countably infinite index set, and \(\beta:J\to\mathbb{R}_{+}\), \(\alpha\mapsto\beta_{\alpha}\) a positive function satisfying the condition that for any \(N\in\mathbb{N}\)
\[\#\left\{\alpha\in J:\beta_{\alpha}<N\right\}<\infty.\]
Let \(l_{k},u_{k}\) be two sequences in \(\mathbb{R}_{+}\) such that \(u_{k}\geq l_{k}\) with \(l_{k}\to\infty\) as \(k\to\infty\). Define
\[J_{k}=\{\alpha\in J:l_{k}\leq\beta_{\alpha}\leq u_{k}\}.\]
Let \(\rho=(\rho_{1},\ldots,\rho_{n})\) be an \(n\)-tuple of non-increasing functions \(\rho_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that each \(\rho_{i}(x)\to 0\) as \(x\to\infty\). For each \(1\leq i\leq n\), let \((R_{\alpha,i})_{\alpha\in J}\) be a sequence of subsets in \(\Omega_{i}\). The family of sets \((R_{\alpha})_{\alpha\in J}\) where
\[R_{\alpha}=\prod_{i=1}^{n}R_{\alpha,i},\]
for each \(\alpha\in J\), are called _resonant sets_.
Define
\[\Delta(R_{\alpha},\rho(r))=\prod_{i=1}^{n}\Delta_{i}(R_{\alpha,i},\rho_{i}(r)),\]
where for any set \(A\subset\Omega_{i}\) and \(b\in\mathbb{R}_{+}\)
\[\Delta_{i}(A,b)=\bigcup_{a\in A}B_{i}(a,b)\]
is the union of balls in \(\Omega_{i}\) of radius \(b\) centred at all possible points in \(A\). Generally \(\Delta(R_{\alpha},\rho(r))\) is the product of \(\rho_{i}(r)\)-neighbourhoods of \(R_{a,i}\) for each coordinate \(i\in\{1,\ldots,n\}\).
The following definition on the properties of the resonant sets, which was introduced in [2] as a generalisation of the intersection properties of [9], is required.
**Definition 1** (\(\kappa\)-scaling property).: Let \(0\leq\kappa_{i}<1\) and \(1\leq i\leq n\). The sequence \((R_{\alpha,i})_{\alpha\in J}\) has a _\(\kappa_{i}\)-scaling property_ if for any \(\alpha\in J\), any ball \(B_{i}(x_{i},r)\subset\Omega_{i}\) with centre \(x_{i}\in R_{\alpha,i}\), and \(0<\epsilon<r\) we have
\[c_{2}r^{\delta_{i}\kappa_{i}}\epsilon^{\delta_{i}(1-\kappa_{i})}\leq\mu_{i} \left(B_{i}(x_{i},r)\cap\Delta(R_{\alpha,i},\epsilon)\right)\leq c_{3}r^{ \delta_{i}\kappa_{i}}\epsilon^{\delta_{i}(1-\kappa_{i})},\]
for some constants \(c_{2},c_{3}>0\).
See [2, Section 2] for calculations of \(\kappa_{i}\) for various resonant sets. Intuitively, one can think of \(\kappa_{i}\) as being the value such that \(\delta_{i}\kappa_{i}\) is the box dimension of \(R_{\alpha,i}\). As an example, note that \(\kappa=0\) when the \(R_{\alpha,i}\) are finite collections of points, and that \(\kappa=\frac{m-1}{m}\) when the \(R_{\alpha,i}\) are \((m-1)\)-dimensional affine hyperplanes. Although the definition considers the \(\kappa_{i}\)-scaling property in each coordinate axis, for our purpose we take \(\kappa_{1}=\cdots=\kappa_{n}=\kappa\), and refer to it as the \(\kappa\)-scaling property. In particular, this is the \(\kappa\)-scaling property considered in [76].
The following notion of ubiquity for rectangles can be found in [57, Section 2.2].
**Definition 2** (Local ubiquitous system for rectangles).: Call the pair \(((R_{\alpha})_{\alpha\in J},\beta)\)_a local ubiquitous system for rectangles with respect to \(\rho\)_ if there exists a constant \(c>0\) such that for any ball \(B\subset\Omega\) and all sufficiently large \(k\in\mathbb{N}\)
\[\mu\left(B\cap\bigcup_{\alpha\in J_{k}}\Delta(R_{\alpha},\rho(u_{k}))\right) \geq c\,\mu(B).\]
For an \(n\)-tuple of approximation functions \(\Psi=(\psi_{1},\ldots,\psi_{n})\) with each \(\psi_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) define
\[W^{\Omega}(\Psi) =\left\{x\in\Omega:x\in\Delta\left(R_{\alpha},\Psi(\beta_{\alpha })\right)\ \text{for infinitely many}\ \alpha\in J\right\}\] \[=\limsup_{\alpha\in J}\Delta\left(R_{\alpha},\Psi(\beta_{\alpha} )\right).\]
For all the applications listed below, the corresponding sets will be described by the \(\limsup\) set outlined above.
### Ambient measure statements
The following is the well-known Borel-Cantelli lemma, usually used to prove the convergence cases for Lebesgue dichotomy statements; see, for example, [47, Theorem 4.18] for a proof.
**Lemma 1** (Borel-Cantelli lemma, convergence part).: _Let \((\Omega,\mathcal{A},\mu)\) be a measure space and let \((A_{k})_{k\geq 1}\) be a sequence of measurable sets. If_
\[\sum_{k\geq 1}\mu(A_{k})<\infty,\quad\text{then}\quad\mu\left(\limsup_{k\to \infty}A_{k}\right)=0\,.\]
The following theorem from [57] provides the ambient measure theory for \(W^{\Omega}(\Psi)\) in the divergence case. Prior to stating the result we need one more definition. For a constant \(0<c<1\), a function \(f\) is said to be _\(c\)-regular_ with respect to a sequence \(\{r_{i}\}_{i\in\mathbb{N}}\) if
\[f(r_{i+1})\leq cf(r_{i})\]
for all sufficiently large \(i\).
**Theorem 5** ([57]).: _Let \(W^{\Omega}(\Psi)\) be defined as above and assume that \(\left((R_{\alpha})_{\alpha\in J},\beta\right)\) is a local ubiquitous system for rectangles with respect to \(\rho\), that the resonant sets \((R_{\alpha,i})\) have the \(\kappa_{i}\)-scaling property, and that each measure \(\mu_{i}\) is \(\delta_{i}\)-Ahlfors regular. Suppose that_
1. _each_ \(\psi_{i}\) _is decreasing,_
2. _for each_ \(1\leq i\leq n\)_,_ \(\psi_{i}(r)\leq\rho_{i}(r)\) _for all_ \(r\in\mathbb{R}_{+}\) _and_ \(\rho_{i}(r)\to 0\) _as_ \(r\to\infty\)_,_
3. _either_ \(\rho_{i}\) _is_ \(c\)_-regular on_ \((u_{k})_{k\geq 1}\) _for all_ \(1\leq i\leq n\) _or_ \(\psi_{i}\) _is_ \(c\)_-regular on_ \((u_{k})_{k\geq 1}\) _for all_ \(1\leq i\leq n\) _for some_ \(0<c<1\)_._
_Then,_
\[\mu(W^{\Omega}(\Psi))=\mu(\Omega)\quad\text{if}\qquad\sum_{k=1}^{\infty}\prod_{i=1}^{ n}\left(\frac{\psi_{i}(u_{k})}{\rho_{i}(u_{k})}\right)^{\delta_{i}(1-\kappa_{i})}=\infty.\]
It will often be convenient to multiply the approximating function \(\Psi\) by some constant. The following lemma, see [7, Lemma 5.7], shows that, in terms of ambient measure, the measure of the \(\limsup\) set remains unchanged.
**Lemma 2** ([7]).: _Let \((\Omega,\|\cdot\|,\mu)\) be the product space defined above. Let \((S_{i})_{i\in\mathbb{N}}\) be a sequence of subsets in the support of \(\mu\) and \((\delta_{i})_{i\in\mathbb{N}}\) be a sequence of positive \(n\)-tuples \(\delta_{i}=(\delta_{i}^{(1)},\ldots,\delta_{i}^{(n)})\) such that \(\delta_{i}^{(j)}\to 0\) as \(i\to\infty\) for each \(1\leq j\leq n\). Then, for any \(\mathbf{C}=(C_{1},\ldots,C_{n})\) and \(\mathbf{c}=(c_{1},\ldots,c_{n})\) with \(0<c_{j}\leq C_{j}\) for each \(1\leq j\leq n\)_
\[\mu\left(\limsup_{i\to\infty}\Delta(S_{i},\mathbf{C}\delta_{i})\,\setminus\, \limsup_{i\to\infty}\Delta(S_{i},\mathbf{c}\delta_{i})\right)=0\,, \tag{2.2}\]
_where \(\mathbf{c}\delta_{i}=(c_{1}\delta_{i}^{(1)},\ldots,c_{n}\delta_{i}^{(n)})\) and similarly \(\mathbf{C}\delta_{i}=(C_{1}\delta_{i}^{(1)},\ldots,C_{n}\delta_{i}^{(n)})\)._
### Hausdorff measure and dimension statements
For completeness, we give below a very brief introduction to Hausdorff measures and dimensions. For further details see [31].
Let \((\Omega,d)\) be a metric space and \(F\subset\Omega\). For any \(0<\rho\leq\infty\), a finite or countable collection \(\{B_{i}\}\) of subsets of \(\Omega\) such that \(F\subset\bigcup_{i}B_{i}\) and
\[r(B_{i})=\frac{1}{2}\inf\{r\geq 0:d(x,y)\leq r\quad(x,y\in B_{i})\}\leq\rho\]
is called a \(\rho\)_-cover_ of \(F\). Let
\[\mathcal{H}_{\rho}^{f}(F)=\inf\sum_{i}f\left(r(B_{i})\right),\]
where the infimum is taken over all possible \(\rho\)-covers \(\{B_{i}\}\) of \(F\). The \(f\)_-dimensional Hausdorff measure of \(F\)_ is defined to be
\[\mathcal{H}^{f}(F)=\lim_{\rho\to 0}\mathcal{H}_{\rho}^{f}(F).\]
In the case that \(f(r)=r^{s}\)\((s\geq 0)\), the measure \(\mathcal{H}^{f}\) is denoted \(\mathcal{H}^{s}\) and is called \(s\)_-dimensional Hausdorff measure_. For any set \(F\subset\Omega\) one can easily verify that there exists a unique critical value of \(s\) at which the function \(s\mapsto\mathcal{H}^{s}(F)\) "jumps" from infinity to zero. The value taken by \(s\) at this discontinuity is referred to as the _Hausdorff dimension_ of \(F\) and is denoted by \(\dim_{\mathrm{H}}F\); i.e.
\[\dim_{\mathrm{H}}F:=\inf\{s\geq 0\,:\,\mathcal{H}^{s}(F)=0\}\,.\]
A countable collection \(\{B_{i}\}\) is called a _fine cover_ of \(F\) if for every \(\rho>0\) it contains a subcollection that is a \(\rho\)-cover of \(F\).
Below is the Hausdorff measure analogue of the Borel-Cantelli lemma in the convergence case, which will allow us to estimate the convergence case Hausdorff measure of certain sets via calculating the Hausdorff \(f\)-sum of a fine cover. Recall a collection \(\mathcal{B}=\{B_{i}\}\) of subsets covering \(F\) is called a fine cover if for any \(x\in F\) and any \(r>0\) there exists \(B_{i}\in\mathcal{B}\) such that \(r(B_{i})<r\) and \(x\in B_{i}\). Trivially this implies that for any \(\rho>0\) there exists a subset of \(\mathcal{B}\) that is a \(\rho\)-cover of \(F\).
**Lemma 3** (Hausdorff-Cantelli lemma).: _Let \(\{B_{i}\}\subset\Omega\) be a fine cover of a set \(F\) and let \(f\) be a dimension function such that_
\[\sum_{i}f(r(B_{i}))<\infty. \tag{2.3}\]
_Then \(\mathcal{H}^{f}(F)=0\). In particular, if \(f(r)=r^{s}\) and we have (2.3) then_
\[\dim_{\mathrm{H}}F\leq s.\]
For the lower bound of the Hausdorff dimension and the divergent counterpart of the Hausdorff measure theory, we have the following theorem due to Wang and Wu [76, Theorem 3.1-3.2], which appeals to the notion of weighted ubiquitous systems.
**Theorem 6** ([76]).: _Let \(W(\Psi)\) be defined as above and assume that \(\big{(}(R_{\alpha})_{\alpha\in J},\beta\big{)}\) is a local ubiquitous system for rectangles with respect to \(\rho=(\rho^{a_{1}},\ldots,\rho^{a_{n}})\) for some function \(\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{R}_{+}^{n}\). Assume, for each \(1\leq i\leq n\), the resonant sets \((R_{\alpha,i})\) have \(\kappa\)-scaling property. Assume each measure \(\mu_{i}\) is \(\delta_{i}\)-Ahlfors regular. Then, if \(\Psi=(\rho^{a_{1}+t_{1}},\ldots,\rho^{a_{n}+t_{n}})\) for some \(\boldsymbol{t}=(t_{1},\ldots,t_{n})\in\mathbb{R}_{+}^{n}\),_
\[\dim_{\mathrm{H}}W(\Psi)\geq\min_{A_{i}\in A}\left\{\sum_{j\in\mathcal{K}_{1} }\delta_{j}+\sum_{j\in\mathcal{K}_{2}}\delta_{j}+\kappa\sum_{j\in\mathcal{K}_{ 3}}\delta_{j}+(1-\kappa)\frac{\sum\limits_{j\in\mathcal{K}_{3}}a_{j}\delta_{j }-\sum\limits_{j\in\mathcal{K}_{2}}t_{j}\delta_{j}}{A_{i}}\right\}=s,\]
_where \(A=\{a_{i},a_{i}+t_{i},1\leq i\leq n\}\) and \(\mathcal{K}_{1},\mathcal{K}_{2},\mathcal{K}_{3}\) are a partition of \(\{1,\ldots,n\}\) defined as_
\[\mathcal{K}_{1}=\{j:a_{j}\geq A_{i}\},\quad\mathcal{K}_{2}=\{j:a_{j}+t_{j} \leq A_{i}\}\setminus\mathcal{K}_{1},\quad\mathcal{K}_{3}=\{1,\ldots,n\} \setminus(\mathcal{K}_{1}\cup\mathcal{K}_{2}).\]
_Furthermore, for any ball \(B\subset\Omega\) we have_
\[\mathcal{H}^{s}(B\cap W(\Psi))=\mathcal{H}^{s}(B). \tag{2.4}\]
The main motivation for this result came from the landmark paper of Beresnevich and Velani [12] in which they developed the mass transference principle from balls to balls. This is surprising as the Hausdorff measure theory underpins the Lebesgue measure theory. This powerful tool has since been generalised to various settings, see for instance [2, 3, 4, 76, 77] and references therein.
We clarify some notations that will be used throughout. For real quantities \(A,B\) and a parameter \(t\), we write \(A\ll_{t}B\) if \(A\leq c(t)B\) for a constant \(c(t)>0\) that depends on \(t\) only (while \(A\) and \(B\) may depend on other parameters). We write \(A\asymp_{t}B\) if \(A\ll_{t}B\ll_{t}A\). If the constant \(c>0\) is absolute, we simply write \(A\ll B\) and \(A\asymp B\).
## 3. A motivating example
As a motivating example, and a warm-up to later applications, let us consider the set
\[W^{\mathbb{R}}_{1,2}(\psi_{1},\psi_{2})=\left\{(x_{1},x_{2})\in I^{2}:|qx_{i}-p_{ i}|<\psi_{i}(q)\quad(i=1,2)\quad\text{ for i.m. }(p_{1},p_{2},q)\in\mathbb{Z}^{2}\times\mathbb{N}\right\}\,,\]
where \(I^{2}=[0,1]^{2}.\) Theorem 1 allows us to deduce the following result.
**Corollary 1**.: _For \(\psi_{1},\psi_{2}:\mathbb{N}\to\mathbb{R}_{+}\) monotonic decreasing functions, we have that_
\[\mu_{2}^{\mathbb{R}}\left(W^{\mathbb{R}}_{1,2}(\psi_{1},\psi_{2})\right)= \begin{cases}0\quad\text{if}\quad\sum\limits_{q=1}^{\infty}\psi_{1}(q)\psi_{2 }(q)<\infty\,,\\ 1\quad\text{if}\quad\sum\limits_{q=1}^{\infty}\psi_{1}(q)\psi_{2}(q)=\infty\,. \end{cases}\]
Furthermore, from Theorem 2 we can deduce the following
**Corollary 2**.: _For_
\[\psi_{1}(q)=q^{-\tau_{1}}\quad\text{and}\quad\psi_{2}(q)=q^{-\tau_{2}}\]
_with \(\tau_{1}+\tau_{2}>1\) and \(\tau_{1},\tau_{2}>0\), we have that_
\[\dim_{\rm H}W^{\mathbb{R}}_{1,2}(\psi_{1},\psi_{2})=\min_{i=1,2}\left\{\frac{3 +(\tau_{i}-\min\{\tau_{1},\tau_{2}\})}{1+\tau_{i}}\right\}\,.\]
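Before turning to the proofs, here is a quick numerical sanity check of the formula in Corollary 2 (an illustration only; the helper name `dim_W` is ours and is not used elsewhere):

```python
# Sanity check (illustration only): the dimension formula of Corollary 2
# is symmetric in (tau_1, tau_2) and reduces to 3/(1+tau) on the diagonal.

def dim_W(tau1, tau2):
    assert tau1 + tau2 > 1 and tau1 > 0 and tau2 > 0
    tau_min = min(tau1, tau2)
    return min((3 + (t - tau_min)) / (1 + t) for t in (tau1, tau2))

assert abs(dim_W(2.0, 2.0) - 3 / (1 + 2.0)) < 1e-12   # diagonal case
assert abs(dim_W(3.0, 1.0) - dim_W(1.0, 3.0)) < 1e-12  # symmetry
print(dim_W(3.0, 1.0))  # 1.25: the minimum is attained at the larger exponent
```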
We now show how both of these results can be derived by constructing a suitable ubiquitous system of rectangles and then applying the theorems of the previous section. The weighted ubiquitous system is set up in much the same way as the classical ubiquitous system, with the key difference being that Theorem 4 is used in place of Theorem 3. See for example [11, Theorem 1.1.4] for the statement in the classical one dimensional setting. However, the application of the weighted ubiquitous system is slightly more complex. In particular, it requires a careful choice of the \(\rho\) function.
To begin, let
\[\Omega =[0,1]^{2},\quad\mu=\mu_{2}^{\mathbb{R}},\] \[J =\mathbb{N},\quad\beta_{\alpha}=\alpha,\] \[l_{k} =M^{k-1},\quad u_{k}=M^{k},\] \[J_{k} =\left\{q\in\mathbb{N}:M^{k-1}\leq q\leq M^{k}\right\},\] \[R_{q} =\left\{\left(\frac{p_{1}}{q},\frac{p_{2}}{q}\right):0\leq p_{1}, p_{2}\leq q\right\}\,.\]
Note that the \(R_{q}\) are collections of points, and so \((R_{q})_{q\geq 1}\) has the \(\kappa\)-scaling property with \(\kappa=0\). Since \(\mu_{2}^{\mathbb{R}}=\mu_{1}^{\mathbb{R}}\times\mu_{1}^{\mathbb{R}}\), we trivially have \(\delta_{1}=\delta_{2}=1\). Here \(M\) in the definitions of \(u_{k}\) and \(l_{k}\) is some large constant (we can take any integer \(M\geq 64\)). Observe that
\[W^{\mathbb{R}}_{1,2}(\psi_{1},\psi_{2})=\limsup_{q\to\infty}\Delta\left(R_{q}, \left(\frac{\psi_{1}(q)}{q},\frac{\psi_{2}(q)}{q}\right)\right).\]
We prove the following.
**Proposition 1**.: _Let \(\rho_{1},\rho_{2}:\mathbb{N}\to\mathbb{R}_{+}\) be functions of the form_
\[\rho_{i}(q)=M\frac{\phi_{i}(q)}{q}\quad(i=1,2)\quad\text{with}\quad\phi_{1}(q) \phi_{2}(q)=q^{-1}, \tag{3.1}\]
_and each \(\phi_{i}(q)\to 0\) as \(q\to\infty\). Then for any ball \(B\subset I^{2}\) there exists \(k_{0}\in\mathbb{N}\) such that for all \(k>k_{0}\) we have_
\[\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{q\in J_{k}}\Delta\left(R_{q},\left(\rho_{1}(M^{k}),\rho_{2}(M^{k})\right)\right)\right)\geq\frac{1}{2}\mu_{2}^{\mathbb{R}}(B).\]
Proof.: The following is a standard method of proving such a result. Initially let us give some bounds on \(k_{0}\) that will be used later. Choose \(k_{0}\) such that for all \(k>k_{0}\) we have
\[\phi_{i}(M^{k})<\frac{1}{2}\qquad(i=1,2)\quad\text{and}\quad M^{k}>\frac{32}{ 3}\pi^{2}\mu_{2}^{\mathbb{R}}(B)^{-1}.\]
It will become clear later why such bounds are chosen. Now pick any \(k>k_{0}\). Firstly, by Minkowski's Theorem for systems of linear forms (Theorem 4) for any \((x_{1},x_{2})\in[0,1]^{2}\) the system
\[\begin{cases}|qx_{1}-p_{1}|<\phi_{1}(M^{k}),\\ |qx_{2}-p_{2}|<\phi_{2}(M^{k}),\\ |q|\leq M^{k}\end{cases}\]
has a non-zero integer solution \((p_{1},p_{2},q)\in\mathbb{Z}^{3}\). Note by our choice of \(k_{0}\) that \(q\neq 0\), since otherwise we would have a non-zero integer vector \((p_{1},p_{2})\) solving \(|p_{i}|<\frac{1}{2}\), which is clearly false. So, multiplying the solution \((p_{1},p_{2},q)\) through by \(-1\) if necessary, we have that \((p_{1},p_{2},q)\in\mathbb{Z}^{2}\times\mathbb{N}\). Dividing through by \(q\) in the first two inequalities allows us to see that
\[\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq q\leq M^{k}}\Delta\left(R_{q}, \left(\tfrac{\phi_{1}(M^{k})}{q},\tfrac{\phi_{2}(M^{k})}{q}\right)\right) \right)=\mu_{2}^{\mathbb{R}}(B)\,. \tag{3.2}\]
Now observe that
\[\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq q\leq M^{k}}\Delta \left(R_{q},\left(\tfrac{\phi_{1}(M^{k})}{q},\tfrac{\phi_{2}(M^{k})}{q}\right) \right)\right)\] \[\leq\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq q<M^{k-1}} \Delta\left(R_{q},\left(\tfrac{\phi_{1}(M^{k})}{q},\tfrac{\phi_{2}(M^{k})}{q} \right)\right)\right)+\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{M^{k-1}\leq q \leq M^{k}}\Delta\left(R_{q},\left(\tfrac{\phi_{1}(M^{k})}{M^{k-1}},\tfrac{ \phi_{2}(M^{k})}{M^{k-1}}\right)\right)\right).\]
The second term on the right-hand side of the above inequality is precisely the quantity we wish to bound from below (since \(\tfrac{\phi_{i}(M^{k})}{M^{k-1}}=\rho_{i}(M^{k})\) for \(i=1,2\)). Note (3.2) gives us that the left-hand side of the above inequality is \(\mu_{2}^{\mathbb{R}}(B)\), so if we can show that
\[\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq q<M^{k-1}}\Delta\left(R_{q}, \left(\tfrac{\phi_{1}(M^{k})}{q},\tfrac{\phi_{2}(M^{k})}{q}\right)\right) \right)<\tfrac{1}{2}\mu_{2}^{\mathbb{R}}(B)\]
we are done. To see this we compute that
\[\begin{split}\mu_{2}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq q<M^{k-1}}\Delta\left(R_{q},\left(\tfrac{\phi_{1}(M^{k})}{q},\tfrac{\phi_{2}(M^{k})}{q}\right)\right)\right)&\leq\sum_{1\leq q<M^{k-1}}\sum_{\left(\tfrac{p_{1}}{q},\tfrac{p_{2}}{q}\right)\in R_{q}\cap B}\mu_{2}^{\mathbb{R}}\left(\Delta\left(\left(\tfrac{p_{1}}{q},\tfrac{p_{2}}{q}\right),\left(\tfrac{\phi_{1}(M^{k})}{q},\tfrac{\phi_{2}(M^{k})}{q}\right)\right)\right)\\ &\leq\sum_{1\leq q<M^{k-1}}(2qr(B)+1)^{2}\,4q^{-2}\phi_{1}(M^{k})\phi_{2}(M^{k})\\ &\overset{(*)}{\leq}\sum_{1\leq q<M^{k-1}}\left(4q^{2}\mu_{2}^{\mathbb{R}}(B)+4\right)4q^{-2}M^{-k}\\ &\leq 16M^{-1}\mu_{2}^{\mathbb{R}}(B)+16M^{-k}\sum_{1\leq q<M^{k-1}}q^{-2}\\ &\overset{(M\geq 64)}{\leq}\tfrac{1}{4}\mu_{2}^{\mathbb{R}}(B)+16M^{-k}\tfrac{\pi^{2}}{6}\\ &\leq\tfrac{1}{2}\mu_{2}^{\mathbb{R}}(B),\end{split}\]
where \((*)\) follows on using that \((a+b)^{2}\leq(2a)^{2}+(2b)^{2}\) for all \(a,b\in\mathbb{R}_{+}\). Hence the proof is complete.
### Proof of Corollary 1
We begin with the convergence case of Corollary 1. By the Borel-Cantelli convergence lemma, and the above formulation of \(W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\) in terms of a \(\limsup\) set, we have that
\[\mu_{2}^{\mathbb{R}}\left(W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\right)=0 \quad\text{ if }\quad\sum_{q=1}^{\infty}\mu_{2}^{\mathbb{R}}\left(\Delta\left(R_{q}, \left(\tfrac{\psi_{1}(q)}{q},\tfrac{\psi_{2}(q)}{q}\right)\right)\right)<\infty. \tag{3.3}\]
A quick calculation of the Lebesgue measure of the collection of rectangles \(\Delta\left(R_{q},\left(\tfrac{\psi_{1}(q)}{q},\tfrac{\psi_{2}(q)}{q}\right)\right)\) yields
\[\mu_{2}^{\mathbb{R}}\left(\Delta\left(R_{q},\left(\tfrac{\psi_{1}(q)}{q}, \tfrac{\psi_{2}(q)}{q}\right)\right)\right)\leq 4\psi_{1}(q)\psi_{2}(q)\,,\]
and inputting this into (3.3) completes the convergence case.
In order to prove the divergence case of Corollary 1 we now need to show conditions (I)-(III) of Theorem 5 are verified for some suitably chosen functions \(\rho_{1},\rho_{2}\) satisfying (3.1). Notice in this setting, our functions are of the form
\[\Psi(q)=\left(\tfrac{\psi_{1}(q)}{q},\tfrac{\psi_{2}(q)}{q}\right),\]
so the conditions (I)-(III) correspond to:
1. \(\mathrm{I}(\mathbb{R})\): \(\frac{\psi_{1}(q)}{q},\frac{\psi_{2}(q)}{q}\) are monotonic decreasing as \(q\to\infty\),
2. \(\mathrm{II}(\mathbb{R})\): \(\rho_{1}(q)\geq\frac{\psi_{1}(q)}{q}\) and \(\rho_{2}(q)\geq\frac{\psi_{2}(q)}{q}\) for all \(q\in\mathbb{N}\), and \(\rho_{1}(q),\rho_{2}(q)\to 0\) as \(q\to\infty\),
3. \(\mathrm{III}(\mathbb{R})\): \(\frac{\psi_{1}(q)}{q}\) and \(\frac{\psi_{2}(q)}{q}\) are \(c\)-regular on the sequence \((M^{k})_{k\geq 1}\).
Since \(\psi_{1},\psi_{2}\) are decreasing, \(\mathrm{I}(\mathbb{R})\) is immediately satisfied. For \(\mathrm{III}(\mathbb{R})\) note that for each \(i=1,2\)
\[\frac{\psi_{i}(M^{k+1})}{M^{k+1}}=M^{-1}\frac{\psi_{i}(M^{k+1})}{M^{k}}\leq M^{- 1}\frac{\psi_{i}(M^{k})}{M^{k}}\]
where the last inequality follows due to the monotonic decreasing property of \(\psi_{i}\). Thus it remains to choose functions \(\rho_{1},\rho_{2}\) so that \(\mathrm{II}(\mathbb{R})\) is satisfied.
For each \(q\in\mathbb{N}\), let \(l_{1}(q),l_{2}(q)\in\{1,2\}\) be the ordering such that
\[\psi_{l_{1}(q)}(q)\geq\psi_{l_{2}(q)}(q)\,.\]
We can assume without loss of generality that
\[\psi_{1}(q)\psi_{2}(q)<q^{-1} \tag{3.4}\]
for all sufficiently large \(q\in\mathbb{N}\), say \(q>q_{0}\). Otherwise by Theorem 4 and the monotonicity of each \(\psi_{i}\) we have that \(W^{\mathbb{R}}_{1,2}(\psi_{1},\psi_{2})=[0,1]^{2}\). For \(q>q_{0}\) consider the function
\[\phi^{*}(q)=q^{-1}\psi_{l_{1}(q)}(q)^{-1}.\]
Observe that \(\phi^{*}(q)>\psi_{l_{2}(q)}(q)\), since otherwise (3.4) would fail. Now, for each \(q>q_{0}\), define the functions \(\phi_{1},\phi_{2}\) (and hence \(\rho_{1},\rho_{2}\) via (3.1)) by
\[\phi_{l_{1}(q)}(q)=\begin{cases}\psi_{l_{1}(q)}(q)&\text{ if }\quad\psi_{l_{1}(q)}(q)>q^{- \frac{1}{2}}\,,\\ q^{-\frac{1}{2}}&\text{ otherwise.}\end{cases}\]
\[\phi_{l_{2}(q)}(q)=\begin{cases}\phi^{*}(q)&\text{ if }\quad\psi_{l_{1}(q)}(q)>q^{ -\frac{1}{2}}\,,\\ q^{-\frac{1}{2}}&\text{ otherwise.}\end{cases}\]
Note \(\phi_{1},\phi_{2}\) satisfy condition (3.1), and furthermore
\[\rho_{i}(q)=M\frac{\phi_{i}(q)}{q}>\frac{\psi_{i}(q)}{q}\quad(i=1,2)\,,\]
where the inequality follows by our choice of each \(\phi_{i}\). So \(\mathrm{II}(\mathbb{R})\) is satisfied, and thus Theorem 5 is applicable. It remains to see, via Cauchy condensation, that
\[\sum_{q=1}^{\infty}\psi_{1}(q)\psi_{2}(q)\asymp\sum_{k=1}^{\infty}M^{k}\psi_{ 1}(M^{k})\psi_{2}(M^{k}),\]
and so this completes the divergence case.
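The condensation step can also be checked numerically. The following sketch (illustration only; the cut-offs `Q` and `K` are arbitrary choices of ours) compares partial sums of the two series for power functions \(\psi_{i}(q)=q^{-\tau_{i}}\):

```python
# Illustration only: partial sums of the two series in the Cauchy
# condensation step, for psi_i(q) = q^{-tau_i}. Both stay bounded when
# tau_1 + tau_2 > 1, and both grow without bound when tau_1 + tau_2 = 1.

M = 64  # any integer M >= 64, as in the construction above

def partial_sums(tau1, tau2, Q=10**5, K=4):
    direct = sum(q ** (-(tau1 + tau2)) for q in range(1, Q + 1))
    condensed = sum(M**k * (M**k) ** (-(tau1 + tau2)) for k in range(1, K + 1))
    return direct, condensed

print(partial_sums(0.7, 0.7))  # convergent pair: both sums stay bounded
print(partial_sums(0.5, 0.5))  # divergent pair: both sums grow with Q and K
```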
### Proof of Corollary 2
For the upper bound consider the standard cover
\[\bigcup_{q>N}\Delta\left(R_{q},\left(q^{-1-\tau_{1}},q^{-1-\tau_{2}}\right)\right)\]
for any \(N\in\mathbb{N}\). Letting \(N\) tend to infinity gives rise to finer covers of \(W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\). Each layer \(\Delta\left(R_{q},(q^{-1-\tau_{1}},q^{-1-\tau_{2}})\right)\) is composed of \(q^{2}\) rectangles, each of which can be covered by either one large ball with diameter equal to the longest sidelength, or by several smaller balls with diameters equal to the shortest sidelength of the rectangle. For now take the balls with larger diameter. Say \(\tau_{1}\geq\tau_{2}\), so that \(2q^{-1-\tau_{2}}\) is the longer sidelength. Then we have that
\[\mathcal{H}^{s}\left(W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\right)\leq\sum_{ q>N}q^{2}2^{s}q^{-(1+\tau_{2})s}\leq 2^{s}\sum_{q>N}q^{2-(1+\tau_{2})s}\to 0\]
as \(N\to\infty\) for any \(s>\frac{3}{1+\tau_{2}}\). Hence
\[\dim_{\mathrm{H}}W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\leq\frac{3}{1+\tau_{ 2}}\]
when \(\tau_{1}\geq\tau_{2}\). Now consider the other case. Then each rectangle in \(\Delta\left(R_{q},(q^{-1-\tau_{1}},q^{-1-\tau_{2}})\right)\) can be covered by
\[\frac{2q^{-(1+\tau_{2})}}{2q^{-(1+\tau_{1})}}=q^{(\tau_{1}-\tau_{2})}\]
balls of diameter \(2q^{-(1+\tau_{1})}\). Hence
\[\mathcal{H}^{s}\left(W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\right)\leq\sum_{q>N}q^{2}q^{(\tau_{1}-\tau_{2})}2^{s}q^{-(1+\tau_{1})s}\leq 2^{s}\sum_{q>N}q^{2+(\tau_{1}-\tau_{2})-(1+\tau_{1})s}\to 0\]
as \(N\to\infty\) for any \(s>\frac{3+(\tau_{1}-\tau_{2})}{1+\tau_{1}}\). Hence
\[\dim_{\mathrm{H}}W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\leq\frac{3+(\tau_{1} -\tau_{2})}{1+\tau_{1}}\,.\]
Taking the minimum over the two cases completes the upper bound of Corollary 2.
For the lower bound we use Theorem 6 combined with Proposition 1. Within the notation of Theorem 6 set
\[a_{i}=a_{i}^{*}+1,\quad t_{i}=\tau_{i}-a_{i}^{*}\quad(i=1,2),\]
and consider the function \(\rho^{*}(q)=q^{-1}\). Then provided \(a_{1}^{*}+a_{2}^{*}=1\) and each \(a_{i}^{*}>0\) the functions \(\rho_{1}(q)=\rho^{*}(q)^{a_{1}}\), \(\rho_{2}(q)=\rho^{*}(q)^{a_{2}}\) are applicable to Proposition 1. The constant \(M\) appearing in the conditions of \(\rho\) in Proposition 1 can clearly be omitted.
Suppose that \(\tau_{1}\geq\tau_{2}\). If
1. \(\tau_{2}>\frac{1}{2}\): Then set \(a_{1}^{*}=a_{2}^{*}=\frac{1}{2}\). So \(a_{1}^{*}+a_{2}^{*}=1\). Now the sets \(\mathcal{K}_{1},\mathcal{K}_{2},\mathcal{K}_{3}\) as defined in Theorem 6 become \[\begin{cases}\mathcal{K}_{1}=\{1,2\},\quad\mathcal{K}_{2}=\emptyset,\quad\mathcal{K}_{3}=\emptyset&\text{if}\quad A=a_{1}=a_{2},\\ \mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{2\},\quad\mathcal{K}_{3}=\{1\}&\text{if}\quad A=a_{2}+t_{2}=\tau_{2}+1,\\ \mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{1,2\},\quad\mathcal{K}_{3}=\emptyset&\text{if}\quad A=a_{1}+t_{1}=\tau_{1}+1.\end{cases}\] Inputting each of these into the formula for \(s\) as in Theorem 6 we get the values \[\dim_{\rm H}W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\geq\min\left\{2,\frac{3}{1+\tau_{2}},\frac{3+(\tau_{1}-\tau_{2})}{1+\tau_{1}}\right\}\,.\]
2. \(\tau_{2}<\frac{1}{2}\): Then set \(a_{1}^{*}=1-\tau_{2}\) and \(a_{2}^{*}=\tau_{2}\). Again, note \(a_{1}^{*}+a_{2}^{*}=1\). Now the sets \(\mathcal{K}_{1},\mathcal{K}_{2},\mathcal{K}_{3}\) as defined in Theorem 6 become \[\begin{cases}\mathcal{K}_{1}=\{1,2\},\quad\mathcal{K}_{2}=\emptyset,\quad\mathcal{K}_{3}=\emptyset&\text{if}\quad A=a_{2}^{*}+1=\tau_{2}+1,\\ \mathcal{K}_{1}=\{1\},\quad\mathcal{K}_{2}=\{2\},\quad\mathcal{K}_{3}=\emptyset&\text{if}\quad A=a_{1}^{*}+1=2-\tau_{2},\\ \mathcal{K}_{1}=\{1,2\},\quad\mathcal{K}_{2}=\emptyset,\quad\mathcal{K}_{3}=\emptyset&\text{if}\quad A=a_{2}+t_{2}=\tau_{2}+1,\\ \mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{1,2\},\quad\mathcal{K}_{3}=\emptyset&\text{if}\quad A=a_{1}+t_{1}=\tau_{1}+1.\end{cases}\] Inputting each of these into the formula for \(s\) as in Theorem 6 we get the values \[\dim_{\rm H}W_{1,2}^{\mathbb{R}}(\psi_{1},\psi_{2})\geq\min\left\{2,2,2,\frac{3+(\tau_{1}-\tau_{2})}{1+\tau_{1}}\right\}=\min\left\{2,\frac{3+(\tau_{1}-\tau_{2})}{1+\tau_{1}}\right\}.\]
Combining the two cases gives us the lower bound of Corollary 2 completing the proof.
## 4. \(p\)-adic approximation
Fix a prime \(p\) and \(n,m\in\mathbb{N}\). Let \(|\cdot|_{p}\) be the \(p\)-adic norm, \(\mathbb{Q}_{p}^{m\times n}\) the set of \(m\times n\) dimensional matrices with entries from the \(p\)-adic numbers \(\mathbb{Q}_{p}\), and \(I_{p}^{m\times n}:=\mathbb{Z}_{p}^{m\times n}\) the \(m\times n\) matrices with entries from the \(p\)-adic integers \(\mathbb{Z}_{p}:=\{x\in\mathbb{Q}_{p}:|x|_{p}\leq 1\}\). For a matrix \(X\in I_{p}^{m\times n}\) we denote by \(X_{i}\) the \(i\)th column vector of \(X\). Let \(\mu_{m\times n}^{\mathbb{Q}_{p}}\) denote the \(m\times n\)-dimensional Haar measure on \(\mathbb{Q}_{p}^{m\times n}\) normalised by \(\mu_{m\times n}^{\mathbb{Q}_{p}}(I_{p}^{m\times n})=1\).
Let \(\Psi:\mathbb{Z}^{n+m}\to\mathbb{R}_{+}^{n}\) be an \(n\)-tuple of approximation functions of the form
\[\Psi(\mathbf{a})=\left(\frac{\psi_{1}(\|\mathbf{a}\|_{v})}{\|\mathbf{a}\|_{v} },\ldots,\frac{\psi_{n}(\|\mathbf{a}\|_{v})}{\|\mathbf{a}\|_{v}}\right)\]
with functions \(\psi_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\). Throughout this section let \(\|\cdot\|_{v}\) be the quasi-norm \(\|\mathbf{a}\|_{v}=\max_{1\leq i\leq n+m}|a_{i}|^{1/v_{i}}\) for vector \(v=(v_{1},\ldots,v_{n+m})\) with
\[v_{i}>0\quad(1\leq i\leq m),\quad\sum_{i=1}^{m}v_{i}=m\,,\quad v_{i+m}=1\quad( 1\leq i\leq n). \tag{4.1}\]
Define
\[W_{n,m}^{\mathbb{Z}_{p}}(\Psi):=\left\{X\in I_{p}^{m\times n}:\begin{array}{l}| \mathbf{a}_{0}X_{i}-a_{i}|_{p}<\frac{\psi_{i}(\|\mathbf{a}\|_{v})}{\|\mathbf{a} \|_{v}}\quad(1\leq i\leq n)\\ \text{for infinitely many }\mathbf{a}=(\mathbf{a}_{0},a_{1},\ldots,a_{n})\in \mathbb{Z}^{m+n}\end{array}\right\}.\]
Note that the approximation function depends on \(n+m\) values, rather than \(m\) values as in the real setting. This is because in the \(p\)-adic setting, for any \(x\in\mathbb{Z}_{p}\), one can make the value \(|x-z|_{p}\) arbitrarily small by a suitable choice of \(z\in\mathbb{Z}\); that is, \(\mathbb{Z}\) is dense in \(\mathbb{Z}_{p}\). This is also why we additionally require each approximation function to decrease faster than \(\frac{1}{\|\mathbf{a}\|_{v}}\), and why the condition that the last \(n\) coordinates of the vector \(v\) are each equal to \(1\) is needed. For instance, consider the simplified case \(n=2\), \(m=1\), \(\|\cdot\|_{v}\) the sup norm (i.e. \(v=(1,1,1)\)), and suppose that \(\frac{\psi_{1}(r)}{r}>r^{-1}\) for all sufficiently large \(r\in\mathbb{N}\). Then \(W_{2,1}^{\mathbb{Z}_{p}}((\psi_{1},\psi_{2}))=\mathbb{Z}_{p}^{2}\) for any choice of function \(\psi_{2}\), since for any \(X\in\mathbb{Z}_{p}^{2}\) there are infinitely many integer vectors (of the form \((a_{0},a_{1},a_{2})=(0,p^{k},0)\), for \(k\in\mathbb{N}\) sufficiently large) that \((\psi_{1},\psi_{2})\)-approximate \(X\).
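To make the density phenomenon concrete, the following minimal sketch (the function name and parameter choices are ours, for illustration only) computes the \(p\)-adic absolute value on \(\mathbb{Z}\) and evaluates it on the trivial approximation vectors \((0,p^{k},0)\) from the example above:

```python
# A minimal sketch of the p-adic absolute value on the integers. With
# a = (a_0, a_1, a_2) = (0, p^k, 0) the quantity |a_0*x - a_1|_p = |p^k|_p
# = p^{-k} is already arbitrarily small, whatever x in Z_p is, which is why
# psi_1 must decay strictly faster than 1/||a||_v.

def padic_abs(x: int, p: int) -> float:
    """|x|_p = p^(-v_p(x)) for nonzero x; by convention |0|_p = 0."""
    if x == 0:
        return 0.0
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return float(p) ** (-v)

p = 5
for k in range(1, 6):
    print(k, padic_abs(p**k, p))  # p^{-k}: tends to 0 as k grows
```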
We prove the following \(p\)-adic analogue to the weighted Khintchine-Groshev Theorem 1.
**Theorem 7**.: _Let \(\Psi\) be an \(n\)-tuple of approximation functions defined as above and suppose each \(\psi_{i}\) is monotonically decreasing. Then_
\[\mu_{m\times n}^{\mathbb{Q}_{p}}\left(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\right)= \left\{\begin{aligned} 0&\text{if}\quad\sum\limits_{r=1}^{ \infty}r^{m-1}\prod\limits_{i=1}^{n}\psi_{i}(r)<\infty,\\ 1&\text{if}\quad\sum\limits_{r=1}^{\infty}r^{m-1}\prod \limits_{i=1}^{n}\psi_{i}(r)=\infty.\end{aligned}\right.\]
We again highlight previously known results; these include:
* \(n=m=1\), \(\psi\) monotonic, proven by Jarnik [46].
* \(n=m=1\), \(\psi\) non-monotonic, proven by Haynes [39] via Koukoulopoulos & Maynard [58]. In fact, in this paper Haynes proved that if the variance method from probability theory can be used to solve the Duffin-Schaeffer conjecture, then almost the entire (classical) Duffin-Schaeffer conjecture will follow. Conversely, if the variance method can be used to solve completely the classical Duffin-Schaeffer conjecture, then the corresponding conjecture is true in every field \(\mathbb{Q}_{p}\).
* \(nm\geq 1\), \(\psi\) monotonic univariable, proven by Lutz [66]. See also Beresnevich, Dickinson & Velani [9] for a proof via ubiquitous systems.
* \(n>1\), \(m=1\), \(\Psi\) weighted monotonic univariable, proven by Beresnevich, Levesley & Ward [10].
* \(n=1\), \(m\geq 1\), \(\psi\) monotonic univariable, and in the inhomogeneous setting, proven by Datta & Ghosh [20]. This result with property \(P\) in place of the univariable condition is claimed true in [20, Remark 1.4 (5)].
We also provide the complementary Hausdorff dimension result for the same setup.
**Theorem 8**.: _Let \(\Psi\) be of the form_
\[\Psi(\boldsymbol{a})=\left(\|\boldsymbol{a}\|_{v}^{-\tau_{1}},\ldots,\| \boldsymbol{a}\|_{v}^{-\tau_{n}}\right)\]
_for vectors \(\boldsymbol{v}=(1,\ldots,1)\) and \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) with \(\sum\limits_{i}\tau_{i}>m+n\) and each \(\tau_{i}>1\). Then_
\[\dim_{\mathrm{H}}W_{n,m}^{\mathbb{Z}_{p}}(\Psi)=\min\limits_{1\leq j\leq n} \left\{n(m-1)+\frac{n+m-\sum\limits_{i:\tau_{i}<\tau_{j}}(\tau_{i}-\tau_{j})}{ \tau_{j}}\right\}=s\]
_and_
\[\mathcal{H}^{s}\left(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\right)=\infty.\]
The condition that \(\sum\limits_{i}\tau_{i}>m+n\) is standard. For \(\sum\limits_{i}\tau_{i}\leq m+n\), by the standard version of Dirichlet's approximation theorem in the \(p\)-adic setting (see Lemma 4 below), we have that \(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)=I_{p}^{m\times n}\).
Note that in the simultaneous setting the dimension and Hausdorff measure results have already been proven [1, 9], and the weighted setting with \(m=1\) and \(n\geq 1\) was proven in [10]. Hence the novelty of the above two theorems is the weighted approximation case of the dual linear forms.
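As a numerical illustration of the formula in Theorem 8 (a sanity check only, not used in any proof; the helper name is ours), one can tabulate the candidate values over \(1\leq j\leq n\) and take their minimum. When all the exponents coincide, every \(j\) attains the minimum and the formula collapses to \(n(m-1)+\frac{n+m}{\tau}\):

```python
# Evaluate the dimension formula of Theorem 8 for sample parameters.

def dim_padic(m, n, taus):
    assert sum(taus) > m + n and all(t > 1 for t in taus)
    candidates = []
    for tj in taus:
        corr = sum(ti - tj for ti in taus if ti < tj)  # weighted correction term
        candidates.append(n * (m - 1) + (n + m - corr) / tj)
    return min(candidates)

print(dim_padic(2, 2, (3.0, 3.0)))  # equal exponents: 2 + 4/3 = 3.333...
print(dim_padic(2, 2, (4.0, 2.0)))  # weighted exponents: min{3.5, 4.0} = 3.5
```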
### Ubiquity for \(p\)-adics
Our key statement in proving the above results is the following ubiquity statement in the \(p\)-adic setting (Proposition 2). Given the notation for a ubiquitous system for rectangles as in §2.2, we now define our setup. Let
1. \(J=\mathbb{Z}^{m+n}\),
2. \(\beta:J\to\mathbb{R}_{+}\), \(\alpha=\mathbf{a}=(\mathbf{a}_{0},\mathbf{a}_{1})=(a_{0,1},\ldots,a_{0,m},a_{1 },\ldots,a_{n})\mapsto\beta_{\mathbf{a}}=\|\mathbf{a}\|_{v}\),
3. \(l_{k+1}=u_{k}=M^{k+1}\) for some \(M\in\mathbb{N}\),
4. \(J_{k}=\{\alpha\in J:M^{k}\leq\|\mathbf{a}\|_{v}\leq M^{k+1}\}\),
5. \(R_{\mathbf{a},i}=\{X_{i}\in\mathbb{Z}_{p}^{m}:\mathbf{a}_{0}X_{i}-a_{i}=0\}\),
6. \(R_{\mathbf{a}}=\prod_{i=1}^{n}R_{\mathbf{a},i}\),
7. \(\kappa=\frac{m-1}{m}\) and \(\delta_{i}=m\) for each \(i=1,\ldots,n\).
We prove the following key statement and then prove Theorems 7 and 8 in subsections 4.2 and 4.3 respectively.
**Proposition 2**.: _Consider any ball \(B=B(X,r)\subset I_{p}^{m\times n}\) with centre \(X\in I_{p}^{m\times n}\) and radius \(r>0\). Let \(\rho=(\rho_{1},\ldots,\rho_{n})\) be an \(n\)-tuple of functions \(\rho_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) satisfying_
\[\rho_{i}(h)=\frac{\phi_{i}(h)}{h},\ (1\leq i\leq n)\ \ and\ \ \ \prod_{i=1}^{n}\rho_{i}(h)=p^{-n}h^{-(m+n)}\ \ \ (h\in\mathbb{R}_{+}), \tag{4.2}\]
_for \(\phi_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) with each \(\phi_{i}(h)\to 0\) as \(h\to\infty\). Suppose that_
\[p^{\lambda_{0}}>2^{m+n+2}p^{n}\frac{p^{m-\frac{1}{2}}}{p^{m-\frac{1}{2}}-1} \tag{4.3}\]
_and_
\[M\geq\left(p^{n(\lambda_{0}-1)}3^{n}4\right)^{\frac{1}{m+n}}. \tag{4.4}\]
_Then for all sufficiently large \(k\in\mathbb{N}\), we have that_
\[\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J_{k}}\Delta \left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda_{0}}\right)\right)\geq \frac{1}{2}\mu_{m\times n}^{\mathbb{Q}_{p}}(B).\]
The following lemma, which can be seen as the \(p\)-adic version of Theorem 4, is crucial in our ubiquitous system construction.
**Lemma 4** ([7, Lemma 6.2]).: _Let \(n,m\in\mathbb{N}\) and let \(\Psi=(\psi_{i})_{1\leq i\leq n}\) be an \(n\)-tuple of approximation functions. Let \(H_{1},\ldots,H_{m+n}\geq 1\) be positive integers and let \(H^{n+m}=\prod_{i=1}^{n+m}H_{i}\). Suppose that_
\[\prod_{i=1}^{n}\psi_{i}(H)\geq H^{-(n+m)}p^{-n}\]
_with \(\psi_{i}(H)<p^{-1}\) for all \(1\leq i\leq n\). Then for any \(X=(x_{i,j})\in I_{p}^{m\times n}\) there exists \(H_{0}\) dependent only on \(\Psi\), such that for all \(H\geq H_{0}\),_
\[\left|a_{0,1}x_{i,1}+\cdots+a_{0,m}x_{i,m}-a_{i}\right|_{p}<\psi_{i}(H)\quad( 1\leq i\leq n),\]
_has a non-zero solution in integers \(a_{0,1},\ldots,a_{0,m},a_{1},\ldots,a_{n}\) satisfying_
\[\left|a_{0,i}\right|\leq H_{i}\ (1\leq i\leq m)\quad\text{and}\quad\left|a_{j} \right|\leq H_{j}\ (1\leq j\leq n).\]
_Remark 1_.: Many variants of this result have appeared, see for example [10, 56]. This is a well-known result that can be obtained in the standard way using the pigeonhole principle.
We also need the following lemma, which tells us how many of the thickened resonant sets \(R_{(\mathbf{a}_{0},\mathbf{a}_{1})}\), for a fixed \(\mathbf{a}_{0}\in\mathbb{Z}^{m}\), can intersect a given ball.
**Lemma 5**.: _Let \(\lambda,\lambda_{1}\in\mathbb{N}_{0}\) and fix some \(\boldsymbol{a}_{0}\in\mathbb{Z}^{m}\) with \(\|\boldsymbol{a}_{0}\|_{p}\leq p^{-\lambda}\). Let \(B\subset I_{p}^{m\times n}\) with \(r(B)=r<p^{-1}\), and \(U\in\mathbb{N}\). Then for all \(V\in\mathbb{N}\) such that_
\[\rho_{i}(V)p^{\lambda_{1}}<r\quad\text{and}\quad\ V\geq U\,,\]
_we have that the quantity_
\[\#\left\{\boldsymbol{a}_{1}\in\mathbb{Z}^{n}:\left\{\begin{aligned} &\|\boldsymbol{a}_{1}\|\leq U\\ &\|\boldsymbol{a}_{1}\|_{p}\leq p^{-\lambda}\end{aligned} \right.\qquad\text{and}\quad\ \Delta\left(R_{(\boldsymbol{a}_{0},\boldsymbol{a}_{1})},\rho_{i}(V)p^{\lambda_ {1}}\right)\cap B\neq\emptyset\right\}\]
_is at most_
\[\left(1+\frac{U}{p^{\lambda-1}}r\right)^{n}\,.\]
Proof.: Let \(B=B(X,r)=\prod_{i=1}^{n}B_{i}(X_{i},r)=\prod_{i=1}^{n}B_{i}\), where \(B_{i}(X_{i},r)\) is an \(m\)-dimensional \(p\)-adic ball with centre \(X_{i}=(x_{i,1},\ldots,x_{i,m})\in\mathbb{Z}_{p}^{m}\) and radius \(r>0\), sitting in the \(i\)th coordinate block of the \(mn\)-dimensional ball \(B\) with centre \(X\in I_{p}^{m\times n}\) and radius \(r(B)=r>0\). Consider one coordinate \(1\leq i\leq n\) at a time. If there exists
\[Y_{i}\in\Delta_{i}\left(R_{\mathbf{a},i},\rho_{i}(V)p^{\lambda_{1}}\right) \cap B_{i}\subset\mathbb{Z}_{p}^{m}\,,\]
then by definition there exists some \(Z_{i}=(z_{i,1},\ldots,z_{i,m})\in R_{\mathbf{a},i}\) such that
\[Y_{i}\in B_{i}\left(Z_{i},\rho_{i}(V)p^{\lambda_{1}}\right)\cap B_{i}\,.\]
By the ultrametric property of \(\mathbb{Q}_{p}\) we have that
\[B_{i}(X_{i},r)\cup B_{i}(Z_{i},\rho_{i}(V)p^{\lambda_{1}})=B_{i}\left(Z_{i}, \max\left\{\rho_{i}(V)p^{\lambda_{1}},r\right\}\right).\]
Hence
\[\|Z_{i}-X_{i}\|_{p}=\max_{1\leq j\leq m}|z_{i,j}-x_{i,j}|_{p}\leq\max\left\{ \rho_{i}(V)p^{\lambda_{1}},r\right\}.\]
Then by the strong triangle inequality, we have that
\[\left|\sum_{j=1}^{m}a_{0,j}(z_{i,j}-x_{i,j})\right|_{p} \leq\|\mathbf{a}_{0}\|_{p}\max\left\{\rho_{i}(V)p^{\lambda_{1}},r\right\}\] \[\leq\max\left\{\rho_{i}(V)p^{\lambda_{1}-\lambda},rp^{-\lambda} \right\}:=f_{i}(V,r,\lambda,\lambda_{1})\,, \tag{4.5}\]
where the second inequality follows from the condition \(\|\mathbf{a}_{0}\|_{p}\leq p^{-\lambda}\). Since \(Z_{i}\in R_{\mathbf{a},i}\) we have that \(\left(\sum\limits_{j=1}^{m}a_{0,j}z_{i,j}\right)+a_{i}=0\). Combining with (4.5) we have
\[\left|\left(\sum_{j=1}^{m}a_{0,j}x_{i,j}\right)+a_{i}\right|_{p}\leq f_{i}(V, r,\lambda,\lambda_{1}), \tag{4.6}\]
Since \(\|\mathbf{a}_{1}\|_{p}\leq p^{-\lambda}\) we must have that \(p^{\lambda}|a_{i}\) for each \(1\leq i\leq n\). Hence
\[a_{i}\equiv 0\mod p^{\lambda}\,. \tag{4.7}\]
Combining (4.6)-(4.7) and noting that \(\|\mathbf{a}_{1}\|\leq U\), we have that each \(a_{i}\), written in its \(p\)-adic expansion, is of the form

\[a_{i}=\sum_{k=\lambda+1}^{\max\{\lambda+1,-\log_{p}f_{i}(V,r,\lambda,\lambda_{1})\}}d_{k}p^{k}+\sum_{j=\max\{\lambda+1,-\log_{p}f_{i}(V,r,\lambda,\lambda_{1})\}+1}^{1+\log_{p}U}d_{j}p^{j}\qquad d_{k}\in\{0,\ldots,p-1\}\,,\]

with \(d_{k}\) fixed for \(k\in\{\lambda+1,\ldots,\max\{\lambda+1,-\log_{p}f_{i}(V,r,\lambda,\lambda_{1})\}\}\) depending on \(X_{i}\). Thus there are at most
\[1+p^{\left(1+\log_{p}U-\max\{\lambda+1,-\log_{p}f_{i}(V,r,\lambda,\lambda_{1}) \}\right)}=1+pU\min\{p^{-\lambda-1},f_{i}(V,r,\lambda,\lambda_{1})\}\]
possible values of \(a_{i}\). Inputting the value for \(f_{i}(V,r,\lambda,\lambda_{1})\) we have that the cardinality of possible values of each \(a_{i}\) is bounded above by
\[1+\min\left\{\frac{U}{p^{\lambda}},\max\left\{\frac{U}{p^{\lambda}}\left(\rho_{i }(V)p^{\lambda_{1}+1}\right),\frac{U}{p^{\lambda}}rp\right\}\right\}.\]
Since \(\rho_{i}(V)p^{\lambda_{1}+1}<r\) we have that the cardinality is bounded from above by
\[1+\frac{U}{p^{\lambda}}rp\,.\]
Taking the product over each coordinate axis gives us our claimed upper bound.
We now proceed with the proof of Proposition 2.
Let \(B=B(X,r)\) with \(p^{-1}>r>0\) and \(X=(X_{1},\ldots,X_{n})=(x_{i,j})\in I_{p}^{m\times n}\). Choose \(k_{r}\) sufficiently large such that for all \(k>k_{r}\)
\[\max_{1\leq i\leq n}\phi_{i}(M^{k+1})<r\quad\text{ and }\quad\rho_{i}(M^{k+1})p^{ \lambda_{0}}<r\,. \tag{4.8}\]
Without loss of generality we can assume that \(r\in\{p^{-t}:t\in\mathbb{N}_{0}\}\). Denote each linear form
\[\mathbf{a}_{0}X_{i}+a_{i}=\left(\sum_{j=1}^{m}a_{0,j}x_{i,j}\right)+a_{i} \quad(1\leq i\leq n)\]
for \(\mathbf{a}=(\mathbf{a}_{0},a_{1},\ldots,a_{n})=(a_{0,1},\ldots,a_{0,m},a_{1}, \ldots,a_{n})\in\mathbb{Z}^{m+n}\). Then, by Lemma 4 and conditions (4.2) on \(\rho\), we have that the system
\[\left\{\begin{aligned} &|\mathbf{a}_{0}X_{i}+a_{i}|_{p}\leq\rho_{i}(H) \quad(1\leq i\leq n),\\ &|a_{0,i}|\leq H^{v_{i}}\quad\quad\quad\quad\quad\quad(1\leq i \leq m),\\ &|a_{i}|\leq H\quad\quad\quad\quad\quad(1\leq i\leq n),\end{aligned}\right.\]
has a non-zero rational integer solution \(\mathbf{a}\in\mathbb{Z}^{m+n}\). Furthermore, by (4.1), for each \(1\leq i\leq n\) we have that \(|a_{i}|\leq H\), and since each \(\rho_{i}(H)<H^{-1}\) we are forced to conclude that \(\mathbf{a}_{0}\neq 0\). Note that if \(\|\mathbf{a}_{0}\|_{p}=p^{-\lambda}\) for some \(\lambda\in\mathbb{N}_{0}\) then \(\|\mathbf{a}_{1}\|_{p}\leq p^{-\lambda}\). This follows by applying the strong triangle inequality to the first row of the above system of inequalities, along with the previous observation that each \(\rho_{i}(H)<H^{-1}\).
The system of inequalities readily implies that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J:\|\alpha\|_{v}\leq M^{k+1}}\Delta\left(R_{\alpha},\rho(M^{k+1})\|\mathbf{a}_{0}\|_{p}^{-1}\right)\right)=\mu_{m\times n}^{\mathbb{Q}_{p}}(B). \tag{4.9}\]
Since we want the sidelengths of the rectangles in the above set to be independent of \(\mathbf{a}_{0}\), we want to bound the size of \(\|\mathbf{a}_{0}\|_{p}\). We need \(\lambda_{0}\in\mathbb{N}\) to be large enough to satisfy a technical condition later; hence we have assumed condition (4.3).
For each \(\lambda\in\mathbb{N}_{0}\) consider the sets
\[\widehat{\mathcal{J}}(k,\lambda):=\left\{\alpha\in J:\|\alpha\|_{v}\leq M^{k+1}\quad\text{and}\quad\|\mathbf{a}_{0}\|_{p}=p^{-\lambda}\right\},\] \[\widetilde{\mathcal{J}}(k,\lambda_{0}):=\left\{\alpha\in J:\|\alpha\|_{v}<M^{k}\quad\text{ and }\quad\|\mathbf{a}_{0}\|_{p}\geq p^{-\lambda_{0}}\right\},\] \[J_{k}(\lambda_{0}):=\left\{\alpha\in J:M^{k}\leq\|\alpha\|_{v}\leq M^{k+1}\quad\text{ and }\quad\|\mathbf{a}_{0}\|_{p}\geq p^{-\lambda_{0}}\right\},\]
and note that

\[\left\{\alpha\in J:\|\alpha\|_{v}\leq M^{k+1}\right\}\subseteq J_{k}(\lambda_{0})\cup\widetilde{\mathcal{J}}(k,\lambda_{0})\cup\bigcup_{\lambda>\lambda_{0}}\widehat{\mathcal{J}}(k,\lambda).\]
Hence, we have that
\[\begin{split}\mu_{m\times n}^{\mathbb{Q}_{p}}(B)&=\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J:\|\alpha\|_{v}\leq M^{k+1}}\Delta\left(R_{\alpha},\rho\left(M^{k+1}\right)\|\mathbf{a}_{0}\|_{p}^{-1}\right)\right)\\ &\leq\underbrace{\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\lambda>\lambda_{0}}\bigcup_{\alpha\in\widehat{\mathcal{J}}(k,\lambda)}\Delta\left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda}\right)\right)}_{:=\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1})}+\underbrace{\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in\widetilde{\mathcal{J}}(k,\lambda_{0})}\Delta\left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda_{0}}\right)\right)}_{:=\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})}\\ &\quad+\underbrace{\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J_{k}(\lambda_{0})}\Delta\left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda_{0}}\right)\right)}_{:=\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{3})}\end{split} \tag{4.10}\]
Considering our set of interest note that \(J_{k}\supseteq J_{k}(\lambda_{0})\) and so we have that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J_{k}}\Delta \left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda_{0}}\right)\right)\geq\mu _{m\times n}^{\mathbb{Q}_{p}}(A_{3})\geq\mu_{m\times n}^{\mathbb{Q}_{p}}(B)- \mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1})-\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2}).\]
Thus showing that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1})<\tfrac{1}{4}\mu_{m\times n}^{\mathbb{Q }_{p}}(B)\quad\text{ and }\quad\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})<\tfrac{1}{4}\mu_{m \times n}^{\mathbb{Q}_{p}}(B)\]
completes the proof.
For each \(\alpha\in J\) and \(\lambda\in\mathbb{N}_{0}\), a thickened resonant set with non-empty intersection with the ball \(B\) has measure at most
\[\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\Delta\left(R_{\alpha},\rho\left(M^ {k+1}\right)p^{\lambda}\right)\right)\leq p^{n\lambda}r^{n(m-1)}\prod_{i=1}^{ n}\rho_{i}\left(M^{k+1}\right)=p^{n\lambda-n}r^{n(m-1)}M^{-(k+1)(m+n)}\,.\]
Now
\[\begin{split}\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1})&\leq\sum_{\lambda>\lambda_{0}}\sum_{\alpha\in\widehat{\mathcal{J}}(k,\lambda)}p^{n\lambda-n}r^{n(m-1)}M^{-(k+1)(m+n)}\\ &\leq\sum_{\lambda>\lambda_{0}}\sum_{\begin{subarray}{c}\mathbf{a}_{0}\in\mathbb{Z}^{m}:\,\|\mathbf{a}_{0}\|_{v}\leq M^{k+1}\\ \|\mathbf{a}_{0}\|_{p}=p^{-\lambda}\end{subarray}}\left(1+\frac{M^{k+1}}{p^{\lambda-1}}r\right)^{n}p^{n\lambda-n}r^{n(m-1)}M^{-(k+1)(m+n)}\end{split}\]
by Lemma 5 setting \(U=V=M^{k+1}\) and \(\lambda_{1}=\lambda\). Observe Lemma 5 is applicable because \(\|\mathbf{a}_{0}\|_{p}=p^{-\lambda}\) implies that \(\|\mathbf{a}_{1}\|_{p}\leq p^{-\lambda}\), and, since \(p^{\lambda}\leq M^{k+1}\), we have \(\rho_{i}(M^{k+1})p^{\lambda}<r\) by (4.8). Then
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1}) \leq\sum_{\lambda>\lambda_{0}}\left(2\frac{M^{k+1}}{p^{\lambda}} \right)^{m}\left(1+\frac{M^{k+1}}{p^{\lambda-1}}r\right)^{n}p^{n\lambda-n}r^{ n(m-1)}M^{-(k+1)(m+n)} \tag{4.11}\] \[\leq 2^{m}p^{-n}\sum_{\lambda>\lambda_{0}}M^{-(k+1)n}r^{n(m-1)} \left(1+\frac{M^{k+1}}{p^{\lambda-1}}r\right)^{n}p^{(n-m)\lambda}\] (4.12) \[= 2^{m}p^{-n}\sum_{\lambda>\lambda_{0}}\left(M^{-(k+1)}p^{\lambda( 1-\frac{1}{2n})}\right)^{n}r^{n(m-1)}\left(1+\frac{M^{k+1}}{p^{\lambda-1}}r \right)^{n}p^{(-m+\frac{1}{2})\lambda}\]
Now, suppose \(\frac{M^{k+1}}{p^{\lambda}}r<1\). Then, since \(p^{\lambda}<M^{k+1}\), and since \(k\) is taken sufficiently large so that \(M^{-\frac{1}{2n}(k+1)}<r\), we have that
\[\left(M^{-(k+1)}p^{\lambda(1-\frac{1}{2n})}\right)^{n}<r^{n}\,.\]
If \(\frac{M^{k+1}}{p^{\lambda}}r\geq 1\) then trivially
\[\left(1+\frac{M^{k+1}}{p^{\lambda-1}}r\right)^{n}\leq 2^{n}p^{n}\left(\frac{M^{k+ 1}}{p^{\lambda}}r\right)^{n}\,.\]
So
\[\left(M^{-(k+1)}p^{\lambda(1-\frac{1}{2n})}\right)^{n}r^{n(m-1)}\left(1+\frac {M^{k+1}}{p^{\lambda-1}}r\right)^{n}p^{(-m+\frac{1}{2})\lambda}\leq 2^{n}r^{nm}p^{(-m +\frac{1}{2})\lambda}\quad\text{if}\quad\frac{M^{k+1}}{p^{\lambda}}r<1\,,\]
and
\[M^{-(k+1)n}r^{n(m-1)}\left(1+\frac{M^{k+1}}{p^{\lambda-1}}r\right)^{n}p^{(n-m )\lambda}\leq 2^{n}p^{n}r^{nm}p^{-m\lambda}\quad\text{if}\quad\frac{M^{k+1}}{p^{ \lambda}}r\geq 1\,.\]
Hence
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1})\leq 2^{m+n}p^{n}r^{nm}\sum_{\lambda> \lambda_{0}}p^{(-m+\frac{1}{2})\lambda}\leq\mu_{m\times n}^{\mathbb{Q}_{p}}( B)2^{m+n}p^{n}\frac{p^{m-\frac{1}{2}}}{p^{m-\frac{1}{2}}-1}p^{-\lambda_{0}}\]
and so by (4.3) we have that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{1})\leq\frac{1}{4}\mu_{m\times n}^{ \mathbb{Q}_{p}}(B)\]
as required.
To show that \(\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})<\frac{1}{4}\mu_{m\times n}^{\mathbb{Q}_{p}}(B)\) observe that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})\leq\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J:\|\alpha\|_{v}\leq M^{k}}\Delta\left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda_{0}}\right)\right)\]
and furthermore that,
\[\begin{split}&\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\bigcup_{\alpha\in J:\|\alpha\|_{v}\leq M^{k}}\Delta\left(R_{\alpha},\rho\left(M^{k+1}\right)p^{\lambda_{0}}\right)\right)\\ &\leq\sum_{\begin{subarray}{c}\mathbf{a}_{0}\in\mathbb{Z}^{m}:\\ |a_{0,i}|\leq M^{kv_{i}}\ (1\leq i\leq m)\end{subarray}}\ \sum_{\begin{subarray}{c}(a_{1},\ldots,a_{n})\in\mathbb{Z}^{n}:\,\|(a_{1},\ldots,a_{n})\|\leq M^{k},\\ \Delta(R_{\mathbf{a}},\rho(M^{k+1})p^{\lambda_{0}})\cap B\neq\emptyset\end{subarray}}\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap\Delta\left(R_{\mathbf{a}},\rho(M^{k+1})p^{\lambda_{0}}\right)\right).\end{split}\]
Now,
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})\leq\left(\prod_{i=1}^{m}M^{kv_{i}} \right)\left(2M^{k}r+1\right)^{n}\mu_{m\times n}^{\mathbb{Q}_{p}}\left(B\cap \Delta\left(R_{\mathbf{a}},\rho(M^{k+1})p^{\lambda_{0}}\right)\right)\]
by Lemma 5 setting \(U=M^{k}\), \(V=M^{k+1}\), \(\lambda=0\) and \(\lambda_{1}=\lambda_{0}\). Since \(\lambda_{0}\) is fixed it is clear that \(\rho_{i}(M^{k+1})p^{\lambda_{0}}<r\).
Then inputting the previous upper bound on the measure of such thickened resonant sets we have
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})\leq M^{k\sum_{i=1}^{m}v_{i}}\left(2M^ {k}r+1\right)^{n}p^{n\lambda_{0}-n}r^{n(m-1)}M^{-(k+1)(m+n)}.\]
Since \(k\) is chosen sufficiently large such that \(M^{k}r>1\) we have that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})\leq 3^{n}r^{nm}p^{n\lambda_{0}-n}M^{-(m +n)}\leq 3^{n}p^{n(\lambda_{0}-1)}M^{-(n+m)}\mu_{m\times n}^{\mathbb{Q}_{p}}(B)\]
Thus, by (4.4) we have that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}(A_{2})\leq\frac{1}{4}\mu_{m\times n}^{ \mathbb{Q}_{p}}(B).\]
Hence, by considering (4.10) and the above two calculations we have completed the proof.
### Proof of Theorem 7
We will on occasion use the following estimate
\[\#J_{k}=\#\left\{\mathbf{a}\in\mathbb{Z}^{n+m}:M^{k}<\|\mathbf{a}\|_{v}\leq M ^{k+1}\right\}\leq cM^{k(n+m)} \tag{4.13}\]
for constant \(c=2(n+m)3^{n+m-1}\big{(}M^{n+m}-M^{n+m-\min_{i}v_{i}}\big{)}\) independent of \(k\). To see this observe that
\[\begin{split}\sum_{M^{k}<\|\mathbf{a}\|_{v}\leq M^{k+1}}1&=\sum_{j=1}^{n+m}\sum_{\begin{subarray}{c}M^{k}<|a_{j}|^{1/v_{j}}\leq M^{k+1}\\ |a_{t}|^{1/v_{t}}\leq M^{k+1},\ 1\leq t\leq n+m,\ t\neq j\end{subarray}}1\\ &\leq\sum_{j=1}^{n+m}3^{n+m-1}M^{(k+1)\sum\limits_{t=1,t\neq j}^{n+m}v_{t}}\,2\Big{(}M^{(k+1)v_{j}}-M^{kv_{j}}\Big{)}\\ &=3^{n+m-1}\,2\sum_{j=1}^{n+m}M^{(k+1)\sum\limits_{t=1,t\neq j}^{n+m}v_{t}}M^{kv_{j}}\big{(}M^{v_{j}}-1\big{)}\\ &\leq 3^{n+m-1}\,2(n+m)\max_{1\leq j\leq n+m}M^{k(n+m)}M^{\sum\limits_{t=1,t\neq j}^{n+m}v_{t}}\big{(}M^{v_{j}}-1\big{)}\\ &=3^{n+m-1}\,2(n+m)\Big{(}M^{n+m}-M^{n+m-\min_{1\leq j\leq n+m}v_{j}}\Big{)}M^{k(n+m)}.\end{split}\]
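The estimate (4.13) is easy to test empirically in a small case. The sketch below (illustrative only; the brute-force search and its parameters are our choices) takes \(n=m=1\) and \(v=(1,1)\), so that \(\|\mathbf{a}\|_{v}\) is the sup norm on \(\mathbb{Z}^{2}\), and compares the exact count with the bound \(cM^{k(n+m)}\):

```python
# Empirical check of the counting bound (4.13) for n = m = 1, v = (1, 1).

from itertools import product

def annulus_count(M, k):
    lo, hi = M**k, M**(k + 1)
    return sum(
        1
        for a in product(range(-hi, hi + 1), repeat=2)
        if lo < max(abs(a[0]), abs(a[1])) <= hi
    )

M, k, n_plus_m = 4, 2, 2
c = 2 * n_plus_m * 3 ** (n_plus_m - 1) * (M**n_plus_m - M ** (n_plus_m - 1))
print(annulus_count(M, k), c * M ** (k * n_plus_m))  # 15552 <= 36864
```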
Recall
\[W_{n,m}^{\mathbb{Z}_{p}}(\Psi):=\left\{X\in I_{p}^{m\times n}:\begin{array}{c}|\mathbf{a}_{0}X_{i}-a_{i}|_{p}<\frac{\psi_{i}(\|\mathbf{a}\|_{v})}{\|\mathbf{a}\|_{v}}\quad(1\leq i\leq n)\\ \text{for infinitely many }\mathbf{a}=(\mathbf{a}_{0},a_{1},\ldots,a_{n})\in\mathbb{Z}^{m+n}\end{array}\right\}=\limsup_{\mathbf{a}\in\mathbb{Z}^{m+n}}\left\{X\in I_{p}^{m\times n}:|\mathbf{a}_{0}X_{i}-a_{i}|_{p}<\frac{\psi_{i}(\|\mathbf{a}\|_{v})}{\|\mathbf{a}\|_{v}}\quad(1\leq i\leq n)\right\}.\]
For ease of notation, we will write for all \(\mathbf{a}=(\mathbf{a}_{0},a_{1},\ldots,a_{n})\in\mathbb{Z}^{m+n}\)
\[\mathcal{A}_{\mathbf{a}}(\Psi)=\left\{X\in I_{p}^{m\times n}:|\mathbf{a}_{0}X_{i}-a_{i}|_{p}<\frac{\psi_{i}(\|\mathbf{a}\|_{v})}{\|\mathbf{a}\|_{v}}\quad(1\leq i\leq n)\right\}.\]
For completeness, we prove the convergence case of Theorem 7.
By the Borel-Cantelli convergence Lemma (Lemma 1), we have that
\[\mu_{m\times n}^{\mathbb{Q}_{p}}\Big{(}W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\Big{)} =0\quad\text{if}\quad\sum_{\mathbf{a}\in\mathbb{Z}^{n+m}}\mu_{m\times n}^{ \mathbb{Q}_{p}}\left(\mathcal{A}_{\mathbf{a}}(\Psi)\right)<\infty.\]
Observe that
\[\begin{split}\sum_{\mathbf{a}\in\mathbb{Z}^{n+m}}\mu_{m\times n}^{\mathbb{Q}_{p}}(\mathcal{A}_{\mathbf{a}}(\Psi))&=\sum_{k=1}^{\infty}\sum_{\begin{subarray}{c}\mathbf{a}\in\mathbb{Z}^{m+n}:\\ M^{k}<\|\mathbf{a}\|_{v}\leq M^{k+1}\end{subarray}}\mu_{m\times n}^{\mathbb{Q}_{p}}(\mathcal{A}_{\mathbf{a}}(\Psi))\\ &\asymp_{p,n}\sum_{k=1}^{\infty}\sum_{\begin{subarray}{c}\mathbf{a}\in\mathbb{Z}^{m+n}:\\ M^{k}<\|\mathbf{a}\|_{v}\leq M^{k+1}\end{subarray}}\prod_{i=1}^{n}\frac{\psi_{i}(\|\mathbf{a}\|_{v})}{\|\mathbf{a}\|_{v}}\\ &\leq\sum_{k=1}^{\infty}\sum_{\begin{subarray}{c}\mathbf{a}\in\mathbb{Z}^{m+n}:\\ M^{k}<\|\mathbf{a}\|_{v}\leq M^{k+1}\end{subarray}}\prod_{i=1}^{n}\frac{\psi_{i}(M^{k})}{M^{k}}\\ &\leq\sum_{k=1}^{\infty}cM^{k(n+m)}\prod_{i=1}^{n}\frac{\psi_{i}(M^{k})}{M^{k}}\\ &\leq c\sum_{k=1}^{\infty}M^{k}\left(M^{k(m-1)}\prod_{i=1}^{n}\psi_{i}(M^{k})\right).\end{split}\]
Note that we can trivially assume
\[\prod_{i=1}^{n}\psi_{i}(r)<p^{-n}r^{-m}\]
for all \(r\in\mathbb{N}\), otherwise by Lemma 4 we have \(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)=\mathbb{Z}_{p}^{m\times n}\). Hence we can assume
\[M^{k(m-1)}\prod_{i=1}^{n}\psi_{i}(M^{k})\]
is decreasing in \(k\in\mathbb{N}\), and so by Cauchy condensation we have that
\[\sum_{\mathbf{a}\in\mathbb{Z}^{n+m}}\mu_{m\times n}^{\mathbb{Q}_{p}}(\mathscr{ A}_{\mathbf{a}}(\Psi))\leq Mc\sum_{r=1}^{\infty}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r).\]
So the convergence case follows by the Borel-Cantelli Lemma and convergence assumption on \(\sum\limits_{r=1}^{\infty}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r)\).
To prove the divergence case of Theorem 7 we use Proposition 2 and Theorem 5. We need to find an \(n\)-tuple of functions \(\rho=(\rho_{1},\dots,\rho_{n})\) such that (4.2) and the conditions (I)-(III) are satisfied for any chosen \(\Psi\). Importantly, observe that our \(n\)-tuple of functions \(\Psi\) is of the form \(\frac{\psi_{i}(r)}{r}\) in each coordinate axis \(1\leq i\leq n\), and so we are asking that:
1. \(\mathrm{I}(\mathbb{Q}_{p})\): for each \(1\leq i\leq n\), the function \(\frac{\psi_{i}(r)}{r}\) is monotonic decreasing as \(r\to\infty\),
2. \(\mathrm{II}(\mathbb{Q}_{p})\): for each \(1\leq i\leq n\), we have \(\frac{\psi_{i}(r)}{r}\leq\rho_{i}(r)\) and \(\rho_{i}(r)\to 0\) as \(r\to\infty\),
3. \(\mathrm{III}(\mathbb{Q}_{p})\): for each \(1\leq i\leq n\), the function \(\frac{\psi_{i}(r)}{r}\) is \(c\)-regular on \((M^{k})_{k\geq 1}\) for some fixed constant \(0<c<1\).
Firstly, by assumption, each \(\psi_{i}\) is monotonic so \(\mathrm{I}(\mathbb{Q}_{p})\) is satisfied. Secondly, note that \(\Psi\) is \(M^{-1}\)-regular on \((M^{k})_{k\geq 1}\) since in each coordinate
\[\frac{\psi_{i}(M^{k+1})}{M^{k+1}}\leq\frac{\psi_{i}(M^{k})}{M^{k+1}}\leq M^{-1} \frac{\psi_{i}(M^{k})}{M^{k}},\]
where the first inequality follows due to the monotonicity of each \(\psi_{i}\), so \(\mathrm{III}(\mathbb{Q}_{p})\) is satisfied. Thus it remains to choose functions \(\rho_{i}\) such that \(\mathrm{II}(\mathbb{Q}_{p})\) and (4.2) are satisfied.
At each \(u\in\mathbb{N}\) let \(l(u)_{1},\ldots,l(u)_{n}\) be an ordering of the indices \(1,\ldots,n\) such that
\[\psi_{l(u)_{1}}(u)\geq\psi_{l(u)_{2}}(u)\geq\cdots\geq\psi_{l(u)_{n}}(u).\]
For each \(u\in\mathbb{N}\) there exists unique \(0\leq j\leq n-1\) such that, for \(\nu(u,j)\) defined by
\[\nu(u,j)=\left(p^{-n}u^{-m}\left(\prod_{i=1}^{j}\psi_{l(u)_{i}}(u)\right)^{-1 }\right)^{\frac{1}{n-j}},\]
we have that
\[\psi_{l(u)_{1}}(u)\geq\cdots\geq\psi_{l(u)_{j}}(u)\geq\nu(u,j)>\psi_{l(u)_{j+ 1}}(u)\geq\cdots\geq\psi_{l(u)_{n}}(u).\]
We can assume without loss of generality that \(\nu(u,n-1)>\psi_{l(u)_{n}}(u)\) for all sufficiently large \(u\in\mathbb{N}\), and so such \(0\leq j\leq n-1\) exists. Otherwise, we would have that for infinitely many \(u\in\mathbb{N}\)
\[p^{-n}u^{-(n+m)}<\prod_{i=1}^{n}\frac{\psi_{l(u)_{i}}(u)}{u}\]
and so, by applying Lemma 4, we could conclude that \(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)=I_{p}^{m\times n}\), thus completing the proof.
For each \(u\in\mathbb{N}\) set
\[\left\{\begin{aligned} \phi_{l(u)_{i}}(u)&=\psi_{l(u)_{i }}(u)&\qquad(1\leq i\leq j),\\ \phi_{l(u)_{i}}(u)&=\nu(u,j)&\qquad(j+1 \leq i\leq n).\end{aligned}\right.\]
Note that, by definition, \(\phi_{i}(u)\geq\psi_{i}(u)\), and that \(\phi_{i}(u)\to 0\) as \(u\to\infty\) for each \(1\leq i\leq n\) since either
\[\left\{\begin{aligned} \phi_{i}(u)&=\psi_{i}(u)\to 0,\\ \phi_{i}(u)&=\nu(u,j)\leq\psi_{j}(u)\to 0& \qquad(1\leq j\leq n-1),\\ \phi_{i}(u)&=\nu(u,0)=p^{-1}u^{-\frac{m}{n}}\to 0 \end{aligned}\right.\]
as \(u\to\infty\). For each \(1\leq i\leq n\) define
\[\rho_{i}(u)=\frac{\phi_{i}(u)}{u}\geq\frac{\psi_{i}(u)}{u},\]
thus \(\mathrm{II}(\mathbb{Q}_{p})\) is satisfied. Lastly, observe that for each \(u\in\mathbb{N}\)
\[\prod_{i=1}^{n}\rho_{i}(u) =\prod_{i=1}^{n}\frac{\phi_{l(u)_{i}}(u)}{u}\] \[=u^{-(n-j)}\left(p^{-n}u^{-m}\left(\prod_{i=1}^{j}\psi_{l(u)_{i}}( u)\right)^{-1}\right)\times\prod_{i=1}^{j}\frac{\psi_{l(u)_{i}}(u)}{u}\] \[=p^{-n}u^{-(n+m)},\]
so (4.2) is satisfied.
Hence Theorem 5 and Proposition 2 are applicable. Thus \(\mu_{m\times n}^{\mathbb{Q}_{p}}\left(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\right)=1\) if
\[\sum_{k=1}^{\infty}\prod_{i=1}^{n}\left(\frac{\psi_{i}(u_{k})}{u_{k}\rho_{i}(u _{k})}\right)^{m\left(1-\frac{m-1}{m}\right)}=\infty.\]
Inputting our chosen \(\rho\) we have that
\[\sum_{k=1}^{\infty}u_{k}^{n+m}\prod_{i=1}^{n}\frac{\psi_{i}(u_{k})}{u_{k}} \asymp\sum_{k=1}^{\infty}u_{k}^{m}\prod_{i=1}^{n}\psi_{i}(u_{k})\asymp\sum_{r= 1}^{\infty}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r),\]
where the last line follows by Cauchy condensation with our choice of \((u_{k})_{k\geq 1}=(M^{k})_{k\geq 1}\). This completes the proof of Theorem 7.
### Proof of Theorem 8
Recall that
\[\Psi(r)=(r^{-\tau_{1}},\ldots,r^{-\tau_{n}}),\]
and for ease of notation let
\[\mathcal{A}_{\mathbf{a}}(\tau)=\left\{X\in\mathbb{Z}_{p}^{m\times n}:|\mathbf{a}_{0}X_{i}-a_{i}|_{p}<\|\mathbf{a}\|_{v}^{-\tau_{i}}\quad(1\leq i\leq n)\right\},\]
so that
\[W_{n,m}^{\mathbb{Z}_{p}}(\Psi)=\limsup_{\mathbf{a}\in\mathbb{Z}^{n+m}}\mathcal{A}_{\mathbf{a}}(\tau).\]
The upper bound uses a standard covering argument, but for completeness we include it here. Note that, for any \(N\in\mathbb{N}\),
\[W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\subset\bigcup_{r\geq N}\bigcup_{\|\mathbf{a}\|_{v}=r}\mathcal{A}_{\mathbf{a}}(\tau)\]

is a cover of \(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\) by rectangles. Since the Hausdorff dimension is determined by covers of balls, we consider each \(1\leq j\leq n\) and cover the collection of rectangles \(\mathcal{A}_{\mathbf{a}}(\tau)\) in the layer \(\|\mathbf{a}\|_{v}=r\) by balls of radius \(\|\mathbf{a}\|_{v}^{-\tau_{j}}\) to obtain our upper bound. Observe that in each coordinate axis
\[\mathcal{A}_{\mathbf{a},i}(\tau)=\left\{X_{i}\in\mathbb{Z}_{p}^{m}:|\mathbf{a}_{0}X_{i}-a_{i}|_{p}<\|\mathbf{a}\|_{v}^{-\tau_{i}}\right\}\]
can be covered by
\[\asymp\max\left\{1,\frac{\|\mathbf{a}\|_{v}^{-\tau_{i}}}{\|\mathbf{a}\|_{v}^{-\tau_{j}}}\right\}\times\left(\frac{1}{\|\mathbf{a}\|_{v}^{-\tau_{j}}}\right)^{m-1} \tag{4.14}\]
balls of size \(\left\|\mathbf{a}\right\|_{v}^{-\tau_{j}}\). So, for any \(s>0\),
\[\mathcal{H}^{s}\left(W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\right) \ll\sum_{r\geq N}r^{-s\tau_{j}}r^{n+m-1}r^{n(m-1)\tau_{j}}\prod_{i= 1}^{n}\max\{1,r^{\tau_{j}-\tau_{i}}\}\] \[\ll\sum_{r\geq N}r^{-\tau_{j}s+n+m-1+n(m-1)\tau_{j}+\sum_{i:\tau_ {j}>\tau_{i}}(\tau_{j}-\tau_{i})},\]
which converges for all
\[s>s_{j}:=n(m-1)+\frac{n+m-\sum\limits_{i:\tau_{j}>\tau_{i}}(\tau_{i}-\tau_{j} )}{\tau_{j}}.\]
Hence by definition of Hausdorff dimension \(\dim_{\mathrm{H}}W_{n,m}^{\mathbb{Z}_{p}}(\Psi)\leq s_{j}\). Since we can do the same calculation as above for each \(1\leq j\leq n\) the upper bound of Theorem 8 follows.
For the lower bound of Theorem 8 we use Theorem 6 combined with Proposition 2. Assume without loss of generality that
\[\tau_{1}\geq\tau_{2}\geq\cdots\geq\tau_{n}>1.\]
In Proposition 2 we pick each \(\rho_{i}\) to be of the form
\[\rho_{i}(h)=\rho(h)^{\ell_{i}}=\left(p^{-1}h^{-1}\right)^{\ell_{i}}\]
for
\[\ell=(\ell_{1},\ldots,\ell_{n})\in(1,m-1)^{n}\quad\text{ with }\sum_{i=1}^{n}\ell_{i}=n+m. \tag{4.15}\]
Observe that such a choice of exponent satisfies the requirements of Proposition 2. As in the setup of Theorem 6, set each
\[t_{i}=\tau_{i}-\ell_{i}\quad(1\leq i\leq n).\]
This choice of \(\rho\) function includes the constant "\(p^{-1}\)" appearing in Proposition 2, but note this can safely be removed to obtain our result by observing that for any choice of \(\varepsilon>0\) there exists sufficiently large \(h\in\mathbb{R}_{+}\) such that
\[h^{-\tau_{i}-\varepsilon}\leq\rho(h)^{\ell_{i}+t_{i}}=\left(p^{-1}h^{-1} \right)^{\tau_{i}}\leq h^{-\tau_{i}}\quad(1\leq i\leq n).\]
Consider the following cases:
1. (Ball to rectangle) \(\tau_{i}\geq\frac{n+m}{n}\) for all \(1\leq i\leq n\). Then let each \(\ell_{i}=\frac{n+m}{n}\). Note such choice satisfies (4.15). The set \(A\) from Theorem 6 takes the following order: \[\ell_{1}=\cdots=\ell_{n}\leq\tau_{n}\leq\cdots\leq\tau_{1}.\] So for any \(\ell_{i}\) we have \[\mathcal{K}_{1}=\{1,\ldots,n\},\quad\mathcal{K}_{2}=\emptyset,\quad\mathcal{K }_{3}=\emptyset,\] which trivially leads to a full dimension lower bound in the case \(A=\ell_{i}\). For each \(\tau_{j}\) we have \[\mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{j,\ldots,n\},\quad\mathcal{K }_{3}=\{1,\ldots,j-1\}.\] Hence \[\dim_{\mathrm{H}}W_{n,m}^{\mathcal{Z}_{p}}(\Psi) \geq\min_{1\leq j\leq n}\left\{(n-j+1)m+(m-1)(j-1)+\frac{\frac{n+m}{ n}(j-1)-\sum\limits_{j\leq i\leq n}(\tau_{i}-\frac{n+m}{n})}{\tau_{j}}\right\}\] (4.16) \[\geq\min_{1\leq j\leq n}\left\{n(m-1)+\frac{(n+m)-\sum\limits_{j \leq i\leq n}(\tau_{i}-\tau_{j})}{\tau_{j}}\right\}.\]
2. (Rectangle to rectangle) \(\tau_{j}<\frac{n+m}{n}\) for some \(1\leq j\leq n\). The idea now is to form a "Dirichlet-exponent" rectangle that contains the \(\tau\)-rectangle. To do this we are trying to find \(1\leq u\leq n\) that solves \[u\times\widetilde{D}+\sum\limits_{u<i\leq n}\tau_{i}=n+m,\] for some \(\widetilde{D}>0\) with \(\tau_{i}>\widetilde{D}\) for all \(1\leq i\leq u\). That is, pick \(u\) such that \[\tau_{1}\geq\cdots\geq\tau_{u}>\frac{n+m-\sum\limits_{u<i\leq n}\tau_{i}}{u}:=\widetilde{D}\geq\tau_{u+1}\geq\cdots\geq\tau_{n}.\] Set \[\ell_{i}=\widetilde{D}\quad(1\leq i\leq u),\qquad\ell_{i}=\tau_{i}\quad(u+1\leq i\leq n),\] and observe, by definition of \(\widetilde{D}\) and the fact that each \(\tau_{i}>1\), that \(\ell\) satisfies (4.15). For \(\ell_{i}\) with \(1\leq i\leq u\) we have that \[\mathcal{K}_{1}=\{1,\ldots,u\},\quad\mathcal{K}_{2}=\{u+1,\ldots,n\},\quad\mathcal{K}_{3}=\emptyset,\] and for \(\ell_{i}\) with \(u+1\leq i\leq n\) \[\mathcal{K}_{1}=\{1,\ldots,i\},\quad\mathcal{K}_{2}=\{i+1,\ldots,n\},\quad\mathcal{K}_{3}=\emptyset.\] Thus, for \(A=\ell_{i}\) we obtain the trivial full dimension as a lower bound.
For each \(\tau_{j}\) with \(1\leq j\leq u\) we have that
\[\mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{j,\ldots,n\},\quad\mathcal{K}_{3 }=\{1,\ldots,j-1\},\]
thus the same calculation of (4.16) follows to complete the proof.
## 5. Complex approximation
In this section we obtain a Lebesgue measure dichotomy on certain sets of systems of complex linear forms. Under an additional assumption, we calculate the Hausdorff dimension of such sets when they are null.
For each \(z\in\mathbb{C}\), let \([z]\) be the distance from \(z\) to its nearest Gaussian integer; that is
\[[z]:=\min\left\{|z-p|:p\in\mathbb{Z}[i]\right\}.\]
where \(|\cdot|\) is the Euclidean norm in the complex plane. Let \(I_{\mathbb{C}}\) be the compact square
\[I_{\mathbb{C}}:=\left\{z\in\mathbb{C}:-\frac{1}{2}\leq\Re(z)\leq\frac{1}{2} \;\;\text{and}\;\;-\frac{1}{2}\leq\Im(z)\leq\frac{1}{2}\right\}.\]
Let \(m,n\in\mathbb{N}\) be arbitrary; they will remain fixed for the rest of this section. For any \(n\)-tuple of non-increasing positive functions \(\varphi=(\varphi_{1},\ldots,\varphi_{n}):\mathbb{N}\to\mathbb{R}_{+}^{n}\) with
\[\lim_{q\to\infty}\varphi_{j}(q)=0\quad(1\leq j\leq n),\]
and any \(m\)-tuple of non-decreasing positive functions \(\Phi=(\Phi_{1},\ldots,\Phi_{m}):\mathbb{N}\to\mathbb{R}_{+}^{m}\) such that
\[\lim_{q\to\infty}\Phi_{k}(q)=\infty\quad(1\leq k\leq m),\]
let \(W_{n,m}^{\mathbb{C}}(\varphi,\Phi)\subseteq I_{\mathbb{C}}^{m\times n}\) be the collection of \(m\times n\) matrices \(A\) with entries in \(I_{\mathbb{C}}\) which verify the following property:
for infinitely many integers \(u\geq 1\) there is a non-zero \(\mathbf{q}\in\mathbb{Z}[i]^{1\times m}\) such that
\[\left[\mathbf{q}\,A_{j}\right]<\varphi_{j}(u)\quad(1\leq j\leq n),\]
\[|q_{k}|\leq\Phi_{k}(u)\quad(1\leq k\leq m).\]
For \(k\in\{1,\ldots,m\}\) and \(u\in\mathbb{N}\), write
\[\Phi_{k}^{-1}(u):=\min\{v\in\mathbb{N}:\Phi_{k}(v)\geq u\}.\]
The set \(W_{n,m}^{\mathbb{C}}(\varphi,\Phi)\) is then
\[W_{n,m}^{\mathbb{C}}(\varphi,\Phi)=\left\{A\in I_{\mathbb{C}}^{m\times n}: \begin{array}{l}\left[\mathbf{q}\,A_{j}\right]<\varphi_{j}\left(\max\left\{ \Phi_{1}^{-1}(|q_{1}|),\ldots,\Phi_{m}^{-1}(|q_{m}|)\right\}\right)\;(1\leq j \leq n)\\ \text{for infinitely many }\mathbf{q}\in\mathbb{Z}[i]^{1\times m} \end{array}\right\}.\]
In what follows, we denote the Lebesgue measure on \(\mathbb{C}^{m\times n}\) by \(\mu_{m\times n}^{\mathbb{C}}\).
**Theorem 9**.: _If there are some constants \(N_{0},M\in\mathbb{N}\) with \(M\geq 2\), and \(c_{1},c_{2}>1\) such that for every \(j\in\mathbb{N}_{\geq N_{0}}\) we have_
\[c_{1}\Phi_{k}(M^{j})\leq\Phi_{k}(M^{j+1})\leq c_{2}\Phi_{k}(M^{j})\quad(k=1, \ldots,m), \tag{5.1}\]
_then_
\[\mu_{m\times n}^{\mathbb{C}}\left(W_{n,m}^{\mathbb{C}}(\varphi,\Phi)\right)= \begin{cases}0&\text{if}\quad\sum\limits_{q=1}^{\infty}\frac{1}{q}\left(\prod \limits_{j=1}^{n}\varphi_{j}(q)\prod\limits_{k=1}^{m}\Phi_{k}(q)\right)^{2}< \infty,\\ 1&\text{if}\quad\sum\limits_{q=1}^{\infty}\frac{1}{q}\left(\prod\limits_{j=1} ^{n}\varphi_{j}(q)\prod\limits_{k=1}^{m}\Phi_{k}(q)\right)^{2}=\infty.\end{cases}\]
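For orientation, consider the simplest special case \(m=n=1\), \(\Phi(u)=u\) and \(\varphi(u)=u^{-\tau}\) for some \(\tau>0\). Condition (5.1) holds with \(c_{1}=c_{2}=M\), and the series becomes \(\sum_{q\geq 1}\frac{1}{q}\left(q^{-\tau}\cdot q\right)^{2}=\sum_{q\geq 1}q^{1-2\tau}\), which converges precisely when \(\tau>1\). Hence \(W_{1,1}^{\mathbb{C}}(\varphi,\Phi)\) is null for \(\tau>1\) and has full measure for \(\tau\leq 1\).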
For \(m=n=1\), Theorem 9 was proved by LeVeque using complex continued fractions [64] in 1952. In 1982, Sullivan [74] used Bianchi groups to prove more general Khintchine theorems for real and complex numbers. For the approximation of complex numbers, the rational approximants were ratios \(p/q\) of integers \(p,q\) from the imaginary quadratic fields \(\mathbb{Q}(i\sqrt{d})\), where \(d\) is a square-free natural number. The case \(d=1\) corresponds to the Picard group and approximation by Gaussian rationals. The result was also derived in [9, Theorem 7] as a consequence of the ubiquity framework. See also [42] for an analogue of Theorem 9 for small linear forms and for the non-weighted setup. In one complex dimension, just as in the real case, the set is best studied in terms of Hurwitz continued fractions; to this end, the best known results are in [17].
For any vector \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}^{n}\) satisfying
\[\min\limits_{1\leq j\leq n}\tau_{j}>1\;\;\text{and}\;\;\sum\limits_{j=1}^{n} \tau_{j}\geq m+n, \tag{5.2}\]
define the numbers
\[s_{j}(\boldsymbol{\tau}):=2n(m-1)+2\frac{m+n-\sum\limits_{r:\,\tau_{r}<\tau_ {j}}(\tau_{r}-\tau_{j})}{\tau_{j}}\quad(1\leq j\leq n)\]
and the set
\[W_{n,m}^{\mathbb{C}}(\boldsymbol{\tau}):=\left\{\begin{aligned} & A\in I_{\mathbb{C}}^{m \times n}:&\left[\boldsymbol{\mathrm{q}}A_{j}\right]<\frac{1}{\| \boldsymbol{\mathrm{q}}\|^{\tau_{j}-1}}\left(1\leq j\leq n\right)\\ &\text{for infinitely many }\boldsymbol{\mathrm{q}}\in \mathbb{Z}[i]^{m}\end{aligned}\right\}.\]
**Theorem 10**.: _For any vector \(\boldsymbol{\tau}\) satisfying (5.2), we have_
\[\dim_{\mathrm{H}}W_{n,m}^{\mathbb{C}}(\boldsymbol{\tau})=\min\{s_{1}( \boldsymbol{\tau}),\ldots,s_{n}(\boldsymbol{\tau})\}.\]
The only previously known Hausdorff dimension result for \(\Psi\)-approximable complex numbers is for \(m=n=1\), from [24], which proves that \(\dim_{\mathrm{H}}W_{1,1}^{\mathbb{C}}(\boldsymbol{\tau})=\frac{4}{\tau+1}\), together with its generalisation to an arbitrary approximation function by He and Xiong [40].
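For orientation, one can check that Theorem 10 is consistent with this: for \(m=n=1\) the set \(\{r:\tau_{r}<\tau_{1}\}\) is empty, so \(s_{1}(\boldsymbol{\tau})=2n(m-1)+2\cdot\frac{m+n}{\tau_{1}}=\frac{4}{\tau_{1}}\). Since the defining inequality of \(W_{1,1}^{\mathbb{C}}(\boldsymbol{\tau})\) reads \([qz]<\|q\|^{-(\tau_{1}-1)}\), the exponent \(\tau\) of [24] corresponds to \(\tau_{1}=\tau+1\), recovering \(\dim_{\mathrm{H}}=\frac{4}{\tau+1}\).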
The core of the proofs is developed in a slightly different context. To be more precise, for each complex number \(z\) define
\[|z|_{\infty}:=\max\{|\Re(z)|,|\Im(z)|\}\;\;\text{and}\;\;[z]_{\infty}:=\min\{|z-p|_{\infty}:p\in\mathbb{Z}[i]\}.\]
First, we show Theorem 9 and Theorem 10 with \(|\cdot|\) and \([\cdot]\) replaced by \(|\cdot|_{\infty}\) and \([\cdot]_{\infty}\), respectively. Afterwards, we use the equivalence between \(|\cdot|_{\infty}\) and \(|\cdot|\) to conclude the results in the original setting.
### A Minkowski-type theorem
We need the following complex version of Minkowski's theorem for linear forms (cf. [41, Theorem 95]).
**Lemma 6**.: _Let \(\gamma_{1},\ldots,\gamma_{n},\theta_{1},\ldots,\theta_{m}\) be positive numbers satisfying \(\prod_{k=1}^{n}\gamma_{k}\prod_{j=1}^{m}\theta_{j}\geq 1\). For every matrix \(A\in\mathbb{C}^{m\times n}\) there exists a vector \((\mathbf{q},\mathbf{p})\in\mathbb{Z}[i]^{1\times(m+n)},\;\mathbf{q}\neq 0\), such that_
\[|\,\mathbf{q}\,A_{k}-p_{k}|_{\infty} <\gamma_{k} (1\leq k\leq n), \tag{5.3}\] \[\big{|}q_{j}\big{|}_{\infty} \leq\theta_{j} (1\leq j\leq m).\]
Proof.: Take any matrix \(A=(a_{j,k})_{1\leq j\leq m,1\leq k\leq n}\in\mathbb{C}^{m\times n}\). For \(j\in\{1,\ldots,m\}\) and \(k\in\{1,\ldots,n\}\), write
\[a_{j,k}^{1}:=\Re(a_{j,k})\;\;\text{and}\;\;a_{j,k}^{2}:=\Im(a_{j,k}).\]
For each \(j\in\{1,\ldots,n\}\), define the column vectors \(B_{2j-1}\), \(B_{2j}\) by
\[B_{2j-1}:=\begin{pmatrix}a_{1,j}^{1}\\ -a_{1,j}^{2}\\ \vdots\\ a_{m,j}^{1}\\ -a_{m,j}^{2}\end{pmatrix}\;\;\text{and}\;\;B_{2j}:=\begin{pmatrix}a_{1,j}^{2}\\ a_{1,j}^{1}\\ \vdots\\ a_{m,j}^{2}\\ a_{m,j}^{1}\end{pmatrix}.\]
Call \(B\in\mathbb{R}^{2m\times 2n}\) the matrix whose \(r\)-th column is \(B_{r}\). Then, a vector \((\mathbf{q},\mathbf{p})\in\mathbb{Z}[i]^{m+n}\) solves (5.3) if and only if the vector \((\mathbf{Q},\mathbf{P})=(Q_{1},\ldots,Q_{2m},P_{1},\ldots,P_{2n})\in\mathbb{Z}^{2m+2n}\) given by
\[\mathbf{Q}=(\Re(q_{1}),\Im(q_{1}),\ldots,\Re(q_{m}),\Im(q_{m}))\;\;\text{and} \;\;\mathbf{P}=(\Re(p_{1}),\Im(p_{1}),\ldots,\Re(p_{n}),\Im(p_{n}))\]
solves the system
\[|\,\mathbf{Q}B_{2k-1}-P_{2k-1}| <\gamma_{k} (1\leq k\leq n),\] \[|\,\mathbf{Q}B_{2k}-P_{2k}| <\gamma_{k} (1\leq k\leq n),\] \[\big{|}Q_{2j-1}\big{|} \leq\theta_{j} (1\leq j\leq m), \tag{5.4}\] \[\big{|}Q_{2j}\big{|} \leq\theta_{j} (1\leq j\leq m).\]
Call \(I_{2n}\) (resp. \(I_{2m}\)) the identity matrix of size \(2n\times 2n\) (resp. \(2m\times 2m\)) and let \(O_{2n\times 2m}\) be the matrix of size \(2n\times 2m\) whose entries are all equal to \(0\). Since
\[\prod_{k=1}^{n}\gamma_{k}\prod_{j=1}^{m}\theta_{j}\geq 1\]
and the determinant of the matrix
\[\begin{pmatrix}B&I_{2m}\\ -I_{2n}&O_{2n\times 2m}\end{pmatrix}\]
is \(\pm 1\), the system (5.4), and hence (5.3), has a non-trivial solution by Theorem 4.
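To make the realification concrete, here is a small numerical check (illustrative only; `numpy` and all variable names are our choices) that the real vector \(\mathbf{Q}\) and the real columns \(B_{2j-1},B_{2j}\) constructed above reproduce the real and imaginary parts of the complex linear forms \(\mathbf{q}A_{j}\):

```python
# Verify the realification used in the proof of Lemma 6 on random data.

import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
q = rng.integers(-5, 6, size=m) + 1j * rng.integers(-5, 6, size=m)

# Q = (Re q_1, Im q_1, ..., Re q_m, Im q_m)
Q = np.empty(2 * m)
Q[0::2], Q[1::2] = q.real, q.imag

B = np.empty((2 * m, 2 * n))
B[0::2, 0::2], B[1::2, 0::2] = A.real, -A.imag   # columns B_{2j-1}
B[0::2, 1::2], B[1::2, 1::2] = A.imag, A.real    # columns B_{2j}

qA = q @ A  # the n complex linear forms q A_j
assert np.allclose(Q @ B[:, 0::2], qA.real)  # Q B_{2j-1} = Re(q A_j)
assert np.allclose(Q @ B[:, 1::2], qA.imag)  # Q B_{2j}   = Im(q A_j)
print("realification identity verified")
```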
### Proof of Theorem 9
Given any \(k\in\mathbb{N}\) and \(\mathbf{z}=(z_{1},\ldots,z_{k})\in\mathbb{C}^{k}\), irrespective of interpreting it as a row or a column, define
\[\|\mathbf{z}\|_{\infty}:=\max\left\{|z_{1}|_{\infty},\ldots,|z_{k}|_{\infty} \right\}.\]
This way, if \(\|\mathbf{z}\|_{2}=\sqrt{|z_{1}|^{2}+\cdots+|z_{k}|^{2}}\) is the Euclidean norm on \(\mathbb{C}^{k}\), we have
\[\frac{1}{\sqrt{2k}}\|\mathbf{z}\|_{2}\leq\|\mathbf{z}\|_{\infty}\leq\| \mathbf{z}\|_{2}. \tag{5.5}\]
Indeed,
\[\frac{1}{\sqrt{2k}}\|\mathbf{z}\|_{2}\leq\frac{1}{\sqrt{2}}\max_{1\leq j\leq k }|z_{j}|\leq\max_{1\leq j\leq k}|z_{j}|_{\infty}=\|\mathbf{z}\|_{\infty}\leq \max_{1\leq j\leq k}|z_{j}|\leq\|\mathbf{z}\|_{2}.\]
Let \(M\in\mathbb{N}_{\geq 2}\) be sufficiently large (determined below). We work with the following objects:
1. \(J:=\{\alpha=(\mathbf{q},\mathbf{p})\in\mathbb{Z}[i]^{m+n}:\|\mathbf{p}\|_{ \infty}\leq 2m\|\mathbf{q}\|_{\infty}\}\),
2. \(\beta:J\to\mathbb{R}_{+}\), \(\alpha=(\mathbf{q},\mathbf{p})\mapsto\beta_{\alpha}=\max\left\{\Phi_{1}^{-1}(| q_{1}|_{\infty}),\ldots,\Phi_{m}^{-1}(|q_{m}|_{\infty})\right\}\),
3. \((u_{j})_{j\geq 1}\) given by \(u_{j}=M^{j}\) for all \(j\in\mathbb{N}\),
4. For each \(\alpha=(\mathbf{q},\mathbf{p})\in J\), define \(R_{\alpha,j}:=\left\{A_{j}\in I_{\mathbb{C}}^{m\times 1}:\mathbf{q}A_{j}=p_{j}\right\}\) for \(j\in\{1,\ldots,n\}\) and the resonant set \(R_{\alpha}\) is \(R_{\alpha}:=\prod_{j=1}^{n}R_{\alpha,j}\),
5. In this setting, we have \(\kappa_{j}=1-\frac{1}{m}=\frac{2m-2}{2m}\) and \(\delta_{j}=2m\) for \(j\in\{1,\ldots,n\}\).
Let us expand on the \(\kappa\)-scaling property. If \(V\) is a finite dimensional complex vector space and if \(W\) is a subspace of \(V\), then both \(V\) and \(W\) are real vector spaces and their dimension as real vector spaces is twice their dimension as complex vector spaces. We then rely upon [2, Section 2.2] to obtain \(\kappa_{j}\).
We also consider the positive functions \(\rho=(\rho_{1},\ldots,\rho_{n}):\mathbb{N}\to\mathbb{R}_{+}^{n}\) given by
\[\rho_{j}(u):=\sqrt{2}M\frac{\varphi_{j}(u)}{\|\Phi(u)\|_{\infty}}\left(\prod _{s=1}^{n}\varphi_{s}(u)\prod_{k=1}^{m}\Phi_{k}(u)\right)^{-1/n}\quad(1\leq j \leq n,\,u\in\mathbb{N})\]
and \(\Psi=(\psi_{1},\ldots,\psi_{n}):\mathbb{N}\to\mathbb{R}_{+}^{n}\) given by
\[\psi_{j}(u):=\frac{\varphi_{j}(u)}{2m\|\Phi(u)\|_{\infty}}\quad(1\leq j\leq n,\,u\in\mathbb{N}).\]
Note that
\[\prod_{j=1}^{n}\frac{\psi_{j}(u)}{\rho_{j}(u)}=\left(2^{3/2}mM\right)^{-n}\prod_{ j=1}^{n}\varphi_{j}(u)\prod_{k=1}^{m}\Phi_{k}(u)\quad(u\in\mathbb{N}). \tag{5.6}\]
When there is a strictly increasing sequence of natural numbers \((r_{s})_{s\geq 1}\) satisfying
\[\prod_{j=1}^{n}\varphi_{j}(r_{s})\prod_{k=1}^{m}\Phi_{k}(r_{s})>1\quad(s\in\mathbb{N}),\]
then, by Lemma 6, we have \(\mu_{m\times n}^{\mathbb{C}}\left(W_{n,m}^{\mathbb{C}}(\varphi,\Phi)\right)= \mu_{m\times n}^{\mathbb{C}}\left(I_{\mathbb{C}}^{m\times n}\right)\). Thus, we assume
\[\prod_{j=1}^{n}\varphi_{j}(u)\prod_{k=1}^{m}\Phi_{k}(u)\leq 1\quad(u\in \mathbb{N}). \tag{5.7}\]
**Lemma 7**.: _The system \(((R_{\alpha})_{\alpha\in J},\beta)\) is a weighted ubiquitous system with respect to the function \(\rho\)._
In what follows, given a vector \(\mathbf{z}\in\mathbb{C}^{m\times 1}\) (resp. \(\mathbf{z}\in\mathbb{C}^{1\times m}\)), we denote by \(\overline{\mathbf{z}}\) the vector in \(\mathbb{C}^{m\times 1}\) (resp. in \(\mathbb{C}^{1\times m}\)) whose \(r\)-th coordinate is the complex conjugate of the \(r\)-th coordinate of \(\mathbf{z}\).
**Proposition 3**.: _For each \(j\in\{1,\ldots,n\}\), let \(f_{j}:\mathbb{N}\to\mathbb{R}_{+}\) be a positive function such that_
\[\prod_{j=1}^{n}f_{j}(u)\prod_{k=1}^{m}\Phi_{k}(u)=1\quad(u\in\mathbb{N}).\]
_For every \(u\in\mathbb{N}\) and \(A\in I_{\mathbb{C}}^{m\times n}\) there is some \(\alpha=(\mathbf{q},\mathbf{p})\in J\) satisfying \(\beta_{\alpha}\leq u\) and_
\[A\in\prod_{j=1}^{n}\Delta\left(R_{\alpha,j};\sqrt{2}\frac{f_{j}(u)}{\| \mathbf{q}\|_{\infty}}\right). \tag{5.8}\]
Proof.: Take \(A\in I_{\mathbb{C}}^{m\times n}\) and \(u\in\mathbb{N}\). By Lemma 6, there is some non-zero \((\mathbf{q},\mathbf{p})\in\mathbb{Z}[i]^{m+n}\) such that
\[\left|\mathbf{q}\overline{A_{j}}-p_{j}\right|_{\infty}<f_{j}(u)\quad(1\leq j \leq n),\]
\[|q_{k}|_{\infty}\leq\Phi_{k}(u)\quad(1\leq k\leq m).\]
The vector \(\alpha=(\mathbf{q},\mathbf{p})\) then satisfies \(\beta_{\alpha}\leq u\) and
\[\left|\mathbf{q}\overline{A_{j}}-p_{j}\right|<\sqrt{2}f_{j}(u)\quad(1\leq j \leq n),\]
\[|q_{k}|\leq\sqrt{2}\Phi_{k}(u)\quad(1\leq k\leq m).\]
Let us verify (5.8). If \(\mathbf{q}^{\top}\) is the transpose of \(\mathbf{q}\), then \(\mathbf{v}=\overline{A_{j}}-\frac{\overline{\mathbf{q}}\,\overline{A_{j}}}{\|\mathbf{q}\|_{2}^{2}}\mathbf{q}^{\top}\) is orthogonal to \(\mathbf{q}^{\top}\) and
\[\overline{A_{j}}=\frac{\overline{\mathbf{q}}\overline{A_{j}}}{\|\mathbf{q}\|_ {2}^{2}}\mathbf{q}^{\top}+\mathbf{v}.\]
For any \(\mathbf{a}\) such that \(\overline{\mathbf{a}}\in R_{\alpha,j}\), write \(\mathbf{w}=\mathbf{a}-\frac{\overline{p_{j}}}{\|\mathbf{q}\|_{2}^{2}}\mathbf{q}^{\top}\); then \(\mathbf{w}\) is also orthogonal to \(\mathbf{q}^{\top}\) and, by the Pythagorean theorem,
\[\left\|A_{j}-\overline{\mathbf{a}}\right\|_{2}^{2} =\left\|\overline{A_{j}}-\mathbf{a}\right\|_{2}^{2}\] \[=\left\|\left(\frac{\overline{\mathbf{q}}A_{j}}{\|\mathbf{q}\|^{ 2}}-\frac{\overline{p_{j}}}{\|\mathbf{q}\|^{2}}\right)\mathbf{q}^{\top}\right\| _{2}^{2}+\|\mathbf{v}-\mathbf{w}\|_{2}^{2}\] \[\geq\frac{|\overline{\mathbf{q}}A_{j}-\overline{p_{j}}|^{2}}{\| \mathbf{q}\|_{2}^{2}}\] \[=\frac{|\mathbf{q}\overline{A_{j}}-p_{j}|^{2}}{\|\mathbf{q}\|_{2 }^{2}}.\]
This means that
\[\min\left\{\|A_{j}-\mathbf{a}\|_{\infty}:\mathbf{a}\in R_{\alpha,j}\right\} \leq\min\left\{\|A_{j}-\mathbf{a}\|_{2}:\mathbf{a}\in R_{\alpha,j}\right\}= \frac{|\mathbf{q}\overline{A_{j}}-p_{j}|}{\|\mathbf{q}\|_{2}}\leq\frac{\sqrt{ 2}f_{j}(u)}{\|\mathbf{q}\|_{\infty}}.\]
Define the functions \(f_{j}:\mathbb{N}\to\mathbb{R}_{+}\), \(j\in\{1,\ldots,n\}\), by
\[f_{j}(u):=\varphi_{j}(u)\left(\prod_{s=1}^{n}\varphi_{s}(u)\prod_{k=1}^{m} \Phi_{k}(u)\right)^{-1/n}\qquad(u\in\mathbb{N}).\]
For each \(s\in\mathbb{N}\), define the set
\[\widetilde{J}_{s}:=\left\{\alpha=(\mathbf{q},\mathbf{p})\in J:\frac{\Phi_{k}( u_{s})}{M}\leq|q_{k}|_{\infty}\leq\Phi_{k}(u_{s})\quad(1\leq k\leq m)\right\}.\]
Observe that \(\beta_{\alpha}\leq u_{s}\) when \(\alpha=(\mathbf{q},\mathbf{p})\in\widetilde{J}_{s}\). Then, since \(\beta_{\alpha}\to\infty\) as \(\alpha\to\infty\), we may choose an adequate \(l_{s}\) ensuring \(\widetilde{J}_{s}\subseteq J_{s}\). For \(k\in\{1,\ldots,m\}\), write
\[J_{s,k}:=\left\{\alpha\in J:|q_{k}|_{\infty}\leq\frac{\Phi_{k}(u_{s})}{M}\text{ and }|q_{j}|_{\infty}\leq\Phi_{j}(u_{s})\quad(j\in\{1,\ldots,m\}\setminus\{k\})\right\}.\]
Let \(B=\prod_{k=1}^{n}B(X_{k};r)\) be an arbitrary ball in \(I_{\mathbb{C}}^{m\times n}\). In view of Proposition 3, for any \(s\in\mathbb{N}\) we have
\[B =B\cap\bigcup_{\alpha:\beta_{\alpha}\leq u_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)=\left(B\cap\bigcup_{\alpha\in\widetilde{J}_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\right)\cup\left(B\cap\bigcup_{h=1}^{m}\bigcup_{\alpha\in J_{s,h}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\right).\]
**Proposition 4**.: _There is some \(N(r)\in\mathbb{N}\) such that for all \(j\in\{1,\ldots,n\}\) and \(\mathbf{q}\in\mathbb{Z}[i]^{1\times m}\), every \(s\in\mathbb{N}_{\geq N(r)}\) satisfies_
\[\#\left\{p\in\mathbb{Z}[i]:B(X_{j},r)\cap\Delta\left(R_{\alpha,j},\sqrt{2} \frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\neq\varnothing\right\} \leq(8rm\|\mathbf{q}\|_{\infty}+2)^{2}.\]
Proof.: Let \(j\in\{1,\ldots,n\}\) and \(0\neq\mathbf{q}\in\mathbb{Z}[i]^{m}\) be arbitrary. Take \(p\in\mathbb{Z}[i]\) such that for some \(A_{j},Y_{j}\in I_{\mathbb{C}}^{m\times 1}\) we have
\[\|X_{j}-Y_{j}\|_{\infty}<r,\quad\mathbf{q}A_{j}=p,\;\;\text{and}\;\;\|A_{j}-Y_{ j}\|_{\infty}<\sqrt{2}\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}.\]
That is, the \(\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\)-thickened hyperplane \(\mathbf{q}A_{j}=p\) has non-empty intersection with \(B(X_{j},r)\). By the Cauchy–Schwarz inequality and (5.5), we have
\[|\mathbf{q}X_{j}-p|_{\infty}\leq|\mathbf{q}(X_{j}-A_{j})|\leq\|\mathbf{q}\|_{ 2}\|X_{j}-A_{j}\|_{2}\leq 2m\|\mathbf{q}\|_{\infty}\left(\|X_{j}-Y_{j}\|_{ \infty}+\|Y_{j}-A_{j}\|_{\infty}\right)\]
and hence
\[|\mathbf{q}X_{j}-p|_{\infty}<2m\|\mathbf{q}\|_{\infty}\left(r+\sqrt{2}\frac{f _{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right).\]
As a consequence, for every large \(s\in\mathbb{N}\) (depending on \(r\)) we have
\[|\mathbf{q}X_{j}-p|_{\infty}\leq 4mr\|\mathbf{q}\|_{\infty}.\]
The proposition now follows from the next elementary estimate:
\[\#\{z\in\mathbb{Z}[i]:\max\{|\Re(z)|,|\Im(z)|\}\leq R\}\leq(2R+2)^{2}\quad(R>0). \tag{5.9}\]
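The estimate (5.9) can be confirmed empirically; in the sketch below (ours, illustrative only) the exact count \((2\lfloor R\rfloor+1)^{2}\) is compared with the bound \((2R+2)^{2}\).

```python
# Quick empirical check (ours) of the elementary estimate (5.9): the number of
# Gaussian integers z with max(|Re z|, |Im z|) <= R equals (2*floor(R)+1)^2,
# which is at most (2R+2)^2.
import math

for R in (0.5, 1.0, 2.7, 10.0):
    count = (2 * math.floor(R) + 1) ** 2
    bound = (2 * R + 2) ** 2
    assert count <= bound
    print(R, count, bound)
```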
**Proposition 5**.: _If \(M>2^{5n}3^{m}m^{n+1}\) and writing \(\alpha=(\mathbf{q},\mathbf{p})\), every large \(s\in\mathbb{N}\) satisfies_
\[\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{k=1}^{m}\bigcup_{\alpha\in J _{s,k}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}\frac{f_{j}(u_{s})}{\| \mathbf{q}\|_{\infty}}\right)\right)\leq\frac{1}{2}\mu_{m\times n}^{\mathbb{C }}(B). \tag{5.10}\]
Proof.: Take \(s\in\mathbb{N}\). For each \(k\in\{1,\ldots,m\}\), write
\[J_{s,k}^{\prime}:=\left\{\mathbf{q}\in\mathbb{Z}[i]^{1\times m}:\alpha=( \mathbf{q},\mathbf{p})\in J_{s,k}\right\},\]
so, by (5.9), every large \(s\in\mathbb{N}\) verifies
\[\#J_{s,k}^{\prime}\leq 3^{2m}\left(\frac{1}{M}\prod_{j=1}^{m}\Phi_{j}(u_{s})\right)^{2}.\]
Let \(G_{\mathbb{C}}\) be the intersection in (5.10). By the \(\kappa\)-scaling property and Proposition 4,
\[\mu_{m\times n}^{\mathbb{C}}(G_{\mathbb{C}})\leq\sum_{k=1}^{m}\sum_{\mathbf{q }\in J_{s,k}^{\prime}}(8mr\|\mathbf{q}\|_{\infty}+2)^{2n}\left(\prod_{j=1}^{n} \sqrt{2}\frac{f_{j}(u_{s})r^{m-1}}{\|\mathbf{q}\|_{\infty}}\right)^{2}.\]
Recall that \((a+b)^{2n}\leq(2a)^{2n}+(2b)^{2n}\) for all \(a,b\geq 0\), then
\[\mu_{m\times n}^{\mathbb{C}}(G_{\mathbb{C}})\leq 2^{9n}m^{2n}r^{2mn}\sum_{k=1}^{m} \sum_{\mathbf{q}\in J_{s,k}^{\prime}}\prod_{j=1}^{n}f_{j}^{2}(u_{s})+2^{5n}r^{2 n(m-1)}\sum_{k=1}^{m}\sum_{\mathbf{q}\in J_{s,k}^{\prime}}\prod_{j=1}^{n}\frac{f_{j}^{2} (u_{s})}{\|\mathbf{q}\|_{\infty}^{2}}. \tag{5.11}\]
We can bound the first sum in (5.11), using \(M>2^{5n}3^{m}m^{n+1}\), as follows:
\[2^{9n}m^{2n}r^{2mn}\sum_{k=1}^{m}\sum_{\mathbf{q}\in J_{s,k}^{\prime}}\prod_{j=1}^{n}f_{j}^{2}(u_{s})=\frac{2^{9n}m^{2n}r^{2mn}}{\Phi_{1}^{2}(u_{s})\cdots\Phi_{m}^{2}(u_{s})}\sum_{k=1}^{m}\#J_{s,k}^{\prime}\leq\frac{2^{9n}3^{2m}m^{2n+1}}{M^{2}}r^{2nm}<\frac{1}{4}\mu_{m\times n}^{\mathbb{C}}(B).\]
The second sum in (5.11) tends to \(0\) as \(s\) tends to \(\infty\). Indeed, since \(\#\{q\in\mathbb{Z}[i]:|q|_{\infty}=s\}=8s\), we have
\[\sum_{k=1}^{m}\sum_{\mathbf{q}\in J_{s,k}^{\prime}}\prod_{j=1}^{n}\frac{f_{j}^{2}(u_{s})}{\|\mathbf{q}\|_{\infty}^{2}} =\frac{1}{\Phi_{1}^{2}(u_{s})\cdots\Phi_{m}^{2}(u_{s})}\sum_{k=1}^{m}\sum_{\mathbf{q}\in J_{s,k}^{\prime}}\frac{1}{\|\mathbf{q}\|_{\infty}^{2n}}\leq\frac{1}{\Phi_{1}^{2}(u_{s})\cdots\Phi_{m}^{2}(u_{s})}\sum_{k=1}^{m}\sum_{\mathbf{q}\in J_{s,k}^{\prime}}\frac{1}{\|\mathbf{q}\|_{\infty}^{2}}\leq\frac{1}{\Phi_{1}^{2}(u_{s})\cdots\Phi_{m}^{2}(u_{s})}\sum_{k=1}^{m}\sum_{q_{k}=1}^{\Phi_{k}(u_{s})/M}\frac{8q_{k}}{q_{k}^{2}}\bigg{(}\frac{\Phi_{1}(u_{s})\cdots\Phi_{m}(u_{s})}{\Phi_{k}(u_{s})}\bigg{)}^{2}\leq\frac{8}{\Phi_{1}^{2}(u_{s})\cdots\Phi_{m}^{2}(u_{s})}\sum_{k=1}^{m}\bigg{(}\frac{\Phi_{1}(u_{s})\cdots\Phi_{m}(u_{s})}{\Phi_{k}(u_{s})}\bigg{)}^{2}(\log\Phi_{k}(u_{s})-\log M)\leq 8\sum_{k=1}^{m}\frac{\log\Phi_{k}(u_{s})}{\Phi_{k}(u_{s})^{2}}\to 0\text{ as }s\to\infty.\]
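The count \(\#\{q\in\mathbb{Z}[i]:|q|_{\infty}=s\}=8s\) used above is easily verified by enumeration, as in the following sketch (ours, illustrative only).

```python
# Empirical confirmation (ours) of the count used in the last computation:
# #{ z in Z[i] : |z|_inf = s } = (2s+1)^2 - (2s-1)^2 = 8s.
for s in range(1, 6):
    count = sum(1 for a in range(-s, s + 1) for b in range(-s, s + 1)
                if max(abs(a), abs(b)) == s)
    assert count == 8 * s
    print(s, count)
```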
Proof of Lemma 7.: It only remains to show the ubiquity condition with respect to \(\rho\). For any \(s\in\mathbb{N}\), the definition of \(\widetilde{J}_{s}\) tells us that
\[\|\mathbf{q}\|_{\infty}\geq\frac{1}{M}\left\|(\Phi_{1}(u_{s}),\ldots,\Phi_{m} (u_{s}))\right\|_{\infty}\quad\left(\alpha=(\mathbf{q},\mathbf{p})\in\widetilde {J}_{s}\right),\]
then
\[\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{\alpha\in J_{s}}\Delta\left(R_{\alpha},\rho\right)\right) =\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{\alpha\in J_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}M\frac{f_{j}(u_{s})}{\|\Phi(u_{s})\|_{\infty}}\right)\right)\geq\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{\alpha\in\widetilde{J}_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}M\frac{f_{j}(u_{s})}{\|\Phi(u_{s})\|_{\infty}}\right)\right)\geq\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{\alpha\in\widetilde{J}_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\sqrt{2}\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\right)\geq\frac{1}{2}\mu_{m\times n}^{\mathbb{C}}(B).\]
Lemma 8 below allows us to impose an additional assumption on \(\varphi\) and \(\Phi\) without losing any generality. We omit its proof, for it is shown as Lemma 6.1 in [57]; the only difference is that in the last step we deal with the series \(\sum_{t}(c_{1}^{-2(d-\varepsilon)})^{t}<\infty\). Call \(W_{\infty}(\varphi,\Phi)\) the set of matrices \(A\in I_{\mathbb{C}}^{m\times n}\) such that, for infinitely many \(u\in\mathbb{N}\), the system
\[\left[\mathbf{q}A_{j}\right]_{\infty} <\varphi_{j}(u) (1\leq j\leq n),\] \[|q_{k}|_{\infty} \leq\Phi_{k}(u) (1\leq k\leq m)\]
has a non-zero solution \(\mathbf{q}\in\mathbb{Z}[i]^{1\times m}\).
**Lemma 8**.: _Under the assumptions of Theorem 9, there are functions \(\widetilde{\varphi}=(\widetilde{\varphi}_{1},\ldots,\widetilde{\varphi}_{n})\) such that \(W_{n,m}^{\mathbb{C}}(\varphi,\Phi)\subseteq W_{n,m}^{\mathbb{C}}(\widetilde{\varphi},\Phi)\), \(\mu_{m\times n}^{\mathbb{C}}\left(W_{n,m}^{\mathbb{C}}(\varphi,\Phi)\right)=\mu_{m\times n}^{\mathbb{C}}\left(W_{n,m}^{\mathbb{C}}(\widetilde{\varphi},\Phi)\right)\), and_
\[\lim_{u\to\infty}\|\Phi(u)\|_{\infty}^{n}\prod_{k=1}^{m}\Phi_{k}(u)\prod_{j=1}^{n}\widetilde{\varphi}_{j}(u)=\infty.\]
**Lemma 9**.: _Theorem 9 holds if we replace \(|\cdot|\) and \([\cdot]\) with \(|\cdot|_{\infty}\) and \([\cdot]_{\infty}\), respectively._
Proof.: First, we verify the hypotheses of Theorem 5.
1. The system \(((R_{\alpha})_{\alpha\in J},\beta)\) is ubiquitous with respect to \(\rho\) and the sequence \((l_{j})_{j\geq 1}\), \((u_{j})_{j\geq 1}\) defined as above.
2. The function \(\Psi=(\psi_{1},\ldots,\psi_{n})\) is \(c\)-regular. Indeed, by (5.1) and the monotonicity of each \(\varphi_{j}\) and each \(\Phi_{k}\), we have \[\psi_{j}(u_{s+1})=\frac{\varphi_{j}(u_{s+1})}{2m\|\Phi(u_{s+1})\|_{\infty}} \leq\frac{\varphi_{j}(u_{s})}{c_{1}2m\|\Phi(u_{s})\|_{\infty}}=\frac{1}{c_{1} }\psi_{j}(u_{s})\quad(s\in\mathbb{N}).\]
3. By (5.7), for \(j\in\{1,\ldots,n\}\) we have \(\rho_{j}(u)\geq\psi_{j}(u)\) for all \(u\in\mathbb{N}\) and Lemma 8 implies \[\lim_{u\to\infty}\rho_{j}(u)=0.\] Combining (5.6), \(\delta_{j}(1-\kappa_{j})=2\) for all \(j\in\{1,\ldots,n\}\), and Theorem 5, the set \(W_{\infty}(\varphi,\Phi)\) is either of full measure or null depending on the divergence or convergence of the series (5.12) \[\sum_{s=0}^{\infty}\left(\prod_{j=1}^{n}\varphi_{j}(u_{s})\prod_{k=1}^{m}\Phi_{k}(u_{s})\right)^{2}.\] Take any \(s\in\mathbb{N}\). Since each \(\varphi_{j}\) is non-increasing, it follows that \[\sum_{u_{s-1}\leq q<u_{s}}\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{2} \leq\sum_{u_{s-1}\leq q<u_{s}}\frac{1}{u_{s-1}}\left(\prod_{j=1}^{n}\varphi_{j}(u_{s-1})\prod_{k=1}^{m}c_{2}\Phi_{k}(u_{s-1})\right)^{2}\quad\text{(by (5.1))}\] \[=\left(\frac{u_{s}}{u_{s-1}}-1\right)c_{2}^{2m}\left(\prod_{j=1}^{n}\varphi_{j}(u_{s-1})\prod_{k=1}^{m}\Phi_{k}(u_{s-1})\right)^{2}=\left(M-1\right)c_{2}^{2m}\left(\prod_{j=1}^{n}\varphi_{j}(u_{s-1})\prod_{k=1}^{m}\Phi_{k}(u_{s-1})\right)^{2};\]
as a consequence,
\[\sum_{q=1}^{\infty}\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{2} =\sum_{s=1}^{\infty}\sum_{u_{s-1}\leq q<u_{s}}\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{2}\leq(M-1)c_{2}^{2m}\sum_{s=1}^{\infty}\left(\prod_{j=1}^{n}\varphi_{j}(u_{s-1})\prod_{k=1}^{m}\Phi_{k}(u_{s-1})\right)^{2}.\]
Similarly, we may see that
\[\sum_{u_{s-1}\leq q<u_{s}}\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{2} \geq\sum_{u_{s-1}\leq q<u_{s}}\frac{1}{u_{s}}\left(\prod_{j=1}^{n}\varphi_{j}(u_{s})\prod_{k=1}^{m}\Phi_{k}(u_{s-1})\right)^{2}\geq\frac{1}{c_{2}^{2m}}\left(1-\frac{1}{M}\right)\left(\prod_{j=1}^{n}\varphi_{j}(u_{s})\prod_{k=1}^{m}\Phi_{k}(u_{s})\right)^{2}.\]
Therefore, the convergence of the series in (5.12) is equivalent to that of
\[\sum_{q=1}^{\infty}\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^ {m}\Phi_{k}(q)\right)^{2}. \tag{5.13}\]
The divergence part of Theorem 9 follows from Theorem 5. In order to prove the convergence part, for each \(u\in\mathbb{N}\), let \(\mathcal{A}(u;\varphi,\Phi)\) be the set of matrices \(\boldsymbol{A}\in I_{\mathbb{C}}^{m\times n}\) for which there is some non-zero \(\mathbf{q}\in\mathbb{Z}[i]^{m}\) satisfying
\[\left[\mathbf{q}\boldsymbol{A}_{j}\right]_{\infty}<\varphi_{j}(u)\quad(1 \leq j\leq n),\]
\[|q_{k}|_{\infty}\leq\Phi_{k}(u)\quad(1\leq k\leq m).\]
If \(\mathbf{q}\in\mathbb{Z}[i]^{m}\), \(\mathbf{q}\neq 0\), verifies \(\left|q_{k}\right|_{\infty}\leq\Phi_{k}(u)\) for \(k\in\{1,\ldots,m\}\), define
\[\mathcal{A}_{\mathbf{q}}(u;\varphi,\Phi):=\left\{A\in I_{\mathbb{C}}^{m\times n }:[\mathbf{q}\boldsymbol{A}_{j}]_{\infty}<\varphi_{j}(u)\quad(1\leq j\leq n)\right\}\]
and, for all \(\mathbf{p}\in\mathbb{Z}[i]^{n}\),
\[\mathcal{A}_{\mathbf{q},\mathbf{p}}(u;\varphi,\Phi):=\left\{A\in I_{\mathbb{ C}}^{m\times n}:|\mathbf{q}\boldsymbol{A}_{j}-p_{j}|_{\infty}<\varphi_{j}(u) \quad(1\leq j\leq n)\right\}.\]
The following estimates hold for some constants depending on \(m\) and \(n\):
\[\#\left\{\mathbf{q}\in\mathbb{Z}[i]^{m}:|q_{k}|_{\infty}\leq\Phi_ {k}(u)\quad(1\leq k\leq m)\right\} \ll_{m,n}\left(\prod_{k=1}^{m}\Phi_{k}(u)\right)^{2},\] \[\#\left\{\mathbf{p}\in\mathbb{Z}[i]^{n}:\mathcal{A}_{\mathbf{q}, \mathbf{p}}(u;\varphi,\Phi)\neq\varnothing\right\} \ll_{m,n}\|\mathbf{q}\|_{\infty}^{2n}\] \[\mu_{m\times n}^{\mathbb{C}}(\mathcal{A}_{\mathbf{q},\mathbf{p}} (u;\varphi,\Phi)) \ll_{m,n}\frac{1}{\|\mathbf{q}\|_{\infty}^{2n}}\left(\prod_{j=1}^{n} \varphi_{j}(u)\right)^{2}.\]
Therefore, we have
\[\mu_{m\times n}^{\mathbb{C}}(\mathcal{A}(u;\varphi,\Phi))\ll_{m,n}\left(\prod_ {j=1}^{n}\varphi_{j}(u)\prod_{k=1}^{m}\Phi_{k}(u)\right)^{2}.\]
Choose \(s\in\mathbb{N}\) such that \(u_{s-1}\leq u<u_{s}\), so \(\varphi(u_{s})\leq\varphi(u)\leq\varphi(u_{s-1})\) and \(\Phi(u_{s-1})\leq\Phi(u)\leq\Phi(u_{s})\). Hence, by (5.1),
\[\mathcal{A}(u;\varphi,\Phi)\subseteq\mathcal{A}(u_{s-1};\varphi,c_{2}\Phi),\]
which implies
\[\limsup_{u\to\infty}\mathcal{A}(u;\varphi,\Phi)\subseteq\limsup_{s\to\infty} \mathcal{A}(u_{s};\varphi,c_{2}\Phi).\]
Finally, since the series in (5.12) converges if and only if the series in (5.13) converges, the result follows from the Borel-Cantelli lemma.
Proof of Theorem 9.: Since \(|z|_{\infty}\leq|z|\leq\sqrt{2}|z|_{\infty}\) for all \(z\in\mathbb{C}\), we have
\[W_{\infty}(\varphi,\Phi)\subseteq W(\varphi,\Phi)\subseteq W_{\infty}(\sqrt{ 2}\,\varphi,\sqrt{2}\,\Phi).\]
When the series in (5.13) converges, the set \(W_{\infty}(\sqrt{2}\,\varphi,\sqrt{2}\,\Phi)\) is null and \(W(\varphi,\Phi)\) is also null. The divergence of the series implies the full measure of \(W_{\infty}(\varphi,\Phi)\) and, hence, the full measure of \(W(\varphi,\Phi)\).
### Proof of Theorem 10
Consider again the norm \(\|\cdot\|_{\infty}\). Pick \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}^{n}\) satisfying (5.2). Let \(\Phi=(\Phi_{1},\ldots,\Phi_{m})\) be determined by
\[\Phi_{k}(q)=q\quad(1\leq k\leq m,\,q\in\mathbb{N})\]
and let \(\varphi=(\varphi_{1},\ldots,\varphi_{n})\) be given by
\[\varphi_{j}(q)=\frac{1}{q^{\tau_{j}-1}}\quad(1\leq j\leq n,\,q\in\mathbb{N}).\]
If \(\eta=\frac{1}{2}(\tau_{1}+\cdots+\tau_{n}-n-m)>0\), we have for all \(q\in\mathbb{N}\)
\[\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{2}=\frac{1}{q^{1+4\eta}},\]
and Theorem 9 implies \(\mu_{m\times n}^{\mathbb{C}}(W(\varphi,\Phi))=0\). It is trivial to choose \(\tau=(\tau_{1},\ldots,\tau_{n})\) such that for some \(j\in\{1,\ldots,n\}\) we have
\[\tau_{1}+\cdots+\tau_{n}>n\tau_{j}+n+m;\]
(for example, \(\tau_{1}=\cdots=\tau_{n-1}=3(n+m)\) and \(\tau_{n}=2\)). For such \(\tau\) and \(j\), the function \(\rho_{j}\) defined as in the previous section does not converge to \(0\) when its argument tends to \(\infty\). However, we may find a suitable \(n\)-tuple of positive functions \(\tilde{\rho}=(\tilde{\rho}_{1},\ldots,\tilde{\rho}_{n})\) and a sequence \((\tilde{l}_{s})_{s\geq 1}\) such that, for a sufficiently large \(M\), the system \(((R_{\alpha})_{\alpha\in J},\beta)\) is ubiquitous with respect to \(\tilde{\rho}\) and \((\tilde{l}_{s})_{s\geq 1}\), \((u_{s})_{s\geq 1}\).
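The following quick check (ours, illustrative only) confirms that this sample choice of \(\tau\) satisfies the strict inequality above for a few pairs \((n,m)\) with \(n\geq 2\).

```python
# Numeric check (ours) that tau_1 = ... = tau_{n-1} = 3(n+m), tau_n = 2 gives
# tau_1 + ... + tau_n > n*tau_j + n + m for j = n, for a few pairs (n, m).
for n, m in ((2, 1), (3, 2), (4, 5)):
    tau = [3 * (n + m)] * (n - 1) + [2]
    assert sum(tau) > n * tau[-1] + n + m
    print(n, m, sum(tau), n * tau[-1] + n + m)
```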
Let \(M\in\mathbb{N}\) be large (we determine how large \(M\) should be below) and take \(J\), \(R_{\alpha}\) for \(\alpha\in J\), \(\beta\), and \((u_{s})_{s\geq 1}\) as in Lemma 7. Call \((\tilde{l}_{s})_{s\geq 1}\) the sequence given by \(\tilde{l}_{s}=M^{s-1}\) for all \(s\in\mathbb{N}\). In this context, we have
\[\beta_{\alpha}=\|\mathbf{q}\|_{\infty}\quad(\alpha=(\mathbf{q},\mathbf{p}) \in J).\]
Given any \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{R}^{n}\) such that
\[\min_{1\leq j\leq n}a_{j}\geq 1\;\;\text{and}\;\;\sum_{j=1}^{n}a_{j}=m+n, \tag{5.14}\]
define \(\tilde{\rho}=(\tilde{\rho}_{1},\ldots,\tilde{\rho}_{n}):\mathbb{N}\to\mathbb{R} ^{n}\) by
\[\tilde{\rho}_{j}(u)=\sqrt{2}\frac{1}{u^{a_{j}-1}}\quad(1\leq j\leq n,\,u\in \mathbb{N}).\]
**Lemma 10**.: _For any \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{R}^{n}\) satisfying (5.14), the system \(((R_{\alpha})_{\alpha\in J},\beta)\) is ubiquitous with respect to \(\tilde{\rho}\) and \((\tilde{l}_{s})_{s\geq 1}\), \((u_{s})_{s\geq 1}\)._
We omit several steps in the forthcoming proof, for it resembles that of Lemma 7.
Proof.: We may show, as we did with (5.8), that for every \(A\in I_{\mathbb{C}}^{m\times n}\) and any \(s\in\mathbb{N}\) there is some \(\alpha=(\mathbf{q},\mathbf{p})\in J\) such that \(1\leq\|\mathbf{q}\|_{\infty}\leq u_{s}\) and
\[A\in\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\frac{\sqrt{2}}{\|\mathbf{q}\|_{ \infty}u_{s}^{a_{j}-1}}\right).\]
Similar to (5.10), we may choose a sufficiently large \(M\) for which any ball \(B=\prod_{j=1}^{n}B_{j}\subseteq X\) and any large \(s\in\mathbb{N}\) (depending on the radius of \(B\)) satisfy
\[\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{\begin{subarray}{c}\alpha=(\mathbf{q},\mathbf{p})\in J\\ \beta_{\alpha}<\tilde{l}_{s}\end{subarray}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\frac{\sqrt{2}}{\|\mathbf{q}\|_{\infty}u_{s}^{a_{j}-1}}\right)\right) =\mu_{m\times n}^{\mathbb{C}}\left(\bigcup_{q=1}^{\tilde{l}_{s}-1}\bigcup_{\begin{subarray}{c}\alpha\in J\\ \beta_{\alpha}=q\end{subarray}}\prod_{j=1}^{n}B_{j}\cap\Delta\left(R_{\alpha,j},\frac{\sqrt{2}}{\|\mathbf{q}\|_{\infty}u_{s}^{a_{j}-1}}\right)\right)\leq\frac{1}{2}\mu_{m\times n}^{\mathbb{C}}(B),\]
which implies
\[\mu_{m\times n}^{\mathbb{C}}\left(B\cap\bigcup_{\begin{subarray}{c}\alpha=(\mathbf{q},\mathbf{p})\in J\\ \tilde{l}_{s}\leq\beta_{\alpha}\leq u_{s}\end{subarray}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\frac{\sqrt{2}}{u_{s}^{a_{j}-1}}\right)\right)\geq\frac{1}{2}\mu_{m\times n}^{\mathbb{C}}(B).\]
For any \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}^{n}\) satisfying (5.2), define the sets
\[W_{\infty}^{\mathbb{C}}(\tau):=\left\{A\in I_{\mathbb{C}}^{m\times n}:\left[ \mathbf{q}A_{j}\right]_{\infty}<\frac{1}{\|\mathbf{q}\|_{\infty}^{\tau_{j}-1}} \quad(1\leq j\leq n)\text{ for i. m. }\mathbf{q}\in\mathbb{Z}[i]^{m}\right\}\]
and
\[\widetilde{W}_{\infty}^{\mathbb{C}}(\gamma,\tau):=\bigcap_{Q\in\mathbb{N}} \bigcup_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}[i]^{m}\\ \|\mathbf{q}\|_{\infty}=Q\end{subarray}}\bigcup_{\begin{subarray}{c}\mathbf{p }\in\mathbb{Z}[i]^{n}\\ \alpha=(\mathbf{q},\mathbf{p})\in J\end{subarray}}\prod_{j=1}^{n}\Delta\left(R_ {\alpha,j},\frac{\gamma}{\|\mathbf{q}\|_{\infty}^{\tau_{j}}}\right)\quad( \gamma>0).\]
Clearly, when we consider \(\|\cdot\|_{\infty}\) and \(|\cdot|_{\infty}\) and the function \(\Psi=(\psi_{1},\ldots,\psi_{n})\) given by \(\psi_{j}(u)=\gamma u^{-\tau_{j}}\), the set \(\widetilde{W}_{\infty}^{\mathbb{C}}(\gamma,\tau)\) is precisely \(W_{\infty}^{I_{\mathbb{C}}^{m\times n}}(\Psi)\) as defined in Section 2.
**Lemma 11**.: _If \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) satisfies (5.2) and \(\gamma>0\), then_
\[\dim_{\mathrm{H}}\widetilde{W}_{\infty}(\gamma,\tau)=\min\{s_{1}(\tau),\ldots, s_{n}(\tau)\}.\]
Proof.: Assume first that \(\gamma=1\).
**Upper bound.** First, for all \(Q\in\mathbb{N}\) we have
\[\#\{\mathbf{q}\in\mathbb{Z}[i]^{m}:\|\mathbf{q}\|_{\infty}=Q\}\asymp_{m}Q^{2m -1},\]
because
\[\#\left\{\mathbf{q}\in\mathbb{Z}[i]^{m}:\|\mathbf{q}\|_{\infty} =Q\right\} =\#\left\{\mathbf{q}\in\mathbb{Z}^{2m}:\|\mathbf{q}\|_{\infty}=Q\right\}\] \[=\#\left\{\mathbf{q}\in\mathbb{Z}^{2m}:\|\mathbf{q}\|_{\infty} \leq Q\right\}-\#\left\{\mathbf{q}\in\mathbb{Z}^{2m}:\|\mathbf{q}\|_{\infty} \leq Q-1\right\}\] \[=(2Q+1)^{2m}-(2Q-1)^{2m}\] \[=2\sum_{j=0}^{2m-1}(2Q+1)^{2m-1-j}(2Q-1)^{j},\]
which is a polynomial of degree \(2m-1\) in \(Q\). For all \(j,l\in\{1,\ldots,n\}\) the number of balls of radius \(\|\mathbf{q}\|_{\infty}^{-\tau_{j}}\) required to cover \(\triangle(R_{\alpha,l},\|\mathbf{q}\|_{\infty}^{-\tau_{l}})\) is asymptotically equivalent to
\[\left(\max\left\{1,\frac{\|\mathbf{q}\|_{\infty}^{-\tau_{l}}}{\|\mathbf{q}\|_{ \infty}^{-\tau_{j}}}\right\}\|\mathbf{q}\|_{\infty}^{\tau_{j}(m-1)}\right)^{2}.\]
Hence, for all \(s>0\), we have
\[\mathcal{H}^{s}\left(\widetilde{W}_{\infty}(1,\tau)\right) \ll\liminf_{N\to\infty}\sum_{Q\geq N}Q^{2m-1}Q^{2n}Q^{-s\tau_{j}} Q^{2\tau_{j}n(m-1)}Q^{2\sum\limits_{\tau_{l}\leq\tau_{j}}\tau_{j}-\tau_{l}}\] \[=\liminf_{N\to\infty}\sum_{Q\geq N}Q^{2m+2n-1+2\tau_{j}n(m-1)+2 \sum\limits_{\tau_{l}\leq\tau_{j}}\tau_{j}-\tau_{l}}Q^{-s\tau_{j}}.\]
The last series above converges if and only if the exponent of \(Q\) is strictly less than \(-1\), or equivalently if \(s>s_{j}(\tau)\); hence \(\dim_{\mathrm{H}}\widetilde{W}_{\infty}(1,\tau)\leq\min\{s_{1}(\tau),\ldots,s_{n}(\tau)\}\).
**Lower bound.** Let \(J\), \((R_{\alpha})_{\alpha\in J}\), \(\beta\), \((\tilde{l}_{s})_{s\geq 1}\) and \((u_{s})_{s\geq 1}\) be as in Lemma 10. We only establish the setup; the computations are quite similar to the \(p\)-adic case. Suppose without loss of generality that \(\tau_{1}\geq\tau_{2}\geq\ldots\geq\tau_{n}>1\) and recall that \(\delta_{k}=2m\) for \(k\in\{1,\ldots,n\}\). First, assume that \(\tau_{n}\geq\frac{n+m}{n}\). For \(j\in\{1,\ldots,n\}\) define
\[a_{j}:=\frac{n+m}{n}\;\;\text{and}\;\;t_{j}:=\tau_{j}-a_{j}.\]
Then, the order of \(\mathcal{A}\) is
\[a_{1}+t_{1}\geq a_{2}+t_{2}\geq\ldots\geq a_{n}+t_{n}\geq a_{1}=\ldots=a_{n}.\]
Suppose that \(\tau_{n}<\frac{n+m}{n}\) and let \(K\in\{1,\ldots,n\}\) be the largest integer such that
\[\tau_{K}>\frac{m+n-(\tau_{K+1}+\cdots+\tau_{n})}{K}.\]
For each \(j\in\{1,\ldots,n\}\), write
\[a_{j}:=\begin{cases}\tau_{j},&(K+1\leq j\leq n),\\ \frac{m+n-(\tau_{K+1}+\cdots+\tau_{n})}{K},&(1\leq j\leq K).\end{cases}\]
Then, \(\mathcal{A}\) is ordered as follows:
\[a_{1}+t_{1}\geq a_{2}+t_{2}\geq\ldots\geq a_{K}+t_{K}>a_{1}=\ldots=a_{K}>a_{K+ 1}=a_{K+1}+t_{K+1}\geq\ldots\geq a_{n}=a_{n}+t_{n}.\]
By Theorem 6, \(\dim_{\mathrm{H}}\widetilde{W}_{\infty}(1,\tau)=\min\{s_{1}(\tau),\ldots,s_{n }(\tau)\}\).
Now assume that \(\gamma>0\) is arbitrary and take \(0<\varepsilon<\min_{1\leq j\leq n}\tau_{j}-1\). Since \(\|\mathbf{q}\|_{\infty}^{\varepsilon}>\gamma\) for all but finitely many \(\mathbf{q}\in\mathbb{Z}[i]^{m}\), we have
\[\widetilde{W}_{\infty}(1,\tau_{1}+\varepsilon,\ldots,\tau_{n}+\varepsilon)\subseteq\widetilde{W}_{\infty}(\gamma,\tau)\subseteq\widetilde{W}_{\infty}(1,\tau_{1}-\varepsilon,\ldots,\tau_{n}-\varepsilon).\]
The lemma follows by letting \(\varepsilon\to 0\), because \(\tau\mapsto\dim_{\mathrm{H}}\widetilde{W}_{\infty}(1,\tau)\) is continuous.
**Lemma 12**.: _If \(\tau\) satisfies (5.2), then_
\[\dim_{\mathrm{H}}W_{\infty}(\tau)=\min\{s_{1}(\tau),\ldots,s_{n}(\tau)\}.\]
Proof.: Arguing as in the proof of (5.8) and using (5.5), we obtain
\[\widetilde{W}_{\infty}\big{(}(2m)^{-1},\tau\big{)}\subseteq W_{\infty}(\tau) \subseteq\widetilde{W}_{\infty}\Big{(}\sqrt{2},\tau\Big{)}\,.\]
The result now follows from Lemma 11.
In a similar fashion, we may show that the corresponding set obtained from the usual complex absolute value \(|\cdot|\) has the same Hausdorff dimension:
\[\dim_{\mathrm{H}}W_{n,m}^{\mathbb{C}}(\tau)=\min\{s_{1}(\tau),\ldots,s_{n}( \tau)\}. \tag{5.15}\]
## 6. Quaternion approximation
In this section we study sets of linear forms in quaternion space. We refer the reader to [38, Section 20] for the elementary algebraic aspects of quaternions and Hurwitz integers, to [22] for a beautiful account of metrical Diophantine approximation for quaternions, and to [19] for an overview of the theory. It is worth stressing that no metrical results whatsoever are known in higher (quaternionic) dimensions, other than what is presented below.
Fix two natural numbers \(m\) and \(n\). Let \(i,j\) be two symbols and define the operations
\[i^{2}=-1,\,j^{2}=-1,\,ij=-ji \tag{6.1}\]
We write \(k:=ij\). The _ring of quaternions_\(\mathbb{H}\) is the skew field whose elements are the objects \(a+bi+cj+dk\) with \(a,b,c,d\in\mathbb{R}\) along with the sum defined coordinate-wise and the product obtained by following the usual rules and (6.1). Given any \(\xi=a+bi+cj+dk\in\mathbb{H}\), its _conjugate_\(\overline{\xi}\) is
\[\overline{\xi}=a-bi-cj-dk\]
and its _norm_\(|\xi|\) is
\[|\xi|=\sqrt{a^{2}+b^{2}+c^{2}+d^{2}},\]
so \(|\xi|=\sqrt{\xi\overline{\xi}}\). The definition of the product implies that for any \(\xi,\zeta\in\mathbb{H}\) we have \(\overline{\xi\zeta}=\overline{\zeta}\,\overline{\xi}\) and, hence, \(|\xi\zeta|=|\xi||\zeta|\).
When regarded as a real vector space, \(\mathbb{H}\) is isomorphic to \(\mathbb{R}^{4}\) and an isomorphism is determined by
\[1\mapsto(1,0,0,0),\,i\mapsto(0,1,0,0),\,j\mapsto(0,0,1,0),\,k\mapsto(0,0,0,1).\]
Under this identification between \(\mathbb{H}\) and \(\mathbb{R}^{4}\), the real bi-linear map \(\langle\cdot,\cdot\rangle:\mathbb{H}\times\mathbb{H}\to\mathbb{R}\) given by
\[\langle\zeta,\xi\rangle:=\frac{1}{2}\left(\overline{\zeta}\xi+\overline{\xi}\zeta\right)\quad(\zeta,\xi\in\mathbb{H})\]
is the usual inner product. As a consequence, the function from \(\mathbb{H}^{n}\times\mathbb{H}^{n}\) to \(\mathbb{R}\) given by
\[\langle(\xi_{1},\ldots,\xi_{n}),(\zeta_{1},\ldots,\zeta_{n})\rangle_{\mathbb{H}^{n}}:=\sum_{j=1}^{n}\langle\xi_{j},\zeta_{j}\rangle_{\mathbb{H}}\]
for all \((\xi_{1},\ldots,\xi_{n})\), \((\zeta_{1},\ldots,\zeta_{n})\in\mathbb{H}^{n}\) is the usual inner product on \(\mathbb{R}^{4n}\).
The set of _Lipschitz integers_, given by
\[\{a+bi+cj+dk:a,b,c,d\in\mathbb{Z}\},\]
is the obvious choice for integers in \(\mathbb{H}\). However, it is customary to work instead with the _Hurwitz integers_\(\mathbb{Z}_{\mathbb{H}}\), defined as
\[\mathbb{Z}_{\mathbb{H}}:=\left\{\frac{a}{2}+\frac{b}{2}i+\frac{c}{2}j+\frac{d }{2}k:a,b,c,d\in\mathbb{Z}\text{ and }a\equiv b\equiv c\equiv d\pmod{2}\right\}.\]
Clearly, the Lipschitz integers are contained in \(\mathbb{Z}_{\mathbb{H}}\). The reason Hurwitz integers are preferred over Lipschitz integers is of an algebraic kind: Hurwitz integers are a Euclidean domain while Lipschitz integers are not [19, Chapter 5]. Hurwitz integers have \(24\) invertible elements or _units_:
\[\pm 1,\,\pm i,\,\pm j,\,\pm k,\,\tfrac{1}{2}(\pm 1\pm i\pm j\pm k).\]
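This count of units is easy to confirm by enumeration: a Hurwitz integer \(\frac{a}{2}+\frac{b}{2}i+\frac{c}{2}j+\frac{d}{2}k\) is a unit precisely when its norm is \(1\), i.e. when \(a^{2}+b^{2}+c^{2}+d^{2}=4\). The following sketch (ours, illustrative only) performs the enumeration.

```python
# Enumeration sketch (ours) confirming that Z_H has exactly 24 elements of
# norm 1, namely the units listed above. A Hurwitz integer is
# (a + bi + cj + dk)/2 with a, b, c, d integers of equal parity.
from itertools import product

units = [(a / 2, b / 2, c / 2, d / 2)
         for a, b, c, d in product(range(-2, 3), repeat=4)
         if len({x % 2 for x in (a, b, c, d)}) == 1   # equal parity
         and a * a + b * b + c * c + d * d == 4]      # norm 1 after the /2
print(len(units))  # 24
```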
As noted in [22], \(\mathbb{Z}_{\mathbb{H}}\) is the additive subgroup of \(\mathbb{H}\) generated by \(i\), \(j\), \(k\), \(\frac{1+i+j+k}{2}\) and, as a sub-lattice of \(\mathbb{R}^{4}\), the determinant of \(\mathbb{Z}_{\mathbb{H}}\) is \(\frac{1}{2}\). Let \(I_{\mathbb{H}}\) be the Voronoi region for \(\mathbb{Z}_{\mathbb{H}}\) containing \(0\), that is
\[I_{\mathbb{H}}:=\{\xi\in\mathbb{H}:|\xi|\leq|\xi-\zeta|\text{ for all }\zeta\in \mathbb{Z}_{\mathbb{H}}\}.\]
If \([\xi]_{\mathbb{H}}\) denotes the distance between \(\xi\in\mathbb{H}\) and its nearest Hurwitz integer, then \(I_{\mathbb{H}}\) is precisely the set of quaternions \(\xi\) for which \([\xi]_{\mathbb{H}}=|\xi|\). Observe that \(I_{\mathbb{H}}\) is the convex hull of \((\pm\frac{1}{2},\pm\frac{1}{2}i,\pm\frac{1}{2}j,\pm\frac{1}{2}k)\) and that its Lebesgue measure is \(\frac{1}{2}\). Call \(I_{\mathbb{H}}^{m\times n}\) the set of \(m\times n\) matrices \(A\) with entries in \(I_{\mathbb{H}}\).
Given an \(n\)-tuple of non-increasing positive functions \(\varphi=(\varphi_{1},\ldots,\varphi_{n})\colon\mathbb{N}\to\mathbb{R}^{n}\) such that
\[\lim_{q\to\infty}\varphi_{j}(q)=0\quad(1\leq j\leq n)\]
and an \(m\)-tuple of non-decreasing positive functions \(\Phi=(\Phi_{1},\ldots,\Phi_{m})\colon\mathbb{N}\to\mathbb{R}^{m}\) such that
\[\lim_{q\to\infty}\Phi_{k}(q)=\infty\quad(1\leq k\leq m),\]
we call \(W_{n,m}^{\mathbb{H}}(\varphi,\Phi)\subseteq I_{\mathbb{H}}^{m\times n}\) the set of \(m\times n\) matrices \(A\) with the following property:
there are infinitely many integers \(u\geq 1\) such that the system
\[\left[\mathbf{q}A_{j}\right] <\varphi_{j}(u)\quad(1\leq j\leq n)\] \[|q_{k}| \leq\Phi_{k}(u)\quad(1\leq k\leq m)\] has a non-zero solution \[\mathbf{q}\in\mathbb{Z}_{\mathbb{H}}^{1\times m}.\]
We denote by \(\mu_{m\times n}^{\mathbb{H}}\) the Lebesgue measure on \(\mathbb{H}^{m\times n}\).
**Theorem 11**.: _If there are some constants \(N_{0},M\in\mathbb{N}\), \(M\geq 2\), and \(c_{1},c_{2}>1\) such that for every \(j\in\mathbb{N}_{\geq N_{0}}\) we have_
\[c_{1}\Phi_{k}(M^{j})\leq\Phi_{k}(M^{j+1})\leq c_{2}\Phi_{k}(M^{j})\quad(1\leq k \leq m), \tag{6.2}\]
_then,_
\[\mu_{m\times n}^{\mathbb{H}}\left(W_{n,m}^{\mathbb{H}}(\varphi,\Phi)\right)= \begin{cases}0,&\text{if}\quad\sum_{q=1}^{\infty}\frac{1}{q}\left( \prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{4}<\infty,\\ \mu_{m\times n}^{\mathbb{H}}(I_{\mathbb{H}}^{m\times n}),&\text{if}\quad\sum _{q=1}^{\infty}\frac{1}{q}\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m} \Phi_{k}(q)\right)^{4}=\infty.\end{cases}\]
In [22], Dodson and Everitt solved the one dimensional case using the theory of ubiquitous systems of balls as introduced in [9]. More precisely, given a non-increasing function \(\psi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that \(\psi(t)=\psi(\lfloor t\rfloor)\) for all \(t>0\), they proved that the set
\[V(\psi):=\left\{\xi\in I_{\mathbb{H}}:|\xi q-p|<|q|\psi(|q|)\text{ for i.m. }p,q\in\mathbb{Z}_{\mathbb{H}}\right\}\]
is of either zero or full measure according to the convergence or divergence of
\[\sum_{q=1}^{\infty}q^{7}\psi(q)^{4}.\]
When \(m=n=1\), taking \(\Phi_{1}\) to be the identity map and \(\varphi\) decreasing, Theorem 11 tells us that the set
\[\left\{\xi\in I_{\mathbb{H}}:|q\,\xi-p|<\varphi(|q|)\text{ for i.m. }p,q\in \mathbb{Z}_{\mathbb{H}}\right\}\]
is of zero or full measure according to the convergence or divergence of
\[\sum_{q=1}^{\infty}q^{3}\varphi(q)^{4}.\]
Although quaternions are not commutative, the proof of Theorem 11 also allows us to conclude the 0-1 dichotomy for the set
\[\left\{\xi\in I_{\mathbb{H}}:|\xi q-p|<\varphi(|q|)\text{ for i.m. }p,q\in\mathbb{Z}_{\mathbb{H}}\right\}.\]
The result of Dodson and Everitt now follows.
Let \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}^{n}\) be any vector for which (5.2) holds. Define
\[s_{j}(\boldsymbol{\tau}):=4n(m-1)+4\,\frac{m+n-\sum\limits_{r:\,\tau_{r}<\tau_ {j}}(\tau_{r}-\tau_{j})}{\tau_{j}}\quad(1\leq j\leq n)\]
and
\[W^{\mathbb{H}}_{n,m}(\boldsymbol{\tau}):=\left\{A\in I^{m\times n}_{\mathbb{H }}:\left[\boldsymbol{q}A_{j}\right]<\frac{1}{\|\boldsymbol{q}\|^{\tau_{j}-1}} \,(1\leq j\leq n)\text{ for i. m. }\boldsymbol{q}\in\mathbb{Z}^{m}_{\mathbb{H}} \right\}.\]
**Theorem 12**.: _If \(\tau\in\mathbb{R}^{n}\) satisfies (5.2), then_
\[\dim_{\mathrm{H}}W^{\mathbb{H}}_{n,m}(\boldsymbol{\tau})=\min\{s_{1}(\boldsymbol{\tau}),\ldots,s_{n}(\boldsymbol{\tau})\}.\]
_Moreover, \(\mathscr{H}^{s}\left(W^{\mathbb{H}}_{n,m}(\boldsymbol{\tau})\right)=\infty\) for \(s=\min\{s_{1}(\boldsymbol{\tau}),\ldots,s_{n}(\boldsymbol{\tau})\}\)._
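For concreteness, the small helper below (ours, purely illustrative) evaluates the exponents \(s_{j}(\boldsymbol{\tau})\) from the display above and the dimension predicted by Theorem 12 for a sample \(\boldsymbol{\tau}\).

```python
# Illustrative helper (ours) evaluating the exponents s_j(tau) defined above
# and the value min{s_1,...,s_n} predicted by Theorem 12 for W_{n,m}^H(tau).
def s_values(tau, m):
    n = len(tau)
    return [4 * n * (m - 1)
            + 4 * (m + n - sum(tr - tj for tr in tau if tr < tj)) / tj
            for tj in tau]

tau = (3.0, 2.0)        # n = 2, m = 1: min tau_j > 1 and sum tau_j >= n + m
vals = s_values(tau, m=1)
print(vals, min(vals))  # [5.333..., 6.0] 5.333...
```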
Theorems 11 and 12 can be proven in the same way as their complex counterparts. We thus omit a considerable amount of detail in their proofs. First, as before, we replace \(|\cdot|\) with the more manageable norm \(|\cdot|_{\infty}\) given by
\[|\xi|_{\infty}:=\max\{|a|,|b|,|c|,|d|\}\quad(\xi=a+bi+cj+dk\in\mathbb{H})\]
and \([\cdot]_{\mathbb{H}}\) with the function \([\cdot]_{\infty}:\mathbb{H}\to\mathbb{R}_{+}\) given by
\[[\xi]_{\infty}:=\min\{|\xi-\zeta|_{\infty}:\zeta\in\mathbb{Z}_{\mathbb{H}}\}\quad(\xi\in\mathbb{H}).\]
The quaternion version of Minkowski's theorem for linear forms, Lemma 13 below, follows from Theorem 4 and \(\det(\mathbb{Z}_{\mathbb{H}})=\frac{1}{2}\).
**Lemma 13**.: _Let \(\gamma_{1},\ldots,\gamma_{n},\theta_{1},\ldots,\theta_{m}\) be positive numbers satisfying \(\prod_{j=1}^{n}\gamma_{j}\prod_{k=1}^{m}\theta_{k}\geq\frac{1}{2^{m+n}}\). For every matrix \(A\in\mathbb{H}^{m\times n}\) there exists a vector \((\mathbf{q},\mathbf{p})\in\mathbb{Z}_{\mathbb{H}}^{1\times(m+n)}\) with \(\mathbf{q}\neq 0\) such that_
\[|\mathbf{q}\,A_{t}-p_{t}|_{\infty} <\gamma_{t} (1\leq t\leq n), \tag{6.3}\] \[|q_{r}|_{\infty} \leq\theta_{r} (1\leq r\leq m).\]
### Ubiquity for quaternions
Given \(k\in\mathbb{N}\), for any \(\xi=(\xi_{1},\ldots,\xi_{k})\in\mathbb{H}^{k}\) define
\[\|\xi\|_{\infty}:=\max\left\{|\xi_{1}|_{\infty},\ldots,|\xi_{k}|_{\infty}\right\}.\]
Hence, if \(\|\xi\|_{2}=\sqrt{|\xi_{1}|^{2}+\cdots+|\xi_{k}|^{2}}\), we have
\[\frac{1}{2\sqrt{k}}\|\xi\|_{2}\leq\|\xi\|_{\infty}\leq\|\xi\|_{2}. \tag{6.4}\]
Consider a sufficiently large \(M\) and the next objects:
1. \(J:=\left\{\alpha=(\mathbf{q},\mathbf{p})\in\mathbb{Z}_{\mathbb{H}}^{m+n}:\| \mathbf{p}\|_{\infty}\leq 8m\|\mathbf{q}\|_{\infty}\right\}\),
2. \(\beta:J\to\mathbb{R}_{+}\), \(\alpha=(\mathbf{q},\mathbf{p})\mapsto\beta_{\alpha}:=\max\left\{\Phi_{1}^{-1}(| q_{1}|_{\infty}),\ldots,\Phi_{m}^{-1}(|q_{m}|_{\infty})\right\}\),
3. \((u_{s})_{s\geq 1}\) given by \(u_{s}=M^{s}\) for all \(s\in\mathbb{N}\),
4. For each \(\alpha\in J\), write \(R_{\alpha,t}:=\left\{A_{t}\in I_{\mathbb{H}}^{m\times 1}:\mathbf{q}\,A_{t}=p_{t}\right\}\) for \(t\in\{1,\ldots,n\}\) and the resonant set \(R_{\alpha}\) is \(R_{\alpha}:=\prod_{t=1}^{n}R_{\alpha,t}\),
5. In this context, \(\kappa_{t}=1-\frac{1}{m}\) and \(\delta_{t}=4m\) for \(t\in\{1,\ldots,n\}\).
Let \(\rho=(\rho_{1},\ldots,\rho_{n}):\mathbb{N}\to\mathbb{R}_{+}^{n}\) be given by
\[\rho_{j}(u):=2\,M\frac{\varphi_{j}(u)}{\|\Phi(u)\|_{\infty}}\left(\prod_{s=1} ^{n}\varphi_{s}(u)\prod_{k=1}^{m}\Phi_{k}(u)\right)^{-1/n}\qquad(1\leq j\leq n,\,u\in\mathbb{N})\]
and \(\Psi=(\psi_{1},\ldots,\psi_{n}):\mathbb{N}\to\mathbb{R}_{+}^{n}\) by
\[\psi_{j}(u):=\frac{\varphi_{j}(u)}{\|\Phi(u)\|_{\infty}}\quad(1\leq j\leq n,\, u\in\mathbb{N}).\]
Recall that, by Lemma 2, we can replace \(\Psi\) with any of its multiples without altering the measure. In view of Lemma 13, we may assume
\[\prod_{t=1}^{n}\varphi_{t}(u)\prod_{r=1}^{m}\Phi_{r}(u)\leq\frac{1}{2^{m+n}} \quad(u\in\mathbb{N}).\]
In order to prove Theorem 12, take \(J\), \(R_{\alpha}\) for \(\alpha\in J\), \(\beta\), and \((u_{s})_{s\geq 1}\) as above, pick a sufficiently large \(M\) and put \(l_{s}=M^{s-1}\) for all \(s\in\mathbb{N}\). Given \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{R}^{n}\) satisfying (5.14), define \(\tilde{\rho}=(\tilde{\rho}_{1},\ldots,\tilde{\rho}_{n}):\mathbb{N}\to\mathbb{R} ^{n}\) by
\[\tilde{\rho}_{j}(u)=\frac{2}{u^{a_{j}-1}}\quad(1\leq j\leq n,\,u\in\mathbb{N}).\]
**Lemma 14**.: _The system \(((R_{\alpha})_{\alpha\in J},\beta)\) is ubiquitous with respect to \(\rho\) and \((l_{s})_{s\geq 1}\), \((u_{s})_{s\geq 1}\). The same system is also ubiquitous with respect to \(\tilde{\rho}\) and \((\tilde{l}_{s})_{s\geq 1}\), \((u_{s})_{s\geq 1}\) for a suitable \((\tilde{l}_{s})_{s\geq 1}\)._
The proof of Lemma 14 follows closely the proofs of Lemmas 7 and 10. We leave the details to the reader.
### Proof of Theorem 11
Condition (6.2) and the Cauchy condensation test imply that the following two series are either both convergent or both divergent:
\[\sum_{q=1}^{\infty}\frac{1}{q}\left(\prod_{t=1}^{n}\varphi_{t}(q)\prod_{r=1}^{m }\Phi_{r}(q)\right)^{4}\quad\text{and}\quad\sum_{s=0}^{\infty}\left(\prod_{t=1 }^{n}\varphi_{t}(u_{s})\prod_{r=1}^{m}\Phi_{r}(u_{s})\right)^{4}.\]
Hence, the divergence case is a consequence of Lemma 14 and Theorem 5. In order to prove the convergence case, for each \(u\in\mathbb{N}\), let \(\mathcal{A}^{\mathbb{H}}(u;\varphi,\Phi)\) be the collection of matrices \(A\in I_{\mathbb{H}}^{m\times n}\) such that for some non-zero \(\mathbf{q}\in\mathbb{Z}_{\mathbb{H}}^{1\times m}\) we have
\[[\mathbf{q}A_{t}]_{\infty}<\varphi_{t}(u), (1\leq t\leq n),\] \[|q_{r}|_{\infty}\leq\Phi_{r}(u), (1\leq r\leq m).\]
Therefore,
\[\mu_{m\times n}^{\mathbb{H}}\left(\mathcal{A}^{\mathbb{H}}(u;\varphi,\Phi) \right)\ll_{m,n}\left(\prod_{t=1}^{n}\varphi_{t}(u)\prod_{r=1}^{m}\Phi_{r}(u) \right)^{4}\]
and, as in the proof of Theorem 9,
\[\limsup_{u\to\infty}\mathcal{A}^{\mathbb{H}}(u;\varphi,\Phi)\subseteq\limsup_{s\to\infty}\mathcal{A}^{\mathbb{H}}(u_{s};\varphi,c_{2}\Phi).\]
Finally, the Borel-Cantelli lemma gives the theorem.
### Proof of Theorem 12
As in the complex setting, for any \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}^{n}\) such that (5.2) holds, define the sets
\[W_{\infty}^{\mathbb{H}}(\boldsymbol{\tau}):=\left\{A\in I_{\mathbb{H}}^{m \times n}:\left[\mathbf{q}A_{t}\right]_{\infty}<\frac{1}{\|\mathbf{q}\|_{ \infty}^{\tau_{t}-1}}\quad(1\leq t\leq n),\ \text{for i. m.}\ \mathbf{q}\in\mathbb{Z}_{ \mathbb{H}}^{m}\right\}\]
and, for any \(\gamma>0\),
\[\widehat{W}_{\infty}^{\mathbb{H}}(\gamma,\boldsymbol{\tau}):=\bigcap_{Q\in\frac{1}{2}\mathbb{N}}\bigcup_{\begin{subarray}{c}\mathbf{q}\in\mathbb{Z}_{\mathbb{H}}^{m}\\ \|\mathbf{q}\|_{\infty}=Q\end{subarray}}\bigcup_{\begin{subarray}{c}\mathbf{p}\in\mathbb{Z}_{\mathbb{H}}^{n}\\ \alpha=(\mathbf{q},\mathbf{p})\in J\end{subarray}}\prod_{t=1}^{n}\Delta\left(R_{\alpha,t},\frac{\gamma}{\|\mathbf{q}\|_{\infty}^{\tau_{t}}}\right).\]
The core of the argument is done under the assumption \(\gamma=1.\) For the upper bound, note that
\[\#\left\{\mathbf{q}\in\mathbb{Z}_{\mathbb{H}}^{m}:\|\mathbf{q}\|_{\infty}=Q\right\}\asymp_{m}Q^{4m-1}\quad\left(Q\in\tfrac{1}{2}\mathbb{N}\right).\]
Indeed, for any \(\mathbf{q}\in\mathbb{Z}_{\mathbb{H}}\), the quaternion \(2\mathbf{q}\) is a Lipschitz integer and, for all \(Q\in\tfrac{1}{2}\mathbb{N}\), we have \(\|\mathbf{q}\|_{\infty}=Q\) if and only if \(\|2\mathbf{q}\|_{\infty}=2Q\).
Also, for \(t,l\in\{1,\ldots,n\}\), we need
\[\asymp\left(\max\left\{1,\frac{\|\mathbf{q}\|_{\infty}^{-\tau_{l}}}{\|\mathbf{q}\|_{\infty}^{-\tau_{t}}}\right\}\|\mathbf{q}\|_{\infty}^{\tau_{t}(m-1)}\right)^{4}\]
balls of radius \(\|\mathbf{q}\|_{\infty}^{-\tau_{t}}\) to cover \(\triangle(R_{\alpha,l},\|\mathbf{q}\|_{\infty}^{-\tau_{l}})\). Hence, if \(s>0\), we have
\[\mathcal{H}^{s}\left(\widehat{W}_{\infty}^{\mathbb{H}}(1,\boldsymbol{\tau})\right)\ll\liminf_{N\to\infty}\sum_{Q\geq N}Q^{4m-1}Q^{4n}Q^{-s\tau_{t}}Q^{4\tau_{t}n(m-1)}Q^{4\sum\limits_{\tau_{l}\leq\tau_{t}}(\tau_{t}-\tau_{l})}=\liminf_{N\to\infty}\sum_{Q\geq N}Q^{4m+4n-1+4\tau_{t}n(m-1)+4\sum\limits_{\tau_{l}\leq\tau_{t}}(\tau_{t}-\tau_{l})}Q^{-s\tau_{t}}.\]
The series converges if and only if \(s>s_{t}(\boldsymbol{\tau})\).
From this point, the argument follows verbatim that of the complex case. That is, we reorder the coefficients so that \(\tau_{1}\geq\tau_{2}\geq\ldots\geq\tau_{n}>1\) and we consider two cases: \(\tau_{n}\geq\frac{n+m}{n}\) and \(\frac{n+m}{n}>\tau_{n}\). Afterwards, we apply Theorem 2 to conclude
\[\dim_{\mathrm{H}}\widehat{W}_{\infty}^{\mathbb{H}}(1,\boldsymbol{\tau})=\min\left\{s_{1}(\boldsymbol{\tau}),\ldots,s_{n}(\boldsymbol{\tau})\right\}.\]
The theorem for an arbitrary \(\gamma\) follows from the continuity of the dimension as a function of \(\boldsymbol{\tau}\). Finally, we conclude the theorem for \(W_{\infty}^{\mathbb{H}}(\boldsymbol{\tau})\) by appealing to the equivalence of any two norms on a finite dimensional vector space.
## 7. Formal power series approximation
In this section we study sets of linear forms over the field of formal power series. Let \(\mathbb{F}\) be the finite field with \(t=p^{r}\) elements for some prime \(p\) and \(r\in\mathbb{N}\). We define _the field of Laurent series with coefficients from \(\mathbb{F}\)_ or _the field of formal power series with coefficients from \(\mathbb{F}\)_ to be
\[\mathcal{L}=\left\{\sum_{i=-n}^{\infty}a_{-i}X^{-i}:n\in\mathbb{Z},\;a_{i}\in \mathbb{F},\;a_{n}\neq 0\right\}\cup\{0\}. \tag{7.1}\]
An absolute value \(\|\cdot\|\) on \(\mathcal{L}\) can be defined as
\[\left\|\sum_{i=-n}^{\infty}a_{-i}X^{-i}\right\|=t^{n},\quad\|0\|=0.\]
For any \(\mathbf{x}=(x_{1},\ldots,x_{h})\in\mathcal{L}^{h}\), we define the _height of_\(\mathbf{x}\) to be
\[\|\mathbf{x}\|_{\infty}=\max\{\|x_{1}\|,\ldots,\|x_{h}\|\}.\]
Note that for both \(\|\cdot\|\) and \(\|\cdot\|_{\infty}\), we have
\[\|x+y\|\leq\max(\|x\|,\|y\|)\quad\text{and}\quad\|\mathbf{x}+\mathbf{y}\|_{ \infty}\leq\max(\|\mathbf{x}\|_{\infty},\|\mathbf{y}\|_{\infty}).\]
In \(\mathcal{L}\), the polynomial ring \(\mathbb{F}[X]\) plays a role analogous to the one played by the integers in the field of real numbers. We define _the polynomial part_ of a non-zero element by
\[\left[\sum_{i=-n}^{\infty}a_{-i}X^{-i}\right]=\sum_{i=-n}^{0}a_{-i}X^{-i}.\]
Define the distance to \(\mathbb{F}[X]^{h}\) for a point \(\mathbf{x}\in\mathcal{L}^{h}\) as
\[|\langle\mathbf{x}\rangle|=\min_{\mathbf{p}\in\mathbb{F}[X]^{h}}\|\mathbf{x}- \mathbf{p}\|_{\infty}.\]
Let
\[I_{\mathcal{L}}=\{x\in\mathcal{L}:[x]=0\}=B(0,1)=\{x\in\mathcal{L}:\|x\|<1\}.\]
Fix \(n,m\in\mathbb{N}\) and let \(\mathcal{L}^{m\times n}\) be the set of \(m\times n\) matrices with entries from \(\mathcal{L}\) and \(I_{\mathcal{L}}^{m\times n}\) the set of \(m\times n\) matrices with entries from \(I_{\mathcal{L}}\).
We will also make use of the following count of polynomial vectors of fixed height, namely
\[\#\{\mathbf{q}\in\mathbb{F}[X]^{m}:\|\mathbf{q}\|_{\infty}=t^{r}\}=t^{rm}(t^{m}-1)\leq m(t-1)t^{m-1}t^{rm}. \tag{7.2}\]
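The exact count in (7.2) can be verified by brute force for small parameters; the sketch below (ours, illustrative only) does so over \(\mathbb{F}_{2}\).

```python
# Brute-force verification (ours, over F_t with t = 2) of the exact count in
# (7.2): the number of m-tuples of polynomials with max degree exactly r is
# t^{(r+1)m} - t^{rm} = t^{rm}(t^m - 1).
from itertools import product

t = 2

def deg(p):  # degree of a coefficient tuple; -1 for the zero polynomial
    return max((i for i, c in enumerate(p) if c), default=-1)

for m in (1, 2):
    for r in (0, 1, 2):
        polys = list(product(range(t), repeat=r + 1))  # all polys of deg <= r
        count = sum(1 for q in product(polys, repeat=m)
                    if max(deg(p) for p in q) == r)
        assert count == t ** (r * m) * (t ** m - 1)
        print(m, r, count)
```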
Denote by \(\mu_{m\times n}^{\mathcal{L}}\) the \(mn\)-dimensional Haar measure on \(\mathcal{L}^{m\times n}\) normalised by \(\mu_{m\times n}^{\mathcal{L}}(I_{\mathcal{L}}^{m\times n})=1\).
As in the previous sections, we have an \(n\)-tuple \(\{\phi_{i}\}_{1\leq i\leq n}\) and an \(m\)-tuple \(\{\Phi_{k}\}_{1\leq k\leq m}\) of positive functions defined on \(\mathbb{N}\) such that
\[\phi_{i}(u)\to 0\quad\text{ as }\quad u\to\infty\quad(1\leq i\leq n)\]
and
\[\Phi_{k}(u)\to\infty\quad\text{ as }\quad u\to\infty\quad(1\leq k\leq m).\]
Define the set
\[W_{n,m}^{\mathcal{L}}(\phi,\Phi):=\left\{\begin{aligned} & A\in I_{\mathcal{L}}^{m \times n}:&&\text{the system}\left\{\begin{aligned} &|\langle\mathbf{q}A_{i}\rangle|<\phi_{i}(u) \quad(1\leq i\leq n),\\ &\|q_{k}\|\leq\Phi_{k}(u)\quad(1\leq k\leq m),\end{aligned} \right\}\\ &\text{has a solution }\mathbf{q}\in\mathbb{F}[X]^{m}\setminus\{0\} \text{ for i.m. }u\in\mathbb{N}\end{aligned}\right\}.\]
We prove the following weighted version of the Khintchine-Groshev type theorem for formal power series.
**Theorem 13**.: _Assume that there are constants \(N_{0},M\in\mathbb{N}_{>1}\) and \(c_{1},c_{2}>1\) such that_
\[c_{1}\Phi_{k}(M^{j})\leq\Phi_{k}(M^{j+1})\leq c_{2}\Phi_{k}(M^{j}),\quad 1 \leq k\leq m,\quad\forall j\in\mathbb{N}_{\geq N_{0}}. \tag{7.3}\]
_Then_
\[\mu_{m\times n}^{\mathcal{L}}(W_{n,m}^{\mathcal{L}}(\phi,\Phi))=\left\{ \begin{aligned} & 0\quad\text{if}\quad\sum\limits_{r=1}^{\infty} \frac{1}{r}\prod_{k=1}^{m}\Phi_{k}(r)\prod_{i=1}^{n}\phi_{i}(r)<\infty,\\ & 1\quad\text{if}\quad\sum\limits_{r=1}^{\infty}\frac{1}{r}\prod_{k=1}^{m }\Phi_{k}(r)\prod_{i=1}^{n}\phi_{i}(r)=\infty.\end{aligned}\right.\]
Below we briefly highlight the results preceding this. These include but are not limited to
* \(n=m=1\), \(\psi\) monotonic, de Mathan [21].
* \(nm\geq 1\), \(\Psi\) univariate monotonic, Kristensen [59].
* \(n\geq 1\), \(m\geq 2\), \(\Psi\) univariate monotonic inhomogeneous, Kristensen [61].
To sum up, nothing was known in the weighted setting prior to our result. It is worth remarking, however, that an asymptotic formula for the number of solutions to the Diophantine inequalities \(|\langle\mathbf{q}A\rangle|<\psi(\|\mathbf{q}\|)\) was proven in [25].
For the Hausdorff dimension result we consider a slightly different setup. For any vector \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}^{n}\), satisfying
\[\min_{1\leq j\leq n}\tau_{j}>1\quad\text{and}\quad\sum_{j}\tau_{j}\geq n+m \tag{7.4}\]
define the set
\[W^{\mathcal{L}}_{n,m}(\tau):=\left\{A\in I^{m\times n}_{\mathcal{L}}:|\langle \mathbf{q}A_{j}\rangle|<\|\mathbf{q}\|_{\infty}^{-\tau_{j}}\|\mathbf{q}\|_{ \infty}\quad(1\leq j\leq n)\quad\text{ for i.m. }\mathbf{q}\in\mathbb{F}[X]^{m}\right\}\]
and quantities
\[s_{j}(\tau):=n(m-1)+\frac{m+n-\sum\limits_{r:\tau_{r}<\tau_{j}}(\tau_{r}-\tau_ {j})}{\tau_{j}}\quad(1\leq j\leq n)\,.\]
Then we have
**Theorem 14**.: _For any vector \(\tau\in\mathbb{R}^{n}\), satisfying (7.4), we have_
\[\dim_{\mathrm{H}}W^{\mathcal{L}}_{n,m}(\tau)=\min\{s_{1}(\tau),\ldots,s_{n}( \tau)\}.\]
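As in the quaternionic case, the exponents \(s_{j}(\tau)\) are easy to evaluate; the helper below (ours, purely illustrative) does so for a sample \(\tau\) satisfying (7.4). Note that the quaternionic exponents of Section 6 are exactly four times these.

```python
# Illustrative evaluation (ours) of the exponents s_j(tau) defined before
# Theorem 14 for the formal power series setting.
def s_values_L(tau, m):
    n = len(tau)
    return [n * (m - 1)
            + (m + n - sum(tr - tj for tr in tau if tr < tj)) / tj
            for tj in tau]

tau = (2.5, 1.5)   # satisfies (7.4) for n = 2, m = 1
vals = s_values_L(tau, m=1)
print(vals, min(vals))  # [1.6, 2.0] 1.6
```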
Previously known results regarding the Hausdorff dimension of this set include:
* \(n=m=1\), Kristensen [59].
* \(nm\geq 1\), \(\Psi\) non-monotonic, Kristensen [60].
Our weighted result is completely new.
### Proof of Theorem 13
First note that the convergence case is a simple consequence of the Borel-Cantelli lemma. The proof for the divergence case is similar to the complex setup, so we will only highlight the differences. As in the other applications, we will make use of the following form of Minkowski's theorem on linear forms.
**Lemma 15** ([73]).: _Suppose that for some \(H\in\mathbb{N}\) one has_
\[\prod_{i=1}^{n}\phi_{i}(H)\prod_{j=1}^{m}\Phi_{j}(H)\geq t^{n+m}.\]
_Then for any \(A\in I^{m\times n}_{\mathcal{L}}\), the system_
\[\|\mathbf{q}A_{i}-p_{i}\|<\phi_{i}(H)\qquad(1\leq i\leq n),\]
\[\|q_{j}\|\leq\Phi_{j}(H)\qquad\qquad(1\leq j\leq m)\]
_has a non-trivial solution \((\mathbf{q},\mathbf{p})\in\mathbb{F}[X]^{m+n}\)._
We work with the following objects:
1. \(J:=\{\alpha=(\mathbf{q},\mathbf{p})=(q_{1},\ldots,q_{m},p_{1},\ldots,p_{n})\in \mathbb{F}[X]^{m+n}\,:\,\|\mathbf{p}\|_{\infty}\leq\|\mathbf{q}\|_{\infty}\},\)
2. \(\beta:J\to\mathbb{R}_{+}\), \(\alpha\mapsto\beta_{\alpha}=\max\left\{\Phi_{1}^{-1}(\|q_{1}\|),\ldots,\Phi_{m }^{-1}(\|q_{m}\|)\right\}\),
3. \(u_{k}=M^{k}\) for some \(M\in\mathbb{N}_{\geq 2}\) to be chosen later,
4. \(J_{k}=\{\alpha\in J:l_{k}\leq\beta_{\alpha}\leq u_{k}\}\) for some suitable sequence \(l_{k}<u_{k}\) for all \(k\in\mathbb{N}\),
5. Resonant sets as \(R_{\mathbf{q}}=\prod_{i=1}^{n}R_{\mathbf{q},i}\), where \(R_{\mathbf{q},i}=\{A_{i}\in I_{\mathcal{L}}^{m\times 1}:\mathbf{q}A_{i}=p_{i}\) for some \(p_{i}\in\mathbb{F}[X]\}\),
6. In the current setting \(\kappa_{i}=\frac{m-1}{m}\) and \(\delta_{i}=m\) for each \(i=1,\ldots,n\).
Finally, we define functions
\[\rho_{j}(u)=\frac{\phi_{j}(u)}{\|\Phi(u)\|_{\infty}}\left(t^{-(m+n)}\prod_{s= 1}^{n}\phi_{s}(u)\prod_{k=1}^{m}\Phi_{k}(u)\right)^{-1/n}\]
and
\[\psi_{j}(u)=\frac{\phi_{j}(u)}{\|\Phi(u)\|_{\infty}}.\]
Also note that with these choices of functions we have
\[\prod_{j=1}^{n}\frac{\psi_{j}(u)}{\rho_{j}(u)}=t^{-(m+n)}\prod_{j=1}^{n}\phi_{j}(u)\prod_{k=1}^{m}\Phi_{k}(u)\quad(u\in\mathbb{N}).\]
As in the other applications, using Minkowski's theorem, we can make an additional assumption on \(\phi\) and \(\Phi\). If there is a strictly increasing sequence of natural numbers \((n_{j})_{j\geq 1}\) such that
\[\prod_{s=1}^{n}\phi_{s}(n_{j})\prod_{k=1}^{m}\Phi_{k}(n_{j})\geq t^{m+n}\quad (j\in\mathbb{N}),\]
then \(\mu_{m\times n}^{\mathcal{L}}(W_{n,m}^{\mathcal{L}}(\phi,\Phi))=1\) by Lemma 15. Therefore, we will assume that
\[\prod_{j=1}^{n}\phi_{j}(u)\prod_{k=1}^{m}\Phi_{k}(u)<t^{m+n}\,\text{ for large }u\in\mathbb{N}. \tag{7.5}\]
The main ingredient of the proof is the formal power series analogue of a weighted ubiquitous system.
**Lemma 16**.: _The system \((\{R_{\alpha}\}_{\alpha\in J},\beta)\) is a weighted ubiquitous system with respect to the function \(\rho\) given above._
For each \(j\in\{1,\ldots,n\}\), let \(f_{j}:\mathbb{N}\to\mathbb{R}_{+}\) be the function given by
\[f_{j}(u):=\phi_{j}(u)\left(t^{-(m+n)}\prod_{s=1}^{n}\phi_{s}(u)\prod_{k=1}^{m} \Phi_{k}(u)\right)^{-1/n}\quad(u\in\mathbb{N}).\]
Observe that \(\prod_{j}f_{j}(u)\prod_{k}\Phi_{k}(u)=t^{m+n}\) for all \(u\in\mathbb{N}\).
**Proposition 6**.: _For each \(u\in\mathbb{N}\) and each \(A\in I_{\mathcal{L}}^{m\times n}\) there is some \(\alpha=(\mathbf{q},\mathbf{p})\in J\) such that_
\[A\in\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\,\frac{f_{j}(u)}{\|\mathbf{q}\|_{\infty}}\right)\ \text{ and }\ \beta_{\alpha}\leq u.\]
Proof.: By Lemma 15, for every \(u\in\mathbb{N}\) and \(A\in I_{\mathcal{L}}^{m\times n}\) there exists \((\mathbf{q},\mathbf{p})\in\mathbb{F}[X]^{m+n}\setminus\{0\}\) such that
\[\|\mathbf{q}A_{i}-p_{i}\|<f_{i}(u) (1\leq i\leq n),\] \[\|q_{j}\|\leq\Phi_{j}(u) (1\leq j\leq m).\]
Then in each coordinate \(i\) we have
\[\|\mathbf{q}\|_{\infty}\operatorname{dist}_{\infty}(A_{i},R_{\mathbf{q},i})\leq\inf_{A_{i}^{\prime}\in R_{\mathbf{q},i}}\|\mathbf{q}A_{i}-\mathbf{q}A_{i}^{\prime}\|\leq\|\mathbf{q}A_{i}-p_{i}\|<f_{i}(u).\]
Dividing everything by \(\|\mathbf{q}\|_{\infty}\) and noting that \(\|q_{k}\|\leq\Phi_{k}(u)\) for all \(k\) finishes the proof.
For each \(j\in\mathbb{N}\), define
\[\tilde{J}_{j}:=\left\{(\mathbf{q},\mathbf{p})\in\mathbb{F}[X]^{m+n}:\frac{ \Phi_{k}(u_{j})}{M}\leq\|q_{k}\|\leq\Phi_{k}(u_{j})\quad(1\leq k\leq m)\right\}.\]
If \(\alpha=(\mathbf{q},\mathbf{p})\in\tilde{J}_{j}\), then \(\beta_{\alpha}\leq u_{j}\). Since \(\beta_{\alpha}\to\infty\) as \(\alpha\to\infty\), we may pick a sequence \(l_{j}\) ensuring \(\tilde{J}_{j}\subseteq J_{j}\). For each \(j\in\mathbb{N}\) and \(k\in\{1,\ldots,m\}\), define the set
\[J_{j,k}:=\left\{\alpha\in J:\|q_{k}\|\leq\frac{\Phi_{k}(u_{j})}{M}\quad\text{and}\quad\|q_{s}\|\leq\Phi_{s}(u_{j})\quad(s\in\{1,\ldots,m\}\setminus\{k\})\right\}.\]
Let \(B=\prod_{k=1}^{n}B(X_{k};r)\) be an arbitrary ball. Then, for all \(s\in\mathbb{N}\),
\[B= B\cap\bigcup_{\alpha:\beta_{\alpha}\leq u_{s}}\prod_{k=1}^{n}\Delta\left(R_{\alpha,k},\,\frac{f_{k}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)= \left(B\cap\bigcup_{\alpha\in\tilde{J}_{s}}\prod_{k=1}^{n}\Delta\left(R_{\alpha,k},\,\frac{f_{k}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\right)\cup\left(B\cap\bigcup_{h=1}^{m}\bigcup_{\alpha\in J_{s,h}}\prod_{k=1}^{n}\Delta\left(R_{\alpha,k},\,\frac{f_{k}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\right).\]
For any fixed \(\mathbf{q}=(q_{1},\ldots,q_{m})\), the number of \(p_{i}\) such that the intersection is non-empty is not greater than \(4\|\mathbf{q}\|_{\infty}r\).
Finally, we can bound from above the Haar measure of the second term in the union. This is an analogue of Proposition 5 from the complex setup.
**Proposition 7**.: _If \(M\geq 2^{2n+1}t^{2m+n}m\), then for every large \(s\in\mathbb{N}\) we have_
\[\mu_{m\times n}^{\mathcal{L}}\left(B\cap\bigcup_{h=1}^{m}\bigcup_{a\in J_{s, h}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\,\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{ \infty}}\right)\right)\leq\frac{1}{2}\mu_{m\times n}^{\mathcal{L}}(B).\]
Proof.: For each \(s\in\mathbb{N}\) and \(k\in\{1,\ldots,m\}\), write
\[J^{\prime}_{s,k}:=\left\{\mathbf{q}\in\mathbb{F}[X]^{m}:\alpha=(\mathbf{q}, \mathbf{p})\in J_{s,k}\right\},\]
so we can bound the number of elements as
\[\#J^{\prime}_{s,k}\leq\frac{t^{m}}{M}\prod_{j=1}^{m}\Phi_{j}(u_{s}).\]
If \(G_{\mathscr{L}}\) is the intersection in the statement of Proposition 7, then
\[\mu^{\mathscr{L}}_{m\times n}(G_{\mathscr{L}}) \leq\sum_{k=1}^{m}\sum_{\mathbf{q}\in J^{\prime}_{s,k}}(4r\| \mathbf{q}\|_{\infty})^{n}\prod_{j=1}^{n}\frac{f_{j}(u_{s})r^{m-1}}{\|\mathbf{ q}\|_{\infty}}\] \[\leq 4^{n}\sum_{k=1}^{m}\sum_{\mathbf{q}\in J^{\prime}_{s,k}}r^{ nm}\prod_{j=1}^{n}f_{j}(u_{s})\] \[\leq\frac{4^{n}r^{nm}t^{m+n}}{\Phi_{1}(u_{s})\ldots\Phi_{m}(u_{s} )}\sum_{k=1}^{m}\#J^{\prime}_{s,k}\] \[\leq\frac{4^{n}t^{2m+n}m}{M}r^{nm}\] \[\leq\frac{1}{2}\mu^{\mathscr{L}}_{m\times n}(B).\]
The definition of \(\tilde{J}_{s}\) tells us that for each \(\alpha=(\mathbf{q},\mathbf{p})\in\tilde{J}_{s}\) one has
\[\|\mathbf{q}\|_{\infty}\geq\frac{1}{M}\left\|(\Phi_{1}(u_{s}),\ldots,\Phi_{m}(u_{s}))\right\|_{\infty}=\frac{1}{M}\|\Phi(u_{s})\|_{\infty},\]
so
\[\mu^{\mathscr{L}}_{m\times n}\left(B\cap\bigcup_{\alpha\in J_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},M\frac{f_{j}(u_{s})}{\|\Phi(u_{s})\|_{\infty}}\right)\right) \geq\mu^{\mathscr{L}}_{m\times n}\left(B\cap\bigcup_{\alpha\in\tilde{J}_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},M\frac{f_{j}(u_{s})}{\|\Phi(u_{s})\|_{\infty}}\right)\right)\geq\mu^{\mathscr{L}}_{m\times n}\left(B\cap\bigcup_{\alpha\in\tilde{J}_{s}}\prod_{j=1}^{n}\Delta\left(R_{\alpha,j},\frac{f_{j}(u_{s})}{\|\mathbf{q}\|_{\infty}}\right)\right)\geq\frac{1}{2}\mu^{\mathscr{L}}_{m\times n}(B).\]
So the ubiquity property is proven.
The rest of the proof of Theorem 13 is very similar to the complex setup, but with the difference that we are considering
\[\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\]
instead of
\[\left(\prod_{j=1}^{n}\varphi_{j}(q)\prod_{k=1}^{m}\Phi_{k}(q)\right)^{2}\]
and therefore we skip it.
### Proof of Theorem 14
#### 7.2.1. Upper bound
We make a natural cover of the limsup set and then use the Hausdorff-Cantelli lemma.
\[W_{n,m}^{\mathscr{L}}(\tau)\subseteq\bigcap_{N\in\mathbb{N}}\bigcup_{\begin{subarray}{c}\mathbf{q}\in\mathbb{F}[X]^{m},\ \|\mathbf{q}\|_{\infty}\geq t^{N}\\ (\mathbf{q},\mathbf{p})\in J\end{subarray}}\prod_{j=1}^{n}\triangle\left(R_{\alpha,j},\|\mathbf{q}\|_{\infty}^{-\tau_{j}}\right).\]
Also, observe that for all \(j,l\in\{1,\ldots,n\}\) the number of balls of radius \(\|\mathbf{q}\|_{\infty}^{-\tau_{j}}\) to cover \(\triangle(R_{\alpha,l},\|\mathbf{q}\|_{\infty}^{-\tau_{l}})\) is asymptotically equivalent to
\[\max\left\{1,\frac{\|\mathbf{q}\|_{\infty}^{-\tau_{l}}}{\|\mathbf{q}\|_{ \infty}^{-\tau_{j}}}\right\}\|\mathbf{q}\|_{\infty}^{\tau_{j}(m-1)}.\]
Hence, for all \(s>0\),
\[\begin{split}\mathcal{H}^{s}(W(\tau))&\ll\liminf_{ N\to\infty}\sum_{\begin{subarray}{c}\mathbf{q}\in\mathbb{F}[\mathbf{X}]^{m}\\ \|\mathbf{q}\|_{\infty}\geq t^{N}\end{subarray}}\|\mathbf{q}\|_{\infty}^{m-1} \|\mathbf{q}\|_{\infty}^{n}\|\mathbf{q}\|_{\infty}^{-s\tau_{j}}\|\mathbf{q}\|_ {\infty}^{\tau_{j}n(m-1)}\|\mathbf{q}\|_{\infty}^{\sum\limits_{l:\,\,\tau_{j}> \tau_{l}}(\tau_{j}-\tau_{l})}\\ &=\liminf_{N\to\infty}\sum_{\begin{subarray}{c}\mathbf{q}\in \mathbb{F}[\mathbf{X}]^{m}\\ \|\mathbf{q}\|_{\infty}\geq t^{N}\end{subarray}}\|\mathbf{q}\|_{\infty}^{m+n- 1+\tau_{j}n(m-1)+\sum\limits_{l:\,\tau_{j}>\tau_{l}}(\tau_{j}-\tau_{l})}\| \mathbf{q}\|_{\infty}^{-s\tau_{j}}\\ &=m(t-1)t^{m-1}\liminf_{N\to\infty}\sum_{r=N}^{\infty}\left(t^{m+ n-1+\tau_{j}n(m-1)+\sum\limits_{l:\,\tau_{j}>\tau_{l}}(\tau_{j}-\tau_{l})-s\tau_{j}} \right)^{r}.\end{split}\]
The series above converges if and only if the exponent of \(t\) is strictly less than \(0\), or equivalently, \(s>s_{j}(\tau)\). The result follows by taking the infimum over \(j\).
#### 7.2.2. Lower bound
The proof of the lower bound is similar to the complex case with obvious modifications. Namely, given any \(\boldsymbol{l}=(l_{1},\ldots,l_{n})\in\mathbb{R}^{n}\) such that
\[\min_{1\leq j\leq n}l_{j}\geq 1\;\;\text{and}\;\;\sum_{j=1}^{n}l_{j}=m+n, \tag{7.6}\]
we define \(\tilde{\rho}=(\tilde{\rho}_{1},\ldots,\tilde{\rho}_{n}):\mathbb{N}\to \mathbb{R}^{n}\) by
\[\tilde{\rho}_{j}(u)=\frac{1}{u^{l_{j}-1}}\quad(1\leq j\leq n,\,u\in\mathbb{N}).\]
and then prove the ubiquity statement for this set of functions as in Lemma 16. For the last part of the proof, we reorder the coefficients \(\tau\) so that \(\tau_{1}\geq\tau_{2}\geq\ldots\geq\tau_{n}>1\), consider the two cases \(\tau_{n}\geq\frac{n+m}{m}\) and \(\frac{n+m}{m}>\tau_{n}\), and then apply Theorem 2 to obtain the statement of the theorem. We refer the reader to Section 4.3, as the choice of parameters and calculations are the same as in the \(p\)-adic case.
## 8. Uniformly distributed sequences
Fix a natural number \(n\). Let \(\omega=(\omega_{j})_{j\geq 1}\) be a sequence of points in \([0,1]^{n}\) (that is, \(\omega_{j}=(\omega_{j,1},\ldots,\omega_{j,n})\in[0,1]^{n}\) for each \(j\in\mathbb{N}\)) and let \(\psi_{i}:\mathbb{N}\to\mathbb{R}_{+}\) be a monotonic decreasing function for each \(1\leq i\leq n\). Define \(\Psi=(\psi_{1},\ldots,\psi_{n})\). Consider the set
\[W_{\omega}(\Psi):=\left\{\mathbf{x}=(x_{1},\ldots,x_{n})\in[0,1]^{n}:|x_{i}- \omega_{j,i}|<\psi_{i}(j)\quad(1\leq i\leq n)\quad\text{ for i.m. }j\in\mathbb{N}\right\}.\]
By a _rectangle_\(R\) in \([0,1]^{n}\) we mean a set of the form
\[R=\left\{\mathbf{x}\in[0,1]^{n}:a_{i}\leq x_{i}<b_{i}\quad(1\leq i\leq n)\right\}\]
for some \((a_{1},\ldots,a_{n}),(b_{1},\ldots,b_{n})\in[0,1]^{n}\) with \(a_{i}<b_{i}\) for each \(i=1,\ldots,n\). For any rectangle \(R\) and any \(N\in\mathbb{N}\), define
\[A(R;N,\omega):=\#\{1\leq j\leq N:\omega_{j}\in R\}\,.\]
Recall that \(\mu_{n}^{\mathbb{R}}\) is the Lebesgue measure on \([0,1]^{n}\).
**Definition 3** (Uniformly distributed sequence).: A sequence \(\omega=(\omega_{j})_{j\geq 1}\) of points in \([0,1]^{n}\) is a _uniformly distributed sequence on \([0,1]^{n}\)_, denoted \(\omega\) is a \(u.d.s\) on \([0,1]^{n}\), if for any \(n\)-dimensional rectangle \(R\subseteq[0,1]^{n}\) we have
\[\lim_{N\to\infty}\frac{A(R;N,\omega)}{N}=\mu_{n}^{\mathbb{R}}(R)\,.\]
Much is known when \(\omega\) is a sequence of independent identically distributed uniform random variables. Initiated by Fan and Wu [32], who considered the one dimensional case, the topic has since been investigated and generalised by numerous authors, see for example [28, 29, 30, 34]. These results strongly depend on the randomness of the sequence \(\omega\). As a classical example in Diophantine approximation of a uniformly distributed sequence that is certainly not random, consider the sequence \(\omega=(\{n\alpha\})_{n\in\mathbb{N}}\) for some \(\alpha\in\mathbb{R}\), where \(\{x\}\) denotes the fractional part of \(x\in\mathbb{R}\). The metric properties of \(W_{(\{n\alpha\})_{n\in\mathbb{N}}}(\Psi)\) in one dimension have been studied in [16, 33, 36, 53, 54, 63, 75] and in higher dimensions in [8, 44, 55, 68, 72]. In this setting, it was shown by Chebyshev (in one dimension) and by Khintchine (in higher dimensions) that for \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\) the sequence \((\{n\alpha\})_{n\in\mathbb{N}}\) is well distributed [51]. Recently, Moshchevitin proved metric results for well distributed sequences [67]. In the following section we begin with a general statement in which we impose no condition on \(\omega\) other than being uniformly distributed.
In our previous applications of weighted ubiquity, we used an adequate version of Minkowski's theorem for linear forms to ensure that certain neighbourhoods of the resonant sets intersect a large portion of any ball. In our current setting, we replace Minkowski's theorem with a condition on the discrepancy of \(\omega\).
**Definition 4** (Discrepancy, Star-discrepancy).: Let \(\omega\) be a sequence of points in \([0,1]^{n}\).
1. The _discrepancy_ of \(\omega\) is the function \(D:\mathbb{N}\to\mathbb{R}_{+}\), \(N\mapsto D_{N}(\omega)\), given by \[D_{N}=D_{N}(\omega):=\sup_{R\subseteq[0,1]^{n}}\left|\frac{A(R;N,\omega)}{N}- \mu_{n}^{\mathbb{R}}(R)\right|,\] where the supremum is taken over all rectangles \(R\subseteq[0,1]^{n}\).
2. The _star-discrepancy_ of \(\omega\) is the function \(D^{*}:\mathbb{N}\to\mathbb{R}_{+}\), \(N\mapsto D_{N}^{*}(\omega)\), given by \[D_{N}^{*}=D_{N}^{*}(\omega):=\sup_{R\subseteq[0,1]^{n}}\left|\frac{A(R;N, \omega)}{N}-\mu_{n}^{\mathbb{R}}(R)\right|,\] where the supremum is taken over all rectangles \(R\subseteq[0,1]^{n}\) of the form \([0,t_{1})\times\cdots\times[0,t_{n})\) for some \(t_{1},\ldots,t_{n}>0\).
It is well known [62, Chapter 2] that for any sequence \(\omega\) in \([0,1]^{n}\) and any \(N\in\mathbb{N}\) we have
\[D_{N}^{*}(\omega)\leq D_{N}(\omega)\leq 2^{n}D_{N}^{*}(\omega).\]
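In dimension \(n=1\) the star-discrepancy of a finite point set can be computed exactly via the classical formula over the sorted sample (see [62, Chapter 2]). Purely as a numerical illustration, and not as part of any argument in this paper, the following Python sketch evaluates \(D_{N}^{*}\) for the Kronecker sequence \((\{j\alpha\})_{j\geq 1}\); the function names are ours.

```python
import numpy as np

def star_discrepancy_1d(points):
    """Exact star-discrepancy D_N^* of a finite point set in [0, 1],
    via the classical formula over the sorted sample:
    D_N^* = max_i max(i/N - x_(i), x_(i) - (i-1)/N)."""
    x = np.sort(np.asarray(points, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - x), np.max(x - (i - 1) / n))

# Kronecker sequence ({j * alpha})_{j >= 1}: a classical u.d.s. for
# irrational alpha, so D_N^* (and hence D_N <= 2 D_N^*) tends to 0.
alpha = np.sqrt(2)
for N in (10, 100, 1000, 10000):
    omega = np.mod(np.arange(1, N + 1) * alpha, 1.0)
    print(N, star_discrepancy_1d(omega))
```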
**Definition 5** (\((\mathcal{N},v)\)-Discrepancy satisfying sequence).: Let \(\omega=(\omega_{j})_{j\geq 1}\) be a sequence of points in \([0,1]^{n}\). We say that \(\omega\) is a _\((\mathcal{N},v)\)-Discrepancy satisfying sequence_, which we write as \((\mathcal{N},v)\)-d.s.s, if there exist a sequence \(\mathcal{N}=(N_{i})_{i\geq 1}\) of strictly increasing integers and a monotonic decreasing function \(v:\mathbb{N}\to\mathbb{R}_{+}\) with \(v(N)\to 0\) as \(N\to\infty\) such that
\[D_{N_{i}}(\omega)<v(N_{i})\quad\text{ and }\quad N_{i-1}v(N_{i})<2^{-(n+3)} \quad(i\in\mathbb{N}).\]
Clearly, every uniformly distributed sequence is a \((\mathcal{N},v)\)-d.s.s. for some \(\mathcal{N}\) and \(v\). This follows from the observation that \(\omega\) is a u.d.s if and only if the discrepancy tends to zero [62, Chapter 2]. Hence one can choose an increasingly sparse sequence \(\mathcal{N}\) satisfying the conditions of Definition 5.
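To make the previous observation concrete, one may select the sparse sequence \(\mathcal{N}\) greedily. The sketch below is our own illustration; the callables \(D\) and \(v\) are assumed to be supplied, e.g. as known upper bounds for the discrepancy and the target function of Definition 5.

```python
def choose_dss_sequence(D, v, n, count=10):
    """Greedy scan for N_1 < N_2 < ... with D(N_i) < v(N_i) and
    N_{i-1} * v(N_i) < 2**-(n + 3) (Definition 5), taking N_0 = 1.

    D, v: callables N -> float. Since v(N) -> 0 and, for a u.d.s.,
    D(N) < v(N) holds along a suitable subsequence, the scan
    terminates for admissible inputs; purely illustrative."""
    seq, prev, N = [], 1, 2
    while len(seq) < count:
        if D(N) < v(N) and prev * v(N) < 2.0 ** -(n + 3):
            seq.append(N)
            prev = N
        N += 1
    return seq
```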
Under this setting, we prove the following Khintchine-type result.
**Theorem 15**.: _Let \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of monotonic decreasing functions. Suppose that \((\omega_{i})_{i\geq 1}\) is a \((\mathcal{N},v)\)-d.s.s. for \(\mathcal{N}=(N_{i})_{i\geq 1}\) and \(v:\mathbb{N}\to\mathbb{R}_{+}\), that \(\Psi\) is \(c\)-regular for \(\mathcal{N}\), and that for some vector \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) verifying_
\[\min\{\tau_{1},\ldots,\tau_{n}\}>0,\quad\sum_{j=1}^{n}\tau_{j}=1\]
every large \(N\in\mathbb{N}\) satisfies_
\[\psi_{j}(N)\leq D_{N}^{\tau_{j}}\quad(1\leq j\leq n).\]
_Then_
\[\mu_{n}^{\mathbb{R}}(W_{\omega}(\Psi))=1\quad\text{if}\quad\sum_{j=1}^{\infty}v (N_{j})^{-1}\prod_{i=1}^{n}\psi_{i}(N_{j})=\infty.\]
_Moreover, if \(n\geq 2\), then_
\[\mu_{n}^{\mathbb{R}}(W_{\omega}(\Psi))=0\quad\text{if}\quad\sum_{j=1}^{\infty }v(N_{j})^{-1}\prod_{i=1}^{n}\psi_{i}(N_{j})<\infty.\]
Our proof uses Roth's lower bound on the discrepancy (see below), which holds for \(n\geq 2\). However, a convergence result for \(n=1\) seems plausible if an appropriate lower bound is used instead. The regularity condition imposed on the approximation functions in Theorem 15 is rather restrictive. It might be possible to relax it, but, we believe, not within the methods used in this paper.
In [52], Kiefer showed that for almost every sequence \(\omega\) we have
\[\limsup_{N\to\infty}D_{N}^{*}(\omega)\sqrt{\frac{2N}{\log\log N}}=1. \tag{8.1}\]
Here and in the following, when we talk about properties holding for almost every sequence in \([0,1]^{n}\), we have in mind the probability space of sequences in \([0,1]^{n}\) along with its Borel \(\sigma\)-algebra and the product measure \(\mu_{\infty}\) induced by the Lebesgue measure on each factor (see [47, Theorem 8.23]). The discrepancy of a sequence cannot converge too fast to zero. A uniform lower bound on the discrepancy was obtained by K. Roth (see [62, Theorem 2.1]): if \(n\in\mathbb{N}_{\geq 2}\), then every sequence \(\omega\) in \([0,1]^{n}\) verifies
\[D_{N}(\omega)\gg_{n}\frac{(\log\!N)^{\frac{n-1}{2}}}{N}. \tag{8.2}\]
Moreover, by a theorem of van Aardenne-Ehrenfest [62, Corollary 2.1], we have
\[\limsup_{N\to\infty}ND_{N}^{*}(\omega)=\infty.\]
**Theorem 16**.: _Let \((\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) satisfy_
\[\min\{\tau_{1},\ldots,\tau_{n}\}>0\quad\text{ and }\quad\sum_{j=1}^{n}\tau_{j}=1.\]
_Consider \(\Psi=(\psi_{1},\ldots,\psi_{n})\) such that for every large \(N\in\mathbb{N}\)_
\[\psi_{j}(N)\leq D_{N}^{\tau_{j}}\quad(1\leq j\leq n).\]
_For almost every uniformly distributed sequence \(\omega\), if there is some \(M>1\) such that_
\[\sum_{j=1}^{\infty}\frac{M^{\frac{3^{j}}{2}}}{\sqrt{j}}\prod_{i=1}^{n}\psi_{i }\left(M^{3^{j}}\right)=\infty,\]
_then \(\mu_{n}^{\mathbb{R}}(W_{\omega}(\Psi))=1\)._
**Definition 6** (Low discrepancy).: A sequence \(\omega\) has _low discrepancy_ if
\[D_{N}(\omega)\ll\frac{(\log N)^{n}}{N}.\]
The sequence \((M^{3^{j}})_{j\geq 1}\) can be replaced with \((M^{j^{2}})_{j\geq 1}\) when \(\omega\) is a low-discrepancy sequence. For modern aspects of low discrepancy sequences, see [65, 78] and the references therein.
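A canonical one-dimensional example of a low-discrepancy sequence is the van der Corput sequence, obtained by reflecting the base-\(b\) digits of \(j\) about the radix point; its discrepancy is \(O((\log N)/N)\). A minimal sketch, for illustration only:

```python
def van_der_corput(j, base=2):
    """j-th term of the van der Corput sequence in the given base:
    reflect the base-b digits of j about the radix point."""
    x, denom = 0.0, 1.0
    while j > 0:
        j, digit = divmod(j, base)
        denom *= base
        x += digit / denom
    return x

omega = [van_der_corput(j) for j in range(1, 1001)]  # first 1000 terms
```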
**Theorem 17**.: _Let \((\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) satisfy_
\[\min\{\tau_{1},\ldots,\tau_{n}\}>0\quad\text{ and }\quad\sum_{j=1}^{n}\tau_{j}=1.\]
_Consider \(\Psi=(\psi_{1},\ldots,\psi_{n})\) such that for every large \(N\in\mathbb{N}\)_
\[\psi_{j}(N)\leq D_{N}^{\tau_{j}}\quad(1\leq j\leq n).\]
_If \(\omega\) is a low-discrepancy sequence and for some \(M>1\) we have_
\[\sum_{j=1}^{\infty}\frac{M^{j^{2}}}{j^{2n}}\prod_{i=1}^{n}\psi_{i}\left(M^{j^{ 2}}\right)=\infty,\]
_then \(\mu_{n}^{\mathbb{R}}(W_{\omega}(\Psi))=1\)._
A similar problem was addressed by Boshernitzan and Chaika in the non-weighted case [14, Example 8]. As they point out, part of the proof of Theorem 18 may be obtained from the work of Beresnevich, Dickinson, and Velani on ubiquitous systems [9].
**Theorem 18** ([14, Theorem 1]).: _Take \(n\in\mathbb{N}\). Let \(\psi:\mathbb{N}\to\mathbb{R}_{+}\) be any positive function and define the \(n\)-tuple \(\Psi:=(\psi,\ldots,\psi)\). Almost every sequence \(\omega=(\omega_{j})_{j\geq 1}\) on \([0,1]^{n}\) satisfies_
\[\mu_{n}^{\mathbb{R}}(W_{\omega}(\Psi))=\begin{cases}0\quad\text{if}\quad\sum _{j=1}^{\infty}\psi^{n}(j)<\infty,\\ 1\quad\text{if}\quad\sum_{j=1}^{\infty}\psi^{n}(j)=\infty.\end{cases}\]
There are examples of uniformly distributed sequences \(\omega=(\omega_{j})_{j\geq 1}\) for which there is some positive function \(\psi:\mathbb{N}\to\mathbb{R}_{+}\) satisfying
\[\sum_{j=1}^{\infty}\psi^{n}(j)=\infty\quad\text{ and }\quad\mu_{n}^{\mathbb{R}} \big{(}W_{\omega}(\psi,\ldots,\psi)\big{)}<1.\]
In fact, this is the case when \(n=1\), \(\alpha\) is an irrational number which is not badly approximable, and \(\omega_{j}=\{j\alpha\}\) for \(j\in\mathbb{N}\)[14, Remark 6].
We believe that the weighted version of Theorem 18 is true.
**Conjecture 1**.: _For \(\mu_{\infty}\)-almost every sequence \(\omega\) in \([0,1]^{n}\), for every \(n\)-tuple of non-increasing functions \(\Psi=(\psi_{1},\ldots,\psi_{n})\) we have_
\[\mu_{n}^{\mathbb{R}}(W_{\omega}(\Psi))=\begin{cases}0&\text{if}\quad\sum_{j=1} ^{\infty}\prod_{i=1}^{n}\psi_{i}(j)<\infty,\\ 1&\text{if}\quad\sum_{j=1}^{\infty}\prod_{i=1}^{n}\psi_{i}(j)=\infty.\end{cases}\]
The convergence half is a straightforward consequence of the Borel-Cantelli lemma. The divergence part is expected in view of Theorem 18 and since we can interpret a uniformly distributed sequence as a generic realization of a sequence of independent identically distributed uniform random variables.
We compute the Hausdorff dimension of \(W_{\omega}(\Psi)\) for certain functions \(\Psi\) and sequences \(\omega\) with low discrepancy.
**Theorem 19**.: _Let \(\omega\) be a low discrepancy sequence and let \(\tau=(\tau_{1},\ldots,\tau_{n})\in\mathbb{R}_{+}^{n}\) satisfy_
\[\sum_{i=1}^{n}\tau_{i}>1\,.\]
_Then, for \(\Psi(N)=(N^{-\tau_{1}},\ldots,N^{-\tau_{n}})\), we have that_
\[\dim_{\mathrm{H}}W_{\omega}(\Psi)=\min_{1\leq j\leq n}\left\{\frac{1+\sum_{i: \tau_{j}\geq\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}}\right\}.\]
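The minimum in Theorem 19 is elementary to evaluate for any concrete weight vector; the following short sketch (ours, purely illustrative) computes it:

```python
def hausdorff_dim(tau):
    """Evaluate min_j (1 + sum_{i : tau_j >= tau_i} (tau_j - tau_i)) / tau_j
    from Theorem 19; assumes sum(tau) > 1."""
    return min((1 + sum(tj - ti for ti in tau if tj >= ti)) / tj
               for tj in tau)

print(hausdorff_dim([2.0, 1.5]))   # a weighted example with n = 2
print(hausdorff_dim([1.5, 1.5]))   # equal weights
```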
We have the following result for exceptional uniformly distributed sequences.
**Theorem 20**.: _Let \(n=1\) and \(\omega\) be a uniformly distributed sequence such that_
\[D_{N}(\omega)\ll\frac{1}{N}\]
_for infinitely many \(N\in\mathbb{N}\) with the implied constant independent of \(N\). Let \(\tau>1\). Then, for \(\Psi(N)=N^{-\tau}\), we have that_
\[\dim_{\mathrm{H}}W_{\omega}(\Psi)=\frac{1}{\tau}.\]
The result of Theorem 20 holds in higher dimensions but, as stated earlier, for \(n\geq 2\) Roth's Theorem implies that
\[D_{N}(\omega)\gg\frac{(\log N)^{\frac{n-1}{2}}}{N}\,,\]
for all large \(N\in\mathbb{N}\). From this theorem one can deduce, in combination with the Three Distance Theorem and a result of Khintchine [48] stating that only \(\mathbb{Q}\) is singular in one dimension, the following theorem of Bugeaud [16, Theorem 1].
**Corollary** (Bugeaud [16]).: _Let \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\) and \(\Psi(N)=N^{-\tau}\). Then, for any \(\tau>1\),_

\[\dim_{\mathrm{H}}W_{(\{n\alpha\})_{n\geq 1}}(\Psi)=\frac{1}{\tau}.\]
Our results on uniformly distributed sequences are far from being optimal. A refinement of our strategy may certainly improve our theorems. However, when \(n\geq 2\), Roth's lower bound on the discrepancy (8.2) and our condition on the \((\mathcal{N},v)\)-discrepancy imply that the growth of \(\mathcal{N}=(N_{j})_{j\geq 1}\) is faster than exponential. As a consequence, if we are after a \(0\)-\(1\) dichotomy, we will have to impose some conditions on the approximation functions. A more promising landscape appears when we replace the discrepancy of the sequence \(\omega\) with its dispersion [27, Section 1.1.3.].
### The ubiquity property
In this subsection we formulate a key statement which will allow us to construct ubiquitous systems.
**Proposition 8**.: _Let \(\omega\) be a uniformly distributed sequence, so that \(\omega\) is a \((\mathcal{N},v)\)-Discrepancy satisfying sequence for some \(\mathcal{N}=(N_{k})_{k\geq 1}\) and \(v\). Let \(\rho=(\rho_{1},\ldots,\rho_{n})\) be an \(n\)-tuple of functions \(\rho_{i}:\mathcal{N}\to\mathbb{R}_{+}\) such that_
1. _each_ \(\rho_{i}(N)\to 0\) _as_ \(N\to\infty\) _,_
2. \(\prod_{i=1}^{n}\rho_{i}(N)=v(N)\) _for all_ \(N\in\mathbb{N}\) _._
_Then for any ball \(B\subset[0,1]^{n}\) there exists sufficiently large \(k_{0}\in\mathbb{N}\) such that for all \(k\geq k_{0}\)_
\[\mu_{n}^{\mathbb{R}}\left(B\cap\bigcup_{N_{k-1}<j\leq N_{k}}\Delta\left( \omega_{j},\rho(N_{k})\right)\right)\geq\frac{1}{4}\mu_{n}^{\mathbb{R}}(B).\]
### Proof of Theorem 15
Let us start with the divergence part. By Proposition 8 and the definition of \((\mathcal{N},v)\)-d.s.s., we obtain a ubiquitous system \((\{R_{j}\}_{j\in\mathbb{N}},\rho)\) with respect to \((\mathcal{N},v)\) when considering
1. \(J=\mathbb{N}\),
2. \(\beta:\mathbb{N}\to\mathbb{N}\), \(\beta_{j}=j\) for all \(j\in\mathbb{N}\),
3. \(N_{0}=1\) and \(l_{j}=N_{j-1}\) and \(u_{j}=N_{j}\) for \(j\in\mathbb{N}\),
4. the function \(v:\mathbb{N}\to\mathbb{R}_{+}\) given by \(v(N)=D_{N}\) for all \(N\in\mathbb{N}\),
5. the functions \(\rho_{i}=v^{\tau_{i}}\) for each \(i\in\{1,\ldots,n\}\).
The result follows from Theorem 5.
By the definition of \(W_{\omega}(\Psi)\) and the Borel-Cantelli lemma, the convergence part follows from the convergence of the series
\[\sum_{j=1}^{\infty}\prod_{i=1}^{n}\psi_{i}(j). \tag{8.3}\]
Since each \(\psi_{1},\ldots,\psi_{n}\) is non-increasing and \(c\)-regular with respect to \(\mathcal{N}\), for any \(j\in\mathbb{N}\) we have
\[\sum_{k=N_{j}}^{N_{j+1}-1}\prod_{i=1}^{n}\psi_{i}(k)<N_{j+1}\prod_{i=1}^{n} \psi_{i}(N_{j})\ll v(N_{j+2})^{-1}\prod_{i=1}^{n}\psi_{i}(N_{j})\ll v(N_{j+2} )^{-1}\prod_{i=1}^{n}\psi_{i}(N_{j+2}).\]
As a consequence, the convergence of
\[\sum_{j=1}^{\infty}v(N_{j})^{-1}\prod_{i=1}^{n}\psi_{i}(N_{j})\]
implies that of (8.3) and the result follows.
### Proof of Theorem 16
Let \(\tilde{v}:\mathbb{N}\to\mathbb{R}\) be the function given by
\[\tilde{v}(N):=\sqrt{\frac{\log\log N}{2N}}\quad(N\in\mathbb{N}).\]
Then, we have
\[\tilde{v}\left(M^{3^{j}}\right)=\frac{j^{1/2}}{M^{3^{j}/2}}\sqrt{\frac{\log 3+\frac{\log\log M}{j}}{2}}\quad(j\in\mathbb{N}),\]
so
\[M^{3^{j-1}}\tilde{v}\left(M^{3^{j}}\right)\asymp_{M}M^{3^{j-1}}\frac{j^{1/2}}{M^{3^{j}/2}}=\frac{j^{1/2}}{(M^{1/6})^{3^{j}}}\to 0\;\;\text{as}\;\;j\to\infty\]
and
\[\frac{\tilde{v}\left(M^{3^{j+1}}\right)}{\tilde{v}\left(M^{3^{j}}\right)} \asymp_{M}\frac{1}{M^{3^{j}}}\to 0\quad\text{as}\quad\;j\to\infty. \tag{8.4}\]
Observe that under the assumption (8.1), every large \(j\in\mathbb{N}\) satisfies
\[D^{*}_{M^{3^{j}}}<\frac{3}{2}\sqrt{\frac{\log\log M^{3^{j}}}{2M^{3^{j}}}}.\]
For each \(i\in\{1,\ldots,n\}\), define \(\rho_{i}:\mathbb{N}\to\mathbb{R}\) by
\[\rho_{i}(N)=\left(2^{n-1}3\tilde{v}(N)\right)^{\tau_{i}}\quad(N\in\mathbb{N}).\]
Write \(\omega=(\omega_{j})_{j\geq 1}\), \(\omega_{j}=(\omega_{j,1},\ldots,\omega_{j,n})\) for all \(j\in\mathbb{N}\), and consider
1. \(J=\mathbb{N}\),
2. \(\beta_{j}=j\) for all \(j\in\mathbb{N}\),
3. For each \(j\in\mathbb{N}\) put \(R_{j,i}=\{\omega_{j,i}\}\) whenever \(1\leq i\leq n\) and \(R_{j}=\{\omega_{j}\}\),
4. \(u_{j}:=M^{3^{j}}\) and \(l_{j}:=M^{3^{j-1}}\) for \(j\in\mathbb{N}\),
5. \(\kappa_{i}=0\) and \(\delta_{i}=1\) for all \(i\in\{1,\ldots,n\}\).
By Proposition 8, the pair \((\{R_{\alpha}\}_{\alpha\in J},\beta)\) is a ubiquitous system with respect to \(\rho\), \((l_{j})_{j\geq 1}\), and \((u_{j})_{j\geq 1}\). Moreover, by (8.4), each \(\rho_{i}\) is \(c\)-regular. The result is a consequence of Theorem 5. For \(n=1\) and \(n=2\) we might need to apply Lemma 2 for \(\psi_{i}(M^{3^{j}})\leq\rho_{i}(M^{3^{j}})\) to hold for large \(j\in\mathbb{N}\).
### Proof of Theorem 17
Theorem 17 is proved in the same way as Theorem 16. In this case, however, we work with the function \(\tilde{v}:\mathbb{N}\to\mathbb{R}\) given by
\[\tilde{v}(N)=\frac{(\log N)^{n}}{N}\quad(N\in\mathbb{N})\]
and the sequences \(l_{j}=M^{(j-1)^{2}}\), \(u_{j}=M^{j^{2}}\) for all \(j\in\mathbb{N}\). In view of
\[N_{j-1}\tilde{v}(N_{j})=M^{(j-1)^{2}}\frac{(j^{2}\log M)^{n}}{M^{j^{2}}}=j^{2n }(\log M)^{n}M^{-(2j-1)}\to 0\quad\text{ as }j\to\infty,\]
we may define \(v=C\tilde{v}\) for some positive constant \(C=C(\omega)>0\) in such a way that \(\omega\) is a \(((M^{j^{2}})_{j\geq 1},v)\)-d.s.s. Finally, the functions
\[\rho_{i}(N)=\left(\frac{(\log N)^{n}}{N}\right)^{\tau_{i}}\quad(1\leq i\leq n) \tag{8.5}\]
are non-increasing and tend to \(0\). The theorem now follows from Proposition 8.
### Proof of Theorems 19 and 20
#### 8.5.1. Upper bound
For any \(K\in\mathbb{N}\) consider the cover
\[\bigcup_{j>K}\Delta(\omega_{j},\Psi(j))\supset W_{\omega}(\Psi)\,.\]
Fix \(1\leq k\leq n\). Each rectangle \(\Delta(\omega_{j},\Psi(j))\) can be covered by
\[\simeq\prod_{i=1}^{n}\max\left\{1,j^{\tau_{k}-\tau_{i}}\right\}=j^{\sum_{i: \tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})}\]
balls of radius \(j^{-\tau_{k}}\). So
\[\mathcal{H}^{s}\left(W_{\omega}(\Psi)\right) \ll\sum_{j>K}j^{\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})}(j^ {-\tau_{k}})^{s}\] \[\leq\sum_{j>K}j^{-s\tau_{k}+\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}- \tau_{i})}\to 0\]
as \(K\to\infty\) for any
\[s\geq\frac{1+\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}-\tau_{i})}{\tau_{k}}\,.\]
Hence
\[\dim_{\rm H}W_{\omega}(\Psi)\leq\frac{1+\sum_{i:\tau_{k}>\tau_{i}}(\tau_{k}- \tau_{i})}{\tau_{k}}\]
Since the above argument remains true for each \(1\leq k\leq n\), we obtain the required upper bound for both Theorems 19 and 20.
#### 8.5.2. Lower bound
As stated above, to any low discrepancy sequence we can associate the above pair \((\mathcal{N},v)\) to ensure that \(\omega\) is a \((\mathcal{N},v)\)-d.s.s., and so by Proposition 8 \((\omega,\rho)\) is a local ubiquitous system for the functions (8.5); hence Theorem 6 is applicable for the lower bound of Theorem 19. Note that, strictly speaking, the functions \(\Psi\) in Theorem 19 should be of the form
\[\Psi(N)=\left(\left(\frac{(\log N)^{n}}{N}\right)^{\tau_{1}},\ldots,\left( \frac{(\log N)^{n}}{N}\right)^{\tau_{n}}\right).\]
However, it is easily seen that for any \(\theta>0\) and any \(0<\varepsilon<\theta\) we have
\[\left(\frac{1}{N}\right)^{\theta}<\left(\frac{(\log N)^{n}}{N}\right)^{\theta }<\left(\frac{1}{N}\right)^{\theta-\varepsilon}\]
for \(N\) sufficiently large, and so the Hausdorff dimension bound is the same.
From the conditions of Theorem 20 we can choose a sequence of natural numbers along which \(D_{N}(\omega)\ll N^{-1}\). Take a suitably sparse subsequence of these integers to be \(\mathcal{N}\) such that \(N_{k-1}N_{k}^{-1}\to 0\) as \(k\to\infty\). Paired with the function \(v(N)=cN^{-1}\), where \(c\) is the implied constant in the condition of Theorem 20, \(\omega\) is then a \((\mathcal{N},v)\)-d.s.s. for this pair. Hence we have a weighted ubiquitous system of rectangles for the functions
\[\rho_{i}(N)=\rho(N)^{a_{i}}=N^{-a_{i}}\quad(1\leq i\leq n),\]
for a vector \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in\mathbb{R}_{+}^{n}\) such that
\[\sum_{i=1}^{n}a_{i}=1.\]
We now consider both lower bound results in tandem. It remains to show that the formula given in Theorem 6 produces the lower bounds of Theorems 19 and 20. In order to apply Theorem 6, for any \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in\mathbb{R}_{+}^{n}\) such that \(a_{1}+\ldots+a_{n}=1\) and \(\boldsymbol{\tau}=(\tau_{1},\ldots,\tau_{n})\) as in Theorems 19 and 20, let us choose
\[\boldsymbol{t}=(t_{1},\ldots,t_{n})=(\tau_{1}-a_{1},\ldots,\tau_{n}-a_{n}).\]
Consider the two cases:
1. Suppose that \(\tau_{i}>\frac{1}{n}\) for all \(1\leq i\leq n\). Then set \[a_{i}=\frac{1}{n}\quad(1\leq i\leq n)\,.\] Now, if \(A=a_{i}\) for any \(1\leq i\leq n\) we have that \(\mathcal{K}_{1}=\{1,\ldots,n\}\) and so the lower bound formula is \(n\) in this case. If \(A=\tau_{j}\) for some \(1\leq j\leq n\) then \[\mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{i:\tau_{j}\geq\tau_{i}\}, \quad\mathcal{K}_{3}=\mathcal{K}_{2}^{c}\,,\]
and so \[\dim_{\mathrm{H}}W_{\omega}(\Psi) \geq\min_{1\leq j\leq n}\left\{\#\mathcal{K}_{2}+\frac{\sum_{i\in\mathcal{K}_{2}^{c}}a_{i}-\sum_{i\in\mathcal{K}_{2}}t_{i}}{\tau_{j}}\right\}\] \[\geq\min_{1\leq j\leq n}\left\{\#\mathcal{K}_{2}+\frac{\sum_{i=1}^{n}a_{i}-\sum_{i\in\mathcal{K}_{2}}(a_{i}+t_{i})}{\tau_{j}}\right\}\] \[\geq\min_{1\leq j\leq n}\left\{\frac{\sum_{i=1}^{n}a_{i}+\sum_{i\in\mathcal{K}_{2}}(\tau_{j}-(a_{i}+t_{i}))}{\tau_{j}}\right\}\] \[\geq\min_{1\leq j\leq n}\left\{\frac{1+\sum_{i:\tau_{j}\geq\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}}\right\}.\]
2. Assume that there exists \(\tau_{j}\) such that \(\tau_{j}<\frac{1}{n}\). Then without loss of generality suppose that \[\tau_{1}>\tau_{2}>\cdots>\tau_{n}\,.\] We want to choose \(1\leq u\leq n\) that solves \[u\times\widetilde{D}+\sum_{u<i\leq n}\tau_{i}=1\] for some \(\widetilde{D}>0\) with \(\tau_{u}>\widetilde{D}\). Pick \[\widetilde{D}=\frac{1-\sum_{u<i\leq n}\tau_{i}}{u}\] and note that \[\tau_{1}>\tau_{2}>\cdots>\tau_{u}>\widetilde{D}\geq\tau_{u+1}>\cdots>\tau_{n}.\] Hence pick \[a_{i}=\begin{cases}\widetilde{D}&(1\leq i\leq u),\\ \tau_{i}&(u+1\leq i\leq n).\end{cases}\] For \(A=a_{i}\) with \(1\leq i\leq u\) we have \[\mathcal{K}_{1}=\{1,\ldots,u\},\quad\mathcal{K}_{2}=\{u+1,\ldots,n\},\quad \mathcal{K}_{3}=\emptyset,\] and for \(A=a_{i}=\tau_{i}\) with \(u+1\leq i\leq n\) \[\mathcal{K}_{1}=\{1,\ldots,i\},\quad\mathcal{K}_{2}=\{i+1,\ldots,n\},\quad \mathcal{K}_{3}=\emptyset.\] In both cases we have a lower bound formula of \(n\). For each \(A=a_{j}+t_{j}=\tau_{j}\) with \(1\leq j\leq u\) we have that \[\mathcal{K}_{1}=\emptyset,\quad\mathcal{K}_{2}=\{j,\ldots,n\}=\{i:\tau_{j}\geq\tau_{i}\},\quad\mathcal{K}_{3}=\mathcal{K}_{2}^{c}.\] Notice that these are the same sets as in the conclusion of case (1), and so we obtain the same lower bound; thus the proof of Theorems 19 and 20 is complete.
### Proof of Proposition 8
Proposition 8 follows from combining Lemma 17 and Lemma 18 below.
**Lemma 17**.: _Let \(\rho=(\rho_{1},\ldots,\rho_{n})\) be an \(n\)-tuple of functions \(\rho_{i}:\mathbb{N}\to\mathbb{R}_{+}\) with each \(\rho_{i}(N)\to 0\) as \(N\to\infty\), and suppose that_
\[\prod_{i=1}^{n}\rho_{i}(N)=v(N)\quad(N\in\mathbb{N}).\]
_Then, for any ball \(B\subset[0,1]^{n}\) and all sufficiently large \(k\in\mathbb{N}\)_
\[\mu_{n}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq j\leq N_{k}}\Delta\left(\omega_ {j},\rho(N_{k})\right)\right)\geq\frac{1}{2}\mu_{n}^{\mathbb{R}}(B).\]
Proof.: Since \(\omega\) is uniformly distributed on \([0,1]^{n}\) for any \(\varepsilon>0\) there exists \(k_{\varepsilon}\in\mathbb{N}\) such that for all \(k>k_{\varepsilon}\)
\[\#A_{N_{k}}(\omega,B):=\#\left\{1\leq j\leq N_{k}:\omega_{j}\in B\right\}\geq N _{k}(\mu_{n}^{\mathbb{R}}(B)-\varepsilon). \tag{8.6}\]
From now on, assume \(k>k_{0}\) is such that (8.6) holds. Choosing \(\varepsilon<\frac{\mu_{n}^{\mathbb{R}}(B)}{2}\), we obtain

\[\#A_{N_{k}}(\omega,B)\geq\frac{N_{k}}{2}\mu_{n}^{\mathbb{R}}(B).\]
Now cover \(\frac{1}{2}B\) by disjoint rectangles \(\Delta(x_{j},\frac{1}{2}\rho(N_{k}))\), where \(\{x_{j}\}_{j\in K_{1}}\) is a finite collection of points with each \(x_{j}\in\frac{1}{2}B\). Note
\[\mu_{n}^{\mathbb{R}}\left(\Delta(x_{j},\tfrac{1}{2}\rho(N_{k}))\right)=\prod_{i=1}^{n}\rho_{i}(N_{k})=v(N_{k}).\]
Since \(\omega\) is a \((\mathcal{N},v)\)-Discrepancy satisfying sequence, note that
\[\left|\frac{\#A_{N_{k}}\left(\omega,\Delta(x_{j},\tfrac{1}{2}\rho(N_{k})) \right)}{N_{k}}-\mu_{n}^{\mathbb{R}}\left(\Delta(x_{j},\tfrac{1}{2}\rho(N_{k}) )\right)\right|<v(N_{k}),\]
hence \(\#A_{N_{k}}\left(\omega,\Delta(x_{j},\tfrac{1}{2}\rho(N_{k}))\right)\geq 1\) for each \(j\in K_{1}\). Thus to each \(x_{j}\) we can associate an \(\omega_{\ell(j)}\) for some \(1\leq\ell(j)\leq N_{k}\). Then note that
\[\Delta\left(\omega_{\ell(j)},\rho(N_{k})\right)\supset\Delta\left(x_{j}, \tfrac{1}{2}\rho(N_{k})\right),\]
and so
\[\mu_{n}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq j\leq N_{k}}\Delta(\omega_{j}, \rho(N_{k}))\right)\geq\frac{1}{2}\mu_{n}^{\mathbb{R}}(B)\,.\]
**Lemma 18**.: _For any ball \(B\subset[0,1]^{n}\) and all sufficiently large \(k\in\mathbb{N}\) (with the size of \(k\) dependent only on \(B\)), we have_
\[\mu_{n}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq j\leq N_{k-1}}\Delta(\omega_{j}, \rho(N_{k}))\right)\leq\frac{1}{4}\mu_{n}^{\mathbb{R}}(B).\]
Proof.: As in the proof of Lemma 17, we have that \(\omega\) is uniformly distributed and so for any \(\varepsilon>0\) there exists \(k_{\varepsilon}\in\mathbb{N}\) such that for all \(k>k_{\varepsilon}\)
\[\#A_{N_{k}}(\omega,B):=\#\left\{1\leq j\leq N_{k}:\omega_{j}\in B\right\}\leq N _{k}(\mu_{n}^{\mathbb{R}}(B)+\varepsilon).\]
Choose \(\varepsilon<\mu_{n}^{\mathbb{R}}(B)\), and so
\[\#A_{N_{k}}(\omega,B):=\#\left\{1\leq j\leq N_{k}:\omega_{j}\in B\right\}\leq 2 N_{k}\mu_{n}^{\mathbb{R}}(B).\]
for all \(k>k_{\varepsilon}\). Then, by a standard covering argument we have that
\[\mu_{n}^{\mathbb{R}}\left(B\cap\bigcup_{1\leq j\leq N_{k-1}} \Delta\left(\omega_{j},\rho(N_{k})\right)\right) \leq\#A_{N_{k-1}}(B)\mu_{n}^{\mathbb{R}}\left(\Delta\left( \omega_{j},\rho(N_{k})\right)\right)\] \[=\#A_{N_{k-1}}(B)2^{n}v(N_{k})\] \[\leq N_{k-1}2^{n+1}\mu_{n}^{\mathbb{R}}(B)v(N_{k})\] \[\leq 2^{-2}\mu_{n}^{\mathbb{R}}(B).\]
**Acknowledgments:** We thank Johannes Schleischitz and Dong Han Kim for useful discussions.
|
2309.06739 | MCNS: Mining Causal Natural Structures Inside Time Series via A Novel
Internal Causality Scheme | Causal inference permits us to discover covert relationships of various
variables in time series. However, in most existing works, the variables
mentioned above are the dimensions. The causality between dimensions could be
cursory, which hinders the comprehension of the internal relationship and the
benefit of the causal graph to the neural networks (NNs). In this paper, we
find that causality exists not only outside but also inside the time series
because it reflects a succession of events in the real world. It inspires us to
seek the relationship between internal subsequences. However, the challenges
are the hardship of discovering causality from subsequences and utilizing the
causal natural structures to improve NNs. To address these challenges, we
propose a novel framework called Mining Causal Natural Structure (MCNS), which
is automatic and domain-agnostic and helps to find the causal natural
structures inside time series via the internal causality scheme. We evaluate
the MCNS framework and impregnation NN with MCNS on time series classification
tasks. Experimental results illustrate that our impregnation, by refining
attention, shape selection classification, and pruning datasets, drives NN,
even the data itself preferable accuracy and interpretability. Besides, MCNS
provides an in-depth, solid summary of the time series and datasets. | Yuanhao Liu, Dehui Du, Zihan Jiang, Anyan Huang, Yiyang Li | 2023-09-13T06:15:37Z | http://arxiv.org/abs/2309.06739v1 | # MCNS: Mining Causal Natural Structures Inside Time Series via A Novel Internal Causality Scheme
###### Abstract
Causal inference permits us to discover covert relationships of various variables in time series. However, in most existing works, the variables mentioned above are the dimensions. The causality between dimensions could be cursory, which hinders the comprehension of the internal relationship and the benefit of the causal graph to the neural networks (NNs). In this paper, we find that causality exists not only _outside_ but also _inside_ the time series because it implies the succession of events in the real world. It inspires us to seek the relationship between internal subsequences. However, the challenges are the hardship of discovering causality from subsequences and utilizing the causal natural structures to improve Neural Networks. To address these challenges, we propose a novel framework called Mining Causal Natural Structure (MCNS), which is _automatic_ and _domain-agnostic_ and helps to find the causal natural structures inside time series via the internal causality scheme. We evaluate the MCNS framework and integrate NN with MCNS on time series classification tasks. Experimental results illustrate that our impregnation, by refining attention, shape selection classification, and pruning datasets, drives NN, even the data itself preferable accuracy and interpretability. Besides, MCNS provides an in-depth, solid summary of the time series and datasets.
## 1 Introduction
Time series data, such as medical electrocardiograms and financial data, have played an essential role in society. Furthermore, the possibility of making causal inferences [16, 17] in time series data greatly appeals to social and behavioural scientists and has been widely used in a plethora of applications. However, classical causal discovery [1] approaches in time series usually treat the time series as a whole, making it problematic to find causal relationships inside a time series.
A rich body of research has been proposed to seek causal relations in structured multivariate time series data. For example, most works suggested leveraging the concept of _Granger_ causality [14, 15, 16], and some other works proposed to rely on the idea of _Pearl_ causality in i.i.d multivariate time series data [1, 18]. However, those works focus on relations between dimensions, whereas we find that causal relationships can be probed at a more profound, in-depth level. There is causality not only outside (in the series as a whole) but also inside the time series (in its subsequences). The causal natural structure inside the time series is crucial for causal inference.
Actually, discovering causal relationships inside time series is also valuable or vital for making decisions. For instance, when a medical AI system assists doctors in dealing with the classification of diseases in the fetal electrocardiogram (ECG), causal inference could help to figure out the exact distinguishable subsequences (symptoms) crucial for accurate and explainable diagnosis. As shown in Figure 1, if the system can spot two crucial points, (1) the cause chain of the disease from a given specific fetal ECG, and (2) obtaining causal natural structures from the fetal ECG database, then the predicted disease can be more convincing and helpful, and it also becomes straightforward to locate errors in the AI, rather than receiving a label from a black box. In practice, we expect a medical AI system to provide human-readable and sound explanations to support doctors in making the right decisions. It is worthwhile, especially for underdeveloped areas, where such techniques could help the doctors of rural areas with more reliable
Figure 1: An example of a causal natural structure obtained from MCNS for the fetal ECG. The above is from a specific fetal ECG, and the below is from the whole dataset.
references from previous cases. Furthermore, our approach is domain agnostic, which would be applied to other domains, such as autonomous driving, the financial field, etc. Therefore, an intuitive idea we want to explore is, can we discover causality, not from the relation between dimensions, but from inside specific time series of one dimension? However, there are two challenges: (1) How to discover causality inside time series? (2) If there is a causal relationship inside the time series, how to leverage it to benefit neural networks?
To deal with these issues, we propose a novel framework called Mining Causal Natural Structures (MCNS). We discover representative subsequences called snippets from the time series and utilize snippets to encode the initial time series into a binary sequence for discretizing a continuous time series. Then, we use a Greedy Fast Causal Inference (GFCI) algorithm to seek causal relations between snippets and construct an inside causal graph. It is worth mentioning that, unlike most related work that requires domain knowledge, MCNS is _domain-agnostic_, which greatly enhances the generalization of our approach.
Based on the above explorations, we do not follow the existing causal graph construction approach that requires pruning or constructing by domain experts, which is a non-automated causal discovery. We impose two restrictions on the GFCI algorithm so that it can _automatically_ prune causal graphs. After that, we determine the final causal natural structure using the Bayesian Information Criterion (BIC) and calculate causal strength on edges using Propensity Score Matching (PSM) [12] and Average Treatment Effect (ATE) [13].
For the second challenge, we impregnate Deep Neural Networks (DNNs) with causal graphs generated by MCNS. The first usage is inspired by [14, 15], which confirmed that attention could not correctly communicate the relative importance of inputs. Hence, we employ causal strength to refine attention to be more precise. Secondly, we leverage the MCNS to select shapes to classify time series, similar to the shapelets-based classification method but more explainable and accurate. Additionally, we prune the dataset with the portion containing causality, which leaves the most critical part and results in more accuracy and efficiency.
Our evaluation based on the PyTorch framework with the UCR dataset demonstrates that our MCNS can successfully inject the extracted causal knowledge into deep neural networks and improve NN's performance extensively, especially accuracy and interpretability.
In summary, our main contributions are as follows:
* We propose a novel framework for mining causal natural structures (MCNS) inside time series, which is both domain-agnostic and automatic.
* We investigate training popular neural network models with our causal natural structures obtained from MCNS. It boosts neural network models to realize causal knowledge emanating from MCNS.
* Experimental results illustrate that our MCNS can effectively enhance NN models for better performance in time series of various domains and scales. It can also help improve the interpretability of neural networks and the time series itself.
## 2 Related Work
**Causal Inference in Time Series.** There has been a propensity toward creating algorithms for causal inference on time series data. A mainstream of works is based on domain knowledge, artificially constructing causal graphs to solve time series problems in a particular field [11, 12]. Despite the success of these works in their respective fields, they involve bringing in domain experts to build causality relations rather than automating the discovery of causality in time series. Moreover, for mining the causality in the time series, some works use Granger causality to analyze the time series [12, 13, 14]. Although these works use it to explore interactions inside time series, they actually investigate the causality between time series dimensions; furthermore, Granger causality only means causality in the statistical sense and cannot judge the internal mechanism of a time series. To our knowledge, ours is the first work on univariate time series causal discovery. We take causality between subsequences into account, in other words, mining causal natural structures inside time series, which is the main novelty of our work.
**Time Series Natural Structure.** Finding natural structure in time series is a significant issue. Some works attempted to solve this problem using probabilistic models, such as Autoplait [16] and TICC [10]. Some works also used change point detection [1, 11, 12, 13]. However, existing work fails to illustrate the profound relations between subsequences. In this paper, we utilize causality to address the time series natural structure discovery problem. We propose a novel causal discovery framework to construct causal natural structures from univariate time series.
**Causal Inference for Neural Networks.** Recently, researchers have attempted to study relations between causality and neural networks [1, 14, 15, 16]. Since causality is widely used, various studies have applied causal structures as a part of a NN or applied NNs for causal discovery. However, most directly propose a causal graph or merely utilize causal thinking. In this paper, we use inside causal graphs obtained from MCNS to enhance NN models in time series, which boosts their accuracy and interpretability.
## 3 Approach
Our mining causal natural structures framework has three components: finding critical data in time series, constructing the inside causal graph, and calculating causal strength. Figure 2 shows the overall architecture of our approach.
### Problem Definition
MCNS is used to find a causal natural structure \(\mathbb{S}\) in subsequences \(T_{i,m}\) from given time series \(T\) as follows:
* A time series \(T\) is a sequence of real-valued numbers \(t_{i}:T=t_{1},t_{2},\ldots,t_{n}\), (with an optional label \(l_{T}\) for classification tasks), where \(n\) is the length of \(T\).
* A subsequence \(T_{i,m}\) of a time series \(T\) is a continuous subset of the values from \(T\) of length \(m\) starting from position \(i\). Formally, \(T_{i,m}=t_{i},t_{i+1},\ldots,t_{i+m-1}\), where \(1\leq i\leq n-m+1\).
* A causal natural structure \(\mathbb{S}\) inside time series \(T\) is a 4-tuple \(<S_{sub},l_{T},\psi,C>\), composed of the subsequence set \(S_{sub}\), the optional label \(l_{T}\), the causal relations \(\psi\), and the causal strength \(C\).
### Finding Critical Data in Time Series
First, we should find critical data representing the entire time series to discover the event in the real world behind it.
To begin with, we need to determine how to set the subsequence length \(l\). Since the subsequence length corresponds to the time span of the events occurring, and similar real-world events often have periodicity, it is desirable that \(l\) is equal to the length of the intrinsic period of the time series \(T\). For example, concerning the fetal ECG data shown in Figure 1, \(l\) should be around the duration of a single fetal heartbeat. We adopt the popular Fast Fourier Transform (FFT) [13] as a solution. Time series \(T\) is converted into the frequency domain, extracting the dominant frequency \(f\). The subsequence length \(l\) is guided by \(1/f\). In Section 5, we will explore the effectiveness of our approach.
Additionally, to determine a common subsequence length for the complete dataset, we calculate the subsequence length \(l_{T}\) for each time series \(T\) in the dataset using FFT and employ the maximum of the \(l_{T}\) as the unified subsequence length.
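A minimal sketch of this FFT-based length selection, assuming a unit sampling rate and using only numpy (the function names are ours, not from the original implementation):

```python
import numpy as np

def subsequence_length(series):
    """Estimate the intrinsic period of one series: take the dominant
    non-DC frequency f of the FFT and return round(1/f) samples."""
    series = np.asarray(series, dtype=float)
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    freqs = np.fft.rfftfreq(len(series), d=1.0)
    f = freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
    return max(2, int(round(1.0 / f)))

def unified_length(dataset):
    """Dataset-level length: the maximum of the per-series estimates."""
    return max(subsequence_length(t) for t in dataset)
```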
To extract representative subsequences, we discover \(k\) snippets \(s_{T}\) from each time series \(T\) using the time series snippets algorithm [16], which is domain agnostic to guarantee MCNS can be applied to datasets in any domain.
### Constructing Inside Causal Graph
In order to construct the causal graph, we should determine the factors, assemble the edges between factors, impose the constraints on edges, and obtain the final causal graph.
#### 3.3.1 Determine the Factors
To merge similar subsequences, we cluster the subsequences obtained in the previous step into \(n\) classes using the \(k\)-shape clustering algorithm [10]. These classes represent the events mentioned above, as reflected by this dataset. The \(n\) class factors and the (optional) label \(l_{T}\) constitute the factors of the causal graph. Each time series \(T\) can be expressed as a binary sequence: if \(T\) contains the corresponding factor, the value of this factor is 1, and 0 otherwise [15]. This binary sequence represents the time series by events (e.g., Bradycardia, Arrhythmia, Fetal Distress in Figure 1).
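A sketch of this encoding step under the assumption that the snippets have already been clustered, so that `snippet_labels` holds the factor index of each snippet of a series; all names are ours:

```python
import numpy as np

def encode_binary(snippet_labels, n_factors, label=None, n_labels=0):
    """Binary event vector of one series: entry c is 1 iff some snippet
    of the series falls into factor cluster c; the optional
    classification label is appended one-hot."""
    vec = np.zeros(n_factors + n_labels, dtype=int)
    for c in snippet_labels:
        vec[c] = 1
    if label is not None:
        vec[n_factors + label] = 1
    return vec
```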
#### 3.3.2 Assemble the Edges Between Factors
The following step is to establish the edges, which denote the causal relationships between factors. We choose the GFCI algorithm [10] to detect causal relations and infer without causal sufficiency. GFCI permits us to make causal inferences in the presence of confounding variables and outputs a Partial Ancestral Graph (PAG). It offers us a preliminary inside causality between factors.
#### 3.3.3 Impose the Constraints on Edges
Additionally, we put two kinds of constraints on the graph to refine it. The first bans edges from label factors to other factors, because classification labels are not involved in actual events. The second is that an effect does not precede its cause [1]. What happens earlier in the time series leads to what happens later. For most time series, provided that factor \(X\) appears after \(Y\), we should ban the edge from \(X\) to \(Y\).
#### 3.3.4 Obtain the Final Causal Graph
The PAG obtained in the last part contains four edge types [11], which are \(\rightarrow,\leftrightarrow,\circ\rightarrow,\circ-\circ\). Among them, \(X\to Y\) denotes \(X\) causes \(Y\), and \(X\leftrightarrow Y\) denotes that there is an unobserved confounder of \(X\) and \(Y\). So \(\rightarrow\) edges are retained and \(\leftrightarrow\) edges are removed. For the remaining two cases, \(X\circ\to Y\) denotes either \(X\to Y\) or \(X\leftrightarrow Y\), and \(X\circ-\circ Y\) denotes either \(X\to Y\), \(Y\to X\) or \(X\leftrightarrow Y\). There is no way to get a true probability, so we run a bootstrapping algorithm to determine the final causal graph. Each
Figure 2: Overview of the MCNS approach. In step 1, time series snippets as representative data (thick black circles) are mined from the entire time series. In step 1(a), snippets are clustered as different factors, and an optional label is added at the end. In steps 1(b) and 1(c), edges are assembled by the GFCI algorithm, and edges containing a hollow circle (e.g., \(\circ\rightarrow\) and \(\circ-\circ\)) are determined as arrows without hollow circles (e.g., \(\rightarrow\) and \(none\)). Edges are pruned by constraints. The final causal natural structure is pruned by the Bayesian information criterion in step 1(d). Causal strength is calculated in step 3.
case is given the same probability for the two uncertainties mentioned above. We employ the Bayesian Information Criterion (BIC) [23] to estimate the quality of each graph \(G_{n}\), measured by its fitness with the time series \(T\).
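A minimal sketch of this disambiguation step under simplifying assumptions of ours: each ambiguous \(\circ\!\rightarrow\) edge is resolved uniformly at random (kept or dropped; a \(\circ-\circ\) edge would add a third, reversed reading), and each candidate graph is scored with a discrete BIC based on Bernoulli likelihoods of every binary factor given its parents. This is one plausible reading, not the authors' exact implementation:

```python
import numpy as np

def discrete_bic(data, parents):
    """BIC score of a candidate DAG over the binary factor matrix `data`
    (rows = time series, columns = factors/label). `parents[j]` lists the
    parent columns of node j; each node is modelled as Bernoulli given
    its parent configuration. Lower is better."""
    n, ll, k = len(data), 0.0, 0
    for j, pa in enumerate(parents):
        configs = data[:, pa]                           # shape (n, len(pa))
        for cfg in set(map(tuple, configs)):
            rows = data[np.all(configs == cfg, axis=1), j]
            p = np.clip(rows.mean(), 1e-6, 1 - 1e-6)    # Bernoulli MLE
            ll += rows.sum() * np.log(p) + (len(rows) - rows.sum()) * np.log(1 - p)
            k += 1
    return k * np.log(n) - 2.0 * ll

def resolve_pag(data, fixed, ambiguous, n_nodes, trials=100, seed=0):
    """Bootstrap over readings of the ambiguous edges and keep the
    BIC-optimal parent sets; `fixed` and `ambiguous` are lists of
    directed (cause, effect) pairs."""
    rng = np.random.default_rng(seed)
    best_score, best_parents = np.inf, None
    for _ in range(trials):
        edges = list(fixed)
        for (a, b) in ambiguous:
            if rng.random() < 0.5:      # keep X -> Y, else treat as X <-> Y
                edges.append((a, b))
        parents = [[a for (a, b) in edges if b == j] for j in range(n_nodes)]
        score = discrete_bic(data, parents)
        if score < best_score:
            best_score, best_parents = score, parents
    return best_parents
```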
### Calculating Causal Strength
Even after we have gone through the above steps, the resulting inside causal graph is still noisy. We calculate causal strength on edges to further refine the causal graph. High strength is allocated to edges with a high causal effect. Similarly, meager strength is allocated to edges with low causal effects.
We utilize propensity score matching to measure the average treatment effect \(\phi_{T,Y}\), which denotes the causal strength of \(T\to Y\). In this paper, it represents the effect of changing a \(1\) in our binary sequence to \(0\) through the do-calculus on the classification result, that is to say, the effect of a subsequence on classification:
\[\phi_{T,Y} =E[Y\mid do(T=1)]-E[Y\mid do(T=0)] \tag{1}\] \[=\left[\sum_{t_{i}=1}\Delta_{i,j}-\sum_{t_{i}=0}\Delta_{i,j} \right]/N \tag{2}\]
where the do-calculus \(do(T=1)\) denotes intervening on \(T\) and setting its value to 1. \(j\) represents the most similar instance to \(i\) in the opposite group, and \(\Delta_{i,j}\) means the difference between the outcome values of instances \(i\) and \(j\).
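A sketch of equation (2) under our reading: the treatment is the presence bit of one factor, the outcome is the class label, and each instance is matched to its nearest neighbour in the opposite group using the remaining binary covariates. A full PSM implementation would match on estimated propensity scores instead; all names are ours:

```python
import numpy as np

def ate_by_matching(X, t, y):
    """Estimate E[Y | do(T=1)] - E[Y | do(T=0)] by 1-nearest-neighbour
    matching, following eq. (2): for each instance i, find its closest
    match j in the opposite group and accumulate Delta_{i,j} with a plus
    sign for treated units and a minus sign for controls, then divide by
    N. Assumes both groups are non-empty."""
    X, t, y = np.asarray(X), np.asarray(t), np.asarray(y)
    treated, control = np.flatnonzero(t == 1), np.flatnonzero(t == 0)
    total = 0.0
    for i in range(len(t)):
        pool = control if t[i] == 1 else treated
        j = pool[np.argmin(np.abs(X[pool] - X[i]).sum(axis=1))]
        delta = y[i] - y[j]                 # Delta_{i,j}
        total += delta if t[i] == 1 else -delta
    return total / len(t)
```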
### Impregnation DNN with MCNS
Recently, some effort has been made to exploit DNNs, especially recurrent neural networks (RNNs), for time series prediction and classification at different scales. However, the application of causal graphs to benefit deep neural networks in time series has been limited. We propose three methods to impregnate DNNs with MCNS, as shown in Figure 3.
**Refine Attention with Causal Strength**
Attention has become an effective mechanism for superior results, as demonstrated in time series prediction and classification. However, some prior work substantiates that there is a gap between attention and the relative importance of inputs [17, 23]: attention cannot wholly explain the relative importance of inputs.
Causal strength can be exploited to improve it. We utilize a Long Short-Term Memory (LSTM) with attention model [22]. The input vector \(X_{t-n}\cdots X_{t-3},X_{t-2},X_{t-1},X_{t}\) comprises the \(n\) multi-dimensional feature vectors up to the time to be predicted. The hidden layer of the LSTM processes the input vectors into intermediate states. The attention coefficient is obtained from the hidden layer of the last moment of another LSTM network in the decoder. Finally, vector \(C\) passes through the fully connected layer to calculate the predicted result vector \(\varrho\).
\[e_{ij}=\nu\tanh\left(W\cdot h_{j}+U\cdot h_{i-1}^{\prime}+b\right) \tag{3}\]
\[a_{ij}=\frac{\exp\left(e_{ij}\right)}{\sum_{k=t-n}^{t}\exp\left(e_{ik}\right)} \tag{4}\]
\[C=\sum_{j=t-n}^{t}a_{ij}h_{j} \tag{5}\]
where \(e_{ij}\) is the relation score between \(h_{i-1}^{\prime}\) and \(h_{j}\). \(a_{ij}\) is the attention coefficient corresponding to \(e_{ij}\). After that, the obtained attention coefficient is assigned to different middle layer states \(h_{j}\) and summed to obtain vector \(C\) input to the decoder.
We refine attention using an additional loss function \(H_{cau}\). The extra loss function guides attention toward the causal strength from MCNS. Specifically, \(\sum_{q=1}^{Q}\mathrm{BIC}\left(G_{q},\mathbf{X}\right)\times\phi_{T,Y}\) is the causal strength corresponding to each factor, and \(\zeta_{i}\) is the normalized strength over the whole time series. For the initial LSTM model, \(H(p,q)\) denotes the cross-entropy loss on \(\varrho\). Hence, \(H_{cau}\) and \(H(p,q)\) can be denoted as follows:
\[H(p,q)=-\sum_{i=1}^{n}p\left(x_{i}\right)\log\left(q\left(x_{i}\right)\right) \tag{6}\]
\[H_{cau}=\sum_{i=1}^{n}\lvert a_{ij}-\zeta_{i}\rvert \tag{7}\]
To sum up, we set the updated loss function of the model as follows:
\[L=\alpha H(p,q)+\beta H_{cau} \tag{8}\]
where \(\alpha+\beta=1\). It is worth mentioning that each factor represents a subsequence; the causal strength of the factor is assigned to all the time steps in this subsequence.
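A sketch of the combined objective (8) in PyTorch, assuming the model exposes its per-step attention weights and that `zeta` holds the normalized causal strengths already broadcast from factors to time steps; the weights \(\alpha,\beta\) are hyperparameters with \(\alpha+\beta=1\):

```python
import torch
import torch.nn.functional as F

def mcns_loss(logits, labels, attn, zeta, alpha=0.7, beta=0.3):
    """Combined objective L = alpha * H(p, q) + beta * H_cau (eqs. 6-8).

    attn, zeta: tensors of shape (batch, seq_len); zeta carries the
    normalized causal strength of each time step (every step of a snippet
    shares its factor's strength). alpha + beta = 1; values are ours."""
    ce = F.cross_entropy(logits, labels)            # eq. (6)
    h_cau = (attn - zeta).abs().sum(dim=1).mean()   # eq. (7), averaged over the batch
    return alpha * ce + beta * h_cau                # eq. (8)
```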
**Shape Causal Selection Based Classification**
Causal inference explores how changes in variable \(X\) affect another variable \(Y\). When we set variable \(X\) as the shapes inside the time series and variable \(Y\) as the classification label, we can recognize that shapes affect the classification results.
Figure 3: Three usages of impregnating neural networks with causal inference. _Refine Attention with Causal Strength_ (Above I), _Shape Causal Selection Based Classification_ (Middle II), and _Dataset Prune with Causality_(Below III). The step with an asterisk is the core step of impregnation DNN with MCNS.
Hence, our causal natural structure from MCNS depicts the classification process of time series.
In other words, MCNS may contain crucial information for time series classification. Hence, we use the factors and causal relations in MCNS to guide the classification process in the neural network. Inspired by [14], we leverage causal relations and snippets [12] to classify time series.
For input time series \(T\), we discover \(k\) snippets \(s_{iT}\), and concat snippets as a representation \(r_{T}\) of a time series like shapelets:
\[r_{T}=\operatorname{concat}\left(s_{1T},s_{2T},\dots,s_{kT}\right) \tag{9}\]
Furthermore, we draw on the causal graph to select snippets. If any snippets of time series belong to causal graph factors that affect the label, we choose them to represent the real content related to classification. If not, we utilize the initial \(r_{T}\):
\[r_{T}=\begin{cases}\operatorname{concat}\left(s_{jT}\right)&\text{ if }s_{jT}\in S_{sub},j\geq 1\\ r_{T}&\text{ otherwise}\end{cases} \tag{10}\]
We mask the parts shorter than the maximum length and use them as input to the LSTM or to other classifiers, such as \(k\)-nearest neighbors, depending on the experimental setting. Hence, the neural network or traditional classifier can comprehend the nature of the input.
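A sketch of the representation of equations (9)-(10), assuming `causal_factors` is the set of factor indices with an edge into the label in the MCNS graph (names ours):

```python
import numpy as np

def causal_representation(snippets, snippet_factors, causal_factors):
    """Build r_T (eqs. 9-10): keep only the snippets whose factor cluster
    has a causal edge into the label; if none qualify, fall back to the
    plain concatenation of all k snippets."""
    chosen = [s for s, f in zip(snippets, snippet_factors)
              if f in causal_factors]
    if not chosen:                 # eq. (10), "otherwise" branch
        chosen = snippets
    return np.concatenate(chosen)  # pad/mask to max length downstream
```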
**Dataset Pruning with Causality**
Not every time series in the dataset helps the NN to learn its features; some data may be redundant or harmful [1]. However, causality can reveal the time series that matter for classification. We employ causality to prune the dataset.
To prune a time series dataset \(\varsigma=\{T_{1},T_{2},\dots,T_{m}\}\), we first discover the MCNS \(S_{\varsigma}=<S_{sub},l_{T},\psi,C>\) on the whole dataset and \(S_{T_{i}}=<S_{i},l_{T_{i}},\psi_{i},C_{i}>\) on each time series \(T_{i}\).
Each time series \(T_{i}\) in the dataset is treated as follows:
\[T_{i}=\begin{cases}T_{i}&\text{if }<a_{ij},l_{T}>\in\psi,a_{ij}\in S_{i},i \leq m\\ None&\text{otherwise}\end{cases} \tag{11}\]
Equation (11) means that some time series are abandoned because their causal factors do not affect the classification label. The dataset is thus pruned by the operations above.
Afterward, we obtain the essential data and input it to the LSTM or other neural networks. Therefore, the neural network can comprehend the entire dataset precisely through this essential data.
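Equation (11) amounts to a simple filter over the dataset; a hedged sketch, with `label_edges` denoting the factors that have an edge into the label in the dataset-level graph \(\psi\) (names ours):

```python
def prune_dataset(dataset, per_series_factors, label_edges):
    """Equation (11): keep series T_i iff one of its factors a satisfies
    <a, l_T> in psi, i.e. a has an edge into the label at dataset level."""
    return [T for T, factors in zip(dataset, per_series_factors)
            if any(f in label_edges for f in factors)]
```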
## 4 Experiment
### Datasets
To illustrate that MCNS can be applied to datasets of different scales and multiple domains, we conduct several experiments on six benchmark datasets from the UCR time-series archive [1], which come from the electric power, biology, behavior, food spectrograph, and automotive subsystem domains. The details of the datasets are given in Table 1.
### Experimental Setup
**Our Models.** In this paper, we evaluate our MCNS framework as described in Sections 3.2-3.4 and three models that impregnate NNs with MCNS (LSTM+Att+MCNS, MCNS, and CausalPrune) as described in Section 3.5.
**Parameter Settings.** We employ LSTM as the main body, uniformly set with 2 hidden layers, 128 neurons in each layer, and one fully connected layer connected to the output function. Moreover, we find 5 snippets for each time series, with the length determined by the FFT-based method.
**Comparison Models.** Because no previous work has found causality in univariate time series, we compare the three MCNS-based models with NN baselines and shape-based methods, including LSTM, LSTM+Att, and Shapelets [1]. LSTM+Att is a standard model for processing time series. Since prior knowledge may result in an unfair comparison, we do not add expert knowledge, keeping our MCNS free of domain expert involvement.
**Other Settings.** The experiments are conducted on Windows 10, with an Intel Xeon Silver 4210R CPU and an NVIDIA Tesla T4 GPU.
### Main Results
In this section, we investigate the classification accuracy and f1-score of our applications. Each set of experiments was repeated five times for MCNS, which is randomized. The main experimental results are shown in Table 2.
**Attention vs. No Attention.** We can see that LSTM+Att outperforms LSTM by around 3-4% on average Acc and F1. However, sometimes the addition of attention makes the model less effective, because attention does not always enhance the features that affect the results. The above suggests that attention helps neural network models capture part of the crucial information in the time series, but sometimes something else is needed.
**Attention vs. Attention + MCNS.** Furthermore, we can find that LSTM+Att+MCNS outperforms LSTM+Att by around 6-8% on average Acc and F1. The performance gap is related to the size of the dataset. The above illustrates that
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**Dataset** & **Type** & **Abbr** & **Brief Description** & **Train** & **Test** & **Prune** & **Class** \\ \hline PowerCons & Power & PC & Electric power consumption & 180 & 180 & 171 & 2 \\ ECGFiveDays & ECG & EFD & ECG in two days & 23 & 861 & 14 & 2 \\ FordA & Sensor & FA & A car certain symptom(without noise) & 3601 & 1320 & 2056 & 2 \\ FordB & Sensor & FB & A car certain symptom(with noise) & 3636 & 810 & 2190 & 2 \\ Strawberry & Spectro & Sb & Food spectrographs & 613 & 370 & 442 & 2 \\ SmallKitchenAppliances & Device & SKA & Behavioural data & 375 & 375 & 368 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary of the benchmark datasets. The _Prune_ column states the size of the pruned train set for the CausalPrune method.
causal strength is helpful for attention-based models to discover core content in the time series. What is not explained in the table is that we find that _LSTM+Att+MCNS_ converges much faster than _LSTM+Att_, which may be because attention has received the correct guidance.
**Causal Inference vs. Neural Networks.** Comparing MCNS with the NN baselines LSTM and LSTM+Att, we observe that in few-shot settings \((1\%,5\%)\), MCNS outperforms NNs by about 6-7% on average Acc and 12-18% on average F1, since NNs tend to underfit in few-shot settings. However, with the increase in training data, the performance gap becomes narrower, and consequently, NNs outperform MCNS in several cases. Compared with MCNS, NNs have the advantage of learning from large amounts of data.
**MCNS vs. Shapelets.** Similarly, MCNS and Shapelets both select discriminative subsequences for time series classification. Comparing the two under identical settings except for the subsequence selection, we observe that MCNS outperforms Shapelets by about 14% on average Acc and 17.31% on average F1, since MCNS is better than Shapelets at capturing how subsequences affect the classification results.
**CausalPrune vs. No Prune.** The pruned size of each training set is shown in Table 1. We compare CausalPrune with the no-prune setting across datasets of different scales. The datasets are pruned in different proportions,
\begin{table}
[Table 2 (truncated in the source): average classification Acc\%/F1\% of LSTM, LSTM+Att, LSTM+Att+MCNS, and MCNS on the six datasets (PC, EFD, FA, FB, Sb, SKA) at training ratios of 1\%, 5\%, 10\%, 30\%, 50\%, 80\%, and 100\%, plus the CausalPrune setting; best results in bold.]
\end{table}
which is related to the size of the dataset. Furthermore, after pruning, LSTM's average Acc and F1 increase by about 13-15%, which illustrates that our CausalPrune method can discard harmful and redundant data.
**MCNS as Representations.** Additionally, MCNS can represent time series datasets or specific time series. As shown in Figure 4, one significant use of MCNS is to replace standard folder icons with MCNS graphs that show critical data and relations reflecting a dataset's content. For labeled time series datasets, we can see why different time series are categorized into different classes and which features are essential. For unlabeled time series datasets, we can see representative subsequences and the causality among them, allowing an analyst to spot patterns and anomalies at a glance. Furthermore, by discarding factors that do not appear in a specific time series, we obtain similar representations, as in Figure 5.
## 5 Influence of Parameters
We evaluate the impact of parameter settings on MCNS. Specifically, the parameters \(l\) and \(k\) denote the length and number of snippets, respectively. Without cherry-picking, we randomly chose two datasets, FordA and Strawberry, and explored the effectiveness of our approach on them. All results are averaged over five runs.
We propose a novel metric, the causal information ratio (\(CIR\)), to evaluate the above parameters:
\[CIR=\frac{\tau}{n},\quad\tau\leq n, \tag{12}\]
where \(n\) is the number of clusters, and \(\tau\) is the number of causal factors pointing to the classification label. The \(CIR\) measures the ability of a causal graph to represent the original data or dataset.
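A direct implementation of Eq. (12) is straightforward (a sketch assuming a networkx-style directed causal graph; the function and variable names are ours):

```python
def causal_information_ratio(causal_graph, clusters, label_node) -> float:
    """CIR = tau / n: the fraction of the n snippet clusters that have a
    causal edge pointing to the classification label node."""
    n = len(clusters)
    tau = sum(1 for c in clusters if causal_graph.has_edge(c, label_node))
    return tau / n
```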
### The Length of Snippets
Recall that we set \(l\) using an FFT-based method. Let the value recommended by this method be \(L\). We vary \(l\) from \(0.5L\) to \(5L\) in steps of \(0.5L\), with all other parameters fixed, and evaluate the ratio \(CIR\). Figure 6 shows the \(CIR\) for varying \(l\), where the asterisk marks the recommended length. As shown, when \(l\) increases beyond \(L\), the causal information ratio gradually decreases and fluctuates, with the best ratio obtained at \(l=L\). Additionally, on \(Strawberry\) we cannot find snippets of length \(5L\) because that length is too large. These results validate the effectiveness of our FFT-based approach.
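One plausible reconstruction of such an FFT-based recommendation is to take the dominant period of the mean-removed series as \(L\); the exact procedure in MCNS may differ. A minimal sketch:

```python
import numpy as np

def fft_recommended_length(x: np.ndarray) -> int:
    """Recommend a snippet length L as the dominant period of the series,
    estimated from the largest non-DC peak of the amplitude spectrum."""
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size)       # cycles per sample
    k = 1 + np.argmax(spectrum[1:])       # skip the zero-frequency bin
    return max(2, int(round(1.0 / freqs[k])))
```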
### The Number of Snippets
Recall that the number of snippets \(k\) is not fixed a priori. For a given time series length \(n\), we vary \(k\) to test the sensitivity of MCNS to this parameter. As shown in Figure 7, our method is relatively sensitive to \(k\) (the asterisk marks the recommended number), because the specific value of \(k\) affects the amount of information available. Hence, setting an appropriate \(k\) is essential. Based on our empirical results, for most real-world datasets, such as the two chosen at random here, it is appropriate to set \(k\) to around 5.
## 6 Conclusion
Mining causal natural structures inside time series is a challenging problem. To discover the causal natural structures inside time series data, we propose a novel framework called MCNS. It benefits neural networks by refining attention, enables classification based on causal shape selection, and supports dataset pruning. Extensive experimental results on six real-world datasets from various domains and scales demonstrate the feasibility and generalization of our approach. Future work will apply MCNS to multidimensional time series and integrate MCNS into diverse NNs. Furthermore, MCNS can naturally benefit other fields, such as reinforcement learning and adversarial attack.
Figure 4: MCNS representation of labeled time series datasets (left) and unlabelled time series datasets (right), which allows researchers to discover the features and relationships of datasets at a glance.
Figure 5: Two examples of specific labeled (left) and unlabeled (right) data, showing the simplified representation of the causal natural structures from MCNS on each time series. |
2309.07427 | Measuring Higher-Order Rationality with Belief Control | Determining an individual's strategic reasoning capability based solely on
choice data is a complex task. This complexity arises because sophisticated
players might have non-equilibrium beliefs about others, leading to
non-equilibrium actions. In our study, we pair human participants with computer
players known to be fully rational. This use of robot players allows us to
disentangle limited reasoning capacity from belief formation and social biases.
Our results show that, when paired with robots, subjects consistently
demonstrate higher levels of rationality and maintain stable rationality levels
across different games compared to when paired with humans. This suggests that
strategic reasoning might indeed be a consistent trait in individuals.
Furthermore, the identified rationality limits could serve as a measure for
evaluating an individual's strategic capacity when their beliefs about others
are adequately controlled. | Wei James Chen, Meng-Jhang Fong, Po-Hsuan Lin | 2023-09-14T04:50:35Z | http://arxiv.org/abs/2309.07427v1 | # Measuring Higher-Order Rationality with Belief Control+
###### Abstract
Determining an individual's strategic reasoning capability based solely on choice data is a complex task. This complexity arises because sophisticated players might have non-equilibrium beliefs about others, leading to non-equilibrium actions. In our study, we pair human participants with computer players known to be fully rational. This use of robot players allows us to disentangle limited reasoning capacity from belief formation and social biases. Our results show that, when paired with robots, subjects consistently demonstrate higher levels of rationality and maintain stable rationality levels across different games compared to when paired with humans. This suggests that strategic reasoning might indeed be a consistent trait in individuals. Furthermore, the identified rationality limits could serve as a measure for evaluating an individual's strategic capacity when their beliefs about others are adequately controlled.
JEL Classification Numbers: C72, C92, D83, D90
Keywords: Ring Game, Guessing Game, Level-\(k\), Higher-Order Rationality
## 1 Introduction
Understanding whether individuals make optimal choices in strategic environments is a fundamental question in economics. Unlike individual decision-making, a game involves multiple players whose payoffs depend on each other's choice. In this setting, achieving equilibrium requires a player to exhibit both first-order rationality and higher-order rationality. This necessitates that players are not merely rational themselves but also operate under the assumption that their counterparts are rational. Furthermore, they must believe that other participants consider them to be rational, and this belief cascades infinitely. As a result, in equilibrium, each player's assumptions about the strategies of their peers match the actual strategies employed, allowing them to optimally respond.
However, expecting players to engage in iterative reasoning and demonstrate infinite levels of rationality is notably demanding, especially when viewed empirically. This is evidenced by well-documented instances of players diverging from equilibrium play, as highlighted in works such as Camerer (2003). Given these empirical discrepancies, a significant volume of research has been dedicated to determining the extent of iterative reasoning an individual can realistically execute within different contexts.
Apart from exploring the extent of iterative reasoning an individual can undertake, this paper delves into another crucial, related query: Is there consistency in an individual's depth of strategic reasoning across various games? Measuring strategic reasoning abilities of interacting individuals can facilitate our understanding and predictions of individuals' behavioral patterns. It also helps us evaluate whether the observed non-equilibrium actions are driven by bounded rationality or by other factors. Nevertheless, if we observe no regularity when measuring one's depth of strategic reasoning in different environments, there may not even exist such a persistent trait called "strategic thinking ability."
The main challenge behind inferring individual strategic reasoning ability from choice data is that the strategic sophistication revealed by one's choices does not directly imply the maximum steps of iterative reasoning one is able to perform. As noted by Ohtsubo & Rapoport (2006),1 a player's observed depth of reasoning is determined not only by her reasoning capability but also by her beliefs about the opponents' (revealed) sophistication, a notion supported by empirical evidence in Agranov et al. (2012) and Alaoui & Penta (2016). An individual who can carry out more than \(k\) steps of reasoning would act as a \(k\)th-order rational player when she believes that her opponent exhibits \((k-1)\)th-order rationality. In other words, measuring an individual's revealed strategic sophistication only yields a lower-bound estimate of her actual sophistication. In addition, psychological factors other than bounded rationality such as lying aversion and fairness concern may also motivate a player to deviate from an equilibrium (Cooper & Kagel, 2016). Without controlling for a player's beliefs and social preferences, the estimation of her strategic reasoning ability could be unstable and lack external validity.
Footnote 1: “Subjects who go through several levels of reasoning and figure out the equilibrium solution to the game, will in general not invoke the maximum depth of reasoning precisely because they do not assume—and perhaps should not assume—that the other \(n-1\) players are as smart as they are” (Ohtsubo & Rapoport, 2006, p. 45).
In a study on bounded strategic sophistication by Georganas et al. (2015), a question similar to the
one posed in this paper was explored. In their research, participants played two distinct families of games. Although their study did not extensively control for participants' beliefs, it revealed a limited persistence of individual strategic sophistication between the two games.
In this paper, we demonstrate a method to test the stability of individual strategic sophistication and to possibly pin down the upper bound of an individual's depth of strategic reasoning in the lab: having human subjects interact with equilibrium-type _computer_ players endowed with an infinite order of rationality. By informing human players that they are facing fully rational computer players, we are able to unify players' expectations about their opponents. Additionally, introducing computer players precludes the possible effect of social preferences (Houser & Kurzban, 2002; Johnson et al., 2002; Van den Bos et al., 2008). Thus, human players with an infinite order of rationality are expected to select an equilibrium strategy. In this setting, out-of-equilibrium actions provide solid ground for identifying an individual's order of rationality and inferring her strategic reasoning ability, since those actions are likely driven by bounded rationality.
To investigate the stability of individual strategic sophistication across games, we conduct an experiment with two classes of dominance-solvable games, ring games and guessing games. Proposed by Kneeland (2015) for identifying higher-order rationality, an \(n\)-player ring game can be characterized by \(n\) payoff matrices and has the following ring structure: the \(k\)th player's payoff is determined by the \(k\)th player's and \((k+1)\)th player's actions, and the payoff of the last (\(n\)th) player, who has a strictly dominant strategy, is determined by the last and the first player's actions. We employ guessing games that represent a symmetric variant of the two-person guessing games previously studied by Costa-Gomes & Crawford (2006), in which a player's payoff is single-peaked and maximized if the player's guess equals its opponent's guess times a predetermined number.2
Footnote 2: The guessing game we implement in this paper diverges from the standard beauty contest game, primarily because the standard beauty contest game is not strictly dominant solvable. However, it is worth noting that if the beauty contest game involves only two players, then it becomes dominant solvable (Grosskopf & Nagel, 2008; Chen & Krajbich, 2017).
Among the games that have been used to study strategic reasoning, we choose to implement ring games and guessing games in our experiment for two reasons. First, our instruction of a fully rational computer player's behavior is tailored to align with the payoff structure of dominance-solvable games, in which the computer players' actions can be unambiguously determined (see Section 5.1 for details). Furthermore, these dominance-solvable games enable a structure-free identification approach, leveraging the notion of rationalizable strategy sets (Bernheim, 1984; Pearce, 1984). The core idea behind this identification approach is that, within a dominance solvable game, we can gauge an individual's depth of reasoning by assessing how many rounds of iterated deletion of dominated strategies the individual's chosen action would survive. Importantly, this approach does not impose structural assumptions on (the beliefs about) non-rational players' behavior. Therefore, these classes of games provide a plausible, structure-free method to empirically categorize individuals into distinct levels of rationality.
Second, we intend to implement two types of games that are sufficiently different so that, if we observe any stability in individual strategic reasoning levels across games, the stability does not result from the
similarity between games. We believe that ring games and guessing games are dissimilar to each other. On one hand, a ring game is a four-player discrete game presented in matrix forms. On the other hand, a guessing game is a two-player game with a large strategy space, which is more like a continuous game. In fact, Cerigioni et al. (2019) reported that the correlation of their experimental subjects' reasoning levels between ring games and beauty contest games is only 0.10.
Our experiment comprises two treatments within each game family: the Robot Treatment and the History Treatment. In the Robot Treatment, subjects encounter computer players employing equilibrium strategies. In the History Treatment, subjects confront choice data from human players in the Robot Treatment. The History Treatment simulates an environment where human subjects interact without displaying social preferences and serves two main objectives. First, by examining if a subject's observed order of rationality in the Robot Treatment exceeds that in the History Treatment, we can evaluate whether the subject responds to equilibrium-type computer players by employing a strategy that reaches her full capacity for strategic reasoning. Second, by comparing the individual orders of rationality inferred from data in both the Robot and History Treatments, we can investigate whether the introduction of robot players contributes to stabilizing observed strategic thinking levels across various games.
Overall, our findings indicate that strategic reasoning ability may be a persistent personality trait deducible from choice data when subjects interact with robot players in strategic scenarios. Relative to interactions involving human opponents, we observe a larger proportion of participants adopting equilibrium strategies and demonstrating higher levels of rationality. This observation is supported by both our between- and within-subject statistical analyses, underscoring the effectiveness of our Robot Treatment and implying that the rationality depths exhibited in this treatment potentially approach subjects' strategic thinking capacity.3
Footnote 3: One might doubt if a subject has the motivation to act rationally upon the presence of an opponent with a (much) higher rationality level than the subject has. In Section 6, we argue that a subject does have the incentive to exhibit the highest order of rationality she can achieve when she knows her opponent is at least as rational as herself.
Furthermore, our investigation reveals that subjects' rationality levels remain remarkably stable across distinct game classes when interacting with robot players. In terms of absolute levels, a substantial number of first-order and fourth-order rational players retain their respective types while transitioning from ring games to guessing games. In the Robot Treatment, approximately 38% of subjects exhibit constant rationality depths across games. A further statistical test involving 10,000 simulated samples demonstrates that this stability in rationality levels cannot be attributed to two independent type distributions, with the actual proportion of constant-level players exceeding the mean simulated proportion by 6 percentage points. Additionally, applying the same statistical analysis to the History Treatment reveals no significant disparities in the proportions of constant-level players between actual and simulated datasets. This indicates that the stability in individual rationality depths is not solely due to game selection but is influenced by our manipulation of subjects' beliefs about opponents' strategic reasoning depths.
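One way to implement such a simulation test is sketched below (our reconstruction; the exact simulation procedure may differ). Permuting one game's levels across subjects preserves both marginal type distributions while breaking the within-subject pairing, which is precisely the null of two independent type distributions:

```python
import numpy as np

def constant_level_test(ring_levels, guess_levels, n_sims=10_000, seed=0):
    """Compare the observed share of subjects with identical rationality
    levels in both games against samples in which the two level
    distributions are independent."""
    rng = np.random.default_rng(seed)
    ring = np.asarray(ring_levels)
    guess = np.asarray(guess_levels)
    observed = np.mean(ring == guess)
    simulated = np.array([
        np.mean(rng.permutation(ring) == guess) for _ in range(n_sims)
    ])
    p_value = np.mean(simulated >= observed)   # one-sided
    return observed, simulated.mean(), p_value
```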
In terms of relative levels, the rankings of individual rationality levels also remain consistent in our
Robot Treatment. When we randomly select two subjects from our experiment, the probability of their level order remaining unchanged across games in the Robot Treatment exceeds the probability of order switching by approximately 30 percentage points, or more than threefold. A similar pattern emerges in the History Treatment, where a constant ranking of two subjects' levels across games is observed more frequently than a switching ranking, although not as frequently as in the presence of robot players. Additionally, we find evidence suggesting that individual rationality levels in a game can serve as an indicator of the degree of game complexity when subjects interact with robot players. However, this evidence is comparatively less substantial and robust in the context of interactions with human opponents. In summary, the above results demonstrate that, when we use computer players to control for beliefs, the observed rationality levels of a subject may effectively capture her overall strategic thinking ability across various types of games.
A subject's performance in other cognitive tests could potentially hold predictive power regarding her strategic reasoning performance in games. As such, we incorporate tasks measuring cognitive reflection, short-term memory, and backward induction abilities (see Section 5 for details) into our experiment. We observe that a subject's cognitive reflection and backward induction abilities are positively correlated with her levels of rationality, whereas no significant correlation is found with her short-term memory capacity.
The rest of the paper proceeds as follows. The next subsection reviews related literature. Section 2 summarizes the theoretical framework upon which our identification approach and hypotheses to be tested are based. Section 3 describes the ring games and guessing games implemented in our experiment. Section 4 discusses how we identify a subject's order of rationality given choice data. Section 5 presents our experimental design, including the experiment procedure and instruction for the robot strategy. Section 6 lists the hypotheses to be tested and discuss their implication. Section 7 reports the experiment results, and Section 8 concludes. The complete instructions of our experiment can be found in Supplementary Information.4
Footnote 4: The provided instructions are originally in Chinese and have been translated into English.
### Related Literature
Over the past thirty years, the idea of limited depth of reasoning has been theoretically studied by various researchers, including Selten (1991, 1998), Aumann (1992), Stahl (1993), Alaoui & Penta (2016, 2022), Lin & Palfrey (2022) and Lin (2022). In addition to theoretical contributions, Nagel (1995) conducted the first experiment on the beauty contest game and introduced the level-\(k\) model to describe non-equilibrium behavior. Level-\(k\) behavior has subsequently been observed in investment games (e.g., Rapoport & Amaldoss, 2000), matrix games (e.g., Costa-Gomes et al., 2001; Crawford & Iriberri, 2007a), guessing games (e.g., Costa-Gomes & Crawford, 2006), undercutting games (e.g., Arad & Rubinstein, 2012), auctions (e.g., Crawford & Iriberri, 2007b), and sender-receiver games (e.g., Cai & Wang, 2006; Wang et al., 2010; Fong & Wang, 2023).
Unlike the literature that primarily investigates individuals' strategic sophistication within the context of a single specific game, our work, which is closely related to Georganas et al. (2015) (hereinafter, GHW),
centers on the examination of the consistency of strategic sophistication across different games. In particular, we follow the language of GHW to formalize our hypotheses to be tested.5 Although both GHW and this paper experimentally investigate whether a subject's sophistication type persists across games, our study differs from GHW in several ways. First, we substitute the ring games for the undercutting games in GHW and use a simplified, symmetric version of the guessing games. Second, we employ an identification strategy distinct from the standard level-\(k\) model to determine a subject's strategic sophistication. We use dominance solvable games in order to identify higher-order rationality without imposing strong and ad hoc assumptions on players' first-order beliefs, which can in turn reduce the noise in the estimation of individual reasoning depth using a level-\(k\) model.6 More importantly, we control for human subjects' beliefs about opponents' sophistication (and social preferences) using computer players. As a result, we observe a higher correlation in subjects' types across games compared to GHW, in which subjects are matched with each other.
Footnote 5: For a brief summary of the model in GHW, see Section 2.1; also, see Section 6 for the hypotheses.
Footnote 6: Burchardi and Penczynski (2014) conduct an experiment in a standard beauty contest with belief elicitation, finding heterogeneity in both level-0 beliefs and level-0 actions within a game.
Ring games, first utilized for identifying higher-order rationality by Kneeland (2015), are subsequently studied by Lim and Xiong (2016) and Cerigioni et al. (2019), who investigate two variants of the ring games. In this study, we follow the _revealed rationality approach_ adopted by Lim and Xiong (2016) and Cerigioni et al. (2019) as our identification approach (discussed in Section 4). It is worth noting that Cerigioni et al. (2019) also find little correlation in subjects' estimated types across various games, including ring games, e-ring games, \(p\)-beauty contests, and a \(4\times 4\) matrix game. Again, our results suggest that the lack of persistence in the identified order of rationality at the individual level is driven by subjects' heterogeneous beliefs about the rationality of their opponents.
Indeed, several empirical studies have shown that beliefs about others' rationality levels can alter a player's strategy formation. Friedenberg et al. (2018) indicate that some non-equilibrium players observed in the ring games (Kneeland, 2015) may actually possess high cognitive abilities but follow an irrational behavioral model to reason about others. Alternatively, Agranov et al. (2012) and Alaoui and Penta (2016) find that, in their experiments, a subject's strategic behavior is responsive to the information she receives about her opponents' strategic abilities.7 The designs of experiments allow them to manipulate subjects' beliefs, whereas we aim to elicit and identify individual strategic capability by unifying subjects' beliefs about opponents.
Footnote 7: In Agranov et al. (2012), the subjects play against each other, graduate students from NYU Economics Department, or players taking uniformly random actions. In Alaoui and Penta (2016), the subjects play against opponents majoring in humanities, majoring in math and sciences, getting a relatively high score, or getting a low score in a comprehension test.
Some recent studies have tried to distinguish between non-equilibrium players who are limited by their reasoning abilities and players who are driven by beliefs. Identifying the existence of ability-bounded players is important since, if non-equilibrium behavior was purely driven by beliefs, it would be unnecessary to measure an individual's reasoning depth. Jin (2021) utilizes a sequential version of ring games, finding that around half of the second-order and third-order rational players are bounded by ability. Alaoui et al. (2020) also report the presence of ability-bounded subjects by showing that an elaboration on the equilibrium
strategy shifts the subjects' level-\(k\) types toward higher levels. Overall, the existence of both ability-bounded and belief-driven players in the real world indicates the need for an approach that can measure individual reasoning ability without the impact of beliefs. Whereas Jin (2021) and Alaoui et al. (2020) do not pin down the belief-driven players' actual ability limit, we aim to directly measure each subject's strategic ability.
Bosch-Rosa and Meissner (2020) propose an approach to test a subject's reasoning level in a given game: letting a subject play against herself (i.e., a "one-person" game). Specifically, their subject acts as both players in a modified two-person _p_-beauty contest (Grosskopf and Nagel, 2008; Chen and Krajbich, 2017), in which a player's payoff decreases in the distance between her guess and the average guess multiplied by \(p\), and the subject receives the sum of the two players' payoffs.8 The one-person game approach eliminates the impact of beliefs that arises from interacting with human players. However, a limitation of this approach is that it can only be applied to games in which the equilibrium is Pareto optimal. For instance, it would be rational for a payoff-maximizing subject to deviate from the equilibrium and choose (Cooperate, Cooperate) in the prisoner's dilemma since (Cooperate, Cooperate) maximizes the total payoff of both players even though those are not equilibrium strategies.9 In this study, we employ an alternative approach that overcomes this limitation to measure rationality levels: letting a subject play against equilibrium-type computer players (i.e., the Robot Treatment).
Footnote 8: Bosch-Rosa and Meissner (2020) report that 69% of the subjects do not select the equilibrium action (0, 0) when playing the one-person game, which echoes the findings of the presence of ability-bounded players in Jin (2021) and Alaoui et al. (2020).
Footnote 9: Also note that in the ring game G1, both the equilibrium strategy profile (P1 chooses b; P2 chooses c; P3 chooses c; P4 chooses b) and another non-equilibrium strategy profile (P1 chooses a; P2 chooses b; P3 chooses a; P4 chooses a) lead to a total payoff of 66 (see Figure 1).
Similar to the motivation of our Robot Treatment, Devetag and Warglien (2003), Grehl and Tutic (2015), and Bayer and Renou (2016) also employ rational computer players to mitigate the impact of beliefs and social preferences on individual decisions in their experiments. While Devetag and Warglien (2003) find a positive correlation between a subject's short-term memory performance and conformity to standard theoretical predictions in strategic behavior, Grehl and Tutic (2015) and Bayer and Renou (2016) explore a player's ability to reason logically about others' types in the incomplete information game known as the dirty faces game. In contrast, our study departs from theirs by focusing on investigating whether playing against computers can provide a robust measure of strategic reasoning ability across different families of games with complete information. Additionally, we also include a memory task to investigate whether the lack of significant predictive power of short-term memory on reasoning levels observed in GHW was influenced by uncontrolled beliefs and to offer a robustness check for the findings of Devetag and Warglien (2003) in different settings.
In previous studies on strategic reasoning, equilibrium-type computer players have been introduced into laboratory experiments to induce human players' equilibrium behavior (e.g., Costa-Gomes and Crawford, 2006; Meijering et al., 2012) and to eliminate strategic uncertainty (e.g., Hanaki et al., 2016).10 In contrast, our aim is to utilize computer players to uncover individual strategic reasoning ability. Notably, our experimental design avoids fully informing the subjects about either the notion of an equilibrium (Costa-Gomes and Crawford, 2006) or the computer player's exact strategy (Meijering et al., 2012; Hanaki et al., 2016), as such
knowledge could potentially bias our estimation of individual strategic reasoning ability. Instead, following the approach of Johnson et al. (2002), in our Robot Treatment we inform subjects that the computer player is third-order rational (i.e., the computer is rational, knows its opponent is rational, knows its opponent knows it is rational) without disclosing further details (see Section 5.1). Our study contributes to the literature by demonstrating that introducing robot players can induce human subjects to exhibit stable reasoning levels across games, thus providing a solid foundation for measuring individual strategic thinking ability.
## 2 Theoretical Framework
### The Model in GHW
To formalize the idea of the depth of rationality and the hypotheses we are going to test, we introduce the model and notations used in GHW. In their model, an \(n\)-person normal form game \(\gamma\in\Gamma\) is represented by \((N,S,\{u_{i}\}_{i\in N})\), where \(N=\{1,...,n\}\) denotes the set of players, \(S=S_{1}\times\cdots\times S_{n}=\Pi_{i=1}^{n}S_{i}\) denotes the strategy sets, and \(u_{i}:S\rightarrow\mathds{R}\) for \(i\in N\) denotes the payoff functions. Following GHW, we use \(u_{i}(\sigma)\) to refer to \(E_{\sigma}[u_{i}(\sigma)]\), where \(\sigma=(\sigma_{1},...,\sigma_{n})\), when \(\sigma\) is a profile of mixed strategies (i.e., \(\sigma_{i}\in\Delta(S_{i})\)).
Player \(i\)'s strategic ability is modeled by two functions \((c_{i},k_{i})\). Let \(T\) be the set of _environmental parameters_, which captures the information a player observes about their opponents' cognitive abilities. The function \(c_{i}:\Gamma\rightarrow\mathbb{N}_{0}\) represents \(i\)'s _capacity_ for game \(\gamma\), and the function \(k_{i}:\Gamma\times T\rightarrow\mathbb{N}_{0}\) represents \(i\)'s (realized) _level_ for game \(\gamma\). A player's level for a game is bounded by her capacity, so \(k_{i}(\gamma,\tau_{i})\leq c_{i}(\gamma)\) for all \(\gamma\), \(\tau_{i}\in T\), and \(i\in N\). The goal of our experiment is to measure \(c_{i}(\gamma)\) and to test if \(c_{i}(\gamma)\) (or \(k_{i}(\gamma,\tau_{i})\), after controlling for \(\tau_{i}\)) exhibits any stability across different games (see Section 6 for further discussion).
### Level-\(k\) Model and Higher-Order Rationality
In GHW, a player's behavior is characterized by the standard level-\(k\) model. Specifically, let \(\nu:\mathbb{N}_{0}\rightarrow\Delta(\mathbb{N}_{0})\) be a player's belief about her opponents' levels. In a standard level-\(k\) model, \(\nu(m)=\mathds{1}\{m-1\}\) for all \(m\geq 1\), and a level-\(0\) player \(i\)'s strategy is exogenously given as \(\sigma_{i}^{0}\in\Delta(S_{i})\). A level-\(k\) (\(k\geq 1\)) player \(i\)'s strategy (\(\sigma_{i}^{k}\)) is defined inductively as a best response to \(\nu(k)\). Formally, for all \(s_{i}^{\prime}\in S_{i}\), \(\sigma_{i}^{k}\) satisfies \(u_{i}(\sigma_{i}^{k},\sigma_{-i}^{\nu(k)})\geq u_{i}(s_{i}^{\prime},\sigma_{-i }^{\nu(k)})\) where \(\sigma_{-i}^{\nu(k)}=(\sigma_{1}^{k-1},...,\sigma_{i-1}^{k-1},\sigma_{i+1}^{k -1},...,\sigma_{n}^{k-1})\). Notice that in order to pin down a level-\(k\) player's strategy, we need to impose an assumption on the level-\(0\) strategy. However, some studies have reported variations in level-\(0\) actions and level-\(0\) beliefs across individuals (Burchardi and Penczynski, 2014; Chen et al., 2018). Thus, an individual's identified level of reasoning can be sensitive to the structural assumptions under a level-\(k\) model.
To avoid the ad hoc assumptions on level-\(0\) players, we can instead define \(k\)th-order rationality (Bernheim, 1984; Pearce, 1984; Lim and Xiong, 2016) in the following way. Let \(R_{i}^{k}(\gamma)\) be the set of strategies that survive \(k\) rounds of iterated elimination of strictly dominated strategies (IEDS) for player \(i\). In other words, a strategy
\(s_{i}\) is in \(R^{1}_{i}(\gamma)\) if \(s_{i}\) is a best response to some arbitrary \(s_{-i}\), and \(s_{i}\) is in \(R^{k^{\prime}}_{i}(\gamma)\) if \(s_{i}\) is a best response to some \(s_{-i}\in R^{k^{\prime}-1}_{-i}(\gamma)\) for \(k^{\prime}>1\). We say that a player \(i\) exhibits _\(k\)th-order rationality_ in \(\gamma\) if and only if \(i\) always plays a strategy in \(R^{k}_{i}(\gamma)\). Equivalently, an individual exhibits \(k\)th-order rationality if and only if there is a \(\sigma^{0}_{-i}\) such that the individual can be classified as a level-\(k\) player in a standard level-\(k\) model. Note that given any game \(\gamma\in\Gamma\), \(R^{k+1}_{i}(\gamma)\subset R^{k}_{i}(\gamma)\) for all \(k\in\mathbb{N}_{0}\). In other words, a player exhibiting \(k\)th-order rationality also exhibits \(j\)th-order rationality for all \(j\leq k\).
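The sets \(R_i^k(\gamma)\) can be computed mechanically. A minimal sketch for a two-player game, considering only dominance by pure strategies (which suffices for the dominance-solvable games studied below; full IEDS would also allow dominance by mixed strategies):

```python
import numpy as np

def surviving_strategies(U1: np.ndarray, U2: np.ndarray, k: int):
    """Return the strategy index sets of both players that survive k
    rounds of simultaneous iterated elimination of strategies strictly
    dominated by a pure strategy. U1[i, j] (U2[i, j]) is the row
    (column) player's payoff when row plays i and column plays j."""
    S1, S2 = list(range(U1.shape[0])), list(range(U1.shape[1]))
    for _ in range(k):
        new_S1 = [i for i in S1 if not any(
            all(U1[m, j] > U1[i, j] for j in S2) for m in S1)]
        new_S2 = [j for j in S2 if not any(
            all(U2[i, m] > U2[i, j] for i in S1) for m in S2)]
        S1, S2 = new_S1, new_S2
    return S1, S2
```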
## 3 The Games
We study two classes of games: the four-player ring games used in Kneeland (2015) for identifying individuals' higher-order rationality and a variant of the two-person guessing games first studied by Costa-Gomes & Crawford (2006) and used in GHW for identifying players' level-\(k\) types.
### Ring Games
A four-player ring game is a simultaneous game characterized by four \(3\times 3\) payoff matrices. Figure 1 summarizes the structures of the two ring games, G1 and G2, used in our experiment. As shown in Figure 1, each player \(i\in\{1,2,3,4\}\) (simultaneously) chooses an action \(a_{i}\in\{a,b,c\}\). Player 4 and Player 1's choices determine Player 4's payoff, and Player \(k\) and Player (\(k\) + 1)'s choices determine Player \(k\)'s payoff for \(k\in\{1,2,3\}\). Note that Player 4 has a strictly dominant strategy in each ring game (\(b\) in G1 and \(c\) in G2), and the two ring games are identical in the payoff matrices of Player 1, 2, and 3.
Given the payoff structure, a (first-order) rational individual will always choose \(b\) in G1 and \(c\) in G2 when
Figure 1: The Ring Games. The Nash Equilibrium is highlighted with colored borders.
acting as Player 4. By eliminating dominated strategies, an individual exhibiting second-order rationality will always choose \(c\) in G1 and \(b\) in G2 when acting as Player 3. Then, by eliminating dominated strategies iteratively, an individual exhibiting third-order rationality will always choose \(c\) in G1 and \(a\) in G2 when acting as Player 2, and an individual exhibiting fourth-order rationality will always choose \(b\) in G1 and \(c\) in G2 when acting as Player 1. The unique Nash equilibrium of G1 is thus Player 1, 2, 3, and 4 choosing \(b\), \(c\), \(c\), and \(b\), respectively, and the unique Nash equilibrium of G2 is Player 1, 2, 3, and 4 choosing \(c\), \(a\), \(b\), and \(c\), respectively, as highlighted in Figure 1.
### Guessing Games
In our experiment, the guessing game is a simultaneous two-player game parameterized by a constant \(p\in(0,1)\). We use \(p=1/3\), \(1/2\) and \(2/3\) in our experiment. Each player \(i\) simultaneously chooses a positive integer \(s_{i}\) between 1 and 100. Player \(i\)'s payoff strictly decreases in the difference between the number chosen by \(i\), \(s_{i}\), and the number chosen by \(i\)'s opponent multiplied by a constant \(p\), \(ps_{-i}\). Specifically, player \(i\)'s payoff is equal to \(0.2(100-|s_{i}-ps_{-i}|)\). Thus, a payoff-maximizing player's objective is to make a guess that matches her opponent's guess times \(p\). Note that, given \(p<1\), any action (integer) greater than or equal to \(\lfloor 100p+0.5\rfloor+1\) is strictly dominated by \(\lfloor 100p+0.5\rfloor\) since \(|\lfloor 100p+0.5\rfloor-ps_{-i}|<|s_{i}^{\prime}-ps_{-i}|\) for all \(s_{-i}\in\{1,...,100\}\) and \(s_{i}^{\prime}\in\{\lfloor 100p+0.5\rfloor+1,...,100\}\).11
Footnote 11: For instance, in a guessing game with \(p=1/3\), every integer between 34 and 100 is dominated by 33; when \(p=1/2\), every integer between 51 and 100 is dominated by 50; when \(p=2/3\), every integer between 68 and 100 is dominated by 67.
Given the payoff function, a rational individual will always choose an integer between 1 and \(K_{1}\equiv\lfloor 100p+0.5\rfloor\). A second-order rational individual will believe the other player is first-order rational and choose a positive integer between 1 and \(\lfloor K_{1}p+0.5\rfloor\), and so on. The unique equilibrium of the two-person guessing game is thus both players choosing 1.
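The resulting chain of upper bounds is easy to compute; the sketch below reproduces the thresholds reported in Table 2 of Section 4:

```python
def guessing_upper_bounds(p: float, depth: int = 4) -> list[int]:
    """Largest guess surviving k rounds of elimination, for k = 1..depth.
    The first bound is floor(100*p + 0.5); each further round multiplies
    the previous bound by p and rounds in the same way."""
    bounds, K = [], 100
    for _ in range(depth):
        K = int(K * p + 0.5)   # floor(K*p + 0.5) for positive K*p
        bounds.append(K)
    return bounds

# guessing_upper_bounds(1/2) -> [50, 25, 13, 7]
# guessing_upper_bounds(1/3) -> [33, 11, 4, 1]
# guessing_upper_bounds(2/3) -> [67, 45, 30, 20]
```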
## 4 Identification
Our model does not allow us to directly identify one's higher-order rationality from choice data. For example, an equilibrium player will choose 1 in the guessing game with \(p=1/2\), while a player choosing 1 may have only performed one step of reasoning if her first-order belief is that her opponent guesses 2. Thus, observing a player \(i\) choosing a strategy in \(R_{i}^{k}(\cdot)\) for \(k>1\) (in a finite number of rounds) does not imply that \(i\) exhibits \(k\)th-order rationality, which renders an individual's higher-order rationality unidentifiable. In fact, following the definition of \(R_{i}^{k}(\cdot)\), we have \(R_{i}^{k+1}(\cdot)\subset R_{i}^{k}(\cdot)\) for all \(k\in\mathbb{N}_{0}\). Namely, every strategy (except for the dominated actions) can be rationalized by some first-order belief.
Following the rationale of higher-order rationality, we use the _revealed rationality approach_(Lim and Xiong, 2016; Brandenburger et al., 2019; Cerigioni et al., 2019) as our identification strategy. As explained below, this approach allows us to identify individual higher-order rationality in a dominance-solvable game. Under
the revealed rationality approach, we say that a player \(i\) exhibits \(k\)th-order revealed rationality if (and only if) we observe the player actually playing a strategy that can survive \(k\) rounds of IEDS, i.e., \(s_{i}\in R_{i}^{k}(\cdot)\). A subject is then identified as a \(k\)th-order (revealed-)rational player when she exhibits \(m\)th-order revealed rationality for \(m=k\) but not for \(m=k+1\). That is, a player is classified into the upper bound of her (revealed) rationality level.12
Footnote 12: Kneeland (2015) uses the _exclusion restriction_ (ER) as its identification strategy, assuming that a player with low order rationality does not respond to changes in payoff matrices positioned away from herself. However, Lim & Xiong (2016) show that more than three-quarters of their experimental subjects change their actions in two identical ring games, which suggests the failure of the ER assumption since a rational player is predicted to take the same action in two identical games under the exclusion restriction. Also, the ER assumption does not facilitate the identification of higher-order rationality in the guessing games since we cannot separate out first-order payoffs from higher-order ones.
The idea behind the revealed rationality approach is the "as-if" argument: a subject \(i\) selecting \(s_{i}\in R_{i}^{k}(\cdot)\setminus R_{i}^{k+1}(\cdot)\) in finite observations behaves like a \(k\)th-order rational player, who always selects a strategy in \(R_{i}^{k}(\cdot)\) but probably not in \(R_{i}^{k+1}(\cdot)\), and thus is identified as a \(k\)th-order revealed rational player. Under this identification criterion, we can identify an individual's order of (revealed) rationality without requiring her to play in multiple games with different payoff structures. In our data analysis, we will classify subjects into five different types: first-order revealed rational (R1), second-order revealed rational (R2), third-order revealed rational (R3), fourth-order (or fully) revealed rational (R4), and non-rational (R0).13 Tables 1 and 2 summarize the predicted actions under the revealed rationality approach for each type of players in our ring games and guessing games, respectively.
Footnote 13: In a four-player ring game, the highest identifiable (revealed) order of rationality is level 4.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
 & \multicolumn{4}{c}{Ring Games, (G1 action, G2 action)} \\ \cline{2-5}
Type & P1 & P2 & P3 & P4 \\ \hline
R0 & N/A & N/A & N/A & not (b, c) \\
R1 & N/A & N/A & not (c, b) & (b, c) \\
R2 & N/A & not (c, a) & (c, b) & (b, c) \\
R3 & not (b, c) & (c, a) & (c, b) & (b, c) \\
R4 & (b, c) & (c, a) & (c, b) & (b, c) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Predicted Actions in the Ring Games Under the Revealed Rationality Approach
\begin{table}
\begin{tabular}{c c c c} \hline \hline
 & \multicolumn{3}{c}{Guessing Games} \\ \cline{2-4}
Type & \(p\) = 1/3 & \(p\) = 1/2 & \(p\) = 2/3 \\ \hline
R0 & 34–100 & 51–100 & 68–100 \\
R1 & 12–33 & 26–50 & 46–67 \\
R2 & 5–11 & 14–25 & 31–45 \\
R3 & 2–4 & 8–13 & 21–30 \\
R4 (or above) & 1 & 1–7 & 1–20 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Predicted Actions in the Guessing Games Under the Revealed Rationality Approach
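As a concrete illustration of the revealed rationality approach for the guessing games, the following sketch (a minimal implementation; the function and parameter names are ours) maps a subject's three guesses to her type:

```python
def revealed_type(guesses: dict[float, int], depth: int = 4) -> int:
    """Classify a subject as Rk, where k is the largest order such that
    every observed guess survives k rounds of iterated elimination.
    `guesses` maps each game's p to the subject's guess in that game."""
    k = depth
    for p, guess in guesses.items():
        K, survived = 100, 0
        for _ in range(depth):
            K = int(K * p + 0.5)   # upper bound after one more round
            if guess <= K:         # bounds shrink, so this count equals
                survived += 1      # the consecutive rounds survived
        k = min(k, survived)
    return k

# Example: revealed_type({1/3: 3, 1/2: 10, 2/3: 25}) == 3, an R3 subject.
```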
## 5 Experimental Design
### Protocol
We design a laboratory environment to measure the subjects' higher-order rationality. The experiment protocol is summarized in Figure 2. The experimental subjects did not receive any feedback about the outcomes of their choices until the end of the experiment.
At the beginning of the experiment, we let subjects complete two tasks that measure the cognitive abilities that have been found to be correlated with strategic abilities. The first task is the Cognitive Reflection Test (CRT) (Frederick, 2005), which is designed to evaluate the ability to reflect on intuitive answers. This test contains three questions that often trigger intuitive but incorrect answers. GHW report that the subjects' CRT scores have moderate predictive power on their expected earnings and level-\(k\) types.
The second test is the Wechsler Digit Span Test (Wechsler, 1939), which is designed to test short-term memory. In our experiment, this test contains eleven rounds. In each round, a subject needs to repeat a sequence of digits displayed on the screen at the rate of one digit every second. The maximum length of the digit sequence a subject can memorize reflects the subject's short-term memory capacity.14 Devetag & Warglien (2003) find a positive correlation between individual short-term memory and strategic ability.
Footnote 14: The length of the digit sequence increases from three digits to thirteen digits round by round.
After completing the tasks, the subjects played the ring games in two different scenarios, the _Robot Treatment_ and the _History Treatment_. To balance out potential spillover effects from one treatment to another, we alternated the order of the two scenarios across sessions (RH Order and HR Order). In each scenario, each subject played the two four-player, three-action ring games (G1 and G2 in Figure 1) in each position in each game once (for a total of eight rounds). Each subject was, in addition, assigned a neutral label (Member A, B, C, or D) before the ring games started. The label was only used for the explanation of an opponent's strategy in the History Treatment and did not reflect player position. To facilitate the cross-subject comparison, all the subjects played the games in the following fixed order: P1 in G1, P2 in G1, P3 in G1, P4 in G1, P1 in G2, P2 in G2, P3 in G2, and P4 in G2.15 The order of payoff matrices was also
Figure 2: Experiment Protocol
fixed, with a subject's own payoff matrix always displayed at the leftmost side.16 Note that the payoff structures of our ring games are the same as those in Kneeland (2015).17 Adopting the same payoff structure facilitates the comparability between our results and Kneeland's.
Footnote 16: This feature is adopted in Jin (2021) and the main treatment of Kneeland (2015). Kneeland (2015) perturbs the order of payoff matrices in a robust treatment and finds no significant effects on subject behavior.
Footnote 17: The only difference is that we alter the order of the (row) payoffs for Player 4 in G1 to avoid coinciding, predicted actions given by a standard level-\(k\) model. The actions \(a,b,c\) in G1 (Figure 1) for Player 4 correspond to \(c,a,b\), respectively, in the G1 in Kneeland (2015).
In the Robot Treatment, the subjects played against fully rational computer players. Specifically, each subject in each round was matched with three robot players who only select the strategies that survive iterated dominance elimination (i.e., the equilibrium strategy). We informed the subjects of the presence of robot players that exhibit third-order rationality.18 The instructions for the robot strategy are as follows:19
Footnote 19: Our instructions are adapted from the experiment instructions of Study 2 of Johnson et al. (2002). The original instructions are as follows: “In generating your offers, or deciding whether to accept or reject offers, assume the following: 1. You will be playing against a computer which is programmed to make as much money as possible for itself in each session. The computer does not care how much money you make. 2. The computer program expects you to try to make as much money as you can, and the program realizes that you have been told, in instruction (1) above, that it is trying to earn as much money as possible for itself” (p. 44-45).
_When you start each new round, you will be grouped with three other participants who are in different roles. The three other participants will be computers that are programmed to take the following strategy:_
1. _The computers aim to earn as much payoff as possible for themselves._
2. _A computer believes that every participant will try to earn as much payoff as one can._
3. _A computer believes that every participant believes "the computers aim to earn as much payoff as possible for themselves."_
The first line of a robot's decision rule ("The computers aim to...") implies that a robot never plays strictly dominated strategies and thus exhibits first-order rationality. The second line (along with the first line) indicates that a robot holds the belief that other players are (first-order) rational and best responds to such belief, which implies a robot's second-order rationality. The third line (along with the first and second lines) implies that, applying the same logic, a robot exhibits third-order rationality.
In the History Treatment, the subjects played against the data drawn from their decisions in the previous scenario. Specifically, in each round, a subject was matched with three programmed players who adopt actions chosen in the Robot Treatment by three other subjects.20 Every subject was informed that other human participants' payoffs would not be affected by her choices at this stage. By having the subjects play against past decision data, we can exclude the potential confounding effect of other-regarding preferences on individual actions.
Footnote 20: In the HR Order sessions, the choices made by a subject’s opponents were drawn from the participants in the Robot Treatment of previous sessions.
After the ring games, the subjects played the two-person guessing games (in the order of \(p=2/3\), \(1/3\), \(1/2\)) in both the Robot Treatment and the History Treatment. Instead of being matched with three opponents, a subject was matched with only one player in the guessing games. The instructions for the guessing games in both treatments are revised accordingly.
In the last section of the experiment, we introduce an individual task developed by Bone et al. (2009)--the _farsightedness task_--to measure a subject's ability to perform backward induction, that is, to anticipate her own future actions and make the best choice accordingly. By comparing a subject's decisions in the ring games/guessing games with those in the farsightedness task, we can evaluate the correlation between one's depth of reasoning in a simultaneous game and that in a sequential decision task. We describe the details of the farsightedness task in the next subsection (Section 5.2).
There was a 180-second time limit on every subject's decisions in the ring games, guessing games, and farsightedness task. A subject who did not confirm her choice within 180 seconds earned zero payoff (for that round).21
Footnote 21: Jin (2021) sets a 60-second time limit on decisions in the ring games and finds little effect on type classification.
The subjects were paid based on the payoffs (in ESC, Experimental Standard Currency) they received throughout the experiment. In addition to the payoff in the farsightedness task, one round in the ring games and one round in the guessing games were randomly chosen for payment. A subject also got three ESC for each correct answer in the CRT, and one ESC for each correct answer in the Digit Span Test.
### Farsightedness Task
The farsightedness task (Bone et al., 2009) is a sequential task that involves two sets of decision nodes and two sets of chance nodes (see the decision tree in Figure 3). The first and third sets of nodes are the decision nodes where a decision maker is going to take an action (up or down). The second and fourth sets of nodes are the chance nodes where the decision maker is going to be randomly assigned an action (with equal probability).
Notice that there is one dominant action, in the sense of first-order stochastic dominance, at each of the third set of nodes (i.e., the second set of decision nodes). Anticipating the dominant actions at the second set of decision nodes, the decision maker also has a dominant action (down) at the first node. However, if a payoff maximizer lacks farsightedness and anticipates that each payoff will be reached with equal chance, then the dominated action (up) at the first node will become the dominant option from this decision maker's perspective. Therefore, a farsighted payoff-maximizer is expected to choose down, but a myopic one is expected to choose up, at the first move (and choose the dominant actions at the second moves).
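Under risk neutrality, the benchmark for this task is plain backward induction over a tree that alternates decision and chance nodes. A minimal sketch follows; since the payoffs of Figure 3 are not reproduced here, the example tree is purely hypothetical. Replacing 'max' with 'avg' at the second-stage decision nodes yields the myopic valuation described above.

```python
def value(node):
    """Backward induction over a tree of terminal payoffs (numbers),
    decision nodes ('max', children), and equal-chance nodes ('avg',
    children); returns the node's (expected) value."""
    if isinstance(node, (int, float)):
        return node
    kind, children = node
    vals = [value(c) for c in children]
    return max(vals) if kind == 'max' else sum(vals) / len(vals)

# A hypothetical decision -> chance -> decision -> chance tree:
tree = ('max', [
    ('avg', [('max', [('avg', [8, 2]), ('avg', [5, 5])]),
             ('max', [('avg', [9, 1]), ('avg', [4, 4])])]),
    ('avg', [('max', [('avg', [7, 7]), ('avg', [3, 3])]),
             ('max', [('avg', [6, 6]), ('avg', [2, 2])])]),
])
best_first_value = value(tree)   # farsighted value of the first decision
```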
### Laboratory Implementation
We conducted 41 sessions between August 31, 2020 and January 28, 2021 at the Taiwan Social Sciences Experiment Laboratory (TASSEL) in National Taiwan University (NTU). Each session lasted about 140
minutes, and all participants were NTU students recruited through ORSEE (Greiner, 2015). A total of 299 subjects participated in the experiment, among whom 136 subjects played the Robot Treatment before the History Treatment in both families of games (RH Order) and 157 subjects played the History Treatment first (HR Order).22 The experiment was programmed with the software zTree (Fischbacher, 2007) and instructed in Chinese. Including a show-up fee of NT$200 (approximately $7 in USD in 2020), the earnings in the experiment ranged between NT$303 and NT$554, with an average of NT$430.23
Footnote 22: Six subjects were dropped due to computer crashes.
Footnote 23: The exchange rate was 1 ESC for NT$4, and the foreign exchange rate was around US$1 = NT$29.4.
## 6 Hypotheses
The Robot Treatment is designed to convince subjects that the computer opponents they face are the most sophisticated players they could encounter. Consequently, if our Robot Treatment is effectively implemented, it should prompt subjects to employ a strategy at the highest achievable level \(k\), i.e., \(k_{i}(\gamma,\tau_{i}=Robot)=c_{i}(\gamma)\) for all \(\gamma\) and \(i\). (Recall that \(k_{i}\) and \(c_{i}\) denote subject \(i\)'s realized level and capacity, respectively.) This observation gives rise to the first hypothesis we aim to evaluate.
**Hypothesis 1** (**Bounded capacity**).: \(k_{i}(\gamma,\tau_{i}=History)\leq k_{i}(\gamma,\tau_{i}=Robot)\) for all \(\gamma\).
In words, we test whether subjects' rationality levels against robots capture individual strategic reasoning capacity. If Hypothesis 1 holds, then we can evaluate several possible restrictions on \(c_{i}\) by forming hypotheses on \(k_{i}(\gamma,Robot)\). In evaluating Hypothesis 2 and 3, we study if we can observe any stable patterns of (revealed) individual depth of reasoning across games.
Figure 3: The Farsightedness Task in Bone et al. (2009)
**Hypothesis 2** (**Constant capacity**).: \(k_{i}(\gamma,Robot)=k_{i}(\gamma^{\prime},Robot)\) for all \(\gamma,\gamma^{\prime}\).
**Hypothesis 3** (**Constant ordering of capacity**).: For every \(i,j\in N\), \(k_{i}(\gamma,Robot)\geq k_{j}(\gamma,Robot)\) for some \(\gamma\) implies \(k_{i}(\gamma^{\prime},Robot)\geq k_{j}(\gamma^{\prime},Robot)\) for all \(\gamma^{\prime}\in\Gamma\).
We first examine if a player's reasoning depth is constant across games, which is the strictest restriction on the stability of individual rationality levels (i.e., Hypothesis 2). If Hypothesis 2 does not hold, we can examine a weaker restriction and test whether the ranking of players (in terms of rationality levels) remains the same across games (i.e., Hypothesis 3). In other words, we examine if playing against robots can give us a measure of one's _absolute_ level of depth of reasoning by evaluating Hypothesis 2, and a measure of _relative_ level by evaluating Hypothesis 3.
The last restriction we will evaluate is whether the ordering of games (in terms of a player's rationality level) remains the same across players:
**Hypothesis 4** (**Consistent ordering of games**).: For every \(\gamma,\gamma^{\prime}\in\Gamma\), \(k_{i}(\gamma,Robot)\geq k_{i}(\gamma^{\prime},Robot)\) for some \(i\) implies \(k_{j}(\gamma,Robot)\geq k_{j}(\gamma^{\prime},Robot)\) for all \(j\in N\).
In words, we examine if playing against robots can give us a measure of game difficulty (in terms of players' depth of reasoning) by evaluating Hypothesis 4.24
Footnote 24: Our Hypotheses 2, 3, and 4 correspond to Restrictions 2, 3, and 5 in GHW, respectively (see p. 377).
### Discussion
An implicit assumption behind Hypothesis 1 is that a subject has an incentive to play a strategy with the maximum level she can achieve when encountering fully rational opponents that play at their highest reasoning level. This statement is trivially true for an equilibrium-type subject, since she knows her opponents will play the equilibrium strategy and is able to best respond to it. However, it may or may not be true for a boundedly rational player. If one believes that an iterative reasoning model describes an individual's actual decision-making process, there are two possible reasons that a player is only able to perform \(k\) steps of iterative reasoning. First, she may incorrectly believe that other players can exhibit (at most) \((k-1)\)th-order rationality and best respond to such belief. Second, she may correctly perceive that other players can exhibit (at least) \(k\)th-order rationality but fail to best respond to it. While our statement regarding incentive compatibility still holds in the first case, it becomes unclear how a boundedly rational player would respond when confronted with a player exhibiting an order of rationality above \(k\).
Nevertheless, we argue that this case will not be a problem under the identification strategy of the revealed rationality approach. Notice that a player who exhibits \(k\)th-order rationality would also exhibit \(m\)th-order rationality for all \(m\leq k\). Thus, a level-\(k\) individual \(i\) who perceives that other players are exhibiting (at least) \(k\)th-order rationality would also perceive that they are exhibiting \((k-1)\)th-order rationality. That is, she knows that her robot opponents' strategies will survive \(k-1\) rounds of IEDS. Thus, a payoff-maximizing player \(i\) who is able to perform \(k\) steps of iterative reasoning will choose some strategy in \(R_{i}^{k}(\cdot)\), which contains all the undominated strategies after \(k-1\) rounds of IEDS. Under the revealed rationality approach, player \(i\) will then be classified as a \(k\)th-order revealed-rational player.
## 7 Experiment Results
### Data Description
Before delving into the main results, we begin by summarizing the subjects' choice frequencies in the ring games (Figure 4) and guessing games (Figure 5). Figure 4 reports the subjects' choice frequencies in the two ring games (G1 and G2, see Figure 1) at each player position.
From the figure, we can first observe that in both treatments, over 97% of subjects choose the strictly dominant strategy at P4 (\(\chi^{2}\) test \(p\)-value = 0.252). This suggests a clear understanding of the ring game's payoff structure and the ability to recognize strict dominance.
Figure 4: Ring Game Choice Frequency at Each Position. The first and the second arguments represent the actions of G1 and G2.

Second, at each player position except P4, the significance of \(\chi^{2}\) tests suggests that subjects' behavior is responsive to the treatments (P1: \(\chi^{2}\) test \(p\)-value = 0.020; P2: \(\chi^{2}\) test \(p\)-value \(<0.001\); P3: \(\chi^{2}\) test \(p\)-value = 0.088). Moreover, the Robot Treatment shows a 10% to 15% higher frequency of subjects choosing the equilibrium strategy (\(b\), \(c\)) at P1, (\(c\), \(a\)) at P2, and (\(c\), \(b\)) at P3 compared to the History Treatment, indicating that the Robot Treatment effectively prompts subjects to display higher levels of rationality.
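For reference, the \(\chi^{2}\) tests above compare, position by position, the distribution of the three actions across the two treatments. A minimal sketch with hypothetical counts (the actual frequencies are those shown in Figure 4):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of actions (a, b, c) at one position, by treatment;
# the actual frequencies are those shown in Figure 4.
table = [[60, 45, 188],   # Robot Treatment
         [75, 60, 158]]   # History Treatment
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```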
Third, at each player position except P4, a notable proportion of subjects choose the action that maximizes the minimum possible payoff among the three available actions (\(a\) at P1, \(b\) at P2, \(a\) at P3). It is also worth noting that in G1, the minimum possible payoffs of equilibrium actions (except at P4) are all 0. As shown in Figure 4, there is a higher proportion of subjects choosing the equilibrium actions in G2 but maxmin actions in G1 ((\(a\), \(c\)), (\(b\), \(a\)), (\(a\), \(b\)) at P1, P2, P3, respectively) in the History Treatment compared to the Robot Treatment. This evidence suggests that when players have uncertainty about their opponents' reasoning and strategic behavior, some players may opt for a non-equilibrium strategy to avoid the possibility of experiencing the worst possible payoff.
Figure 5 summarizes the subjects' guesses in the three guessing games using cumulative distributions. We find significant differences in the distributions between the two treatments, regardless of the value of \(p\) (\(p=2/3\): KS test \(p\)-value = 0.001; \(p=1/3\): KS test \(p\)-value \(<0.001\); \(p=1/2\): KS test \(p\)-value = 0.001). Furthermore, in the Robot Treatment, there is a 13-17 percentage point higher proportion of subjects making the equilibrium guess (i.e., choosing 1) across all three guessing games compared to the History Treatment. This difference in proportions results in the (first-order stochastic) dominance of the cumulative distribution of guesses in the Robot Treatment over that in the History Treatment, suggesting a higher level of rationality exhibited by the subjects in the Robot Treatment for the guessing games as well. In the subsequent subsection, we will describe our approach for classifying individual rationality levels and perform statistical tests to assess whether subjects demonstrate higher orders of rationality when playing against robots.
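The two-sample Kolmogorov-Smirnov comparisons of the guess distributions can be reproduced along the following lines; the guesses below are random placeholders, not the observed data of Figure 5:

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder guesses for one value of p; the observed distributions
# are those plotted in Figure 5.
rng = np.random.default_rng(1)
guesses_robot = rng.integers(1, 101, size=293)
guesses_history = rng.integers(1, 101, size=293)

stat, p_value = ks_2samp(guesses_robot, guesses_history)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
```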
### Type Classification
We adopt the revealed rationality approach to classify subjects into different rationality levels. Specifically, let \(s_{i}=(s_{i}^{\gamma})\) be the vector which collects player \(i\)'s actions in each family of games \(\gamma\), where \(\gamma\in\{Ring,Guessing\}\). In the ring games, we classify subjects based on the classification rule shown in Table 1. In both the Robot Treatment and the History Treatment, if a subject's action profile matches one of the predicted action profiles of type R0-R4 exactly, then the subject is assigned that type. Therefore, we can obtain each subject's type in the Robot Treatment and the History Treatment, which are denoted as \(k_{i}(Ring,Robot)\) and \(k_{i}(Ring,History)\), respectively.
Similarly, for the guessing games, we classify subjects based on the rule outlined in Table 2. In both treatments, each subject makes three guesses (at \(p=2/3\), \(1/3\), and \(1/2\)). If a subject is categorized into different types in different guessing games, we assign her the lower type. Thus, we can obtain the types in both treatments, denoted as \(k_{i}(Guessing,Robot)\) and \(k_{i}(Guessing,History)\), respectively. Following this rationale, we construct the overall distribution of individual rationality levels for each treatment by assigning each subject the lower type she exhibits across the two classes of games, i.e., \(k_{i}(\tau_{i})=\min\{k_{i}(Ring,\tau_{i}),k_{i}(Guessing,\tau_{i})\}\).
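The assignment rules above reduce to taking minima, as in the following sketch (the helper name is ours):

```python
def overall_level(k_ring, k_guesses):
    """Overall rationality type per the classification rules: the
    guessing-game type is the lowest type across the three guesses, and
    the overall type is the minimum across the two game families."""
    return min(k_ring, min(k_guesses))

# e.g. an R3 ring-game player classified as (R2, R3, R4) in the three
# guessing games is assigned the overall type R2:
print(overall_level(3, [2, 3, 4]))   # -> 2
```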
### Type Distributions
Figure 6 reports the distributions of rationality levels in the two treatments for both ring games and guessing games. As shown in the top of Figure 6, subjects tend to be classified into higher types when playing against robots. There are more R1 and R2 players but fewer R3 and R4 players in the History Treatment than in the Robot Treatment. To examine if a subject's rationality depth is bounded by her revealed rationality level in the Robot Treatment (Hypothesis 1), at the aggregate level, we conduct the two-sample Kolmogorov-Smirnov test to compare the distributions of rationality levels in the two treatments. If Hypothesis 1 holds, we should observe either no difference in the two distributions or the distribution in the Robot Treatment dominating the distribution in the History Treatment. Our results show that the underlying distribution of individual rationality levels in the Robot Treatment stochastically dominates the one in the History Treatment (Ring game: KS test \(p\)-value = 0.015; Guessing game: KS test \(p\)-value = 0.001), and thus provide supporting evidence for Hypothesis 1.
Moreover, our within-subject design gives us paired data of individual rationality types across treatments (as summarized in the bottom of Figure 6), which gives us another way to test Hypothesis 1. Overall, 72 percent of subjects (211/293) exhibit (weakly) higher rationality levels in the Robot Treatment than in the History Treatment in both families of games. In contrast, only fewer than four percent of subjects (11/293) consistently exhibit strictly lower rationality levels in the Robot Treatment across games. We further conduct the Wilcoxon signed-rank test to examine whether the subjects' rationality levels in the Robot Treatment are significantly greater than the History Treatment. Consistent with Hypothesis 1, we observe higher rationality levels in the Robot Treatment (Wilcoxon test \(p\)-value \(<0.0001\) for both ring games and guessing games). Therefore, we conclude that the rationality levels in the Robot Treatment for a game can serve as a proxy of individual strategic reasoning capacity in that game.
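A sketch of the one-sided Wilcoxon signed-rank test on the paired types, using synthetic placeholder levels in place of the actual paired data summarized in Figure 6:

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired types; the actual pairs are summarized in Figure 6.
rng = np.random.default_rng(2)
k_history = rng.integers(0, 5, size=293)
k_robot = np.minimum(k_history + rng.integers(0, 2, size=293), 4)

# one-sided test: levels against robots exceed levels against history
stat, p = wilcoxon(k_robot, k_history, alternative="greater")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```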
Figure 6: Frequency of Rationality Levels in Ring Games (Left) and Guessing Games (Right)

In the guessing games, our classification results display a typical distribution pattern of estimated levels as documented in Costa-Gomes & Crawford (2006) and GHW. First, the modal type is R1 (Level 1), with more than 35 percent of subjects classified as R1 players in both treatments (Robot: 38.23%; History: 47.78%; Costa-Gomes & Crawford (2006): 48.86%; GHW: 50.00%). In particular, the proportion of R1 players reported in the History Treatment of our guessing games is very close to the proportion of level-1 players reported in Costa-Gomes & Crawford (2006) and GHW. Second, R3 (Level 3) represents the least frequently observed category among the rational types (i.e., R1-R4), with fewer than 10 percent of subjects classified as R3 players in both treatments (Robot: 6.14%; History: 4.10%; Costa-Gomes & Crawford (2006): 3.41%; GHW: 10.34%), a proportion that aligns with findings in the literature. Third, the percentage of R4 players in our History Treatment falls within the range of equilibrium-type player proportions reported in Costa-Gomes & Crawford (2006) and GHW (Robot: 30.03%; History: 16.04%; Costa-Gomes & Crawford (2006): 15.91%; GHW: 27.59%). Notably, in our Robot Treatment, we observe a relatively high frequency of R4 players compared to previous literature.25 This finding underscores the significant impact of non-equilibrium belief about opponents on non-equilibrium behavior.
Footnote 25: For instance, Arad & Rubinstein (2012) also note that, in their 11–20 money request game, the percentage of subjects employing more than three steps of iterative reasoning does not exceed 20 percent. This aligns with the proportion of R4 players identified in our History Treatment but is lower than that in our Robot Treatment.
It is noteworthy that, contrary to previous findings, we observe very few R0 players in the ring games in both treatments (Robot: 0.68%; History: 2.04%).26 In our experiment, the subjects do not interact with each other in either treatment. Thus, our observation suggests that, when human interactions exist, social preferences may play a role in a ring game and lead to (seemingly) irrational behavior, though we cannot exclude the possibility that this discrepancy in the prevalence of R0 players is due to different samples.
Footnote 26: Kneeland (2015) observes 6 percent of R0 players (with the ER approach) and Cerigioni et al. (2019) observe more than 15 percent of R0 players (with the revealed rationality approach) in their experiments.
### Constant Absolute Rationality Levels
In this subsection, we evaluate the hypothesis that an individual has constant strategic reasoning capacity across games in the Robot Treatment (i.e., Hypothesis 2). Figure 7 reports how frequently a ring game player with a given rationality depth is classified into the same or another type in the guessing games. If the observed individual rationality level is the same across games, then every diagonal entry of each transition matrix in Figure 7 will be 100%. Alternatively, if subjects' rationality orders in the ring games and guessing games are uncorrelated, every row in a transition matrix will be identical and equal to the overall type distribution in the guessing games.
The transition matrices for both treatments show that most R1 and R4 players in the ring games remain the same type in the guessing games. Most R2 ring game players, however, only exhibit first-order rationality in the guessing games. We do not observe any subjects consistently classified into R3 for both ring and guessing games, possibly because we have relatively low numbers of R3 subjects in either game. Overall, there is a relatively high proportion of subjects that exhibit the same rationality depth across games in both treatments (Robot: 38.23%; History: 40.27%).27 Note that in the Robot Treatment, we observe a relatively high proportion (52/293 = 17.74%) of subjects classified as R4 players in both games,28 suggesting that subjects in our experiment understand the instructions for the robots' decision rules and try to play the best response to such rules.
Footnote 27: GHW report that only 27.3% of their subjects play at the same level across two families of games.
To test if the high proportion of constant-level players actually results from independent type distributions, we generate 10,000 random samples of 293 pairs of levels, independently drawn from the empirical distribution of rationality levels in the Robot Treatment. The simulated datasets provide a distribution of the frequency with which a subject plays at the same level in both game families. The mean frequency is found to be 32.86%, with a 95 percent confidence interval ranging from 27.65% to 38.23%. The observed frequency is 38.23%, rejecting the null hypothesis that the subjects' rationality depths are independently distributed across game families in terms of absolute rationality depths, at a significance level close to 5% (\(p\)-value = 0.057). Thus, we conclude that the hypothesis stating an individual exhibits constant depths of rationality across games has predictive power (despite not being perfectly accurate) regarding experimental subjects' actions under proper belief control.
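The simulation procedure just described can be sketched as follows; the level arrays are placeholders for the 293 observed types:

```python
import numpy as np

rng = np.random.default_rng(0)

def constant_level_null(levels_ring, levels_guessing, n_sim=10_000):
    """Null distribution of the fraction of constant-level subjects when
    ring and guessing types are drawn independently from the empirical
    marginals (the paper's 10,000-sample procedure)."""
    n = len(levels_ring)
    freqs = np.empty(n_sim)
    for s in range(n_sim):
        ring = rng.choice(levels_ring, size=n, replace=True)
        guess = rng.choice(levels_guessing, size=n, replace=True)
        freqs[s] = np.mean(ring == guess)
    return freqs

# Placeholder types standing in for the 293 observed classifications:
levels_ring = rng.integers(0, 5, size=293)
levels_guessing = rng.integers(0, 5, size=293)
freqs = constant_level_null(levels_ring, levels_guessing)
observed = np.mean(levels_ring == levels_guessing)
print(f"observed = {observed:.3f}, null 95% CI = "
      f"[{np.percentile(freqs, 2.5):.3f}, {np.percentile(freqs, 97.5):.3f}], "
      f"one-sided p = {np.mean(freqs >= observed):.3f}")
```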
To establish a baseline for comparison, we utilize the same Monte Carlo simulation and statistical test outlined above to investigate whether the restriction of constant rationality depth can be applied to modeling subjects' actions when they face human opponents (choice data) instead of robots in the History Treatment. Upon examining the pooled data in the History Treatment, we find that the null hypothesis of independently distributed rationality levels cannot be rejected despite the seemingly high proportion of constant-level players. The simulated samples generated from the data in the History Treatment exhibit an average of 40.27% constant-level players (95% CI = \([34.81\%,45.73\%]\)), and the observed frequency in the actual data is 41.30% (\(p\)-value = 0.768). This finding suggests that the high stability observed in individual (absolute) rationality depth within our experiment does not result from the specific games selected. Moreover, it indicates that unifying subjects' beliefs about opponents' strategic reasoning depth effectively stabilizes the individual revealed order of rationality across games.
Figure 7: Transition Matrix for Rationality Levels in Both Treatments (from Ring to Guessing Games)
### Constant Ordering of Rationality Levels
In this subsection, we evaluate the hypothesis that the ranking of individual strategic reasoning capacity between two players in different game families is the same (i.e., Hypothesis 3). Table 3 reports the switch frequency, non-switch frequency, and switch ratio observed in the actual data and computed under the null hypothesis of independently distributed rationality levels. The _switch frequency_, as defined in GHW, represents the proportion of player pairs in which the player who exhibits a strictly higher level in one game becomes the player with a strictly lower level in another game. On the other hand, the _non-switch frequency_ corresponds to the proportion of player pairs in which the player with a strictly higher level in one game maintains that higher level in another game.29 The _switch ratio_ is calculated by dividing the switch frequency by the non-switch frequency. If the relative rationality levels are preserved across games, the switch ratio will be zero. Alternatively, if the rationality levels are independently drawn, we expect to observe a switch ratio of one.
Footnote 29: The sum of the switch frequency and non-switch frequency may not be one since the paired players who exhibit the same level in one game are excluded.
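These pairwise frequencies follow directly from the definitions above; a minimal sketch (function name is ours):

```python
from itertools import combinations

def switch_frequencies(k_ring, k_guessing):
    """Switch / non-switch frequencies over all player pairs; pairs tied
    in either game family enter the denominator but neither numerator
    (cf. footnote 29)."""
    switch = nonswitch = pairs = 0
    for i, j in combinations(range(len(k_ring)), 2):
        pairs += 1
        d_ring = k_ring[i] - k_ring[j]
        d_guess = k_guessing[i] - k_guessing[j]
        if d_ring == 0 or d_guess == 0:
            continue
        if d_ring * d_guess > 0:
            nonswitch += 1   # ranking preserved across games
        else:
            switch += 1      # ranking reversed
    return switch / pairs, nonswitch / pairs

s, ns = switch_frequencies([4, 2, 1, 3], [3, 1, 2, 3])
print(f"switch = {s:.2f}, non-switch = {ns:.2f}, switch ratio = {s / ns:.2f}")
```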
The results presented in Table 3 provide compelling evidence of stable rankings of individual rationality levels. Our pooled data show that non-switching occurs three times more frequently than switching in the Robot Treatment (Non-switching: 41.30%; Switching: 12.28%). The switch ratio of 0.30, derived from the switch and non-switch frequencies, is lower than any switch ratio obtained from our 10,000-sample simulated data. Additionally, these results consistently hold across different treatment orders. Whether the Robot Treatment is played first or second, the observed switch ratios remain around 0.30, both of which are lower than any switch ratio obtained from the simulated data. Consequently, we reject the null hypothesis of independently distributed levels in terms of relative rationality depths, with a \(p\)-value less than 0.0001.
| Ring Game vs. Guessing Game | Pooled Data (\(n=293\)): Empirical | Pooled Data: Null | RH Order (\(n=136\)): Empirical | RH Order: Null | HR Order (\(n=157\)): Empirical | HR Order: Null |
| --- | --- | --- | --- | --- | --- | --- |
| **Robot Treatment** | | | | | | |
| Switch frequency | 12.3% | 22.5% | 11.9% | 19.8% | 12.5% | 24.0% |
| Non-switch frequency | 41.3% | 22.5% | 37.7% | 19.8% | 42.3% | 24.1% |
| Switch ratio | 0.30 | 1.01 | 0.32 | 1.03 | 0.29 | 1.02 |
| \(p\)-value | \(<0.0001\) | | \(<0.0001\) | | \(<0.0001\) | |
| **History Treatment** | | | | | | |
| Switch frequency | 12.9% | 17.9% | 11.0% | 21.2% | 14.8% | 14.5% |
| Non-switch frequency | 34.5% | 17.8% | 40.3% | 21.3% | 28.1% | 14.5% |
| Switch ratio | 0.37 | 1.02 | 0.27 | 1.02 | 0.53 | 1.04 |
| \(p\)-value | \(<0.0001\) | | \(<0.0001\) | | 0.019 | |

Table 3: Switch Ratio for the Robot and History Treatment
We also calculate the switch and non-switch frequencies in the History Treatment to investigate whether the rankings of individual rationality levels remain stable when subjects' beliefs about others' rationality depths are not controlled. In the History Treatment, the null hypothesis of independently distributed levels in terms of relative rationality depths is also rejected (\(p\)-value \(<0.0001\)), with the pooled data showing switch and non-switch frequencies of 12.89% and 34.47%, respectively, resulting in a switch ratio of 0.37. However, it is noteworthy that the switch ratio in the History Treatment is 23% higher than that in the Robot Treatment, and this difference increases to 66% when focusing solely on the Robot and History Treatments that are played first by subjects (Robot: 0.32; History: 0.53).30 This result mainly stems from the fact that the non-switch frequency in the Robot Treatment is substantially higher than that in the History Treatment. These findings once again suggest that unifying subjects' beliefs about the strategic reasoning capability of their opponents can significantly improve the stability observed in individual rationality levels across games.
Footnote 30: We conduct a statistical comparison by contrasting a switch ratio of 0.32 with the switch ratios obtained from 10,000 random samples of independently drawn levels from the empirical distribution of rationality levels in the History Treatment under HR Order. Our analysis reveals that, when focusing exclusively on the data from treatments played first, we can reject the null hypothesis that the observed rationality levels in the Robot Treatment are drawn from the same distribution of levels as in the History Treatment, in terms of switch ratios (\(p\)-value = 0.027).
To summarize, our Robot Treatment reveals stability in both the subjects' absolute and relative rationality depths. These findings indicate that strategic reasoning ability could be an inherent personal characteristic that can be inferred from choice data when participants interact with robot players.
### Persistence of Ordering of Games
In this subsection, we evaluate the hypothesis that the ranking of games in terms of individual strategic reasoning capacity is the same (i.e., Hypothesis 4). Table 4 reports the change-in-same-direction frequency, change-in-opposite-directions frequency, and the opposite/same ratio computed based on actual data and simulated data generated from independently-drawn levels. The _change-in-same-direction frequency_ represents the proportion of player pairs in which both players exhibit a strictly higher level in the same game. On the other hand, the _change-in-opposite-directions frequency_ refers to the proportion of player pairs in which the two players exhibit a strictly higher level in different games.31 The _opposite/same ratio_ is calculated by dividing the change-in-opposite-directions frequency by the change-in-same-direction frequency. In the case of a constant ranking of games across players, the opposite/same ratio would be zero. Conversely, if the rationality levels are independently drawn, we would expect the opposite/same ratio to be one.
Footnote 31: Again, the sum of the change-in-same-direction frequency and change-in-opposite-directions frequency may not be one since a pair of players is excluded if one of them exhibits the same level across games.
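Analogously to the switch ratio, these pairwise frequencies depend only on the sign of each player's across-game level change; a minimal sketch:

```python
from itertools import combinations

def direction_frequencies(k_ring, k_guessing):
    """Change-in-same / change-in-opposite-direction frequencies over all
    player pairs; pairs where either player keeps the same level across
    games enter the denominator but neither numerator (cf. footnote 31)."""
    d = [r - g for r, g in zip(k_ring, k_guessing)]
    same = opposite = pairs = 0
    for i, j in combinations(range(len(d)), 2):
        pairs += 1
        if d[i] == 0 or d[j] == 0:
            continue
        if d[i] * d[j] > 0:
            same += 1        # both higher in the same game family
        else:
            opposite += 1    # higher in different game families
    return opposite / pairs, same / pairs

opp, same = direction_frequencies([4, 2, 1, 3], [3, 3, 2, 1])
print(f"opposite/same ratio = {opp / same:.2f}")
```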
In the Robot Treatment, the frequency with which two paired players change their rationality levels in the same direction (20.58%) is 3 percentage points higher than the frequency of changing in the opposite directions (17.58%), as shown in Table 4 (the column of Pooled Data). The observed opposite/same ratio of 0.85 significantly deviates from the mean of the simulated datasets (1.00 with a 95 percent confidence interval of 0.96 to 1.01), leading to the rejection of the null hypothesis of independently distributed levels in terms of the ordering of games (\(p\)-value \(<0.0001\)). This result remains robust regardless of the order of treatments, although it only reaches marginal significance when the analysis is limited to the subjects who played the Robot Treatment first (RH Order: \(p\)-value = 0.079; HR Order: \(p\)-value = 0.0004). Consequently, our findings suggest that an individual's strategic reasoning level, measured under an environment where a player's belief is well controlled, could serve as a reliable proxy for the (relative) complexity or difficulty of a game.
In the History Treatment, we find a similar result but with weaker evidence. The simulated datasets generated from the History Treatment data yield a mean opposite/same ratio of 1.00, with a 95 percent confidence interval of 0.95 to 1.01. The actual ratio of 0.90, which is 6% higher than that in the Robot Treatment, still rejects the null hypothesis of independently distributed levels with a significance level of \(p\)-value = 0.002. However, this result becomes less robust when considering the order of treatments. We cannot reject the null hypothesis when examining only the subjects who played against robots before playing against human choice data (RH Order: \(p\)-value = 0.112; HR Order: \(p\)-value = 0.025). Without controlling for a subject's belief about her opponents' strategic thinking abilities, the observed rationality level could reflect either the complexity of the environment or how a subject believes others would perceive the complexity of the environment, and thus has weaker predictive power on other players' (revealed) order of rationality.
### Cognitive Tests, Farsightedness, and Strategic Thinking
If individual strategic sophistication is persistent across games, a natural next question will be whether a subject's performance in other cognitive tests or strategic tasks can predict her strategic reasoning ability. Accordingly, we run regressions of revealed rationality levels on subjects' CRT scores, short-term memory task scores, and farsightedness task scores (see Figure 8). We cluster the regression standard errors at the session level.
| Ring Game vs. Guessing Game | Pooled Data (\(n=293\)): Empirical | Pooled Data: Null | RH Order (\(n=136\)): Empirical | RH Order: Null | HR Order (\(n=157\)): Empirical | HR Order: Null |
| --- | --- | --- | --- | --- | --- | --- |
| **Robot Treatment** | | | | | | |
| Change in opposite direction | 17.5% | 22.6% | 18.7% | 19.8% | 16.5% | 24.1% |
| Change in same direction | 20.6% | 22.6% | 20.2% | 19.8% | 20.8% | 24.1% |
| Opposite/same ratio | 0.85 | 1.00 | 0.93 | 1.00 | 0.79 | 1.00 |
| \(p\)-value | \(<0.0001\) | | 0.079 | | 0.0004 | |
| **History Treatment** | | | | | | |
| Change in opposite direction | 16.3% | 17.9% | 17.1% | 21.2% | 15.6% | 14.5% |
| Change in same direction | 18.1% | 17.9% | 18.2% | 21.3% | 17.8% | 14.5% |
| Opposite/same ratio | 0.90 | 1.00 | 0.94 | 1.00 | 0.88 | 1.00 |
| \(p\)-value | 0.002 | | 0.112 | | 0.025 | |

Table 4: Opposite/same Ratio for the Robot and History Treatment
The definitions of the independent variables are as follows: _CRT Score_ (ranging from 0 to 3) represents the number of correct answers a subject gives to the three CRT questions. _Memory Score_ (ranging from 0 to 11) is defined as the number of correct answers a subject provides before making the first mistake. _Farsightedness_ is an indicator variable that equals one if a subject chooses to go down at the first move in the farsightedness task (see Section 5.2). Lastly, the dependent variable is the individual rationality level (ranging from 0 to 4) revealed in each family of games and each treatment.
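The regressions with session-clustered standard errors can be set up, e.g., with statsmodels; the data frame below is a synthetic stand-in and the column names are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names are illustrative only.
rng = np.random.default_rng(3)
n = 293
df = pd.DataFrame({
    "level": rng.integers(0, 5, size=n),          # revealed rationality level
    "crt_score": rng.integers(0, 4, size=n),      # 0..3 correct CRT answers
    "memory_score": rng.integers(0, 12, size=n),  # 0..11 digit-span score
    "farsighted": rng.integers(0, 2, size=n),     # indicator from Section 5.2
    "session": rng.integers(0, 41, size=n),       # 41 experimental sessions
})
model = smf.ols("level ~ crt_score + memory_score + farsighted", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["session"]})
print(result.summary())
```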
Figure 8: Coefficient Plot for OLS Regressions with Different Dependent Variables. Error bars indicate the 95% CIs, and the standard errors are clustered at the session level.

Figure 8 presents a coefficient plot summarizing the OLS regression results of revealed rationality levels. The analysis demonstrates a positive association between a subject's performance in the CRT and her revealed rationality depth across all types of games and treatments. Overall, the CRT score has a stronger predictive power on subjects' rationality levels in the guessing games and in the Robot Treatment. In the Robot Treatment, each additional correct answer on the CRT is associated with an average increase of 0.298 (\(p\)-value \(<\) 0.001) in the individual's revealed rationality level for ring games and 0.566 (\(p\)-value \(<\) 0.001) for guessing games. Comparatively, in the History Treatment, each additional correct answer on the CRT is associated with a relatively smaller average increase of 0.239 (\(p\)-value = 0.002) in the individual's revealed rationality level for ring games and 0.461 (\(p\)-value \(<\) 0.001) for guessing games, both approximately 0.8 times the coefficient sizes reported under the Robot Treatment.
In contrast to the previous finding, our results show no significant correlation between short-term memory and strategic sophistication. The coefficient estimates of _Memory Score_ are all below 0.03, and all the corresponding \(p\)-values are above 0.3. Notably, these findings are in line with those of GHW, who also observe that CRT scores hold some predictive power over subjects' strategic thinking types, whereas short-term memory capacity does not.
A subject's choice in the farsightedness task also holds significant predictive power over her revealed rationality depth across all types of games and treatments. Similar to the CRT score, we observe a stronger association between farsightedness and individual rationality levels in the guessing games and in the Robot Treatment. In the Robot Treatment, a farsighted subject's revealed rationality level is, on average, 0.569 (\(p\)-value = 0.002) and 0.842 (\(p\)-value \(<\) 0.001) levels higher than that of a myopic subject when playing ring games and guessing games, respectively. Comparatively, in the History Treatment, a farsighted subject's revealed rationality level is, on average, 0.339 (\(p\)-value = 0.050) and 0.631 (\(p\)-value \(<\) 0.001) levels higher than that of a myopic subject when playing ring games and guessing games, respectively. Both of these coefficients are smaller in size compared to the estimates reported for the Robot Treatment. In summary, these results indicate a strong correlation between an important strategic thinking skill in a dynamic game--backward induction ability--and the strategic reasoning ability in a one-shot interaction.
## 8 Concluding Remarks
This study delves into the cognitive capacity of individuals in strategic interactions. To examine their ability to engage in multi-step reasoning, we conduct an experiment designed to elicit and identify each subject's "rationality bound," while controlling for a subject's belief about her opponent's rationality depth. Following the revealed rationality approach, we use two classes of dominance solvable games, ring games and guessing games, as the base games in our experiment for identifying a subject's order of rationality. More importantly, to disentangle the confounding impact of beliefs, we introduce equilibrium-type computer players that are programmed to exhibit infinite order of rationality into the experiment. Under the theoretical framework of GHW, which formalizes the idea of individual capacity of strategic reasoning, we then test (1) whether a subject's order of rationality is (weakly) higher in the Robot Treatment and (2) whether the observed order of rationality in the Robot Treatment exhibits any stable pattern across games.
Overall, our results offer compelling evidence that matching subjects with robot players to elicit and identify individual strategic reasoning ability is an effective approach. First, subjects exhibit a higher order of rationality in the Robot Treatment compared to the History Treatment, supporting the hypothesis that a subject plays at her highest achievable rationality level (i.e., her capacity bound) in the Robot Treatment. Second, the observed absolute and relative order of rationality in the Robot Treatment remains stable across different types of games, a rare finding in previous literature. Additionally, we find a positive association between a subject's rationality level and her CRT score and backward induction ability, while no significant correlation is observed with short-term memory. These findings indicate that strategic reasoning ability may represent an inherent personal characteristic that is distinct from other cognitive abilities and can be reliably inferred from choice data when subjects' beliefs about others are properly controlled.
Considering that the revealed rationality bound identified in the Robot Treatment can serve as a reliable proxy for an individual's strategic thinking ability, we can independently implement dominance-solvable games, such as ring games and guessing games, with human subjects playing against fully rational computer opponents to effectively elicit and identify human players' strategic capacity, either before or after any lab experiment. By matching human players with computer players, their revealed strategic sophistication is not confounded by their endogenous beliefs about each other's level of sophistication. Furthermore, the robot approach eliminates the need for multiple players to identify a single player's \(k\)th-order rationality in a game, allowing for an individual task that efficiently elicits and identifies a subject's higher-order rationality. Additionally, as the interactions with computer players are independent of the interactions with human players, the two experiences are expected to have minimal influence on each other. Consequently, the measurement of strategic reasoning ability could remain distinct from the behavioral patterns observed in the main experiment session, thereby avoiding any potential contamination between the two.
In conclusion, we believe that such an experimental protocol, particularly the robot approach, has the potential to become a standard tool for measuring a player's actual strategic sophistication, analogous to the established method for eliciting risk attitudes in Holt & Laury (2002) but applied to the domain of strategic reasoning. By utilizing this tool, we can gain a better understanding of whether non-equilibrium behavior observed in the main experiment can be attributed to bounded strategic thinking capability or other factors, such as non-equilibrium beliefs and social preferences.
|
2308.00061 | Hall mobilities and sheet carrier densities in a single LiNbO$_3$ conductive ferroelectric domain wall | For the last decade, conductive domain walls (CDWs) in single crystals of the uniaxial model ferroelectric lithium niobate (LiNbO$_3$, LNO) have been shown to reach resistances more than 10 orders of magnitude lower than those of the surrounding bulk, with charge carriers being firmly confined to sheets of a few nanometers in width. LNO is thus currently attracting increased attention, since it bears the potential for variably designing room-temperature nanoelectronic circuits and devices based on such CDWs. In this context, the reliable determination of the fundamental transport parameters of LNO CDWs, in particular the 2D charge carrier density $n_{2D}$ and the Hall mobility $\mu_{H}$ of the majority carriers, is of highest interest. In this contribution, we present and apply a robust and easy-to-prepare Hall-effect measurement setup by adapting the standard 4-probe van-der-Pauw method to contact a single, hexagonally-shaped domain wall that fully penetrates the 200-$\mu$m-thick LNO bulk single crystal. We then determine $n_{2D}$ and $\mu_{H}$ for a set of external magnetic fields $B$ and prove the expected cosine-like angular dependence of the Hall voltage. Lastly, we present photo-Hall measurements of one and the same DW, determining the impact of super-bandgap illumination on the 2D charge carrier density $n_{2D}$. | Henrik Beccard, Elke Beyreuther, Benjamin Kirbus, Samuel D. Seddon, Michael Rüsing, Lukas M. Eng | 2023-07-31T18:31:58Z | http://arxiv.org/abs/2308.00061v2 |

# Hall mobilities and sheet carrier densities in a single LiNbO\({}_{3}\) conductive ferroelectric domain wall
###### Abstract
For the last decade, conductive domain walls (CDWs) in single crystals of the uniaxial model ferroelectric lithium niobate (LiNbO\({}_{3}\), LNO) have been shown to reach resistances more than 10 orders of magnitude lower than those of the surrounding bulk, with charge carriers being firmly confined to sheets of a few nanometers in width. LNO is thus currently attracting increased attention, since it bears the potential for variably designing room-temperature nanoelectronic circuits and devices based on such CDWs. In this context, the reliable determination of the fundamental transport parameters of LNO CDWs, in particular the 2D charge carrier density \(n_{2D}\) and the Hall mobility \(\mu_{H}\) of the majority carriers, is of highest interest. In this contribution, we present and apply a robust and easy-to-prepare Hall-effect measurement setup by adapting the standard 4-probe van-der-Pauw method to contact a single, hexagonally-shaped domain wall that fully penetrates the 200-\(\mu\)m-thick LNO bulk single crystal. We then determine \(n_{2D}\) and \(\mu_{H}\) for a set of external magnetic fields \(B\) and prove the expected cosine-like angular dependence of the Hall voltage. Lastly, we present photo-Hall measurements of one and the same DW, determining the impact of super-bandgap illumination on the 2D charge carrier density \(n_{2D}\).
ferroelectrics, domain walls, domain wall conductivity, Hall effect, photo-Hall effect, single crystals, lithium niobate, LiNbO\({}_{3}\), van-der-Pauw, confinement, 2D charge carrier density, 2D electron gas, 2DEG.
## I Introduction
Continuous progress in solid-state nanotechnology relies on answering a number of unsolved scientific questions with respect to both material systems and device operation principles. One such pressing issue is electrical transport under well-defined and controlled conditions in reduced dimensions, such as in two-dimensional (2D) material systems. While, for example, the 2D van-der-Waals materials are thoroughly analyzed [1], our focus here lies on 2D sheets built up from conductive ferroelectric domain walls (CDWs), i.e., the transition regions between ferroelectric domains of opposite dielectric polarization, which can be tuned to exhibit a strongly enhanced conductivity as compared to the surrounding bulk [2; 3]. In fact, DWs in ferroelectrics have been reported to form effective 2D electron gases (2DEGs) after applying specialized preparation routines to these wide-bandgap bulk materials [4; 5]. LiNbO\({}_{3}\) (LNO) turned out to be _the_ "drosophila" ferroelectric for CDW engineering, since it is robust, semiconductor compatible, and easy to reconfigure at room temperature, while being commercially available both as a bulk material and in crystalline thin-film-on-insulator form [6; 7; 8; 9]. Notably, this ever-rising interest in CDWs has been reviewed with respect to both theoretical and device-oriented aspects in a number of excellent works [10; 11; 12; 13; 14; 15; 16; 17].
At the fundamental level, primarily the divergence of the ferroelectric polarization (i.e., the local vector field that describes the volume density of unit cell dipoles) at the DW has been identified as one of the main driving forces behind the localized DW conductivity, which, notably, was predicted as early as the 1970s [18]. This divergence is understood as an intrinsic charge density acting as the source of the so-called depolarization field. In turn, the emergent electrostatic field leads to the attraction of free charge carriers to the DWs, as well as to the population of otherwise free electron and/or hole states due to local band bending at the DW position. For the simplest case of a uniaxial ferroelectric (as is LNO) that shows purely Ising-type DWs, this divergence is directly related to the geometrical inclination of the DW with respect to the polar axis [2; 19].
Nevertheless, on a practical level, the determination of DW-related quantitative transport data, such as the charge carrier type, density, and mobility, has to date been restricted to only a few exemplary cases. In this context, especially the analysis of the Hall effect in DWs in the improper ferroelectrics YbMnO\({}_{3}\) and ErMnO\({}_{3}\) has shown to be a valuable tool, as reported in the groundbreaking papers of Campbell _et al._[20] and Turner _et al._[21], respectively. Those authors chose scanning-probe-based approaches for their evaluation, which required a cumbersome and sophisticated procedure to disentangle the Hall potential from the cantilever-based three-terminal reading via calibration routines and accompanying simulations. Moreover, as these authors state themselves, their approach is mainly valid for extracting near-surface charge carrier densities and mobilities only. With respect to proper ferroelectrics, the Hall-effect investigation by Qian _et al._[22] of DW pn-junctions engineered into x-cut thin-film lithium niobate (TFLN) was equally limited to surface-near carrier densities, while McCluskey _et al._[23] made use of the fact that a DW in z-cut TFLN mirrors the Corbino cone geometry and in turn found promisingly high carrier mobilities, which were extracted from a magnetoresistance analysis. Notably, the recent study by Beccard _et al._[5] proposes a completely different approach, quantifying both the 2D charge carrier densities \(n_{2D}\) and Hall mobilities \(\mu_{H}\) through "macroscopic" Hall-effect measurements, by adapting the classical van-der-Pauw (vdP) [24] four-point electrode configuration to measurements from a single CDW in bulk BaTiO\({}_{3}\).
The work presented in this paper sets in exactly at this point, by adopting the vdP scenario to the particular case of a single CDW in z-cut bulk LiNbO\({}_{3}\), the uniaxial model ferroelectric of utmost importance for prospective nanoscale applications. While significant DW conductivity (DWC) in LNO has been demonstrated over the last decade in a number of consecutive works [2; 3; 25; 26], Hall-effect measurements in LNO CDWs are still lacking. In fact, the challenge consists in adapting the vdP method to the hexagonally-shaped DW in LNO as depicted in Fig. 1; as seen, the four vdP contacts then likely measure the parallel junction of two such conductive DWs in LNO, rather than connecting to one single planar 2D sheet as was the case for the CDW in BaTiO\({}_{3}\)[5]. Nonetheless, in the following we will show that evaluating the DW sheet resistance and the magnetic-field-dependent Hall voltages still allows applying the vdP method, hence revealing quantitative data for both \(n_{2D}\) and \(\mu_{H}\) at a so far unprecedented level of precision. In fact, the error stemming from the parallel DW junction is a factor of two at maximum, as is easily seen by calculating the total resistance of a parallel junction of two identical / different resistors, i.e., two CDWs. Notably, this factor of two does not change the order of magnitude of either \(n_{2D}\) or \(\mu_{H}\). In addition, the same error factor of two is obtained when analyzing the Hall data in a more rigorous way by applying the concept of the resistor network (RN) [27], as is discussed in the SI-section D.
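To make the worst-case estimate explicit, consider the idealized sketch of two purely ohmic walls with resistances \(R_{1}\) and \(R_{2}\) contacted in parallel (an assumption; real walls need not be ohmic or identical):

\[
R_{\mathrm{tot}}=\left(\frac{1}{R_{1}}+\frac{1}{R_{2}}\right)^{-1}=\frac{R_{1}R_{2}}{R_{1}+R_{2}},\qquad R_{1}=R_{2}=R\;\Longrightarrow\;R_{\mathrm{tot}}=\frac{R}{2}.
\]

Attributing \(R_{\mathrm{tot}}\) to a single wall thus misstates the single-wall resistance by at most a factor of two, which is the bounded error quoted above for \(n_{2D}\) and \(\mu_{H}\).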
Moreover, the integrity of the results obtained by the vdP method is corroborated here by two further investigations: (i) quantifying the angular Hall-voltage dependence, and (ii) inspecting the Hall-voltage response under super-bandgap illumination, purposely generating additional electron-hole pairs within the CDW.
## II Materials and Methods
### Samples - Fabrication of Domain Walls
For our study here, we employed two 5-mol% Mg-doped congruent z-cut LiNbO\({}_{3}\) crystal plates, with sizes of approx. 1\(\times\)0.5 mm\({}^{2}\) in the xy-plane and a thickness of 200 \(\mu\)m in the polar z-direction, cut from a commercial wafer by Yamaju Ceramics Co., Ltd. In the following, we label these samples as "LNO\({}_{1}\)" and "LNO\({}_{2}\)". A single, fully penetrating, hexagonally-shaped ferroelectric domain of 257 \(\mu\)m (LNO\({}_{1}\)) and 356 \(\mu\)m (LNO\({}_{2}\)) diameter [SI-Figs. S7(a) and (b)] was then grown into these samples by applying the well-established method of UV-assisted poling [25; 28] using liquid electrodes and a He-Cd laser (Kimmon Koha IK3301R-G) operated at a 325 nm wavelength; for more details refer, e.g., to [25; 29]. Then, four 8-nm-thick chromium electrodes were vapor-deposited under high-vacuum conditions onto every DW structure, using a shadow mask. The exact electrode geometry and arrangement with respect to the two crystals and DW orientations are sketched in Fig. 1(b,d), while a polarization-sensitive microscopy top-view image is found in the SI-Fig. S7(c). In the following, the electrodes are consecutively labelled by indices 1-4, as is standard for 4-point van-der-Pauw experiments. As a result, the 4 vdP electrodes 1-4 directly contact the two DWs that lie in the xz-plane, one in the front (red, dashed), one in the back (green, dotted), as seen in Fig. 1 for samples "LNO\({}_{1}\)" and "LNO\({}_{2}\)", respectively. Note that in parallel, two monodomain reference samples "LNO\({}_{3}\)" and "LNO\({}_{4}\)" were prepared as well, having identical electrodes 1 - 4 but containing no DWs.
### Enhancement of the Domain Wall Conductivity
Figure 1: (a,c) 3D Cherenkov second-harmonic generation microscopy (CSHG) data, and (b,d) chromium electrode arrangement 1–4 for the two samples “LNO\({}_{1}\)” and “LNO\({}_{2}\)”, as prepared from z-cut LiNbO\({}_{3}\) single crystals for Hall-transport measurements using the van-der-Pauw method. Note that contacts 1–4 enclose a parallel junction of 2 conductive domain walls (CDWs), one in the front (red, dashed) and one in the back (green, dotted). The different protocols applied for domain-wall conductivity (DWC) enhancement in “LNO\({}_{1}\)” (a,b) and “LNO\({}_{2}\)” (c,d) resulted in the different shape and appearance, as seen in (a,c). The enhanced and desired head-to-head (“H2H”) DW inclination is color coded in red in the CSHG scale bar, while tail-to-tail type DWs (“T2T”) appear in blue. As seen, sample “LNO\({}_{2}\)” shows a larger DW inclination enhancement, justifying the larger DW current as compared to sample LNO\({}_{1}\).

Subsequently, the as-grown hexagonal DWs in samples “LNO\({}_{1}\)” and “LNO\({}_{2}\)” underwent the DWC “enhancement” procedure by applying high voltages between the z+ and z- sides, supplied by the voltage source of a Keithley 6517B electrometer, as described previously in refs. [2; 3]. The corresponding current-voltage curves recorded during these voltage ramps are depicted in SI-Fig. S1, and the exact parameters of the post-growth treatment are stated in SI-tab. S1. Our enhancement procedure leads to higher average DW inclination angles relative to the polar z-axis and, in turn, to stronger DW-confined charge accumulation and thus to a larger DW conductivity. However, in order to possibly break up the initial parallel DW junction arrangement [see again Fig. 1] and to apply our Hall-effect measurements to a single CDW, the enhancement procedure here was realized differently and deliberately asymmetrically for the two LNO samples. In particular, we treated sample \(\text{LNO}_{1}\) with only one high-voltage ramp between electrodes 1 and 3, while sample \(\text{LNO}_{2}\) experienced consecutive voltage ramps between both top-bottom electrode pairs 1/3 and 2/4, respectively, cf. SI-tab. S1 and SI-Fig. S1. Accompanying imaging by 3D Cherenkov second-harmonic generation microscopy (CSHG), our standard non-destructive, real-space method of choice both for visualizing ferroelectric DWs [3; 30; 31] and especially for correlating their local inclination with the DW conductivity, indeed elucidates a very different DW appearance for samples \(\text{LNO}_{1}\) and \(\text{LNO}_{2}\) [see Fig. 1(a) and (c)], with average inclination angles between \(0^{\circ}\) and \(0.5^{\circ}\) and broad inclination distributions. The two different enhancement procedures for samples \(\text{LNO}_{1}\) and \(\text{LNO}_{2}\) are clearly reflected in the two CSHG pictures in Fig. 1(a),(c), where the walls in \(\text{LNO}_{2}\) show on average a much larger distribution of angles, suggesting a larger local screening charge, which is later reflected in the carrier densities; nonetheless, the overall DW conductivities could be readily enhanced in both cases, by up to three (\(\text{LNO}_{1}\)) and six (\(\text{LNO}_{2}\)) orders of magnitude (cf. SI-Fig. S2(a) and (b) and SI-tab. S1). The detailed current-voltage characteristics recorded between the different electrode pairs can be found in the Supplemental Information, part A.
### Realization of Hall Voltage Measurements
For quantifying the LNO DW Hall voltage, the adapted vdP configuration [24] was employed, as illustrated in the inset of Fig. 2, and, within a previous study, successfully tested for 2DEGs in BaTiO\({}_{3}\) CDWs [5]. The sample was therefore mounted into an electromagnet at room temperature that delivers magnetic fields \(B\) of up to \(\pm 420\) mT. Contacts 1 and 4 were connected to the Keithley 6517B electrometer to apply a bias voltage of 6 V, which resulted in a domain wall current \(I=I_{14}\) on the order of 0.1 nA. The corresponding carriers hence experience the Lorentz force \(F_{L}\) as sketched in the inset of Fig. 2, resulting in the Hall voltage \(U_{H}:=U_{23}\) that is detected between contacts 2 and 3 using a Keithley 2700 multimeter. The ratio \(R_{h}=U_{H}/I\), subsequently denoted as the Hall resistance, was determined for six different \(B\)-field values set between 330 mT and 420 mT. In order to account for any sample misalignment within the electromagnet (i.e., non-parallel alignment of the \(B\)-field and the DW normal vector), the \(B\)-field direction was switched by changing the sign of the electromagnet's voltage, and the measurement series was repeated - a common practice for Hall voltage measurements [32]. The corresponding data set was acquired for both DW samples \(\text{LNO}_{1}\) and \(\text{LNO}_{2}\) (see Fig. 2 for the averaged data and SI-Fig. S5 for the raw data, i.e., the \(B\)-field-direction dependent data).
### Angular Dependence of the Hall Voltage
Now, to verify that a true Hall voltage, and not a parasitic quantity, is measured, the angular dependence \(U_{H}(\Phi)\) was recorded next, by mounting sample \(\text{LNO}_{2}\) on a rotation table inside the electromagnet. Here, \(\Phi\) denotes the angle between the magnetic field vector \(B\) and the normal of the carrier transport plane, such that \(U_{H}\) is expected to vary in a cosine fashion as \(\Phi\) changes between \(0^{\circ}\) and \(90^{\circ}\) [see inset of Fig. 3(a)]. \(U_{H}(\Phi)\) was then recorded for a fixed \(B\)-field of 400 mT at 5 different \(\Phi\) values including \(0^{\circ}\) and \(90^{\circ}\). The data is plotted in Fig. 3(a) and clearly shows the expected Hall-voltage behavior, in particular also yielding \(U_{H}(\Phi=90^{\circ})=0\).
Figure 2: Results of the macroscopic Hall-effect measurements for samples \(\text{LNO}_{1}\) and \(\text{LNO}_{2}\), carried out as sketched in the inset. The \(B\)-field was applied perpendicular to the CDWs and then swept both positively and negatively. The charge carriers, i.e., mainly electrons – as discussed in the text – that flow between the current contacts 1 and 4 are deflected by the Lorentz force \(F_{L}\), resulting in a measurable Hall voltage \(U_{H}=U_{23}\) read between contacts 2 and 3. Plotted in the main diagram is the Hall resistance \(R_{h}=U_{23}/I_{14}\) as a function of the \(B\)-field for the two samples, with sample \(\text{LNO}_{2}\) showing a nearly 10-times larger response. Note that the plotted \(R_{h}\) values are the averaged values of \(R_{h}(+B)\) and \(R_{h}(-B)\) in order to compensate for sample misalignment effects [see raw data in SI-Fig. S5, and the reference \(U_{H}\) measurement recorded from a monodomain bulk LNO crystal (containing no CDW at all) in SI-Fig. S6(a)]. The corresponding 2D charge carrier densities \(n_{2D}\) were then extracted from the slopes of these linear relationships according to eq. (1) and are discussed further below as well as summarized in Tab. 1.
### Hall Voltage under Super-Bandgap Illumination
A second independent integrity experiment, also carried out only with specimen \(\text{LNO}_{2}\), investigated the influence of super-bandgap illumination on the DW current; such illumination is expected to significantly enlarge the sheet carrier density \(n_{2D}\) by generating electron-hole pairs, which must then decrease \(R_{h}\) and \(U_{H}\). For this complementary test, the DW of \(\text{LNO}_{2}\) was placed in the 110-mT field of a permanent magnet with \(\Phi=0\)\({}^{\circ}\) and then illuminated at a 310 nm wavelength, which corresponds to the optical band gap of \(E_{g}=4.0\) eV for bulk 5-mol\(\%\) Mg-doped \(\text{LiNbO}_{3}\)[25]. A 1000-W Xe arc lamp coupled into a grating monochromator (Cornerstone 260 by Oriel Instruments) and focused onto the whole 5\(\times\)10 mm\({}^{2}\) sample area served as the photo-exciting light source, applying a constant photon flux of \(10^{13}\) s\({}^{-1}\) [see sketch in Fig. 3(b)]. Using this setup, \(U_{H}\) was then acquired as a function of the DW current \(I\), both with and without light. The data are displayed in Fig. 3(b).
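Anticipating the relation \(U_{H}=I\cdot B/(q\cdot n_{2D})\) derived in the next subsection, the slopes of the two \(U_{H}\)-vs.-\(I\) lines in Fig. 3(b) translate directly into dark and illuminated sheet densities. A minimal sketch with synthetic numbers (the function name and all numerical values are ours):

```python
import numpy as np

q = 1.602176634e-19   # elementary charge in C
B = 0.110             # T (permanent magnet)

def n2d_from_iv(I, U_H):
    """Sheet density from the slope of U_H vs. I at fixed B,
    via U_H = I * B / (q * n_2D)."""
    slope = np.polyfit(I, U_H, 1)[0]   # dU_H/dI in Ohm
    return B / (q * slope)             # in m^-2

# Synthetic dark curve generated with an assumed n_2D = 3e9 m^-2:
I = np.array([0.05, 0.10, 0.15, 0.20]) * 1e-9   # A
U_dark = I * B / (q * 3.0e9)
print(f"n_2D (dark, recovered) = {n2d_from_iv(I, U_dark):.2e} m^-2")
# comparing against the illuminated slope then quantifies the
# photo-generated increase: n2d_from_iv(I, U_light) - n2d_from_iv(I, U_dark)
```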
### Theoretical Background for the Extraction of Sheet Carrier Densities and Hall Mobilities
Prior to discussing the results of these experiments, we briefly summarize the mathematical background needed to extract both the charge carrier densities \(n_{2D}\) and the Hall mobilities \(\mu_{H}\) from our experimental data. The full discussion was laid out step-by-step in Beccard _et al._[5], where we had already underlined the indispensable necessity of offset and error corrections in vdP experiments, as outlined by Werner [32]. To interpret our Hall data, we assume (A) that the DW current is carried by electrons as the majority charge carriers. That the majority carriers are negative was derived from a preliminary experiment simply testing the polarity of the Hall voltage. The assumption that these carriers are very probably electrons is derived from two recent studies: first, from in-situ strain experiments on a set of equivalently prepared DWs based on crystal pieces of LNO wafers from the same manufacturer [33], which is consistent with the superior mobility of electrons as compared to holes and with the positive bound charges at a head-to-head (H2H) DW requiring negative screening charges [34; 26]; and second, from temperature-dependent DWC measurements revealing activation energies in the range of 100-250 meV [35], thus pointing towards electron-polaron-mediated hopping, with ionic transport being very improbable. And (B), we assume that the magnetic field stands perpendicular to the conductive layer, i.e., the CDW. Then, the Hall voltage \(U_{H}\) follows [36; 37] the relationship \(U_{H}=\frac{I\cdot B}{q\cdot n\cdot d}\), with \(B\) the absolute value of the magnetic field, \(I\) the current driven through the conducting DW layer (in particular: \(I=I_{14}\)), \(q\) the elementary charge, \(d\) the DW width, and \(n\) the 3D charge carrier density. Introducing the 2D charge carrier density \(n_{2D}\), also referred to as the sheet carrier density, with \(n_{2D}=n\cdot d\), and the Hall resistance \(R_{h}=U_{H}/I=U_{23}/I_{14}\), the former can be extracted from the slope of the (measured) expected linear \(R_{h}\)-vs.-\(B\) dependence as:
\[R_{h}\propto\frac{B}{q\cdot n_{2D}}\quad. \tag{1}\]

Figure 3: (a) The DW of sample \(\text{LNO}_{2}\) was exemplarily rotated in the \(B\)-field of the electromagnet, with the angle \(\Phi\) being varied between 0 and 90\({}^{\circ}\) and the corresponding Hall voltages measured, as sketched in the inset. As expected, the Hall voltage \(U_{H}\) decreases when changing \(\Phi\) from 0\({}^{\circ}\) to 90\({}^{\circ}\) and the ratio \(U_{H}/U_{H,0}\) with \(U_{H,0}\) being the Hall voltage at 0\({}^{\circ}\) (i.e., with the magnetic field vector being aligned perpendicular to both the CDW and the injected current) follows the theoretically predicted cosine function very clearly. All data points were acquired with a constant magnetic field of 400 mT. (b) Impact of super-bandgap illumination on the Hall transport for sample \(\text{LNO}_{2}\), which was placed in a constant field of \(B=110\) mT supplied by a permanent magnet and illuminated at a wavelength of 310 nm under a constant photon flux of \(10^{13}\)s\({}^{-1}\). The Hall voltage \(U_{H}=U_{23}\) was recorded as a function of the current \(I=I_{14}\) for the two cases: (i) under illumination (purple data points) and (ii) in the dark (black data points). Both data sets show a linear behavior, however, with a significantly increased response when illuminating the DW, then generating additional electron-hole pairs.
For the subsequent calculation of the Hall mobility \(\mu_{H}\) of the majority charge carriers, we employ the relation:
\[\mu_{H}=\frac{1}{q\cdot n_{2D}\cdot R_{S}}\quad. \tag{2}\]
Note that the Hall mobility \(\mu_{H}\) is linked to the "actual" mobility \(\mu\) via the Hall factor \(r_{H}\) as \(\mu_{H}=r_{H}\cdot\mu\). The Hall factor, which depends on internal scattering mechanisms, is not known in most practical cases and is commonly assumed to be unity [20]. Furthermore, \(R_{S}\) is the sheet resistance of the CDW, which is readily obtained by solving van der Pauw's equation numerically [37]:
\[\exp\left(-\frac{\pi}{R_{S}}R_{13,42}\right)+\exp\left(-\frac{\pi}{R_{S}}R_{34,21}\right)=1\quad. \tag{3}\]
\(R_{13,42}=U_{42}/I_{13}\) and \(R_{34,21}=U_{21}/I_{34}\) are easily extracted from corresponding current injection and voltage measurements between the respective contacts. The values measured here are all listed in SI-table S3.
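To make the extraction chain of eqs. (1)-(3) concrete, the following minimal Python sketch reproduces the order of magnitude of the Table 1 values for sample LNO\({}_{1}\). The slope \(b_{1}\) is the fitted value quoted in Sec. III below; the two four-point resistances are illustrative placeholders chosen such that \(R_{S}\) comes out near the 5.6 T\(\Omega/\square\) listed in Table 1 (the actual values reside in SI-table S3 and are not reproduced here), and all variable names are ours. The field-direction averaging and the offset corrections of Werner [32] are omitted for brevity.

```python
import numpy as np
from scipy.optimize import brentq

q = 1.602176634e-19                    # elementary charge (C)

# Eq. (1): R_h = b * B with slope b = 1/(q * n_2D), obtained by a linear fit.
b1 = 31e9                              # Ohm/T, fitted slope for sample LNO_1
n2d = 1.0 / (q * b1)                   # sheet carrier density (m^-2)
print(f"n_2D = {n2d * 1e-4:.1e} cm^-2")        # ~2e4 cm^-2, cf. Table 1

# Eq. (3): solve van der Pauw's equation numerically for R_S.
R_13_42 = R_34_21 = 1.24e12            # Ohm, illustrative placeholder values
vdp = lambda Rs: (np.exp(-np.pi * R_13_42 / Rs)
                  + np.exp(-np.pi * R_34_21 / Rs) - 1.0)
R_S = brentq(vdp, 1e10, 1e16)          # bracketing interval chosen generously

# Eq. (2): Hall mobility of the majority carriers.
mu_H = 1.0 / (q * n2d * R_S)           # m^2/(V s)
print(f"mu_H = {mu_H * 1e4:.0f} cm^2/(V s)")   # ~55, cf. 54 +/- 5 in Table 1
```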
## III Results and discussion
As key findings of our study, the measured relationship between \(R_{h}=U_{H}/I\) and the absolute value of the magnetic field \(B\) normal to the DW is depicted for both DWs in Fig. 2. The corresponding raw data before averaging over both field directions can be found in SI-Fig. S5, while an additional averaging over the two possible current directions, as in refs. [5; 32], could not be realized due to the rectifying character of the \(U_{23}\)-vs-\(I_{14}\) curves. According to eq. (1), rewritten as \(R_{h}=b\cdot B\) with the slope \(b=\frac{1}{q\cdot n_{2D}}\), very clear linear dependencies of the Hall resistance \(R_{h}\) on the magnetic field \(B\) are indeed observed experimentally for the DWs in both LNO samples. The corresponding slopes were extracted from linear fits. They read \(b_{1}=(31\pm 3)\cdot 10^{9}\)\(\Omega T^{-1}\) and \(b_{2}=(21\pm 3)\cdot 10^{8}\)\(\Omega T^{-1}\) for samples LNO\({}_{1}\) and LNO\({}_{2}\), respectively. Table 1 summarizes the sheet carrier densities \(n_{2D}\) as extracted from these slopes. The numerical values of \(n_{2D}\) differ by more than one order of magnitude between the two specimens, with LNO\({}_{2}\) showing the larger value of 3\(\times\)10\({}^{5}\)cm\({}^{-2}\), which is two orders of magnitude more than recently observed in conductive BaTiO\({}_{3}\) DWs and, moreover, a very reasonable value for a 2D electronic system [5]. Reconsidering the findings of the CSHG imaging [cf. Fig. 1(a),(c)], the \(n_{2D}\) discrepancy between LNO\({}_{1}\) and LNO\({}_{2}\) is not a surprise, since the DW of LNO\({}_{2}\) shows a broader range of DW inclination angles; consequently, a larger charge carrier reservoir than in LNO\({}_{1}\) is expected. In other words, the link between the DW geometry, or more generally the DW's real structure, and the electrical performance is directly reflected in the present Hall-effect results.
In principle, the 3D charge carrier density \(n=n_{2D}/d\) can be calculated as well, provided the DW width \(d\) is known. For the two specimens here, \(d\) is unknown; to tentatively estimate typical values we use the literature value of 174 pm, which was derived from transmission electron microscopy (TEM) by Gonnissen _et al._[38] on macroscopically non-inclined LNO domain walls, and obtain carrier densities \(n\) of \(1.15\cdot 10^{12}\) cm\({}^{-3}\) for LNO\({}_{1}\) and \(17\cdot 10^{12}\) cm\({}^{-3}\) for LNO\({}_{2}\), respectively. However, as of now it is not clear (i) whether the width of our conductivity-enhanced DWs is comparable with the width of the DWs as measured by Gonnissen _et al._, and (ii) to what extent the transport channel differs in width from the polarization change as measured via TEM, i.e., whether screening charges are trapped in a larger area. Therefore, the above values for \(n\) should be seen as first "qualified guesses".
The second quantity evaluated from our four-point probe setup, employing eqs. (2) and (3), is the Hall mobility \(\mu_{H}\). The extracted values are listed in Tab. 1 as well. In comparison to the LNO _bulk_ mobility, reported [39] to be 0.8 cm\({}^{2}\)/Vs, the _domain walls'_ Hall mobilities found here are significantly (almost two orders of magnitude) higher, which is an expected and desirable result. On the other hand, the Hall mobilities in _thin-film based_ LNO domain walls, as reported by Qian _et al._[22] (337.30 cm\({}^{2}\)/Vs) and McCluskey _et al._[23] (around 3700 cm\({}^{2}\)/Vs), are again 1-2 orders of magnitude larger, and it will be a future experimental challenge to investigate whether these ranges can be achieved within LNO single-crystal DWs as well. One may question to what extent the preconditions that justify eq. (3) are satisfied for the DWs of the current study. In an ideal case, the conducting sheet in the van-der-Pauw configuration has to be homogeneous, isotropic, uniform in thickness, and without holes, and the contacts have to be point contacts at the perimeter. Reconsidering the CSHG-microscopy images, which show a significant range of inclination angles and a tendency towards spike-domain formation at least for sample LNO\({}_{2}\), and the fact that the LNO DWs are tube-like rather than forming a single 2D sheet, it appears necessary to estimate the errors induced by these non-idealities:
In the simplest case, as indicated in Fig. 1(b,d), the Hall transport in samples LNO\({}_{1}\) and LNO\({}_{2}\) can be modeled as parallel conduction through two identical DW sheets, which
\begin{table}
\begin{tabular}{l c c c} sample & \(R_{S}\) & \(n_{2D}\) & \(\mu_{H}\) \\ & \((10^{12}\Omega/\square)\) & \((10^{3}\)/cm\({}^{2})\) & (cm\({}^{2}\)/Vs) \\ \hline LNO\({}_{1}\) & 5.6 & \(20\pm 2\) & \(54\pm 5\) \\ LNO\({}_{2}\) & 0.6 & \(297\pm 36\) & \(35\pm 4\) \\ \end{tabular}
\end{table}
Table 1: Overview of the DW parameters, i.e., the sheet resistance \(R_{S}\) (for their graphical determination see SI-tab. S3 and SI-fig. S4), the 2-dimensional charge carrier density \(n_{2D}\), and the Hall mobility \(\mu_{H}\), as extracted from DW-confined Hall-effect measurements using eqs. (1)-(3).
would mean a factor of 2 between the measured current and the current flowing through one of the two sheets; in turn, \(n_{2D}\) would be diminished by a factor of 2. Accounting for the more complex real structure of the CDWs in samples LNO\({}_{1}\) and LNO\({}_{2}\) [see Fig. 1(a,c)], we performed resistor-network simulations as described in detail earlier [27] and briefly explicated in SI-section D, with the similar result that the determined charge carrier densities remain on the same order of magnitude, requiring correction factors of 0.51 and 0.65 for samples LNO\({}_{1}\) and LNO\({}_{2}\), respectively (see SI-sec. D). Note that the other extreme case, i.e., a strongly asymmetric conductivity of the two parallel DW sheets, would not need such a correction, since the measurement would then mainly reflect the "Hall behavior" of the highly conductive DW. Against the background that in a Hall scenario primarily the _order of magnitude_ of carrier densities and mobilities is of interest, correction factors between 0.5 and 1 are fully acceptable in any case.
Nevertheless, the non-ideal compliance with the van-der-Pauw restrictions gave further motivation to test the integrity of the approach by additional Hall-effect measurements:
In a first supporting experiment we recorded the dependence of \(U_{H}\) on the angle \(\Phi\) enclosed between the magnetic field \(B\) and the conducting sheet, as exemplarily illustrated in Fig. 3(a) for sample LNO\({}_{2}\), i.e., the DW with the largest charge carrier density and the lowest sheet resistance. As seen from the plot of the normalized Hall voltage \(U_{H}/U_{H,0}\) versus angle \(\Phi\), the expected cosine behavior is convincingly reproduced by the measured data.
In a second additional experiment, illustrated in Fig. 3(b) and performed again with the DW of specimen LNO\({}_{2}\), we studied whether super-bandgap illumination at 310 nm, corresponding to the 4.0-eV optical bandgap of Mg-doped LiNbO\({}_{3}\), impacts the Hall voltage. In fact, the earliest experiments on DW conductivity in LNO had already reported qualitatively that super-bandgap light strongly enhances the charge carrier density at DWs [25]. Indeed, the slope of the \(U_{H}\)-vs.-\(I\) curve decreases under UV illumination, as shown in Fig. 3(b), indicating a decrease in \(R_{h}\) and hence an increase in the sheet carrier density \(n_{2D}\), as follows from eq. (1). The numerical evaluation of the slope via linear curve fitting yields a significant increase of the carrier density by a factor of 4-5 under super-bandgap excitation with a rather moderate photon flux. In a control experiment, a monodomain z-cut LiNbO\({}_{3}\) bulk crystal (i.e., without hexagonal domain structures) was covered with the same electrode configuration and tested in the same measurement scenario; a bulk photo-Hall effect could be excluded, since no functional relationship between the driving current \(I_{14}\) and the Hall voltage \(U_{23}\) beyond noise was observed [SI-Fig. S6(b)]. Thus, the proposed Hall-effect measurement setup is also suited for investigating the photo-induced DW-confined transport behavior, which clearly opens the door to DW-based devices for nano-optoelectronic applications as well.
## IV Conclusion
In summary, two exemplary conductive ferroelectric domain walls, engineered into a z-cut 200-\(\upmu\)m-thick 5-mol\(\%\) MgO-doped LiNbO\({}_{3}\) single crystal, completely penetrating the latter, and shaped like hexagonal tubes with, however, different microscopic real structure, were macroscopically electrically connected with four Cr electrodes (two on the z\({}^{+}\) and two on the z\({}^{-}\) surface) in order to set up a van-der-Pauw four-point-probe geometry for Hall probing. This setting was employed for measuring the Hall resistance \(R_{h}\) as a function of the magnetic field \(B\) applied perpendicular to two of the six side sheets of the hexagonally-shaped DW tubes, as well as the sheet resistance \(R_{S}\) of the DWs, which finally allowed us to extract two characteristic key quantities of such low-dimensional electronic systems: the 2D charge carrier density \(n_{2D}\), found to be in the range of 20...300\(\cdot\)10\({}^{3}\) cm\({}^{-2}\), and the Hall mobility \(\mu_{H}\), extracted as 54 and 35 cm\({}^{2}\)(Vs)\({}^{-1}\) for the two samples, respectively, with a reasonable error of around 10%. The validity of these numbers was further tested through angle- and illumination-dependent Hall-voltage recordings, which both showed the expected behavior. Moreover, we employed resistor-network simulations to calculate correction factors for \(n_{2D}\) and \(\mu_{H}\), accounting for the parallel connection formed by the two CDW sheets; these factors were in the range of 0.5 and thus did not change the order of magnitude of the two quantities. We therefore propose that macroscopic Hall-effect analysis, as applied here, provides a robust and versatile method for the comparative quantification of the electrical performance of conductive domain walls in both LNO and many other materials. Moreover, photo-induced Hall measurements might gain even more interest, especially for realizing nano-optoelectronic circuits.
## Acknowledgements
We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) through the joint DFG-ANR project TOPELEC (EN 434/41-1 and ANR-18-CE92-0052-1), the CRC 1415 (ID: 417590517), the FOR 5044 (ID: 426703838; [https://www.for5044.de](https://www.for5044.de)), as well as through the Würzburg-Dresden Cluster of Excellence on "Complexity and Topology in Quantum Matter" - ct.qmat (EXC 2147, ID: 39085490). This work was supported by the Light Microscopy Facility, a Core Facility of the CMCB Technology Platform at TU Dresden.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request. |
2309.03899 | The Making and Breaking of Camouflage | Not all camouflages are equally effective, as even a partially visible
contour or a slight color difference can make the animal stand out and break
its camouflage. In this paper, we address the question of what makes a
camouflage successful, by proposing three scores for automatically assessing
its effectiveness. In particular, we show that camouflage can be measured by
the similarity between background and foreground features and boundary
visibility. We use these camouflage scores to assess and compare all available
camouflage datasets. We also incorporate the proposed camouflage score into a
generative model as an auxiliary loss and show that effective camouflage images
or videos can be synthesised in a scalable manner. The generated synthetic
dataset is used to train a transformer-based model for segmenting camouflaged
animals in videos. Experimentally, we demonstrate state-of-the-art camouflage
breaking performance on the public MoCA-Mask benchmark. | Hala Lamdouar, Weidi Xie, Andrew Zisserman | 2023-09-07T17:58:05Z | http://arxiv.org/abs/2309.03899v1 | # The Making and Breaking of Camouflage
###### Abstract
Not all camouflages are equally effective, as even a partially visible contour or a slight color difference can make the animal stand out and break its camouflage. In this paper, we address the question of what makes a camouflage successful, by proposing three scores for automatically assessing its effectiveness. In particular, we show that camouflage can be measured by the similarity between background and foreground features and boundary visibility. We use these camouflage scores to assess and compare all available camouflage datasets. We also incorporate the proposed camouflage score into a generative model as an auxiliary loss and show that effective camouflage images or videos can be synthesised in a scalable manner. The generated synthetic dataset is used to train a transformer-based model for segmenting camouflaged animals in videos. Experimentally, we demonstrate state-of-the-art camouflage breaking performance on the public MoCA-Mask benchmark.
## 1 Introduction
Camouflage has long been a subject of interest and fascination for the scientific community, especially evolutionary biologists who consider it as an excellent example of species adaptation. In order to confuse predators or to hide from prey, and increase their chances of survival in their natural habitat, animals have developed numerous camouflage mechanisms, _e.g_., disruptive coloration and background matching. Some species have even evolved to develop an adaptive camouflage, _e.g_., an arctic fox loses its white fur to better match the brown grey of the new season's landscape. Perhaps the most dramatic camouflage adaptation is the cuttlefish; it changes its patterns dynamically and rapidly as it moves from one spot to the other, constantly adapting and improving its camouflage.
This search for optimal camouflage has inspired numerous works in the computer vision community, such as [12, 27], that tackled camouflage as a problem of optimal texture synthesis, making 3D objects non-detectable in a given scene. Others have addressed camouflage as a highly challenging object segmentation task in [9, 15, 20, 23]. Efforts have been made in collecting large-scale camouflage datasets [9, 20] with costly annotation. In fact, camouflaged animals often exhibit complex shapes and thin structures that add to the boundary ambiguity and make the annotation highly time-consuming. Fan _et al_. report up to \(60\) min per image to provide accurate pixel-level annotations for their dataset COD10K [9]. Another line of research turned to camouflage breaking in sequences by taking advantage of motion cues [1, 5, 18, 19, 40], but camouflage data scarcity is even more extreme for videos. Recently, Sim2Real [18, 39] training has been shown to be very effective for motion segmentation. By training on the optical flows of synthetic videos, these models can generalise to real videos without suffering from domain gaps.
In this paper, we start by asking the question: "**what are the properties of a successful camouflage?**" To answer this question, we investigate three scoring functions for quantifying the effectiveness of camouflage, namely, the reconstruction fidelity score (\(S_{R_{f}}\)), the boundary score (\(S_{b}\)) and the intra-image Frechet score (\(d_{F}\)). These scores later serve two critical roles: (i) they are used to assess the relevance of existing camouflage datasets and act as a quality indicator when collecting new camouflage data; (ii)
Figure 1: The three images depict the same animal, an Arctic fox, as it adapts its appearance to better blend with the changing landscape of the new season. While images \(a\) and \(c\) exhibit better background similarity than image \(b\), the fox boundary is more visible in image \(a\) than image \(c\). We assess the effectiveness of camouflage by measuring the degree of ambiguity it creates with respect to its background.
they can be used as a proxy loss for image and video inpainting/generation, where we establish a synthetic camouflage data generation pipeline with a Generative Adversarial Network (GAN), that can simultaneously generate high-quality camouflage examples and masks of the camouflaged animals. We further train a Transformer-based architecture on the generated camouflage video sequences, and demonstrate a performance boost over training on only the (small scale) real data.
In summary, we make the following contributions: _First_, we introduce three scoring functions to measure the effectiveness of a given camouflage in still images and videos. We use these camouflage scores to rank the images in existing datasets in terms of camouflage success, and also show that the rankings are largely in accordance with human-produced rankings. _Second_, we incorporate the camouflage score into a generative model, establishing a scalable pipeline for generating high-quality camouflage images or videos, along with the pixelwise segmentation mask of the camouflaged animals; _Third_, we show that a Transformer-based model trained on the synthetically generated data can achieve state-of-the-art performance on the MoCA-Mask video camouflaged animal segmentation benchmark.
## 2 Related work
Camouflage Evaluation.Although there are no previous computational works that directly assess camouflage, as far as we know, there have been a number of human studies. Previous works proposed methods to evaluate camouflage by analysing the human viewers' eye movements [22, 37] or conducting subjective perceptual experiments [12, 27, 32]. In [12, 27], participants were asked to indicate the camouflaged object and their answers were analysed in terms of accuracy and time needed per image. Similarly, Skurowski _et al._[32] asked human volunteers to rate CHAMELEON [32] images from 1 to 5 in terms of camouflage effectiveness. They produced a score after compensating for personal bias. We validate our proposed camouflage scoring functions by comparing our rankings to these human-based rankings.
**Motion Segmentation.** The goal of this task is to partition the frames of a video sequence into background and independently moving objects. Early approaches focused on clustering motion patterns by grouping pixels with similar trajectories [3, 17, 21, 24, 25, 26, 31]. Another line of research tackled the problem by compensating for the background motion [1, 2, 19], via registering consecutive frames, or explicitly estimating camera motion. More recently, in [40], a Transformer-like architecture is used to reconstruct the input flows, and the segmentation masks are generated as a side product, while [6] decomposes the flow field into regions by fitting affine transformations. The works most closely related to ours are [18, 39], where the authors adopt a Sim2Real strategy by training the model on optical flow computed from synthetic videos. These models can generalise to real videos without fine-tuning. In this paper, we train a model on synthetic camouflage _RGB_ sequences.
**Frechet Inception Distance and its variants.** Also known as the Wasserstein-2 distance [38], the Frechet distance (FD) is a metric quantifying the difference between two probability distributions [10]. Recently, [13] introduced the Frechet Inception Distance (FID) in the context of generative models. Under the assumption that real images and generated images follow Gaussian distributions, they compute the FD of the two distributions. Note that here, each image is represented by a vector obtained from the last pooling layer of InceptionV3 [35]. Following their steps, [30] adapted FID to the image re-targeting task and introduced the Single Image Frechet Inception Distance (SIFID). Instead of comparing entire image datasets, they compare two images and consider the distributions of their feature vectors obtained from an earlier layer of InceptionV3 [35]. Recently, [7] included FID in the training objective and introduced Frechet-GAN. Our work takes inspiration from [7, 30] and investigates a FID of regions within the same image as an auxiliary loss.
## 3 Measuring the effectiveness of camouflage
Assuming there exists a dataset, \(\mathcal{D}=\{(I_{1},m_{1}),\ldots,\)\((I_{N},m_{N})\}\), where \(I_{i}\in\mathbb{R}^{H\times W\times 3}\) refers to the images, and \(m_{i}\in\{0,1\}^{H\times W\times 1}\) denotes a binary segmentation mask of the camouflaged animal. Here, we aim to design a scoring function that takes the image and segmentation mask as input, and outputs a scalar value \(S\) that can quantify the effectiveness of camouflage, _i.e._, how successfully an animal blends into its background, termed as the camouflage score,
\[S:(I_{i},m_{i})\longmapsto s_{i}\in[0,1]\]
specifically, for \(i\neq j\) and \(s_{i}<s_{j}\), the animal in \(I_{j}\), indicated by the mask \(m_{j}\), is more concealed than the animal in \(I_{i}\), indicated by the mask \(m_{i}\). Having such a scoring function enables various applications: _first_, it allows ranking the images of a dataset in terms of camouflage effectiveness and generating camouflage-relevant statistics for the dataset; _second_, it can serve as the objective for low-level image generation tasks, for example image inpainting, editing, etc.
In the following sections, we investigate three such scoring functions, specifically, we exploit the animal's mask and quantify the key aspects that determine the camouflage success both perceptually (Sec. 3.1) and probabilistically (Sec. 3.2). Note that, we take into account the local aspect of camouflage, _i.e._, instead of processing the entire background region, we only focus on the immediate
surround of the animal. Hence, when referring to background, we mean the local background and only consider the cropped images centred around the object.
### Perceptual camouflage score
In this section, we define the camouflage score through a perceptual lens, specifically, we measure the foreground and background similarity (Sect. 3.1.1), along with the boundary visibility (Sect. 3.1.2).
#### 3.1.1 Reconstruction fidelity score
We attempt to reconstruct the foreground region with patches from the background, and propose a score to quantitatively measure the discrepancy between the original image and its reconstruction. Intuitively, for successful camouflage, we should be able to reconstruct the foreground perfectly by copying patches from its close neighborhood.
Formally, for a given image (\(I\)) and segmentation mask (\(m\)) (subscripts are omitted for simplicity), we consider a trimap separation and define the foreground and background regions using morphological erosion and dilation of the mask,
\[I_{\text{fg}},\;I_{\text{bg}}=m_{\text{fg}}\odot I,\;m_{\text{bg}}\odot I \tag{1}\]
where \(m_{\text{fg}}=\text{erode}(m)\) and \(m_{\text{bg}}=1-\text{dilate}(m)\).
To reconstruct the foreground region, we replace each foreground patch with the closest one in the background. Here a patch is an \(n\times n\) pixel region (\(n=7\) in our case, with an overlap of 3), and the patchwise similarity is computed by exploiting low-level perceptual similarity, _i.e_., comparing corresponding RGB values. This reconstruction method is inspired by texture generation and inpainting approaches, such as [8]. In practice, we implement it with fast approximate nearest neighbor search for efficiency.
The reconstruction fidelity score is computed by assessing the difference between the foreground region and its reconstruction; specifically, we count the number of foreground pixels that have been successfully reconstructed from the background:
\[S_{R_{f}}(I,m)=\frac{1}{N_{\text{fg}}}\sum_{(i,j)\in I_{\text{fg} }}R_{f}(i,j) \tag{2}\] \[R_{f}(i,j)=\begin{cases}1&\text{if }||I_{\text{fg}}(i,j)-\Psi_{I_{ \text{bg}}}(I_{\text{fg}})(i,j)||_{2}<\lambda||I_{\text{fg}}(i,j)||_{2}\\ 0&\text{otherwise}\end{cases} \tag{3}\]
\(\Psi_{I_{\text{bg}}}(.)\) denotes the reconstruction operation, \(N_{\text{fg}}=|m_{\text{fg}}|\) is the total number of pixels in the foreground region, and \(\lambda\) is a threshold (\(\lambda\)=0.2 in our case). A higher \(S_{R_{f}}\) score means that the animal's visual attributes are well represented in the background. Conversely, a low \(S_{R_{f}}\) indicates that the animal's appearance is composed of unique features that make it stand out from its surrounding. In Fig. 2, we present examples of camouflage evaluation by reconstruction fidelity.
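A minimal sketch of this score, assuming numpy/scipy only, is given below. It works at patch level (a foreground patch counts as reconstructed when its nearest background patch lies within the \(\lambda\)-threshold), which approximates the pixel count of eq. (2); a KD-tree stands in for the approximate nearest-neighbour search, fixed erosion/dilation iterations replace the adaptive kernels of Sec. 6.2, and all helper names are ours.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation
from scipy.spatial import cKDTree

def extract_patches(img, mask, n=7, stride=4):
    """Collect flattened n x n patches whose centers lie inside `mask`."""
    H, W, _ = img.shape
    half = n // 2
    patches = []
    for i in range(half, H - half, stride):       # stride 4 = overlap 3
        for j in range(half, W - half, stride):
            if mask[i, j]:
                patches.append(img[i-half:i+half+1, j-half:j+half+1].ravel())
    return np.asarray(patches, dtype=np.float32)

def s_rf(img, m, n=7, lam=0.2, iters=3):
    """Reconstruction fidelity score; img is float RGB, m a boolean mask."""
    m_fg = binary_erosion(m, iterations=iters)    # trimap of eq. (1)
    m_bg = ~binary_dilation(m, iterations=iters)
    fg = extract_patches(img, m_fg, n)
    bg = extract_patches(img, m_bg, n)
    if len(fg) == 0 or len(bg) == 0:
        return 0.0
    # Nearest-neighbour reconstruction of each foreground patch from the
    # background, then the thresholded count of eqs. (2)-(3).
    dist, _ = cKDTree(bg).query(fg, k=1)
    norm = np.linalg.norm(fg, axis=1) + 1e-8
    return float(np.mean(dist < lam * norm))
```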
#### 3.1.2 Boundary visibility score
With the trimap representation, we can easily extract the boundary region, \(I_{\text{b}}=m_{\text{b}}\odot I\), where \(m_{\text{b}}=(1-m_{\text{bg}})-m_{\text{fg}}\). Here, we aim to measure the animal's boundary properties, _e.g_., contour visibility, as they can also provide visually evident cues for camouflage breaking. In particular, we adopt an off-the-shelf contour extraction method [29] and run it on the original image and the foreground mask, to generate the image's contours (\(C\)) and the ground-truth animal contours (\(C_{\text{gt}}\)), as shown in Fig. 3.
We express the agreement between predicted contours \(C\) and ground truth contours \(C_{\text{gt}}\) over the boundary region with the F1 metric:
\[S_{b}(I,m)=1-\text{F1}(m_{b}\odot C_{\text{gt}},\;m_{b}\odot C) \tag{4}\]
This score penalises the boundary pixels that are predicted as contour in both \(C\) and \(C_{\text{gt}}\). While we do not expect a
Figure 3: Trimap regions and animal contours. From left to right: Input image; Trimap partition (foreground region in yellow, background in purple, and boundary region in green); Contours \(C\) computed on the original image; and ground truth Contours \(C_{gt}\). The top example shows less contour agreement along the boundary region (\(S_{b}\)=0.72) than the bottom example (\(S_{b}\)=0.40).
Figure 2: The reconstruction fidelity score \(S_{R_{f}}\) evaluates the similarities between the original foreground \(I_{fg}\) and its reconstruction from background features \(\Psi_{bg}(I_{fg})\). The top example shows a case where the animal exhibits different visual patterns (color) from its background (\(S_{R_{f}}\)=0.11), while the bottom example shows a better background matching (\(S_{R_{f}}\)=0.82).
perfect contour agreement for visible edges, we consider \(S_{b}\) a reasonable approximation, given the shape and size of the boundary region as a thin margin centred around the animal. If a boundary pixel is predicted as a contour in \(C_{\text{gt}}\) but not in \(C\), the animal's edge is not visible in the original image. If a pixel is predicted as a contour in \(C\) but not in \(C_{\text{gt}}\), this indicates the presence of distracting elements, such as complex texture patterns on the foreground animal, which also affect the visibility of the animal's contour and therefore improve its camouflage. A perfectly camouflaged animal would show no contour agreement along the boundary, yielding \(S_{b}=1\).
To get the final perceptual camouflage score, we linearly combine both reconstruction fidelity score and boundary visibility score:
\[S_{\alpha}=(1-\alpha)S_{R_{f}}+\alpha S_{b} \tag{5}\]
We will describe our method for setting the weighting parameter \(\alpha\) in the Experiment section (Sec. 6).
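The boundary and combined scores reduce to a few lines; the sketch below assumes the binary contour maps \(C\) and \(C_{\text{gt}}\) have already been produced by a contour extractor (we do not reimplement [29] here) and uses \(\alpha=0.35\), the value later selected in Sec. 6.3.1. All function names are ours.

```python
import numpy as np

def f1_score(gt, pred):
    """F1 between two binary maps."""
    tp = np.logical_and(gt, pred).sum()
    if tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / gt.sum()
    return 2 * precision * recall / (precision + recall)

def s_b(C_gt, C, m_b):
    """Boundary visibility score, eq. (4); all inputs are boolean maps."""
    return 1.0 - f1_score(m_b & C_gt, m_b & C)

def s_alpha(s_rf_val, s_b_val, alpha=0.35):
    """Combined perceptual score, eq. (5)."""
    return (1 - alpha) * s_rf_val + alpha * s_b_val
```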
### Probabilistic scoring function
In addition to using the low-level RGB representation, in this section we consider the pixelwise representation in feature space, and propose a differentiable metric that compares the probability distributions of the foreground and background regions. This metric acts as a proxy for the score \(S_{\alpha}\) described in the previous section and can be used directly as a differentiable loss function in image generation.
We consider the **Intra-Image Frechet Inception Distance**, specifically, we compute the feature maps for the foreground and background image regions:
\[f_{fg},\;f_{bg}=\Phi_{\text{v}}(I_{fg}),\;\Phi_{\text{v}}(I_{bg}) \tag{6}\]
Here \(\Phi_{\text{v}}(\cdot)\) denotes a pre-trained Inception network [34, 35], truncated at early layers. We take inspiration from [30], and adapt the single image Frechet distance to measure the deviation between feature distributions of different regions within the same image. We adopt the Frechet hypothesis with respect to our regions, _i.e._, the features of the foreground and background follow multivariate Gaussian distributions: \(f_{\text{fg}}\sim\mathcal{N}(\mu_{\text{fg}},\Sigma_{\text{fg}})\) and \(f_{\text{bg}}\sim\mathcal{N}(\mu_{\text{bg}},\Sigma_{\text{bg}})\). The intra-image Frechet distance can be formulated as follows:
\[d_{\mathcal{F}}^{2}(I,m)=||\mu_{\text{fg}}-\mu_{\text{bg}}||_{2}^{2}+Tr( \Sigma_{\text{fg}}+\Sigma_{\text{bg}}-2(\Sigma_{\text{fg}}\Sigma_{\text{bg}} )^{1/2})\]
**Note that**, when \((I,m)\) is the output from a generative model, \(d_{\mathcal{F}}^{2}\) is differentiable with respect to its parameters, and can therefore be used as an auxiliary loss for optimising the image generation procedure, _i.e._, generate effective camouflage examples.
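As a reference sketch, the distance can be computed directly from the two feature sets; `f_fg` and `f_bg` are assumed to be (N, C) arrays of truncated-Inception activations for the two regions, and scipy's (non-differentiable) `sqrtm` stands in for the matrix square root.

```python
import numpy as np
from scipy.linalg import sqrtm

def intra_image_frechet(f_fg, f_bg):
    """Frechet distance between the Gaussian fits of two feature sets."""
    mu_f, mu_b = f_fg.mean(0), f_bg.mean(0)
    cov_f = np.cov(f_fg, rowvar=False)
    cov_b = np.cov(f_bg, rowvar=False)
    covmean = sqrtm(cov_f @ cov_b)
    if np.iscomplexobj(covmean):      # discard tiny imaginary residue
        covmean = covmean.real
    diff = mu_f - mu_b
    return diff @ diff + np.trace(cov_f + cov_b - 2.0 * covmean)
```

Inside the generator loss of Sec. 4, this `sqrtm` call would be replaced by a differentiable routine such as the Newton-Schulz iteration mentioned in Sec. 6.2.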
## 4 Generating camouflage video sequences
In this section, we propose a scalable pipeline for generating images with concealed animals. In particular, we incorporate the differentiable intra-image Frechet distance in image generation, explicitly optimising the camouflage score of the image. We first detail the static image generation framework, then describe how we use the images it generates to create camouflage video sequences.
### Camouflage image generation
In order to generate the image with a camouflaged animal, and its corresponding segmentation mask, we adopt a Generative Adversarial Network (GAN), where a generator is fed with a latent vector \(z\sim\mathcal{N}(0,1)\) and learns to predict the pair of realistic image and segmentation mask, such that a discriminator cannot differentiate it from real samples.
Let \((x_{i},m_{i})\) denote an image and segmentation pair sampled from the training dataset; a subset of COD10K [9] is used here. We adopt the conventional label notation, _i.e._\(y=1\) for (real) examples sampled from the training dataset and \(y=0\) for (fake) examples from the generator, and consider the generative adversarial loss functions [11]:
\[\mathcal{L}_{D} =E_{(x_{i},m_{i})}[-\log(D(x_{i},m_{i}))]+E_{z}[-\log(1-D(G(z)))]\] \[\mathcal{L}_{G} =E_{z}[-\log(D(G(z)))]\]
To enforce coherence between images and their segmentation masks, we feed the discriminator with additional fake pairs, consisting of real images coupled with unpaired real masks. This can be formulated as an additional coherence loss term that minimises the probability of assigning incorrect labels, _i.e._\(y=1\), to \((x_{i},m_{j})\) sampled from the training set such that \(i\neq j\):
\[\mathcal{L}_{\text{Coh}}=E_{(x_{i},m_{j})_{i\neq j}}[-\log(1-D(x_{i},m_{j}))] \tag{7}\]
To increase the camouflage effectiveness in the generated examples, we adopt the intra-image Frechet distance in the generator loss, as an auxiliary loss term, and train our camouflage image generator with the following loss:
\[\mathcal{L}_{\tilde{D}}=\mathcal{L}_{D}+\mathcal{L}_{\text{Coh}}\;\;\;\;\; \mathcal{L}_{\tilde{G}}=\mathcal{L}_{G}+\beta d_{\mathcal{F}}^{2} \tag{8}\]
We present an overview of our data generation framework in Fig. 4.
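A condensed PyTorch sketch of the two update steps is shown below, under the assumptions that \(G\) maps a latent \(z\) to a 4-channel tensor (RGB image plus mask channel), that \(D\) ends in a sigmoid, and that `frechet_loss` implements the differentiable \(d_{\mathcal{F}}^{2}\) of Sec. 3.2; `m_shuffled` denotes real masks permuted within the batch so that image-mask pairs are mismatched. All function names are ours.

```python
import torch
import torch.nn.functional as F

def d_step(D, G, x, m, z, m_shuffled):
    """Discriminator loss of eq. (8), left: L_D + L_Coh."""
    real = D(torch.cat([x, m], dim=1))
    fake = D(G(z).detach())
    # Coherence term (eq. 7): real images paired with mismatched real
    # masks must also be rejected.
    incoh = D(torch.cat([x, m_shuffled], dim=1))
    return (F.binary_cross_entropy(real, torch.ones_like(real))
            + F.binary_cross_entropy(fake, torch.zeros_like(fake))
            + F.binary_cross_entropy(incoh, torch.zeros_like(incoh)))

def g_step(D, G, z, frechet_loss, beta=0.1):
    """Generator loss of eq. (8), right: L_G + beta * d_F^2."""
    out = G(z)                            # concatenated image + mask
    fake = D(out)
    return (F.binary_cross_entropy(fake, torch.ones_like(fake))
            + beta * frechet_loss(out))   # auxiliary camouflage term
```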
### Camouflage video generation
Given a camouflage image and corresponding segmentation mask, generated using the method above, we can create video sequences by applying different motions to the foreground and background, in a similar manner to [18]. Specifically, we first inpaint the backgrounds with an off-the-shelf model proposed by Suvorov _et al._[33] and overlay the synthetic animal (extracted from the original generated image
by using the mask) at a random location within the image in the first frame, then following a random translational trajectory in the subsequent frames. We apply a different translational motion to the background and include random sub-sequences where the foreground and background undergo the same motion, to simulate momentarily static objects. As we train our generator with the intra-image Frechet loss (\(\mathcal{L}_{\mathcal{F}}\)), the generated images exhibit strong similarities between the foreground and the background, and even if the object is placed at a different location from the original, we expect it to remain concealed within its surroundings.
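The compositing step can be sketched as follows, assuming the background `bg` (larger than the animal crop) has already been inpainted with [33] and the animal crop `fg` with its binary mask `m` comes from the generator. The background translation is implemented as a cyclic shift for simplicity, and the momentarily-static sub-sequences mentioned above (both velocities coinciding) are omitted for brevity; all names are ours.

```python
import numpy as np

def make_sequence(bg, fg, m, T=30, max_step=4, seed=0):
    """Composite a camouflaged animal onto an independently moving background."""
    rng = np.random.default_rng(seed)
    H, W, _ = bg.shape
    h, w = m.shape
    pos = rng.integers([0, 0], [H - h, W - w])        # random start location
    v_fg = rng.integers(-max_step, max_step + 1, 2)   # animal velocity
    v_bg = rng.integers(-max_step, max_step + 1, 2)   # background velocity
    frames, masks = [], []
    for t in range(T):
        shift = tuple(int(s) for s in (v_bg * t) % (H, W))
        frame = np.roll(bg, shift, axis=(0, 1))       # translate background
        y, x = np.clip(pos + v_fg * t, 0, [H - h, W - w])
        frame[y:y + h, x:x + w][m] = fg[m]            # paste animal via mask
        gt = np.zeros((H, W), dtype=bool)             # pixelwise ground truth
        gt[y:y + h, x:x + w] = m
        frames.append(frame)
        masks.append(gt)
    return frames, masks
```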
## 5 Learning to break camouflage
In this section, we train a transformer-based architecture on the synthetic dataset and demonstrate its effectiveness for breaking camouflage in real videos, for example, MoCA [19]. We build on two previous architectures, namely the motion segmentation model from [18] and the Search Identification Network from [9]. While the first architecture processes sequences of optical flow, the second takes single RGB images as input and treats them separately. Our proposed model, shown in Fig. 4, takes both the optical flow and RGB frame sequences as input. The flow is computed with RAFT [36] from the RGB sequence, and processed with a motion encoder, followed by a transformer encoder. The RGB sequence is processed with the appearance encoder from [9], pre-trained for framewise camouflage segmentation. Then both streams are aggregated as inputs to a Mask2Former-like [4] transformer decoder and pixel decoder to produce the high-resolution segmentation mask from the motion stream.
**Motion encoder.** To encode the motion cues, we use a light-weight convNet that takes as input a sequence of optical flows, \(\mathbf{I_{m}}\)\(=\)\(\{I_{m_{1}},I_{m_{2}},..,I_{m_{t}}\}\)\(\in\)\(\mathcal{R}^{t\times c_{0}\times h\times w}\) and outputs motion features:
\[\{f_{m_{1}},f_{m_{2}},..,f_{m_{t}}\}=\Phi_{\text{motion}}(\mathbf{I_{m}})\]
where each flow frame is separately embedded.
**Motion transformer encoder.** A transformer encoder takes as input the motion features, concatenated along the sequence, together with learned spatial and temporal positional encodings (Pos\({}_{s}\), Pos\({}_{t}\)), as indicated in Fig 4. Pos\({}_{t}\) specifies the frame, and Pos\({}_{s}\) specifies the position in the spatial feature map output of the motion encoder. The output of the transformer is a set of enriched motion features.
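The input assembly for this encoder can be sketched in PyTorch as follows; the feature dimension, spatial size (16x16 over t=5 frames), and numbers of heads and layers are illustrative assumptions, and a vanilla `nn.TransformerEncoder` stands in for the actual block.

```python
import torch
import torch.nn as nn

class MotionSequenceEncoder(nn.Module):
    """Flatten per-frame motion features, add Pos_s/Pos_t, encode jointly."""
    def __init__(self, dim=256, hw=16 * 16, t=5, layers=4, heads=8):
        super().__init__()
        self.pos_s = nn.Parameter(torch.randn(1, 1, hw, dim))  # spatial Pos_s
        self.pos_t = nn.Parameter(torch.randn(1, t, 1, dim))   # temporal Pos_t
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, f_m):                  # f_m: (B, T, C, H, W)
        B, T, C, H, W = f_m.shape
        x = f_m.flatten(3).transpose(2, 3)   # (B, T, HW, C)
        x = x + self.pos_s + self.pos_t      # broadcast both encodings
        x = x.reshape(B, T * H * W, C)       # concatenate along the sequence
        return self.encoder(x)               # enriched motion features
```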
**Appearance encoder.** Here, we adopt a SINet-v2 [9] architecture, that encodes the RGB sequence, \(\mathbf{I_{a}}\)\(=\)\(\{I_{a_{1}},I_{a_{2}},..,I_{a_{t}}\}\)\(\in\)\(\mathcal{R}^{t\times c_{1}\times h\times w}\) into appearance features:
\[\{f_{a_{1}},f_{a_{2}},..,f_{a_{t}}\}=\Phi_{\text{app}}(\mathbf{I_{a}})\]
Again, each RGB frame is processed separately by SINet.
**Transformer decoder.** A transformer-based decoder takes the output of the motion transformer encoder together with the appearance features and a learnable query for the mask embedding. In a similar manner to Mask2Former [4], the query attends to multiple resolutions of the motion features concatenated with the appearance features and produces a mask embedding for the moving object.
**Pixel Decoder.** Similarly to the pixel decoder in Mask2Former, a light-weight convNet decoder is used with skip-connections to recover high-resolution segmentation masks \(\{\hat{m}_{1},\hat{m}_{2},..,\hat{m}_{t}\}\) from the motion features and the mask embedding. This is shown as the blue box in Fig. 4.
**Training objective.** We train the motion segmentation model on our synthetic video sequences using the binary cross-entropy loss \(\mathcal{L}_{BCE}\).
Figure 4: Our framework consists of a synthetic camouflage data generation pipeline (left), where we train a generator \(G\), in a GAN setting, to create camouflage images and masks, while encouraging camouflage effectiveness via minimising \(\mathcal{L}_{\mathcal{F}}\). The generated samples are then transformed into synthetic video sequences (middle), following our method presented in Sec. 4.2. The transformer-based motion segmentation architecture (right), for camouflage breaking in videos, is trained on the synthetic video sequences. The architecture is described in Sec. 5.
## 6 Experiments
In this section, we start by introducing the datasets involved in this paper, followed by the implementation details. Our experiments present a thorough analysis of the proposed camouflage scores and demonstrate their effectiveness in our training framework.
### Datasets
Here we describe the publicly available camouflage datasets that we included in our experiments, as shown in Fig. 5, as well as the synthetic camouflage datasets that we generated.
**CHAMELEON [32]** is one of the first camouflage image datasets. It contains 76 images collected using Google image search with the 'camouflaged animal' keyword and includes ground truth manual annotations.
**CAMO [20].** The Camouflaged Object dataset consists of 2500 images collected from the internet, of which 1250 images (1000 training sub-set and 250 testing sub-set) contain at least one camouflage instance from 8 categories with manual pixelwise annotations provided.
**COD10K [9].** COD10K contains 10,000 images collected from photography websites of which 5066 depict camouflaged animals (3040 training sub-set and 2026 testing sub-set), organised into 10 classes, and 78 sub-classes (69 camouflaged). Note that, in our camouflage evaluation experiments we only used the camouflage subset of this dataset.
**Camouflaged Animals [1].** This dataset consists of 9 video sequences of camouflaged animals from 6 categories.
**MoCA [19]** is the first large-scale camouflage video dataset. It contains 141 video sequences (37K frames) totalling 67 animal categories. Recently, other works have curated this dataset by selecting only the videos with more prominent motion (locomotion) [40], and others have provided dense pixel annotation in MoCA-Mask [5]. We use the latter version in our experiments.
**Camouflaged cuboids [12, 27].** This dataset was created for texture synthesis and consists of multiple-view scenes, where cuboids were placed at a predefined location then synthetically camouflaged. In our evaluation, we consider the 4-views generated textures [12] from 36 scenes as well as the cuboids masks.
**Synthetic Camouflage Images.** Using the method described in Sec. 4.1, we generate a synthetic camouflage dataset of \(5K\) images, by discriminating against real camouflage images and annotation masks from COD10K.
**Synthetic Camouflage Videos.** We generate \(1K\) sequences of \(30\) frames each, incorporating static sequences using the framework described in Sec. 4.2. We split these into \(800\) sequences for training and \(200\) for testing.
### Implementation details
In our experiments, for a given camouflage example, the kernels for the morphological operations are selected from a range of values [1, 10], so that the resulting annotation mask
Figure 5: **Randomly sampled examples from the camouflage datasets included in our work**. For the video datasets, Camouflaged Animals and MoCA-Mask, we show an example sequence. For the multiple-view dataset Camouflaged cuboids (bottom), we show example views from a scene with the 4-view texture synthesis method from [12].
is reduced by 20% for erosion and extended by 20% for dilation. This allows the erosion/dilation to be adapted to the size of the camouflaged animal, while always keeping a reasonably large region for addressing the boundary effects.
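A sketch of this adaptive selection, assuming scipy morphology with square structuring elements, is given below; the \(\pm\)20% area targets and the [1, 10] search range follow the text, while the helper names are ours.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def pick_kernel(op, m, target_area, k_range=range(1, 11)):
    """Kernel size in [1, 10] whose result is closest to the target area."""
    return min(k_range,
               key=lambda k: abs(op(m, np.ones((k, k))).sum() - target_area))

def adaptive_trimap(m, shrink=0.2):
    """Trimap with mask area reduced/extended by ~20% (Sec. 6.2)."""
    area = m.sum()
    k_e = pick_kernel(binary_erosion, m, (1 - shrink) * area)
    k_d = pick_kernel(binary_dilation, m, (1 + shrink) * area)
    m_fg = binary_erosion(m, np.ones((k_e, k_e)))
    m_bg = ~binary_dilation(m, np.ones((k_d, k_d)))
    m_b = ~(m_fg | m_bg)          # boundary band, cf. Sec. 3.1.2
    return m_fg, m_bg, m_b
```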
For our synthetic camouflage image generation, we adopt a Style-GAN architecture [41] and train it on the camouflage images from COD10K to generate \(256\times 256\) image and mask pairs. We use \(\beta=0.1\) to weight our intra-image Frechet auxiliary loss. For numerical stability, we adopt the Newton-Schulz iteration when calculating the matrix square root term in the intra-image Frechet distance, with \(T=100\) iterations.
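For reference, a standard Newton-Schulz sketch in PyTorch is shown below; it returns \(Y\) with \(Y\cdot Y\approx A\) after pre-scaling \(A\) by its Frobenius norm so that the iteration converges and, unlike scipy's `sqrtm`, it remains differentiable for use inside \(\mathcal{L}_{\mathcal{F}}\).

```python
import torch

def sqrtm_newton_schulz(A, T=100):
    """Differentiable matrix square root via the coupled NS iteration."""
    n = A.shape[0]
    norm = torch.norm(A)                  # Frobenius norm, for convergence
    Y = A / norm
    I = torch.eye(n, dtype=A.dtype, device=A.device)
    Z = I.clone()
    for _ in range(T):
        W = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ W, W @ Z               # simultaneous update
    return Y * torch.sqrt(norm)           # undo the pre-scaling
```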
When generating the image sequences, we compute optical flows with RAFT [36], and train our moving camouflaged animal segmentation model. We use batches of size 2 and sequences of size 5. We first train on our synthetically generated dataset with a learning rate of \(5\times 10^{-4}\) for 500 iterations then fine-tune on the training subset of MoCA-Mask.
### Results
This section presents qualitative and quantitative results, to demonstrate the effectiveness of our score functions and the benefit of including \(d_{\mathcal{F}}^{2}\) in the training loop.
#### 6.3.1 Evaluation on camouflage effectiveness
**Ranking camouflage images in COD10K.** With our proposed scoring functions, we can rank the images based on the effectiveness of the animal's camouflage. In the left part of Fig. 6, we show the four best-scored images from the large-scale COD10K dataset. We can make the following observations: (i) the best examples with respect to \(S_{R_{f}}\) (top), which favours the **background matching** of the animals without taking the boundary region into account, all exhibit visible boundaries, with the exception of image (a). This is especially noticeable in the rabbit example (d) along the ears and shadow regions; (ii) the best four examples with respect to the **boundary score**\(S_{b}\) are all from the caterpillar subclass, mostly the baron type. These insects have thin, transparent boundaries, which makes them the perfect candidates for this score. However, with the exception of (a) and (c), the animals still stand out, as they exhibit colors and patterns that are not present in their background; (iii) \(S_{\alpha}\), **combining both scores**, selects highly effective camouflage examples. In contrast, the right part of Fig. 6 shows the lowest-scored images for each approach. For \(S_{R_{f}}\), we find examples with low background matching in the first row, including two ant examples, _i.e._, (e) and (f). The boundary score penalises examples with high contrast between the animal and its background, which results in more visible contours. This is also the case for the lowest-scored \(S_{\alpha}\) examples.
**Dataset level camouflage comparison.** We compute the camouflage score for all the camouflage image and video datasets and report the results in Tab. 1. For the image datasets, the dataset level score is computed as the mean of the image scores; for the video dataset the score is first computed per frame, and per-video average computed, then the dataset level score is the mean of the averages. Our experiments show that MoCA-Mask [5, 19] contains the most successful camouflages according to our scoring. While COD10K [9] subsets are balanced in terms of camouflage effectiveness, we find that, for CAMO [20], the test subset yields higher scores than the training subset and therefore contains more challenging examples.
We note that the Camouflaged cuboids dataset yields higher \(S_{R_{f}}\) and \(S_{\alpha}\) scores than our synthetic datasets. This is due to the fact that the model used in [12, 27] is only trained to produce an optimal texture for a predefined region (cuboid) at a particular location of a given scene. In contrast, our model learns to output a _new_ image with a randomly located camouflaged object of random shape, thus offering more scalability and diversity than the cuboids dataset, which cannot be used to train a model for breaking real camouflage.
**Comparison to human-produced rankings.** We compare our rankings to the human scoring systems based on the ratings of CHAMELEON [32] and the time-to-find for Camouflaged cuboids [12]. To search for the optimal \(\alpha\) parameter in Equation (5), we select \(15\) images from CHAMELEON and compare their \(S_{\alpha}\) ranking with the ground truth using the Kendall-\(\tau\) metric [16]. For \(\alpha=0.35\), we obtain Kendall-\(\tau=0.51\) on this validation set, which we exclude from the test results reported in Tab. 2. We can draw the following observations: (i) the boundary score \(S_{b}\) produces more agreement with the ground-truth ranking than \(S_{R_{f}}\), suggesting that human observers tend to pay more attention to contour visibility than to background matching; (ii) when comparing to the ranking from Camouflaged cuboids, we found negative correlations for \(S_{R_{f}}\) and \(d_{\mathcal{F}}\); we conjecture that this may be due to the nature of the dataset in [12], _i.e._, all the textured cuboids are synthetically generated with very high background matching; (iii) for both experiments, we obtain the strongest agreement with the \(S_{\alpha}\) score combining both background similarity and boundary blending.
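The ranking comparison itself reduces to a one-liner with scipy; the arrays below are hypothetical placeholders, not values from either dataset.

```python
from scipy.stats import kendalltau

# Hypothetical human ratings (1-5, higher = better camouflage) and our scores.
human_rating = [2.5, 4.1, 3.3, 1.2, 2.0]
s_alpha_scores = [0.62, 0.81, 0.55, 0.41, 0.70]
tau, p_value = kendalltau(human_rating, s_alpha_scores)
print(f"kendall-tau = {tau:.2f} (p = {p_value:.2f})")
```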
**Further analysis of the synthetic datasets.** We report the FID [13] and IS [30] metrics in Table 4. These metrics assess: (i) the similarity between the set of synthetic images output by the generator and the set of real images used to train it (FID); and (ii) the clarity of the object and the diversity of images in terms of classes (IS). While they are not intended for measuring camouflage success, which is the focus of this work, IS could be somewhat (inversely) linked to \(S_{b}\), as object clarity and boundary visibility are closely related. Note that the object clarity (IS) decreases with \(\mathcal{L_{F}}\), which is the intended purpose of improving camouflage; however,
this effect is not maintained in the sequence generation as the animal changes location within its background.
#### 6.3.2 Model ablations
**On the effectiveness of \(\mathcal{L_{F}}\).** Fig. 7 shows generated samples from the GAN pipeline, trained with and without the intra-image Frechet distance. Our generator is able to produce realistic object masks of complex shapes and thin structures, similar to those usually encountered in camouflage datasets. The produced images exhibit rich backgrounds with realistic nature-like elements mimicking the rocks and coral structures present in the real camouflage data. Adding the \(\mathcal{L_{F}}\) loss results in images that blend better into the background, both qualitatively and quantitatively, as shown by the \(S_{\alpha}\) computed in Tab. 1 for the datasets generated with and without the intra-image Frechet distance. We refer the reader to the supplementary material for more examples of generated samples.
**On the architecture design and gain from the synthetic dataset.** We present in Tab. 3 an ablation study detailing the gain from the main components of our architecture, as well as the benefit of first training on our synthetic dataset and then fine-tuning on MoCA-Mask (S+MM, model H), as opposed to training only on MoCA-Mask (MM, model G).
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \hline Datasets & Data type & \(S_{R_{f}}\uparrow\) & \(S_{b}\uparrow\) & \(S_{\alpha}\uparrow\) & \(d_{\mathcal{F}}^{2}\downarrow\) \\ \hline CHAMELEON [32] & Image & 0.694 & 0.445 & 0.607 & **0.70** \\ CAMO Train [20] & Image & 0.672 & 0.451 & 0.595 & 1.01 \\ CAMO Test [20] & Image & 0.683 & 0.470 & 0.608 & 1.00 \\ COD10K Train [9] & Image & 0.655 & 0.433 & 0.577 & 0.90 \\ COD10K Test [9] & Image & 0.657 & 0.431 & 0.578 & 0.90 \\ Camouflaged Animals [1] & Video & 0.674 & **0.536** & 0.626 & 1.60 \\ MoCA-Mask Train [5, 19] & Video & **0.850** & 0.443 & **0.707** & 1.14 \\ MoCA-Mask Test [5, 19] & Video & 0.733 & 0.464 & 0.639 & 2.51 \\ \hline Camouflaged cuboids [12, 27] & Multi-view & **0.894** & 0.433 & **0.733** & 6.2 \\ Syn. Camouflage w.o. \(\mathcal{L_{F}}\) & Image & 0.608 & 0.432 & 0.546 & 1.36 \\ Syn. Camouflage w. \(\mathcal{L_{F}}\) & Image & 0.679 & **0.447** & 0.598 & **1.13** \\ Syn. Camouflage Video & Video & 0.658 & 0.430 & 0.578 & 1.18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of the proposed scores on natural camouflage datasets (top) and synthetically generated camouflage datasets (bottom). We report the mean scores for the single image datasets. For the video and multiple view datasets, we compute the mean per sequence and scene respectively. _Syn. Camouflage w. \(\mathcal{L_{F}}\)_ refers to the synthetic image dataset that we generated while minimising \(\mathcal{L_{F}}\) and _Syn. Camouflage Video_ its sequence version.
\begin{table}
\begin{tabular}{l|l|c c c|c c c c} \hline \hline Model & Training dataset & Appearance Encoder & Transformer Encoder & Aggregation & mIoU\(\uparrow\) & \(F\uparrow\) & \(E\uparrow\) & MAE\(\downarrow\) \\ \hline A & S & & & & 14.5 & 20.4 & 57.3 & 9.4 \\ B & MM & & & & 15.3 & 20.6 & 57.3 & 5.1 \\ C & S+MM & & & & 16.0 & 21.8 & 59.8 & 4.3 \\ D & S+MM & ✓ & & & 20.5 & 22.6 & 59.8 & 3.8 \\ E & S+MM & ✓ & ✓ & & 23.0 & 23.5 & 57.0 & **1.6** \\ F & S+MM & ✓ & & ✓ & 22.7 & 23.2 & 58.8 & 3.6 \\ G & MM & ✓ & ✓ & ✓ & 23.4 & 24.7 & 62.0 & 2.4 \\ H & S+MM & ✓ & ✓ & ✓ & **30.8** & **34.3** & **77.0** & 1.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on the MoCA-Mask test set: the appearance encoder, the motion transformer encoder, and the transformer aggregation are added in turn, with training on the synthetic dataset (S), MoCA-Mask (MM), or synthetic pre-training followed by fine-tuning (S+MM).
\begin{table}
\begin{tabular}{l|c c} \hline \hline Scores & CHAMELEON [32] & Camouflaged cuboids[12, 27] \\ \hline \(S_{R_{f}}\) & 0.01 & -0.07 \\ \(S_{b}\) & 0.03 & 0.42 \\ \(S_{\alpha}\) & **0.42** & **0.41** \\ \(d_{\mathcal{F}}^{2}\) & 0.10 & -0.17 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Kendall-\(\tau\) metric between rankings produced via our scores and human scoring ground truth.
Figure 6: **Left:** Top-4 scored examples from COD10K for \(S_{R_{f}}\) (top), \(S_{b}\) (middle) and \(S_{\alpha}\) (bottom). **Right:** Lowest-scored examples from COD10K. For each example, we show the obtained score and the corresponding ground truth mask.
#### 6.3.3 Results on MoCA
We train our model on the synthetically generated camouflage sequences and fine-tune on the training set of MoCA-Mask. Tab. 5 presents our camouflaged object segmentation results on the test set of MoCA-Mask. _Ours_ refers to our proposed method with SINet-V2 as the appearance encoder, and _Ours-flow_ refers to the optical-flow-only version of our architecture, excluding the appearance stream. Our main model outperforms RGB- and motion-based methods on MoCA-Mask. Fig. 8 presents qualitative results of our segmentation model. Note that our model is robust to degraded optical flow and static animals.
### Limitations
While our image generation method encourages background matching through minimization of the \(\mathcal{L}_{\mathcal{F}}\) loss term, the sequence generation is not guaranteed to maintain object concealment. A possible solution could be to use the proposed \(S_{\alpha}\) camouflage score to curate the generated sequence dataset and filter out the most visible objects.
Our proposed camouflage scores use the ground truth annotation and analyse the different regions it defines. We found that our scoring system penalises specific cases of camouflage by occlusion, where elements from the background partially occlude the animal and help improve its camouflage. For instance, the grass in example (g) from the last row of Fig. 6 is not treated as part of the animal and is therefore not considered in our background similarity assessment. One can argue that this is due to the ambiguity in the provided annotations; for such cases, extra amodal annotation should also be considered.
## 7 Conclusion
We present three score functions for computationally assessing the effectiveness of camouflage in images and videos. By evaluating the similarity with background and the boundary visibility, our combined perceptual score is strongly correlated with human perceptual ranking systems on two different datasets. We demonstrate that training a generative model with our differentiable camouflage function improves the effectiveness of generated camouflage examples and can be used to generate challenging synthetic camouflage datasets to train models to break camouflage.
## Acknowledgement
We are grateful to Przemyslaw Skurowski for providing the human perception study data on the CHAMELEON dataset and to Andrew Owens, Rui Guo and Oscar de Lima for providing the data for the Camouflaged cuboids dataset. We thank Tengda Han for fruitful discussions. This research is supported by the UK EPSRC funded CDT in Autonomous Intelligent Machines and Systems (AIMS), the EPSRC Programme Grant VisualAI EP/T028572/1, a Schlumberger Studentship, and a Royal Society Research Professorship. WX is supported by the National Key R&D Program of China (No. 2022ZD0161400).
Figure 8: **Qualitative results on MoCA-Mask. From top to bottom: Appearance sequence \(I_{a}\), flow sequence \(I_{m}\), predicted segmentations and ground truth segmentations.**
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \hline Model & RGB Motion & mIoU\(\uparrow\) & \(F\uparrow\) & \(E\uparrow\) & MAE\(\downarrow\) \\ \hline SINet [9] & ✓ & & 20.2 & 23.1 & 69.9 & 2.8 \\ SINet-V2 [9] & ✓ & & 18.0 & 20.4 & 64.2 & 3.1 \\ SegMaR [14] & ✓ & & 12.2 & 22.5 & **80.3** & 4.5 \\ ZoomNet [28] & ✓ & & 18.8 & 28.7 & 70.8 & 2.5 \\ SLTNet [5] & ✓ & ✓ & 27.2 & 31.1 & 75.9 & 2.7 \\ MG [40] & & ✓ & 12.7 & 16.8 & 56.1 & 6.7 \\ Ours-flow & & ✓ & 17.8 & 21.5 & 60.7 & 3.7 \\ Ours & ✓ & ✓ & **30.8** & **34.3** & 77.0 & **1.8** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Results on MoCA-Mask. Ours-flow refers to the optical flow only version of our architecture, excluding the appearance stream.**
Figure 7: **Generated camouflage images and masks: For the examples on the right, the generator was trained with \(\mathcal{L}_{\mathcal{F}}\).** |
2309.16502 | FeGe1-xSbx: a series of novel kagome metals with noncollinear
antiferromagnetism | Kagome metals are important for exploring emergent phenomena due to the
interplay between band topology and electron correlation. Motivated by the
recent discovery of a charge density wave in the kagome lattice antiferromagnet
FeGe, we investigate the impact of Sb doping on the structural, charge and
magnetic order of FeGe. The charge density wave is rapidly suppressed by Sb
doping (~1.5%) and the antiferromagnetic ordering temperature gradually shifts
to 280 K for FeGe0.7Sb0.3. For FeGe1-xSbx with x>0.1, crystal structures with
slightly distorted Fe kagome lattices are formed. Their magnetic anisotropy
changes significantly; temperature-driven spin-reorientation and field-induced
spin-flop transitions are identified from magnetization measurements.
Interestingly, neutron diffraction reveals that noncollinear antiferromagnetic
structures widely exist below TN for all samples with x>0.1. These noncollinear
magnetic orders could possibly be unconventional, resulting from the onsite
repulsion and filling conditions of the kagome flat band, as predicted by a
recent theoretical work. | Jiale Huang, Chenglin Shang, Jianfei Qin, Feihao Pan, Bingxian Shi, Jinchen Wang, Juanjuan Liu, Daye Xu, Hongxia Zhang, Hongliang Wang, Lijie Hao, Peng Cheng | 2023-09-28T15:09:51Z | http://arxiv.org/abs/2309.16502v3 | # FeGe\({}_{1-x}\)Sb\({}_{x}\): a series of novel kagome metals with noncollinear antiferromagnetism
###### Abstract
Kagome metals are important for exploring emergent phenomena due to the interplay between band topology and electron correlation. Motivated by the recent discovery of a charge density wave in the kagome lattice antiferromagnet FeGe, we investigate the impact of Sb doping on the structural, charge and magnetic order of FeGe. The charge density wave is rapidly suppressed by Sb doping (\(\sim\)1.5%) and the antiferromagnetic ordering temperature gradually shifts to 280 K for FeGe\({}_{0.7}\)Sb\({}_{0.3}\). For FeGe\({}_{1-x}\)Sb\({}_{x}\) with x\(\geqslant\) 0.1, crystal structures with slightly distorted Fe kagome lattices are formed. Their magnetic anisotropy changes significantly; temperature-driven spin-reorientation and field-induced spin-flop transitions are identified from magnetization measurements. Interestingly, neutron diffraction reveals that noncollinear antiferromagnetic structures widely exist below T\({}_{N}\) for all samples with x\(\geqslant\)0.1. These noncollinear magnetic orders could possibly be unconventional, resulting from the onsite repulsion and filling conditions of the kagome flat band, as predicted by a recent theoretical work.
## I Introduction
The kagome lattice hosts a peculiar electronic structure with the coexistence of Dirac cones, flat bands and van Hove singularities[1; 2; 3]. In metallic materials with 3\(d\) transition-metal kagome networks, various novel emergent phenomena including superconductivity, magnetism, anomalous Hall effect and charge order have been observed in recent years[4; 5; 6; 7; 8; 9; 10; 11; 12]. Therefore, they have become an important platform to explore correlated quantum states intertwined with topological band structures.
The kagome charge density wave (CDW) has drawn great attention due to its many-body correlations and topological features[1]. It was initially discovered in the kagome superconductors AV\({}_{3}\)Sb\({}_{5}\) (A=K,Cs,Rb) and found to break time-reversal symmetry in the absence of any long-range magnetic order[13]. This CDW order is considered to be unconventional, arising from Fermi surface nesting of van Hove singularities and hosting a chiral flux phase which induces an anomalous Hall effect[14; 15; 16; 17; 18]. On the other hand, the magnetism in kagome metals may also be unconventional. It has been proposed that the large density of states from the kagome flat bands could induce ferromagnetism[6; 7]. However, the coexistence and interplay between CDW and long-range magnetic order had not been observed in kagome metals until recently. Hexagonal FeGe with a kagome lattice was reported to display a CDW transition at 100 K coupled to the long-range antiferromagnetic order below T\({}_{N}\)=410 K[19].
Spectroscopic experiments have revealed an intimate interaction between the CDW order and magnetism in FeGe[20; 21]. However, the origin of this CDW order remains elusive, as does its relation with the anomalous Hall effect and magnetic order. Furthermore, a recent Hartree-Fock analysis shows that unconventional noncollinear antiferromagnetic (AFM) order may exist in the magnetic phase diagram of FeGe tuned by onsite repulsion and flat-band fillings[22]. For materials with noncollinear antiferromagnetism, the scalar spin chirality or a nonzero Berry curvature with spin-orbit coupling may induce strong anisotropic anomalous Hall and spin Hall effects[23; 24]. These intriguing effects have been realized in Mn\({}_{3}\)Sn and Mn\({}_{3}\)Ge with the kagome lattice and have received great research interest[10; 25; 26]. Although a large number of magnetic kagome metals have been discovered so far, noncollinear antiferromagnets seem to be quite rare besides the Mn\({}_{3}\)X (X= Ge, Sn, Ga, Ir) material family[27; 28; 24].
Here we report the Sb doping effect on FeGe and map the phase diagram of FeGe\({}_{1-x}\)Sb\({}_{x}\) (0\(<\)\(x\)\(<\)0.4). Using x-ray, transport, magnetic susceptibility and neutron scattering measurements, we characterize the evolution of the crystal structure, CDW and magnetic order with Sb doping. Intriguingly, noncollinear AFM structures are found to exist widely in FeGe\({}_{1-x}\)Sb\({}_{x}\). The studies on this new series of kagome metals may not only provide opportunities to understand the origin of the unconventional CDW and its interplay with magnetic order in FeGe, but could also stimulate future research on exploring novel topological and correlated phenomena driven by kagome physics.
## II Methods
Polycrystalline FeGe\({}_{1-x}\)Sb\({}_{x}\) samples were synthesized by solid state reaction of stoichiometric Fe, Ge and Sb powders at 700 \({}^{\circ}\)C for 4 days, then furnace-cooled to
room temperature. The samples with x\(<\)0.3 are characterized by powder x-ray diffraction (XRD) using a Bruker D8 Advance X-ray diffractometer and appear to be phase-pure. For x=0.33, some minor impurity phases including Fe\({}_{3}\)Ge\({}_{2}\) and Sb could be identified (less than 9%).
Single crystals of FeGe\({}_{1-x}\)Sb\({}_{x}\) were grown by the chemical vapour transport method using the synthesized polycrystalline samples, similar to previous reports[19]. The obtained crystals are three-dimensional with a typical size of 1 mm. The elemental compositions of all single crystals were characterized with energy dispersive x-ray spectroscopy (EDS, Oxford X-Max 50). The doping concentration x determined by EDS may deviate slightly from the nominal doping value. For example, the single crystals with nominal x=0.01 are determined to be x=0.015 by EDS. All values of \(x\) refer to the EDS values in this manuscript except for polycrystalline samples. The crystal structures of the single crystals were all examined by a Bruker D8 VENTURE single-crystal diffractometer using Cu K\({}_{\alpha}\) radiation and the lattice parameters are determined by refinement.
Magnetization and electrical transport measurements were carried out in a Quantum Design MPMS3 and a PPMS-14T, respectively. The powder neutron diffraction experiments were carried out on the Xingzhi cold neutron triple-axis spectrometer at the China Advanced Research Reactor (CARR)[29]. About 4-6 g of FeGe\({}_{1-x}\)Sb\({}_{x}\) powder for each doping was used in the neutron experiments. The incident neutron energy is fixed at 16 meV. The FullProf Suite package was used for the representational analysis and Rietveld refinement of the neutron powder diffraction data[30].
## III Results and Discussion
Hexagonal FeGe adopts a CoSn-type crystal structure with alternating stacking of Fe\({}_{3}\)Ge kagome layers and Ge honeycomb layers. Our XRD analysis on both single crystals and polycrystalline samples reveals that FeGe\({}_{1-x}\)Sb\({}_{x}\) maintains the crystal structure of FeGe for x\(\leqslant\) 0.05. However, at higher doping levels, the results show that Sb does not simply replace Ge and new chemical phases are formed, as illustrated in Fig. 1(a). It should be mentioned that the crystal structures of FeGe\({}_{1-x}\)Sb\({}_{x}\) with x\(\geqslant\) 0.1 were initially determined by Mills \(et\)\(al.\) in an early publication[31] and are consistent with our results here. We name the FeGe\({}_{1-x}\)Sb\({}_{x}\) with 0.1 \(\leqslant\)x\(\leqslant\) 0.2 as Sb-phase1 and that with x=0.3 and 0.33 as Sb-phase2. As shown in Fig. 1(a), the unit cells of the new phases are all about six times larger than that of the FeGe-phase (\(a^{\prime}\)=\(\sqrt{3}a\), \(c^{\prime}\)=2\(c\)). The Sb-phase1 adopts a different space group \(P6_{3}/mmm\). The Ge atoms in the honeycomb layer are gradually removed while the Sb atoms form Sb\({}_{2}\) pairs whose center of mass lies at the center of the hexagons in the Fe\({}_{3}\)Ge kagome plane. The occupancy of the Sb\({}_{2}\) pairs is only partial. For Sb-phase2, the structure can be best described using the chemical formula Fe\({}_{3}\)Ge\({}_{2}\)Sb. Compared with the FeGe-phase, the Ge honeycomb layer in the
Figure 1: (a) Illustration of the crystal structures of FeGe\({}_{1-x}\)Sb\({}_{x}\) at different doping concentrations x. The crystal unit cell is marked by blue dotted lines. For x\(>\)0.1, some adjacent atoms in extended cells along the \(c\)-axis are also shown for clarity. (b) For x=0.3, the Fe kagome lattice in the \(ab\)-plane is shown on the top. On the bottom, another view shows that the kagome lattice is slightly distorted along the \(c\)-axis. (c) Compositional-temperature phase diagram for FeGe\({}_{1-x}\)Sb\({}_{x}\).
Sb-phase2 remains unchanged while the Ge atoms in the Fe\({}_{3}\)Ge kagome layer are completely replaced by Sb atoms whose positions have an ordered shift along the \(c\)-axis.
We now focus on the structural details of the Fe kagome lattice. As shown in Fig. 1(b), different from the FeGe-phase, the Fe ions occupy two inequivalent Wyckoff positions for both Sb-phase1 and Sb-phase2. The nearest Fe1-Fe1 distance between two adjacent kagome layers is larger than the Fe2-Fe2 distance, as illustrated in Fig. 1(b). This results in a slight distortion of the kagome layer along the \(c\)-axis compared with the perfectly flat kagome net in the FeGe-phase.
Fig. 1(c) presents the phase diagram of FeGe\({}_{1-x}\)Sb\({}_{x}\), which shows the evolution of the different solid phases with doping concentration. The room temperature lattice parameters determined from XRD for different samples are presented in Table 1. A general tendency is that with increasing \(x\), the \(a\)-axis lattice constant increases while the \(c\)-axis constant decreases. The lattice constants of the FeGe-phase can be transformed using the \(a^{\prime}\)=\(\sqrt{3}a\) and \(c^{\prime}\)=2\(c\) formulas for comparison. It seems that the doping of Sb causes a lattice compression effect along the \(c\)-axis. As a result, the nearest Fe-Fe distance in one kagome layer increases from 2.54\(\AA\) for FeGe to 2.60\(\AA\) for Fe\({}_{3}\)Ge\({}_{2}\)Sb. The buckled kagome structure of Fe\({}_{3}\)Ge\({}_{2}\)Sb is also confirmed by a very recent publication[32]. As for the physical properties of FeGe\({}_{1-x}\)Sb\({}_{x}\), so far as we know, only those of Fe\({}_{3}\)Ge\({}_{2}\)Sb (close to our samples with x=0.3 and 0.33) were reported recently[32].
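To make the comparison concrete, the short sketch below applies the \(a=a^{\prime}/\sqrt{3}\), \(c=c^{\prime}/2\) back-transformation to the single-crystal supercell constants of Table 1; Python is used here purely for illustration, and the printed values are only as accurate as the tabulated lattice parameters.

```python
import math

# Single-crystal supercell lattice constants (a', c') in angstroms, from Table 1.
supercell = {0.1: (8.830, 8.108), 0.2: (8.930, 7.990), 0.3: (8.976, 7.952)}

for x, (a_p, c_p) in sorted(supercell.items()):
    # Back-transform to FeGe-phase-equivalent constants: a = a'/sqrt(3), c = c'/2.
    a_eq, c_eq = a_p / math.sqrt(3), c_p / 2
    print(f"x = {x}: a_eq = {a_eq:.3f} A, c_eq = {c_eq:.3f} A")

# Compared with a = 5.003 A, c = 4.055 A for pure FeGe, the equivalent a-axis
# expands while the equivalent c-axis shrinks with doping, as stated in the text.
```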
FeGe serves as a very rare example of the coexistence of CDW and AFM order. We first investigate how these orders evolve with Sb doping via magnetization measurements. Fig. 2(a) and (b) show the temperature dependent magnetic susceptibility of single crystals with the FeGe-phase. Consistent with previous reports, for the parent compound FeGe, there is a hump at 100 K in the \(\chi\)(T) curve under H\(\parallel\)ab due to the development of CDW order[19]. However, this feature disappears for x=0.015, indicating a rapid suppression of the CDW with Sb doping. In
\begin{table}
\begin{tabular}{l c c c} sample type & x & \(a\)(Å) & \(c\)(Å) \\ \hline Single crystal & 0 & 5.003 & 4.055 \\ Single crystal & 0.015 & 5.031 & 4.055 \\ Single crystal & 0.05 & 5.063 & 4.056 \\ Single crystal & 0.1 & 8.830 & 8.108 \\ Single crystal & 0.2 & 8.930 & 7.990 \\ Single crystal & 0.3 & 8.976 & 7.952 \\ Polycrystalline & 0.1 & 8.769 & 8.037 \\ Polycrystalline & 0.2 & 8.873 & 7.978 \\ Polycrystalline & 0.33 & 8.931 & 7.942 \\ \end{tabular}
\end{table}
Table 1: Room temperature lattice parameters for FeGe\({}_{1-x}\)Sb\({}_{x}\) obtained from XRD.
Figure 2: Temperature dependent magnetic susceptibilities for FeGe\({}_{1-x}\)Sb\({}_{x}\) single crystals under magnetic field applied parallel to the \(ab\)-plane and along the \(c\)-axis. The data of x=0, 0.015 and 0.05 with the FeGe-phase are plotted in (a) and (b). For x=0.1 with Sb-phase1, the transition at T\({}^{*}\)=170 K and its evolution with field can be seen in (c) and (d). The data of x=0.2 and 0.3 are shown in (e) and (f). The insets show the isothermal M(H) curves measured at different temperatures, and \(dM/dH\) as a function of field is plotted in the inset of (e) for a clear view of the field-induced magnetic transition.
addition, FeGe was reported to have a spin-flop transition under H\(\parallel\)c. It is found that the transition field shifts from 7 T to about 5 T at 2 K, as shown in the inset of Fig. 2(b).
As revealed by Fig. 2, the AFM transition temperature is gradually suppressed with increasing x. T\({}_{N}\) is determined to be 350 K for x=0.2 and 280 K for x=0.3. Another important feature is that, for samples with the FeGe-phase, the susceptibility has a much sharper drop below T\({}_{N}\) under H\(\parallel\)c than under H\(\parallel\)ab. This is a typical feature of an antiferromagnet with ordered moments parallel to the \(c\)-axis. However, for samples with Sb-phase1 and Sb-phase2, this feature is reversed. The susceptibility drop is much sharper under H\(\parallel\)ab for x=0.2 and 0.3 (Fig. 2(e) and (f)), which suggests the magnetic moments tend to lie in the \(ab\)-plane. This doping-induced change of magnetic anisotropy is also confirmed by the following neutron diffraction studies. Besides, a sudden jump of the susceptibility occurs at T\({}^{*}\)=170 K under \(\mu_{0}\)H=0.1 T and moves to higher temperature with increasing field. It is likely caused by a temperature-driven spin-reorientation transition, since magnetic field has a strong impact on it. In addition, magnetic-field-induced spin-flop transitions can be identified for x=0.1 under H\(\parallel\)ab and H\(\parallel\)c from the M(H) curves in the insets of Fig. 2(c) and (d). For x=0.2 and 0.3, field-induced spin-flop transitions may exist under H\(\parallel\)ab, as revealed by the dM/dH curves in the inset of Fig. 2(e), while similar features are not observed in the M(H) curves for H\(\parallel\)c within the field limit.
The temperature dependent electrical resistivity measured under \(\mu_{0}\)H=0 and 5 T is displayed in Fig. 3. For FeGe with the charge order, a kink occurs at the CDW transition temperature in the \(d\rho/dT\) curve, as reported previously[19]. However, this feature disappears for x=0.015, as shown in the inset of Fig. 3. For x=0.1, no distinguishable anomaly is identified in the \(d\rho/dT\) curve across the possible spin-reorientation transition at T\({}^{*}\)=170 K. A weak negative magnetoresistance (MR) can be observed below T\({}^{*}\) and becomes notable below 30 K. Interestingly, the MR is positive for x=0.015 and nearly zero for x=0.3; negative MR only becomes visible for x=0.1 and 0.2. We speculate that the MR behavior might be associated with the magnetic structure: magnetic field may reduce the strong spin scattering caused by the noncollinear AFM structure, resulting in negative MR. In addition, for both x=0.1 and 0.2, an upturn of the resistivity occurs below 30 K, which may possibly be due to a disorder-induced localization effect, as these samples with Sb-phase1 have significant atomic vacancies on the Ge and Sb sites.
Next, we present powder neutron diffraction results on x=0.1, 0.2 and 0.33. For all three samples, the most prominent and well-defined magnetic Bragg peak is indexed as (0,0,1), as seen from the insets in Fig. 4. According to the basic magnetic neutron scattering rules, if the ordered moments lie strictly parallel to the \(c\)-axis, then (0,0,1) should have zero intensity contribution from magnetic scattering, which is the case for FeGe between 400 K and 60 K, as seen from previous neutron scattering experiments[19; 33; 34]. Note that since the \(c\)-lattice constant is doubled for FeGe\({}_{1-x}\)Sb\({}_{x}\) compared with that of FeGe, the (0,0,1) peak for x\(\geqslant\)0.1 corresponds to (0,0,0.5) for FeGe. The significant magnetic contribution to (0,0,1) therefore clearly means that the ordered moments of FeGe\({}_{1-x}\)Sb\({}_{x}\) must have dominant in-plane components.
For x=0.1 and 0.2 with Sb-phase1, (0,0,1) and the other notable magnetic peaks, including (1,1,1), (0,0,3) and (2,2,3), should be in structural extinction (Fig. 4(a)). This set of magnetic peaks is well described
Figure 3: Temperature dependent resistivity of FeGe\({}_{1-x}\)Sb\({}_{x}\) single crystals under \(\mu_{0}\)H=0 and 5 T. The insets show the \(d\rho/dT\) curves for some samples.
by a propagation vector **k**=(0,0,1). We then employed the BasIreps program to carry out representational analysis[30]. The result reveals twelve irreducible representations (IRs) for Fe1 and six IRs for Fe2 which are compatible with this propagation vector. Each IR describes a possible magnetic model, and we find that only one IR for both Fe1 and Fe2 gives the best fit of the diffraction data; the fitting with other IRs yields unacceptable \(R_{P}\) and \(\chi^{2}\) factors. The refinement results and corresponding magnetic structures for x=0.1 and x=0.2 are shown in Fig. 4(a), (b) and (c). Evidently, all magnetic structures are noncollinear and the ordered moments at the different Fe sites range from \(\sim\)1\(\mu_{B}\) to \(\sim\)4\(\mu_{B}\). For x=0.1 at 100 K, the ordered moments have small components along the \(c\)-axis (less than 0.9\(\mu_{B}\)), which makes the AFM structure noncoplanar. At 300 K, the \(c\)-axis component becomes negligibly small (less than 0.01\(\mu_{B}\)) and the orientation of the in-plane components also changes somewhat. This might explain the spin-reorientation transition at T\({}^{*}\)=170 K. For x=0.2 at 250 K, the \(c\)-axis component is zero and the AFM structure is coplanar. The AFM structures at 4 K are similar, since the magnetic peaks are the same and only show some intensity changes. The detailed data of the magnetic structures of all samples at different temperatures derived from the refinement are recorded as '\(.mcif\)' files provided in the supplemental materials.
For x=0.33 with Sb-phase2, the indexed magnetic peaks are similar, but they are no longer in structural extinction due to the different crystal symmetry, so a propagation vector **k**=(0,0,0) is chosen. A similar representational analysis and Rietveld refinement process also reveals that only one IR can best fit the diffraction data. Interestingly, for all the AFM structures in Sb-phase1, the in-plane components of the ordered moments are antiparallel between adjacent layers, suggesting an interlayer AFM interaction. However, for x=0.33, the in-plane basis vectors of the only IR which fits the data are parallel to each other for atoms with the same \(z\)-axis coordinate, which yields a magnetic structure with interlayer ferromagnetic coupling. This spin configuration is illustrated from a view in the \(ab\)-plane, as shown in Fig. 4(d). According to the fitting result, the spins are coplanar and aligned in a 120\({}^{\circ}\) triangular AFM arrangement.
We should mention that, typically, for a complex noncollinear AFM structure, neutron diffraction on single crystals is essential for an accurate determination of the magnetic structure. Currently, the limited size of the FeGe\({}_{1-x}\)Sb\({}_{x}\) single crystals prevents us from taking this step. However, our powder neutron diffraction results can at least confirm the existence of noncollinear magnetic structures. In fact, all basis vectors of the possible IRs have noncollinear components in the \(ab\)-plane; therefore, a noncollinear AFM structure is inevitable for the propagation vector determined by the indices of the magnetic peaks.
Finally, we would like to discuss the above results in two respects. First of all, since slight Sb doping can remove the CDW order, it may provide opportunities to uncover the origin of the CDW in FeGe. The recent angle-resolved photoemission spectroscopy (ARPES) study on FeGe proposes that magnetism-induced band splitting pushes the van Hove singularities to the Fermi level,
Figure 4: Neutron diffraction patterns, Rietveld refinement results and corresponding magnetic structures of x=0.1, 0.2 and 0.33 at different temperatures are shown respectively. The indices of four magnetic Bragg peaks are labeled in (a). The insets of (b), (c) and (d) show the (0,0,1) magnetic peak at different temperatures. The magnetic unit cell is marked by dotted lines and solid lines indicate the nearest Fe-Fe bond in the \(ab\)-plane.
resulting in the formation of unconventional charge order[20]. It would therefore be an interesting topic to study how the band structure is affected by slight Sb doping via ARPES and band calculations, which may provide critical information about the origin of the CDW order.
Secondly, a recent theoretical work has predicted that an evolution from intralayer ferromagnetism to 120\({}^{\circ}\) AFM and noncoplanar spin orders could be realized in kagome metals by tuning the onsite repulsion and flat-band fillings[22]. FeGe was proposed to lie at the border of these noncollinear AFM orders in the theoretical phase diagram[22]. These intriguing unconventional noncollinear AFM orders, which are closely related to the kagome flat band, might be realized in FeGe\({}_{1-x}\)Sb\({}_{x}\), as demonstrated by our results, although further theoretical and experimental evidence is needed for a final confirmation. Noncollinear AFM structures are very rare in kagome metals besides the Mn\({}_{3}\)X (X= Ge, Sn, Ga, Ir) material family[24; 27; 28]. Our results may stimulate future research on exploring anomalous Hall and spin Hall effects in FeGe\({}_{1-x}\)Sb\({}_{x}\), which may be induced by the noncollinear antiferromagnetism.
## IV Conclusions
In summary, the physical properties and phase diagram of the kagome metals FeGe\({}_{1-x}\)Sb\({}_{x}\) are presented. A drastic suppression of the CDW order and a change of the magnetic anisotropy with Sb doping are observed. Neutron diffraction investigations reveal that noncollinear magnetic structures develop in FeGe\({}_{1-x}\)Sb\({}_{x}\) with a buckled kagome lattice, which are substantially different from the magnetic structure of the parent compound FeGe. We argue that this noncollinear antiferromagnetism might be unconventional and closely related to the kagome flat band. FeGe\({}_{1-x}\)Sb\({}_{x}\) could become a new material platform to explore novel emergent phenomena related to kagome physics.
## V Acknowledgement
This work was supported by the National Natural Science Foundation of China (No. 12074426, No. 12004426, No. 11227906), the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China (Grant No. 22XNKJ40), NSAF (Grant No. U2030106) and the Outstanding Innovative Talents Cultivation Funded Programs 2023 of Renmin University of China.
|
2309.06711 | Epps Effect and the Signature of Short-Term Momentum Traders | It is a well-documented fact that the correlation function of the returns on
two "related" assets is generally increasing as a function of the horizon $h$
of these returns. This phenomenon, termed the Epps Effect, holds true in a wide
variety of markets, and there is a large body of literature devoted to its
theoretical justification. Our focus here is to describe and understand a
deviation to the Epps effect, observed in the context of the foreign exchange
and cryptocurrency markets. Specifically, we document a sharp local maximum of
the cross-correlation function of returns on the Euro EUR/USD and Bitcoin
BTC/USD pairs as a function of $h$. Our claim is that this anomaly reveals the
activity of short-term momentum traders. | Jérôme Busca, Léon Thomir | 2023-09-13T04:17:45Z | http://arxiv.org/abs/2309.06711v1 | # EPPS effect and the signature of short-term momentum traders+
###### Abstract
It is a well-documented fact that the correlation function of the returns on two "related" assets is generally increasing as a function of the horizon \(h\) of these returns. This phenomenon, termed the Epps Effect, holds true in a wide variety of markets, and there is a large body of literature devoted to its theoretical justification. Our focus here is to describe and understand a deviation to the Epps effect, observed in the context of the foreign exchange and cryptocurrency markets. Specifically, we document a sharp local maximum of the cross-correlation function of returns on the Euro EUR/USD and Bitcoin BTC/USD pairs as a function of \(h\). Our claim is that this anomaly reveals the activity of short-term momentum traders.
Epps Effect, Momentum, Cryptocurrency, Forex
60G15, 91G60, 60G55, 91B28, 91B70
## 1 Introduction
In 1979, Epps [5] highlighted a significant drop in the correlation between stocks when decreasing the time horizon of returns \(h\). This phenomenon was first observed on stocks [11, 3, 2] and then in other markets such as foreign exchange [7]. Several theoretical justifications have been proposed to account for the phenomenon. The Epps effect can, for instance, be explained by a lead-lag phenomenon among specific stocks [9], or by the asynchronous nature of ticks in liquid markets [8], although Toth et al. established in a subsequent paper that tick asynchrony can't fully explain the formation of Epps curves [10] (an observation that was confirmed in a recent study [4]).
Our focus in this paper is the cross-correlation function of the returns on the EUR/USD (Euro rate in US dollar) and BTC/USD (Bitcoin price) pairs. This choice was driven by the fact that each pair is the flagship asset in its own market -- forex and crypto-currencies, respectively -- and that one can strongly suspect an interesting interplay between the traditional currency market, and the relatively new crypto-currency market.
It is widely known that the collective actions of traders play a central role in the dynamics of financial markets. The interactions and decisions of individual traders, motivated by a myriad of factors, combine to form a composite force known as the market factor. We will use this key insight to model traders' actions to first order as buying and selling both assets EUR/USD and BTC/USD at the same time, in the manner of an index (this can also be seen as "buying or selling the dollar").
This paper is designed as follows. In section 2, we introduce the data set we use and compute the experimental cross-correlation function \(\rho(h)\). In section 3, we use a simple Gaussian model to explain the peak we observe in \(\rho\). Finally, in section 4, we build a more realistic agent-based Monte Carlo simulation which, once calibrated, shows good agreement with the data.
## 2 Empirical Analysis
In this section, we conduct an empirical analysis of the foreign exchange and cryptocurrency markets through the leading pairs EUR/USD and BTC/USD, and highlight the presence of the Epps curve, along with a deviation from it.
### Data
To be specific, we actually chose to conduct our analysis on the EUR/USDT and BTC/USDT pairs. Tether (USDT) is a type of cryptocurrency referred to as a stablecoin, designed to have a value which is pegged to the US Dollar. Our choice was guided by the availability of high-quality data sets, as well as the necessity to avoid data issues such as asynchrony between the FX and crypto markets, among other factors.
We used a best bid/best offer data set from Binance (the largest crypto exchange), provided by Tardis, over the period 27 November 2020 - 19 July 2022. As the cryptocurrency market operates continuously, unlike the FX market, we removed weekends to align the Bitcoin data with the FX data. We then calculated the simple mid prices \(P(t)\) for further analysis.
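As a minimal sketch of this preprocessing step — assuming a hypothetical CSV layout with `timestamp`, `bid_price` and `ask_price` columns, which may differ from the actual Tardis export schema — the weekend filter and mid-price computation could look as follows.

```python
import pandas as pd

# Hypothetical file and column names; the real Tardis export may differ.
quotes = pd.read_csv("eurusdt_bbo.csv", parse_dates=["timestamp"])

# The crypto market trades 24/7; drop Saturdays and Sundays (dayofweek 5, 6)
# to align the series with FX trading days.
quotes = quotes[quotes["timestamp"].dt.dayofweek < 5]

# Simple mid price P(t) = (best bid + best ask) / 2.
quotes["mid"] = (quotes["bid_price"] + quotes["ask_price"]) / 2.0
```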
### Correlations
We define the log-return of length \(h\) for an asset of mid price \(P(t)\) to be:
\[r_{h}(t)=\log\left(\frac{P(t)}{P(t-h)}\right),\]
and we denote by \(\rho(h)\) the correlations between the log-returns on EUR/USDT (1) and BTC/USDT (2) as a function of the length \(h\):
\[\rho(h)=\frac{\left\langle\left(r_{h}^{1}-\langle r_{h}^{1}\rangle\right) \left(r_{h}^{2}-\langle r_{h}^{2}\rangle\right)\right\rangle}{\sigma^{1} \sigma^{2}}, \tag{1}\]
where the bracket \(\langle.\rangle\) denotes time average, and the standard deviations of the returns are defined as usual by:
\[\sigma^{k}=\sqrt{\left\langle\left(r_{h}^{k}\right)^{2}\right\rangle-\langle r _{h}^{k}\rangle^{2}},\quad k=1,2.\]
We also compute a confidence interval at the 95% level around (1) using a standard Fisher transformation [6].
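The sketch below implements (1) together with the Fisher-transform interval, assuming the two mid-price series have already been aligned on a common time grid; note that for overlapping returns the classical \(1/\sqrt{n-3}\) standard error is only indicative, since consecutive samples are not independent.

```python
import numpy as np

def cross_correlation(p1, p2, h):
    """Correlation (1) of h-step log-returns of two aligned price arrays,
    with an approximate 95% Fisher-transform confidence interval."""
    r1 = np.log(p1[h:] / p1[:-h])  # log-returns of horizon h
    r2 = np.log(p2[h:] / p2[:-h])
    rho = np.corrcoef(r1, r2)[0, 1]
    # Fisher z-transform: atanh(rho) is approximately normal, sd = 1/sqrt(n-3).
    z, half = np.arctanh(rho), 1.96 / np.sqrt(len(r1) - 3)
    return rho, (np.tanh(z - half), np.tanh(z + half))
```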
### Empirical Results
Figure 1 represents the cross-correlation \(\rho(h)\) between EUR/USDT and BTC/USDT returns. We observe a classic Epps effect -- an overall increase in correlation with \(h\). However, if we zoom in on higher frequencies (Figure 2),
Figure 1: Correlation between EUR/USDT and BTC/USDT returns as a function of horizon \(h\) (1)
we observe an anomaly which manifests itself as a sharp peak around 60 seconds, with value 0.15, followed by a decrease to 0.12. These fluctuations are statistically significant.
## 3 A Gaussian Model of Momentum
In this section we describe a simple Gaussian model which explains how momentum traders' activity can generate a peak in the cross-correlation function. We assume in the following there are only two assets, with prices \(s_{t}=(s_{t}^{1},s_{t}^{2})\), and that the market consists of: i) a momentum trader; ii) a noise trader; and iii) a market-maker. The momentum trader is assumed to trade solely in an equally-weighted index based on \(s^{1}\) and \(s^{2}\) and to use a simple momentum rule with window \(\tau>0\). For computational ease, we assume the trader looks at the simple returns (as opposed to percentage or log returns) of the two assets. With these rules, his inventory (in both asset 1 and 2) at time \(t\) is given by:
\[p_{t}=\bar{p}\left(s_{t}^{1}-s_{t-\tau}^{1}+s_{t}^{2}-s_{t-\tau}^{2}\right), \tag{3.1}\]
where \(\bar{p}>0\) is a constant. As for the noise trader, we model his inventory \(X_{t}\) using an Ornstein-Uhlenbeck process with zero mean:
\[dX_{t}^{k}=-\lambda X_{t}^{k}dt+\sigma dW_{t}^{k},\ \ k=1,2, \tag{3.2}\]
with \(\lambda,\sigma>0\) constant (independent of \(k\) for simplicity) and \(W_{t}^{1}\), \(W_{t}^{2}\) independent Brownian motions, and with independent Gaussian initial conditions equal to the stationary distribution:
\[X_{0}^{k}\mathop{=}^{d}\mathcal{N}\left(0,\sigma^{2}/2\lambda\right),\ \ k=1,2. \tag{3.3}\]
The assumption that the noise trader's inventory should follow (3.2) is fairly natural and can be shown to hold, for instance, in the classic Avellaneda-Stoikov model [1], in the limit of large speed of trading (when Poisson processes reach their diffusion limit).
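For later comparison with the agent-based simulation, the noise trader's inventory (3.2)-(3.3) can be simulated exactly on a time grid, since the OU transition law is Gaussian; the sketch below uses the exact one-step AR(1) discretization (the function and parameter names are illustrative).

```python
import numpy as np

def simulate_ou(n_steps, dt, lam, sigma, rng=np.random.default_rng(0)):
    """Exact sampling of the zero-mean OU inventory (3.2), started from
    its stationary distribution (3.3)."""
    a = np.exp(-lam * dt)                        # one-step mean-reversion factor
    s = sigma * np.sqrt((1 - a**2) / (2 * lam))  # exact one-step noise std
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, sigma / np.sqrt(2 * lam))
    for t in range(1, n_steps):
        x[t] = a * x[t - 1] + s * rng.normal()
    return x
```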
Lastly, the market-maker is simply the counterparty (liquidity provider) to the momentum and noise traders. If we further assume a linear market impact for all liquidity-taking trades, with elasticity \(\theta>0\), along with some noise with volatility \(\nu>0\), we can easily write down the price process increment for the two assets:
\[ds_{t}^{k}=\theta\left(dp_{t}+dX_{t}^{k}\right)+\nu dZ_{t}^{k},\ \ k=1,2, \tag{3.4}\]
Figure 2: Correlation between EUR/USDT and BTC/USDT returns as a function of horizon \(h\) (1)
where \(Z^{k}_{t}\) are standard Brownian motions, independent of each other and of the \(W^{k}_{t}\)'s. Integrating (3.4) and choosing initial conditions that don't generate extra constants for simplicity, we get:
\[\begin{cases}p_{t}=\bar{p}\left(s^{1}_{t}-s^{1}_{t-\tau}+s^{2}_{t}-s^{2}_{t- \tau}\right)\\ s^{k}_{t}=\theta\left(p_{t}+X^{k}_{t}\right)+\nu Z^{k}_{t},\ \ k=1,2.\end{cases} \tag{3.5}\]
The price process therefore satisfies:
\[s^{k}_{t}=\varepsilon\left(s^{1}_{t}-s^{1}_{t-\tau}+s^{2}_{t}-s^{2}_{t-\tau} \right)+\theta X^{k}_{t}+\nu Z^{k}_{t},\ \ k=1,2, \tag{3.6}\]
where \(\varepsilon=\bar{p}\theta>0\) can be interpreted as a non-dimensional coupling parameter of the system. In the following, we always assume \(\varepsilon\ll 1\) and expand all relevant quantities to first order in \(\varepsilon\). From (3.6), the price process can be rewritten
\[\begin{cases}\ \ (1-\varepsilon)s^{1}_{t}-\varepsilon s^{2}_{t}=-\varepsilon \left(s^{1}_{t-\tau}+s^{2}_{t-\tau}\right)+\theta X^{1}_{t}+\nu Z^{1}_{t}\\ \\ -\varepsilon s^{1}_{t}+(1-\varepsilon)s^{2}_{t}=-\varepsilon\left(s^{1}_{t- \tau}+s^{2}_{t-\tau}\right)+\theta X^{2}_{t}+\nu Z^{2}_{t}.\end{cases} \tag{3.7}\]
Noting that \(\frac{1-\varepsilon}{1-2\varepsilon}\simeq 1+\varepsilon\), from (3.7) we have, to first order in \(\varepsilon\)
\[\begin{cases}s^{1}_{t}\simeq-\varepsilon\left(s^{1}_{t-\tau}+s^{2}_{t-\tau} \right)+(1+\varepsilon)\left(\theta X^{1}_{t}+\nu Z^{1}_{t}\right)+\varepsilon \left(\theta X^{2}_{t}+\nu Z^{2}_{t}\right)\\ \\ s^{2}_{t}\simeq-\varepsilon\left(s^{1}_{t-\tau}+s^{2}_{t-\tau}\right)+ \varepsilon\left(\theta X^{1}_{t}+\nu Z^{1}_{t}\right)+(1+\varepsilon)\left( \theta X^{2}_{t}+\nu Z^{2}_{t}\right),\end{cases} \tag{3.8}\]
so that
\[\begin{split} s^{k}_{t}&\simeq\theta X^{k}_{t}+\nu Z^{k}_{ t}+\varepsilon\theta\left(X^{1}_{t}-X^{1}_{t-\tau}\right)+\varepsilon\nu \left(Z^{1}_{t}-Z^{1}_{t-\tau}\right)\\ &+\varepsilon\theta\left(X^{2}_{t}-X^{2}_{t-\tau}\right)+ \varepsilon\nu\left(Z^{2}_{t}-Z^{2}_{t-\tau}\right),\ \ k=1,2\end{split} \tag{3.9}\]
and the \(h\)-horizon return is given by:
\[\begin{split} s^{k}_{t+h}-s^{k}_{t}&=\theta\left(X^{k}_{t+h}-X^{k}_{t}\right)+\nu\left(Z^{k}_{t+h}-Z^{k}_{t}\right)\\ &+\varepsilon\sum_{j=1,2}\left[\theta\left(X^{j}_{t+h}-X^{j}_{t}-X^{j}_{t+h-\tau}+X^{j}_{t-\tau}\right)+\nu\left(Z^{j}_{t+h}-Z^{j}_{t}-Z^{j}_{t+h-\tau}+Z^{j}_{t-\tau}\right)\right],\ \ k=1,2.\end{split} \tag{3.10}\]

Writing \(c_{ij}(h)=\left\langle s^{i}_{t+h}-s^{i}_{t},s^{j}_{t+h}-s^{j}_{t}\right\rangle\) for the covariance of the \(h\)-horizon returns, the cross-correlation of interest is \(\rho(h)=c_{12}(h)/\sqrt{c_{11}(h)c_{22}(h)}\), and we can now state our main result.
**Theorem 3.1**: _To first order in \(\varepsilon\), we have:_
\[\frac{1}{2\varepsilon}\rho(h)=\frac{\theta^{2}\left(1-e^{-\lambda h}-e^{-\lambda \tau}+\frac{1}{2}e^{-\lambda(h+\tau)}+\frac{1}{2}e^{-\lambda|h-\tau|}\right)+ \xi h\wedge\tau}{\theta^{2}\left(1-e^{-\lambda h}\right)+\xi h}, \tag{3.13}\]
_where \(h\wedge\tau=\min(h,\tau)\), and \(\xi\) is the parameter \(\xi=\frac{\nu^{2}}{\sigma^{2}/\lambda}\)._
To establish this result, we will need the following lemma.
**Lemma 3.2**: _If \(m\in\{0,1\}\), we have, for all \(t,h>0\):_
\[\left\langle Z_{t+h}-Z_{t},Z_{t+h-m\tau}-Z_{t-m\tau}\right\rangle=\left(h-m \tau\right)_{+}, \tag{3.14}\]
_where \(a_{+}=\max(a,0)\); and_
\[\left\langle X_{t+h}-X_{t},X_{t+h-m\tau}-X_{t-m\tau}\right\rangle=\frac{ \sigma^{2}}{2\lambda}\left(2e^{-\lambda m\tau}-e^{-\lambda(h+m\tau)}-e^{- \lambda|h-m\tau|}\right), \tag{3.15}\]
_where \(Z_{t}\) is either \(Z_{t}^{1}\) or \(Z_{t}^{2}\), and \(X_{t}\) is either \(X_{t}^{1}\) or \(X_{t}^{2}\)._
**Proof of Lemma 3.2**
\(Z\) is a standard Brownian motion. Thus, using ordinary stochastic calculus
\[\left\langle Z_{t},Z_{s}\right\rangle=t\wedge s\ \ \text{for all $s,t>0$}.\]
Therefore
\[\left\langle Z_{t+h}-Z_{t},Z_{t+h-m\tau}-Z_{t-m\tau}\right\rangle =(t+h-m\tau)+(t-m\tau)\] \[-t\wedge(t+h-m\tau)-(t-m\tau)\wedge(t+h)\] \[=(t+h-m\tau)+(t-m\tau)+(-t)\vee(-t-h+m\tau)\] \[+(-t+m\tau)\vee(-t-h),\]
where \(a\lor b=\max(a,b)\). Since \(\mu+a\lor b=(a+\mu)\vee(b+\mu)\) for all \(\mu\), we find the above is equal to
\[(h-m\tau)_{+}+(-h-m\tau)_{+}=(h-m\tau)_{+},\]
since \(m,h,\tau\geq 0\), which establishes (3.14).
\(X\) is an Ornstein-Uhlenbeck process following (3.2) and (3.3). Therefore
\[X_{t}=e^{-\lambda t}X_{0}+\sigma\int_{0}^{t}e^{-\lambda(t-u)}dW_{u},\]
and for all \(s,t>0\) we have
\[\left\langle X_{t},X_{s}\right\rangle =e^{-\lambda(t+s)}\frac{\sigma^{2}}{2\lambda}+\sigma^{2}e^{- \lambda(t+s)}\int_{0}^{t\wedge s}e^{2\lambda u}du\] \[=e^{-\lambda(t+s)}\frac{\sigma^{2}}{2\lambda}\left(1+\left(e^{2 \lambda t\wedge s}-1\right)\right)\] \[=\frac{\sigma^{2}}{2\lambda}e^{-\lambda|t-s|},\]
since \(t+s-2t\wedge s=|t-s|\), from which (3.15) follows easily.
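As a quick sanity check on (3.15), one can compare it with a Monte Carlo estimate over exact OU sample paths (using the one-step discretization sketched earlier); the parameter values below are arbitrary and only serve the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, sigma, tau, h, dt = 0.03, 1.0, 66.0, 40.0, 0.5
n_paths, n_steps = 20000, int(round((tau + h) / dt)) + 1

# Exact AR(1) discretization of the OU process, started from stationarity.
a = np.exp(-lam * dt)
s = sigma * np.sqrt((1 - a**2) / (2 * lam))
x = np.empty((n_paths, n_steps))
x[:, 0] = rng.normal(0.0, sigma / np.sqrt(2 * lam), n_paths)
for t in range(1, n_steps):
    x[:, t] = a * x[:, t - 1] + s * rng.normal(size=n_paths)

# Take t = tau so that t - tau = 0 lies on the grid; m = 1 in (3.15).
i_t, i_h = int(round(tau / dt)), int(round((tau + h) / dt))
mc = np.mean((x[:, i_h] - x[:, i_t]) * (x[:, i_h - i_t] - x[:, 0]))
exact = sigma**2 / (2 * lam) * (2 * np.exp(-lam * tau)
        - np.exp(-lam * (h + tau)) - np.exp(-lam * abs(h - tau)))
print(mc, exact)  # the two values agree up to Monte Carlo noise
```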
Let's now prove the main result, Theorem 3.1. By symmetry \(c_{11}(h)=c_{22}(h)\) for all \(h>0\) and, using (3.10) and Lemma 3.2, we have, to first order in \(\varepsilon\):
\[c_{11}(h)=c_{22}(h)=\frac{\sigma^{2}\theta^{2}}{\lambda}\left(1-e^{-\lambda h} \right)+\nu^{2}h \tag{3.16}\]
and
\[\frac{1}{2\varepsilon}c_{12}(h)=\frac{\sigma^{2}\theta^{2}}{\lambda}\left(1-e ^{-\lambda h}-e^{-\lambda\tau}+\frac{1}{2}e^{-\lambda(h+\tau)}+\frac{1}{2}e^{ -\lambda|h-\tau|}\right)+\nu^{2}\left(h-(h-\tau)_{+}\right); \tag{3.17}\]
and since \(h-(h-\tau)_{+}=h\wedge\tau\), combining (3.16) and (3.17) and dividing by \(\sigma^{2}/\lambda\) proves (3.13).
Now that we have an explicit formula for the cross-correlation function \(\rho(h)\) with (3.13), we can plot its typical shape. The plot below illustrates \(\rho(h)\) for the following values of the parameters: \(\lambda=0.03162\), \(\xi=0.0001\), \(\theta=0.6\), \(\tau=66\), \(\varepsilon=0.0505\). These values come from the calibration of the model to our data set (see next section).
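A direct transcription of (3.13) with these calibrated values reproduces the curve of Figure 3; all variable names below are illustrative.

```python
import numpy as np

def rho(h, lam, xi, theta, tau, eps):
    """Cross-correlation (3.13), to first order in eps."""
    num = theta**2 * (1 - np.exp(-lam * h) - np.exp(-lam * tau)
                      + 0.5 * np.exp(-lam * (h + tau))
                      + 0.5 * np.exp(-lam * np.abs(h - tau))) \
          + xi * np.minimum(h, tau)
    den = theta**2 * (1 - np.exp(-lam * h)) + xi * h
    return 2 * eps * num / den

h = np.linspace(1.0, 300.0, 600)
curve = rho(h, lam=0.03162, xi=0.0001, theta=0.6, tau=66.0, eps=0.0505)
print(h[np.argmax(curve)])  # the maximum sits at (or just next to) h = tau = 66
```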
We observe on Figure 3 that the cross-correlation \(\rho(h)\) in (3.13) **exhibits a sharp peak (kink) at \(\mathbf{h=\tau}\), the trading horizon of the momentum trader**. We therefore established -- in the simplified setting of our Gaussian model -- how the signature of index momentum traders can easily be detected with the help of the cross-correlation function.
## 4 Agent-based Simulation
In this section, we build an agent-based Monte Carlo simulation which extends our simplistic Gaussian model by incorporating more realistic features. As in the Gaussian model, we assume in the following there are only two assets, with prices \(s_{t}=(s_{t}^{1},s_{t}^{2})\), and that the market consists of: i) a momentum trader; ii) a noise trader; and iii) a market-maker.
In this simulation, we relax the constraint that the Poisson processes reach their diffusion limit. Thus, we use the original form of the Avellaneda-Stoikov model [1].
On the other hand, to avoid unbounded inventories, we impose a cap on the momentum trader's positions. For the sake of realism, we also take into account the tick sizes \(\eta\).
### Noise trader
We model the noise trader following [1]. Specifically, we assume trade executions occur at the ask (resp. bid) price at a rate described by Poisson
processes \(N_{t}^{a}\) and \(N_{t}^{b}\) with respective intensity parameters \(\lambda_{t}^{a,b}(\delta_{t}^{a,b})=A^{a,b}e^{-k^{a,b}\delta_{t}^{a,b}}\), where \(\delta_{t}^{a,b}\) is the half-spread between the mid and the ask (resp. bid) price, and \(A^{a,b},k^{a,b}>0\) are constants measuring the liquidity of the market.
The inventory \(q_{t}^{n}\) is then simply given by
\[q_{t}^{n}=\psi^{n}(N_{t}^{a}-N_{t}^{b}),\]
where \(\psi^{n}>0\) is a parameter (trade size).
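One step of this execution model can be sketched as follows; the Poisson draws use the intensities \(\lambda^{a,b}(\delta)=Ae^{-k\delta}\) over a step of length dt, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def noise_trader_step(q_n, delta_a, delta_b, A, k, psi_n, dt):
    """Draw executions at the ask/bid over one step of length dt with
    intensities lambda(delta) = A * exp(-k * delta), then update the
    noise trader's inventory q_n by psi_n per net execution."""
    n_a = rng.poisson(A * np.exp(-k * delta_a) * dt)
    n_b = rng.poisson(A * np.exp(-k * delta_b) * dt)
    return q_n + psi_n * (n_a - n_b), n_a, n_b
```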
### Momentum trader
We assume the momentum trader buys and sells an equally-weighted index based on \(s^{1}\) and \(s^{2}\), denoted by \(\mathrm{index}_{t}\), and uses a simple momentum rule with window \(\tau>0\). To manage risk, the momentum trader has a maximum position constraint. We also force a long (resp. short) position when the index moves back above (resp. below) the moving average.
Their inventory \(q_{t}^{m}\) is modeled as
\[q_{t}^{m}=\begin{cases}\min(\max(q_{t-1}^{m},0)+\psi^{m},\ q_{\max}^{m})&\text {if \ index}_{t-1}>\mu_{t-1}\\ \\ \max(\min(q_{t-1}^{m},0)-\psi^{m},\ -q_{\max}^{m})&\text{otherwise},\end{cases}\]
where \(\mu_{t}\) is the \(\tau-\)moving average of \(\mathrm{index}_{t}\), \(\psi^{m}>0\) is the trade size, and \(q_{\max}^{m}>0\), is the maximum absolute inventory.
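This inventory rule translates line by line into code; the function below is a direct, illustrative transcription.

```python
def momentum_inventory(q_prev, index_prev, mu_prev, psi_m, q_max):
    """Momentum trader update of Section 4.2: add psi_m in the trend
    direction, cap the position at +/- q_max, and force its sign to
    match the side of the moving average mu."""
    if index_prev > mu_prev:
        return min(max(q_prev, 0) + psi_m, q_max)   # go / stay long
    return max(min(q_prev, 0) - psi_m, -q_max)      # go / stay short
```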
### Market maker
The market-maker is the liquidity provider. It is therefore modeled as the counterparty to the noise and momentum traders. His inventory is simply \(q_{t}^{mm}=-q_{t}^{n}-q_{t}^{m}\). He adjusts the half-spread \(\delta_{t}^{a,b}\) as a function of k, q, the tick size \(\eta\), the volatility, and a risk aversion parameter \(\gamma\) as described in [1].
Furthermore, the impact of liquidity-taking orders on the mid price is assumed to be linear with elasticity \(\theta>0\), on top of a Brownian noise with volatility \(\nu>0\). Before rounding to tick size \(\eta\), the mid price processes are therefore given by
\[s_{t}^{k}=s_{t-1}^{k}+\theta^{k}(q_{t}^{n}+q_{t}^{m}-q_{t-1}^{n}-q_{t-1}^{m}) +\nu^{k}dZ_{t}^{k},\ \ k=1,2,\]
where \(dZ_{t}^{k}\) are Brownian increments.
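The corresponding mid-price update can be sketched as follows; the \(\sqrt{dt}\) scaling of the Brownian increment and the explicit rounding step are our assumptions about how the recursion is discretized.

```python
import numpy as np

rng = np.random.default_rng(3)

def mid_price_step(s_prev, d_flow, theta, nu, dt, eta):
    """One step of the pre-rounding mid-price recursion of Section 4.3:
    linear impact theta on the net liquidity-taking flow d_flow, plus a
    Brownian increment of volatility nu, rounded to the tick size eta."""
    s = s_prev + theta * d_flow + nu * np.sqrt(dt) * rng.normal()
    return np.round(s / eta) * eta
```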
### Monte Carlo Simulation
We ran a discrete Monte-Carlo simulation of the model above, with time resolution \(\mathrm{dt}=0.5s\), over a duration \(T=2\cdot 10^{7}\mathrm{dt}\), i.e. around \(115\) trading days. We took realistic initial values for \(s^{1}\) and \(s^{2}\), respectively \(1.10\) USD and \(30,000\) USD.
We then calibrated the parameters of our model to our data set. Here are the fitted values: \(\eta=[1e-4,1e-2]\), \(A=[1,1]\), \(\theta=[2.7e-11,2.7e-6]\), \(\nu=[3e-5,2.04]\), \(\gamma=[6.46e-6,3.47e-9]\), \(k=[3466,34.66]\), \(\psi^{n}=[100000,4]\), \(\psi^{m}=[3000000,120]\), \(\tau=500s\), \(q_{\max}^{m}=[6500000,250]\).
Figure 4 shows the empirical correlation function (blue), the agent-based simulated correlation function (orange) and the correlation function of our simple Gaussian model (green) as a function of horizon h, expressed in seconds.
We observe that the cross-correlation function of our agent-based simulation exhibits a good fit to the data up to h = 120 seconds, before dropping and failing to maintain the correlation level of 0.12. We suspect that the impact of agents with longer trading horizons is the cause of the persistent correlation at larger values of \(h\).
As for the Gaussian model, we see a sharp peak around h = \(\tau=66\) seconds, followed by a slow drop to the 0.08 level. Due to its limitations, this model cannot produce a smoother or larger peak in correlation.
The main difference in the choice of parameters between the Gaussian and agent-based models is the value \(h=\tau\), where the peak of \(\rho\) is reached in the Gaussian model. Indeed, for the simulated model, it is necessary to set a higher value for \(\tau\) because of the switch to a discrete model and the resulting effect on the noise trader's orders. We made available the Python code for the simulation at
[https://github.com/RimohtL/EppsEffect](https://github.com/RimohtL/EppsEffect)
|
2309.10173 | GCNIDS: Graph Convolutional Network-Based Intrusion Detection System for
CAN Bus | The Controller Area Network (CAN) bus serves as a standard protocol for
facilitating communication among various electronic control units (ECUs) within
contemporary vehicles. However, it has been demonstrated that the CAN bus is
susceptible to remote attacks, which pose risks to the vehicle's safety and
functionality. To tackle this concern, researchers have introduced intrusion
detection systems (IDSs) to identify and thwart such attacks. In this paper, we
present an innovative approach to intruder detection within the CAN bus,
leveraging Graph Convolutional Network (GCN) techniques as introduced by Zhang,
Tong, Xu, and Maciejewski in 2019. By harnessing the capabilities of deep
learning, we aim to enhance attack detection accuracy while minimizing the
requirement for manual feature engineering. Our experimental findings
substantiate that the proposed GCN-based method surpasses existing IDSs in
terms of accuracy, precision, and recall. Additionally, our approach
demonstrates efficacy in detecting mixed attacks, which are more challenging to
identify than single attacks. Furthermore, it reduces the necessity for
extensive feature engineering and is particularly well-suited for real-time
detection systems. To the best of our knowledge, this represents the pioneering
application of GCN to CAN data for intrusion detection. Our proposed approach
holds significant potential in fortifying the security and safety of modern
vehicles, safeguarding against attacks and preventing them from undermining
vehicle functionality. | Maloy Kumar Devnath | 2023-09-18T21:42:09Z | http://arxiv.org/abs/2309.10173v2 | # GCNIDS: Graph Convolutional Network-Based Intrusion Detection System for CAN Bus
###### Abstract
The Controller Area Network (CAN) bus is a standard protocol used for communication between various electronic control units (ECUs) in modern vehicles. However, it has been demonstrated that the CAN bus is vulnerable to remote attacks, which can compromise the safety and functionality of the vehicle. To address this issue, intrusion detection systems (IDSs) have been proposed to detect and prevent such attacks. In this paper, we propose a novel approach for intrusion detection in the CAN bus using a Graph Convolutional Network (GCN) (Zhang, Tong, Xu, & Maciejewski, 2019). By leveraging the power of deep learning, we can achieve higher accuracy in detecting attacks while minimizing the need for feature engineering. Our experimental results demonstrate that the proposed GCN-based approach outperforms state-of-the-art IDSs in terms of accuracy, precision, and recall. We also show that the GCN-based approach can effectively detect mixed attacks, which are more challenging to detect than single attacks. Moreover, the proposed approach requires less feature engineering and is more suitable for real-time detection systems. To the best of our knowledge, this is the first work that applies the GCN to CAN data for intrusion detection. Our proposed approach can significantly enhance the security and safety of modern vehicles by detecting attacks and preventing them from compromising the functionality of the vehicle.
GCN; CAN Bus Network; DoS Attack; Fuzzy Attack; Replay Attack; Spoofing Attack; Mixed Attack.
## 1 Introduction
The Controller Area Network bus, or CAN bus for short, is a standard for communication between the many modules that make up the electrical system of a vehicle. The CAN bus is used in Tesla vehicles to connect many systems and components Nie, Liu, and Du (2017); Zniti and EL Ouazzani (2023). Some of these systems and components include the powertrain, the battery, the sensors (lidar, mmWave radar), and the displays Devnath et al. (2023); Spencer, Mateus, Torres, Dionisio, and Martins (2021). By sending and receiving digital messages, the CAN bus makes it possible for the various components to communicate with one another. This allows the various systems to collaborate and share information to guarantee that the vehicle functions as effectively as possible. For instance, the powertrain control module can receive information about the level of charge from the battery management system over the CAN bus and then adjust the power output based on this information. In general, the CAN bus is a crucial component of the electrical architecture of a Tesla vehicle, since it enables seamless communication and
coordination between all of the different components. Researchers have already demonstrated remote attacks on crucial electronic control units (ECUs) of vehicles by leveraging controller area networks (CANs) Wei, Ai, Zhao, and Zhang (2023); Zniti and EL Ouazzani (2023). The CAN bus is thus a potential target for attackers, who may exploit vulnerabilities in the system to launch attacks on ECUs. Such attacks can compromise the security and safety of the vehicle, leading to potential harm to passengers and property. Therefore, maintaining the security of the CAN bus is of utmost importance in ensuring the safety and reliability of modern vehicles. Several security measures have been proposed to mitigate the risks associated with CAN bus attacks, including intrusion detection systems (IDSs), encryption techniques, and access control mechanisms. IDSs are particularly important in detecting and preventing attacks, as they can identify malicious activities in the network and alert the appropriate authorities to take action Farag (2017). However, IDSs are often limited in their ability to detect mixed attacks, in which multiple types of attacks are launched simultaneously or sequentially. Therefore, there is a need for more robust and generalizable IDSs that can detect a wide range of attacks in real time. In addition, present IDSs frequently promise to defend against one specific kind of attack, which may leave a system open to countless other types of attacks. A generalizable IDS that can recognize a wide variety of attacks in the shortest possible time has more practical utility than an attack-specific IDS, although this is not an easy goal to achieve. In the existing work Refat, Elkhail, Hafeez, and Malik (2022), the researchers used graph properties as features for applying machine learning techniques. They also detected mixed attacks that combine DoS, Fuzzy, and Spoofing attacks, but the most difficult one, the replay attack, is not considered in their mixed attacks. They must also conduct a lot of feature engineering to make the machine learning model work, which can slow down decisions during real-time identification. It would be highly beneficial for autonomous systems if an approach required less feature engineering while still achieving a high level of accuracy. We therefore investigate a mixed attack that combines DoS, Fuzzy, Spoof, and Replay attacks, because in practice we do not know in advance which attack will take place: an attacker launching a DoS attack will not announce itself as such, and any kind of attack can occur at any moment. Considering attacks on the CAN bus in a mixed fashion (any attack can happen at any time) is therefore more realistic. Our proposed model works better than the state-of-the-art work Refat et al. (2022) for the mixed case (combination of DoS, Fuzzy, and Spoof attacks).
In this project, our aim is to tackle the following research problems with a comprehensive and scholarly approach:
* We want to require less feature engineering than the existing work Refat et al. (2022), so that the method can be readily included in a real-time detection system.
* In order to make the approach more useful in a real-time detection system, one of our goals is to improve the detection accuracy for individual and mixed attacks (a combination of DoS, Fuzzy, Spoof, and Replay attacks), which are not considered in the existing study Refat et al. (2022), without having to change the underlying protocol itself.
* To the best of our knowledge, this is the first work to apply a Graph Convolutional Network using only two graph-based features, the maximum in-degree and the maximum out-degree, on graph-based CAN data Rahman et al. (n.d.); Zhang et al. (2019), and our proposed GCN model has yielded superior results compared to the existing methodology Refat et al. (2022). A minimal sketch of such a pipeline is given after this list.
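The paper does not spell out the exact graph construction, so the sketch below follows a common sliding-window construction for CAN data (consecutive arbitration IDs form directed edges) and attaches each node's (in-degree, out-degree) pair as features — from which global max pooling recovers exactly the maximum in-degree and maximum out-degree — before a small GCN classifier built with PyTorch Geometric. All names, window sizes, and layer widths are illustrative assumptions, not the verified configuration of this work.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_max_pool

def window_to_graph(can_ids, label):
    """Turn one window of CAN arbitration IDs into a directed graph whose
    nodes carry their (in-degree, out-degree) as features."""
    idx = {cid: i for i, cid in enumerate(dict.fromkeys(can_ids))}
    src = torch.tensor([idx[a] for a in can_ids[:-1]])
    dst = torch.tensor([idx[b] for b in can_ids[1:]])
    n = len(idx)
    indeg = torch.zeros(n).index_add_(0, dst, torch.ones_like(dst, dtype=torch.float))
    outdeg = torch.zeros(n).index_add_(0, src, torch.ones_like(src, dtype=torch.float))
    x = torch.stack([indeg, outdeg], dim=1)
    return Data(x=x, edge_index=torch.stack([src, dst]), y=torch.tensor([label]))

class CANGCN(torch.nn.Module):
    """Two GCN layers followed by global max pooling and a linear head;
    pooling the raw degree features yields the two max-degree statistics."""
    def __init__(self, hidden=16, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(2, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.head(global_max_pool(h, batch))  # one logit row per graph
```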
The study is organized as follows - Section 2 discusses recent works on CAN bus security challenges and graph-based anomaly detection. Data description and processing techniques are described in Section 3. The model training mechanism is elaborated in Section 4. Section 5 reports the experimental results and findings, and Section 6 concludes the study.
## 2 Related Works
Koscher et al. were the first to demonstrate that an attacker who gains access to virtually any ECU can reach a broad array of safety-critical systems by directly interfacing with the OBD-II port Koscher et al. (2010). They obtained full control of a wide range of functions by using reverse-engineered code: disabling the brakes, stopping the engine, and controlling other vehicle functions. Checkoway et al. later demonstrated that a vehicle can be accessed remotely Checkoway et al. (2011). While earlier research had shown that vehicles are insecure within their internal networks, they gained access without physical contact, attacking the Bluetooth and infotainment systems of vehicles. Miller and Valasek analyzed the rate of messages for in-vehicle network intrusion detection Miller and Valasek (2014), arguing that it should be possible to detect anomalous messages by analyzing the distribution rate of messages. Valasek and Miller further demonstrated real-world attacks on multiple vehicles using the CAN bus Miller and Valasek (2015); the brakes of a Jeep Cherokee were successfully attacked while it was on a live highway. Moore et al. proposed that the regularity of CAN message frequency can be used to detect anomalies Moore, Bridges, Combs, Starr, and Prowell (2017). A similar detection method was proposed by Gmiden, which, like Moore's detector, relies on the time intervals of CAN messages Gmiden, Gmiden, and Trabelsi (2016). They observed regularity in the signal frequencies and from there hypothesized that accurate detection of regular-frequency signal injection attacks is possible by monitoring the inter-signal wait times of CAN bus traffic. Zhou et al. presented an advanced CAN bus anomaly detection system for intelligent vehicles by integrating DNN technology and a triplet loss network Zhou, Li, and Shen (2019). The methodology first extracts data features as a set of vectors using the deep network and then calculates the similarity between two real-time extracted CAN data sequences. The triplet loss then uses another calibrated data sequence to find the abnormal data. They use only malicious data. Verendel et al. proposed a honeypot security mechanism placed at the wireless gateway, acting as a decoy that simulates the in-vehicle network Verendel, Nilsson, Larson, and Jonsson (2008). Attack information is collected and analyzed to update later versions of the system. The most challenging aspect of deploying a honeypot is that it must be as realistic as possible; the attacker should have no information about it. Wolf et al. proposed an architecture based on firewall signatures for securing vehicular communication gateways Lemke, Paar, and Wolf (2006). It permits only authorized controllers to exchange valid messages. However, they also noted that it cannot fully shield the vehicle network, as most modern vehicles have interfaces that enable access to the entire car system. Marchetti et al. propose the first algorithm based on the analysis of the sequences of messages which flow on the CAN bus Marchetti
and Stabili (2017). Without knowing the message specifications, features can be extracted from the CAN messages. The computational requirements of the proposed algorithm are low enough to be compatible with limited hardware resources. Kang et al. propose an intrusion detection system based on a deep neural network (DNN) to secure the CAN network Kang and Kang (2016). After reducing the high-dimensional CAN packet data, the method learns the underlying statistical properties of normal and attack packets and then identifies attacks after extracting the corresponding features. Graph-based anomaly detection is not a new idea. Paudel et al. use the publicly available graph-based anomaly detection tool (GBAD) (Eberle and Holder 2007) Paudel, Harlan, and Eberle (2019). They demonstrate that GBAD not only focuses on anomalies within an entity but also allows finding anomalies that exist in an entity's relationships. The authors introduce a novel approach for graph-based anomaly detection by adding background knowledge to the evaluation metrics used in a traditional graph-mining approach Velampalli and Eberle (2017). Background knowledge is added in the form of rule coverage, reporting the percentage of the final graph covered by the instances of the substructure. The authors hypothesize that by assigning negative weights to the rule coverage, they can discover anomalous substructures. Velampalli et al. use a graph-based approach that analyzes the data for suspicious employee activities at Kasios Velampalli, Mookiah, and Eberle (2019). Graph-based approaches are powerful for handling rich contextual data and provide a deeper understanding of data due to their ability to discover patterns in databases that are not easily found using traditional query or statistical tools. They focus on graph-based knowledge discovery in structural data to mine for interesting patterns and anomalies. Paudel et al. proposed a sketching approach for graph streams called SNAPSKETCH Paudel and Eberle (n.d.). From a biased random walk, a simplified hashing of the discriminative shingles is generated, which is used for anomaly detection in dynamic graphs. Eberle et al. propose a novel graph-based anomaly detection approach named Graph-based Outlier Detection by representing home IoT traffic as a real-time graph stream Paudel, Muncy, and Eberle (2019). They detect DoS attacks in real time by processing graph data efficiently. Tanksale et al. propose an intrusion detection system based on a Support Vector Machine that can detect anomalous behavior with high accuracy Tanksale (2019). They also give a process for selecting parameters and features. The drawback of their work is that they only consider DoS attacks. Song et al. propose an intrusion detection system to secure the CAN bus from cyber-attacks based on a deep convolutional neural network (DCNN) Song, Woo, and Kim (2020). They build a frame builder that converts the CAN bus data into a grid-like structure so that the data can be fed to the DCNN. The drawback of their work is that they do not consider replay and mixed attacks. Adding computers to automobiles has brought several benefits such as driver comfort, vehicle efficiency, and performance. Along with these benefits, the dependence of automobiles on such devices has increased the potential attacks and threats to mankind. Many authors agree that the CAN network lacks security.
Some critics think that it was not built with security in mind: it does not provide any security against malicious attacks Ueda et al. (2015), Studnia et al. (2013), Carsten, Andel, Yampolskiy, McDonald, and Russ (2015), Boudguiga, Klaudel, Boulanger, and Chiron (2016), Staggs (2013) and Hoppe, Kiltz, and Dittmann (2011). Hiroshi et al. note that in in-vehicle networks, message spoofing is considered one of the main threats, as it makes it possible to take control of a critical safety system or display a falsified value to a vehicle's maintainer Ueda et al. (2015). Since the network is not capable of distinguishing between a legitimate ECU and a malicious one,
replay attacks can be mounted on the CAN network. As an unauthorized device can be easily connected to the CAN bus, it is possible to transmit spoofed messages Ueda et al. (2015), Foster and Koscher (2015), Carsten et al. (2015), Staggs (2013) and Hoppe et al. (2011). Moreover, the CAN network is also not free from Denial of Service and fuzzy attacks. DoS attacks can be performed by sending high-priority messages again and again, asserting successive dominant bits on the bus Staggs (2013), Foster and Koscher (2015), Boudguiga et al. (2016) and Hoppe et al. (2011). An attacker can also mount fuzzy attacks that inject messages with randomly spoofed identifiers carrying arbitrary data. As a result, receiving many such functional messages can cause unintended vehicle behaviors, which may endanger human life Lee, Jeong, and Kim (2017).
Machine learning and deep learning algorithms have been applied in a wide range of applications Anwar et al. (2023); Devanth et al. (2023); Protikuzzaman, Baowaly, Devanth, and Singh (2020); Sarker (2021). Building on such models, various approaches have been proposed in recent years to detect attacks on the Controller Area Network (CAN) bus, a crucial component in modern vehicles, alongside general anomaly detection Paudel and Eberle (n.d.); Paudel et al. (2019); Velampalli et al. (2019). Some of these methods rely on feature engineering Islam, Devanth, Samad, and Al Kadry (2022); Refat et al. (2022), while others utilize deep learning models to classify CAN bus attacks Song et al. (2020); some consider only DoS attacks Tanksale (2019). However, there remains room for improvement in terms of accuracy and reducing the reliance on feature engineering. In this study, we present a graph-based approach that incorporates only two features, namely, the maximum in-degree and maximum out-degree of nodes in the graph. Our approach leverages Graph Convolutional Networks (GCN) to achieve reliable accuracy in detecting mixed attacks on the CAN bus, a problem that has not been adequately addressed by previous research. We demonstrate the effectiveness of our method using real-life CAN bus data and compare our results with those obtained from other state-of-the-art techniques. Our findings show that our approach outperforms existing methods in terms of accuracy and reduces the need for extensive feature engineering.
## 3 Dataset
### Dataset Description
The OTIDS dataset is a publicly available dataset of CAN bus traffic designed specifically for intrusion detection research. The dataset was collected by researchers at the Hacking and Countermeasure Research Lab (HCRL) of Korea University Lee et al. (2017). It contains CAN bus traffic captured from a real-world environment, specifically a moving vehicle. The data was collected using a CAN bus logger in real time while the vehicle was in operation, and was preprocessed to remove any duplicate or corrupt packets. The dataset contains 5 sessions of CAN bus traffic, each with a duration of approximately 1 hour. Each session contains thousands of CAN bus packets, with a total of over 8 million packets across all sessions. The packets are labeled as either normal or anomalous based on their behavior, making the dataset suitable for both supervised classification and anomaly detection research. The OTIDS dataset also includes metadata, such as the timestamp, ID, and data field of each packet. This metadata can be used to extract features and
train machine-learning models for intrusion detection. Overall, the OTIDS dataset is a valuable resource for researchers developing intrusion detection systems for in-vehicle networks, particularly those based on the CAN bus.
The OTIDS dataset contains various types of attacks on the CAN bus network, with different message sizes. Here are some examples:
* Denial of Service (DoS) attack: In this attack, the attacker floods the network with a large number of messages, causing the network to become congested and unresponsive. The messages in this attack are typically small in size, usually less than 8 bytes Lee et al. (2017).
* Spoofing attack: In this attack, the attacker sends messages with forged source addresses, making it appear as if the messages are coming from a legitimate source. The messages in this attack are typically small in size, similar to normal messages on the network Lee et al. (2017).
* Fuzzing attack: In this attack, the attacker sends messages with intentionally malformed data to try to crash or exploit a vulnerable device on the network. The messages in this attack can vary in size, depending on the specific payload used Lee et al. (2017).
* Replay attack: In a replay attack, the attacker captures legitimate messages previously transmitted on the network and retransmits them later, so that the injected traffic appears to come from a trusted source. Because replayed frames are copies of genuine messages, they are particularly difficult to distinguish from normal traffic Lee et al. (2017).
Overall, the message sizes in the OTIDS dataset vary widely depending on the type of attack being performed. It's important to note that the message sizes alone may not be sufficient for detecting attacks, and other features such as message frequency and content may also need to be considered. For our case, we have considered DoS, Fuzzy, Spoof, Replay, and a mix of all four attacks.
The number of messages considered in our experiment is shown in Table 1.
### CAN Data Frame:
The CAN bus was developed by Robert Bosch in 1986 Van Herrewege, Singelee, and Verbauwhede (2011). It is a broadcast, message-based protocol that requires no host computer. It maintains serial, half-duplex, asynchronous communication over two differential signal lines. Originally designed for automobiles, it was later adopted in other domains. The CAN data frame has five fields: the arbitration, control, data, CRC, and ACK fields, framed by a start-of-frame bit and an end-of-frame sequence. We now go through the components of the data frame one by one; they are summarized in Table 2.
\begin{table}
\begin{tabular}{c c} \hline Attack type & Number of messages \\ \hline DoS attack & 656,579 \\ Fuzzy attack & 591,990 \\ Spoofing attack & 500,900 \\ Replay attack & 995,472 \\ Attack free state & 2,369,868 \\ \hline \end{tabular}
\end{table}
Table 1: Number of attacked and attack-free messages.
* SOF- Start of frame (1 bit). It indicates the start of a new frame in the CAN network HPL (2002).
* Arbitration field- It contains an 11-bit message identifier and a Remote Transmission Request (RTR) bit. The identifier sets the priority of the data frame during the arbitration process HPL (2002). The RTR bit (1 bit) defines whether the frame is a data frame or a remote frame.
* Control field- The IDE (identifier extension) bit indicates the frame format: a dominant IDE bit indicates a standard 11-bit identifier, while a recessive IDE bit indicates an extended 29-bit identifier HPL (2002). The control field also contains a 4-bit data length code (DLC) that defines the length of the data in the data field.
* Data field- It carries at most 8 bytes of user-defined data; the 4 DLC bits control how many bytes of data are present in the data field HPL (2002).
* CRC field- The Cyclic Redundancy Check field consists of 15 bits for error detection during transmission. The CRC is computed by the sender before sending the frame; after reception, the receiver computes it again and generates an error frame if the CRC does not match HPL (2002).
* ACK field- The Acknowledge field consists of two bits, the ACK slot and the ACK delimiter HPL (2002). After receiving a valid message, a node overwrites the recessive ACK slot with a dominant bit.
* End of frame- The CAN frame is terminated by seven recessive bits HPL (2002).
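For readers who prefer code, the following minimal Python sketch mirrors the frame fields of Table 2 as a data structure; the class and field names are our own and this is an illustration, not part of the CAN standard.

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    """Minimal mirror of the standard CAN data frame fields (cf. Table 2)."""
    identifier: int   # 11-bit arbitration identifier (sets priority)
    rtr: bool         # Remote Transmission Request bit
    ide: bool         # identifier extension bit (True => 29-bit identifier)
    dlc: int          # data length code, 0..8
    data: bytes       # 0-8 bytes of payload
    crc: int          # 15-bit cyclic redundancy check

    def __post_init__(self):
        # The DLC must match the actual payload length.
        assert 0 <= self.dlc <= 8 and len(self.data) == self.dlc
```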
### Dataset Processing
The proposed IDS aims to enhance the security of the CAN bus communication system by using deep learning analysis to detect anomalies. We proceed through several steps:
* The first step is to transform the CAN bus messages into a more meaningful graph structure using graph theory Islam et al. (2022); Refat et al. (2022). This
\begin{table}
\begin{tabular}{c c c} \hline Field & Size & Description \\ \hline Start of frame & 1 bit & Indicates the beginning of a new frame \\ Arbitration & 12 bits & Identifies the message priority and sender ID \\ Control & 6 bits & Contains the message control information \\ Data & 0-8 bytes & Carries the actual message data \\ CRC & 16 bits & Provides error detection for the message \\ Ack & 2 bits & Indicates message received successfully or not \\ End of frame & 7 bits & Indicates the end of the frame \\ \hline \end{tabular}
\end{table}
Table 2: CAN bus dataframe
\begin{table}
\begin{tabular}{c c c c} \hline Timestamp & Arbitration ID & DLC & Data \\ \hline
1478198376 & 0316 & 8 & 05 21 68 09 21 21 00 6f \\
1478198376 & 018f & 8 & fe 5b 00 00 00 3c 00 00 \\
1478198376 & 0260 & 8 & 19 21 22 30 08 8e 6d 3a \\
1478198376 & 02a0 & 8 & 64 00 9a 1d 97 02 bd 00 \\
1478198376 & 0329 & 8 & 40 bb 7f 14 11 20 00 14 \\ \hline \end{tabular}
\end{table}
Table 3: Raw CAN bus data
is achieved by dividing the CAN bus messages (an excerpt is shown in Table 3) into a number of windows and deriving the relationships among the arbitration IDs within each window. Graphs are a popular way to represent relationships among data that are too complicated to express using simple text or other data structures. Since graph theory can represent complex relationships in a very simple manner, the proposed IDS leverages it to represent CAN bus data windows in a meaningful structure.
* The algorithm constructs a graph for each window of 200 messages and returns the list of constructed graphs Islam et al. (2022); Refat et al. (2022). It initializes all necessary variables, computes the total number of messages in the given CAN dataset, and then iterates over every CAN bus message, extracting the adjacent CAN messages and their corresponding IDs. An adjacency list is built from the arbitration IDs of each pair of sequential CAN messages; a minimal sketch of this construction is given below.
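The sketch below illustrates this windowed graph construction in Python. It assumes the raw CAN log has already been parsed into a time-ordered list of arbitration IDs (as in Table 3); the window size of 200 follows the text, while the function and variable names are our own.

```python
import networkx as nx

def build_window_graphs(arbitration_ids, window_size=200):
    """Build one directed graph per window of consecutive CAN messages.

    Nodes are arbitration IDs; an edge (u, v) is added whenever a message
    with ID v immediately follows a message with ID u inside the window.
    """
    graphs = []
    for start in range(0, len(arbitration_ids) - window_size + 1, window_size):
        window = arbitration_ids[start:start + window_size]
        g = nx.DiGraph()
        for u, v in zip(window, window[1:]):
            g.add_edge(u, v)
        graphs.append(g)
    return graphs

# Toy example using the IDs of Table 3, repeated to fill one window:
ids = ["0316", "018f", "0260", "02a0", "0329"] * 40
graphs = build_window_graphs(ids)
print(len(graphs), graphs[0].number_of_nodes(), graphs[0].number_of_edges())
```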
## 4 Methodology
The proposed intrusion detection methodology utilizes a graph-based approach that incorporates statistical analysis for detecting anomalies in the Controller Area Network (CAN) bus communication system. The algorithm consists of several steps, starting with the construction of a graph from CAN bus messages using the following equation:
\[G=(V,E) \tag{1}\]
where \(G\) represents the constructed graph, \(V\) represents the nodes, and \(E\) represents the edges. Each node \(v_{i}\) in the graph represents the arbitration ID of a CAN bus message, and each edge \(e_{i,j}\) represents a sequential relationship between two adjacent CAN messages. The attacked and attack-free graphs are shown in Figure 1. In Figure 1(b), the red node indicates the attacked message id.
The graph-based features are extracted from the constructed graph; we take the in-degree and out-degree of each node as its features. The GCN model takes the node features as input and propagates them through the graph structure to obtain the final embeddings Zhang et al. (2019). The node features are represented as a feature matrix \(X\) with dimensions \(n\times f\), where \(n\) is the number of nodes in the graph and \(f\) is the number of features per node (here \(f=2\)).
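As a concrete illustration (function and variable names are ours), the \(n\times 2\) feature matrix for one window graph can be assembled as follows:

```python
import torch

def node_feature_matrix(g):
    # One row per node: [in-degree, out-degree], matching f = 2.
    nodes = sorted(g.nodes())
    return torch.tensor(
        [[g.in_degree(n), g.out_degree(n)] for n in nodes], dtype=torch.float
    )
```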
The basic graph convolution operation can be defined as:
\[H=f(A,X)=\sigma(AXW) \tag{2}\]
where \(A\) is the adjacency matrix of the graph, \(X\) is the node feature matrix, \(W\) is the weight matrix, \(\sigma\) is the activation function, and \(H\) is the new node feature matrix after convolution.
Now, we can define the GCN as a series of graph convolution layers:
\[H^{(0)}=X \tag{3}\]
\[H^{(l+1)}=f(A,H^{(l)})=\sigma(AH^{(l)}W^{(l)}) \tag{4}\]
for \(l=0,1,\ldots,L-1\), where \(L\) is the number of layers.
Here, \(H^{(0)}\) is the initial node feature matrix, which is typically set to be the input feature matrix \(X\). \(H^{(l+1)}\) is the output of the \(l\)-th layer of the GCN, and it is computed by applying the graph convolution operation to the input feature matrix \(H^{(l)}\). The weight matrix \(W^{(l)}\) in each layer is learned during the training process, and it determines the linear transformation applied to the node features in that layer. The activation function \(\sigma\) is applied element-wise to the output of the graph convolution operation, and it introduces non-linearity to the model.
Overall, the GCN equation can be seen as a way to learn node representations that capture both the local and global information of the graph structure, by applying a series of graph convolution operations to the input node features.
The method defines a GCN model with two graph convolution layers and a readout layer followed by a final classifier. The model takes as input the graph structure of CAN messages and extracts features from it to classify the input as anomalous or normal. The loss function used in the training of the GCN model is binary cross-entropy loss shown in the following Equation.
\[L=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}\log(\hat{y_{i}})+(1-y_{i})\log(1-\hat{y_{i} })] \tag{5}\]
where:
* \(L\) is the binary cross-entropy loss.
* \(N\) is the total number of samples.
* \(y_{i}\) is the true label of the \(i\)-th sample (either 0 or 1).
* \(\hat{y_{i}}\) is the predicted probability of the \(i\)-th sample being in class 1 (i.e., the output of the model for the \(i\)-th sample)
Figure 1: (a) Attack-free graph where all the attack-free nodes (message ids) are shown in green color. (b)Attacked graph (DoS-attacked graphs) where attacked node (message id) is shown in red color.
* log is the natural logarithm function.
The loss function is minimized during the training process to optimize the model's parameters. In the forward pass of the model, the input data (represented as a graph) is passed through the two graph convolutional layers. Each graph convolution layer takes as input the node features and the graph structure, represented as edge indices, and applies a linear transformation to generate new node embeddings. The first layer takes the initial node features (in this case, each node is an arbitration ID) and maps them to a hidden dimension of size 8. The second layer takes the output of the first layer as input and generates a new set of node embeddings with the same hidden dimension.
After the graph convolution layers, the model applies a readout layer to obtain a single feature vector representing the entire graph. This is done by computing the mean of the node embeddings across all nodes in the graph. Finally, a linear classifier with two output nodes is used to classify the input graph as either normal or anomalous. A dropout layer with probability 0.5 is applied after the readout layer to reduce overfitting during training. The model's hyperparameters are shown in Table 4.
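A minimal PyTorch Geometric sketch of this architecture is shown below. It follows the hyperparameters of Table 4 (two GCNConv layers with hidden size 8, mean readout, dropout of 0.5, and a linear classifier with two outputs); the class name and the choice of ReLU activations are our own assumptions, and the two-logit output can be trained with a cross-entropy objective equivalent to the binary cross-entropy of Eq. (5).

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GCNIDS(torch.nn.Module):
    """Two GCN layers, mean readout, dropout, and a 2-way linear classifier."""

    def __init__(self, in_channels=2, hidden=8, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)        # 2 -> 8
        self.conv2 = GCNConv(hidden, hidden)             # 8 -> 8
        self.lin = torch.nn.Linear(hidden, num_classes)  # 8 -> 2

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)                   # graph-level readout
        x = F.dropout(x, p=0.5, training=self.training)
        return self.lin(x)                               # logits: normal / attacked
```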
## 5 Systematic Study
To evaluate our proposed methodology, we used a real CAN dataset and performed the analysis on an NVIDIA GeForce RTX 3060 graphics card using Python. Firstly, we collected the raw CAN data, which is described in this pa
\begin{table}
\begin{tabular}{c c} \hline Hyperparameter & Value \\ \hline GCNConv1 input channels & 2 \\ GCNConv1 output channels & 8 \\ GCNConv2 input channels & 8 \\ GCNConv2 output channels & 8 \\ Linear input size & 8 \\ Linear output size & 2 \\ Dropout probability & 0.5 \\ \hline \end{tabular}
\end{table}
Table 4: Hyperparameter of the proposed GCN model
Figure 2: (a)Dos attacked graphs detection: accuracy=0.9917 and wrong classification=0.0083 (b)Fuzzy attacked graphs detection: accuracy=0.9989 and wrong classification=0.0011 (c)Spoof attacked graphs detection: accuracy=0.9929 and wrong classification=0.0071
per Lee et al. (2017). Then, we converted the raw data into graph data, as graphs can capture relational information. The conversion of CAN data into graph data is based on a time window, since CAN data has a fixed message injection rate Islam et al. (2022); Refat et al. (2022). We converted the CAN data into five mutually independent graph datasets: attack-free graphs, DoS-attacked graphs, fuzzy-attacked graphs, spoofing-attacked graphs, and replay-attacked graphs. Each dataset is a collection of many graphs; for example, the attack-free dataset consists of 18,565 graphs. The attack-free dataset contains no attacked graphs, whereas the other datasets combine attack-free and attacked graphs.
For detecting DoS-attacked graphs, we merge the attack-free and attacked graphs, since in a real CAN scenario attacked messages do not arrive separately from attack-free messages. We likewise merge attack-free and attacked graphs for detecting fuzzy-, spoofing-, and replay-attacked graphs. To model a real-life CAN bus, we must assume that any kind of attack may occur at any time; we therefore also merge all attacked and attack-free graphs to detect any kind of attack. This setting is named mixed attacks in the rest of the paper.
We applied a GCN, a class of deep learning methods Zhang et al. (2019), to our graph dataset. Deep learning methods generally take more training time than other methods, so we tried to speed up the computation. Since our individual graphs are small and we want to guarantee full GPU utilization, it is a good idea to batch the graphs before inputting them into a graph neural network. In the image domain, this is typically done by rescaling or padding each example into a set of equally-sized shapes and grouping the examples in an additional dimension Chua (1998). This is not feasible for our purposes, as it would result in a lot of unnecessary memory consumption. PyTorch Geometric opts for another approach to achieve parallelization across numerous examples, stacking adjacency matrices in a diagonal fashion Fey and Lenssen (2019); Foster and Koscher (2015). This creates one big graph that holds multiple isolated subgraphs, while node and target features are concatenated in the node dimension.
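In practice, this diagonal batching is handled transparently by PyTorch Geometric's DataLoader, as in the sketch below; the conversion from the window graphs to Data objects and the batch size are our own illustrative choices (GCNIDS refers to the model sketched in Section 4).

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

def to_pyg(g, label):
    """Convert one networkx window graph into a PyTorch Geometric Data object."""
    idx = {n: i for i, n in enumerate(sorted(g.nodes()))}
    edge_index = torch.tensor(
        [[idx[u] for u, _ in g.edges()], [idx[v] for _, v in g.edges()]],
        dtype=torch.long,
    )
    x = torch.tensor(
        [[g.in_degree(n), g.out_degree(n)] for n in sorted(g.nodes())],
        dtype=torch.float,
    )
    return Data(x=x, edge_index=edge_index, y=torch.tensor([label]))

dataset = [to_pyg(g, 0) for g in graphs]   # label 0 = attack-free (illustrative)
loader = DataLoader(dataset, batch_size=64, shuffle=True)
model = GCNIDS()
for batch in loader:                        # one big block-diagonal graph per batch
    logits = model(batch.x, batch.edge_index, batch.batch)
```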
Besides the reduced training time, this batching procedure has crucial advantages over alternatives because GCN operators rely on a message-passing scheme: messages are not exchanged between nodes that belong to distinct graphs, so the operators do not need to be modified. There is also no computational or memory overhead, since the adjacency matrices are stored sparsely, holding only the non-zero entries, i.e., the edges. We now analyze the confusion matrices in Figure 2. First, we concentrate on the DoS-attacked
Figure 3: (a) Replay attacked graphs detection: accuracy=0.9343 and wrong classification=0.0657 (b) Mixed attacked graphs (DOS, Fuzzy, Spoofing) detection: accuracy=0.9892 and wrong classification=0.0108 (c) Mixed attacked graphs (DOS, Fuzzy, Spoofing, Replay) detection: accuracy=0.9835 and wrong classification=0.0165
graphs, shown in Figure 2(a). Here, we see that our proposed methodology wrongly classifies 87 attacked graphs as attack-free. Our accuracy for detecting DoS-attacked graphs is quite satisfactory, at 99%. For fuzzy-attacked graphs, the method also performs well, as Figure 2(b) illustrates: only 0.11% of the graphs are misclassified, amounting to just 12 errors over all test cases. For spoof-attacked graphs, Figure 2(c) shows that 86 of the overall 3130 attacked graphs are incorrectly classified as attack-free. Replay-attacked graphs are among the hardest to detect, but Figure 3(a) shows that our proposed methodology works quite well here too, misclassifying only 38 attack-free graphs in the test cases. As noted above, mixed attacks are what occur in real life, and detecting them is a strength of our approach. For mixed attacks combining DoS, fuzzy, and spoofing, the confusion matrix in Figure 3(b) shows a good accuracy of 99%, which is better than the state of the art Refat et al. (2022). Beyond the mixed attacks considered in the state of the art Refat et al. (2022), we also analyze mixed attacks that additionally include replay attacks, illustrated in Figure 3(c). We see that our proposed methodology is strong enough to detect any of these attacks.
From Table 5, we can see that our proposed methodology performs better than the SVM classifier Tanksale (2019). The DCNN of Song et al. (2020) considers a couple of attacks but does not consider mixed attacks, whereas our proposed methodology handles mixed attacks, which may occur in real life. Our methodology even outperforms the state-of-the-art graph-based features (GBF) Refat et al. (2022) when considering mixed attacks (a combination of DoS, fuzzy, and spoofing attacks).
## 6 Discussion
We now address the issues involved in developing and deploying a deep learning model that utilizes raw CAN data. These include challenges in real-time implementation with labeled and unlabeled data, and the system's potential impact on daily life.
* When we apply the GCN model, we train it on a labeled dataset. If an unknown attack occurs, the model will not capture it; this is a limitation of our model. Future work should build the model in an unsupervised manner, so that it can detect abnormal message behavior without labels.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline Attack Type & \multicolumn{2}{c}{Tanksale (2019)} & Song et al. (2020) & Refat et al. (2022) & GCNIDS (Proposed) \\ \hline Evaluation metrics & Pr & Re & F1 & Pr & Re & F1 & Pr & Re & F1 & Pr & Re & F1 \\ \hline DOS & 0.46 & 0.89 & 0.61 & 1 & 1 & 1 & 1 & 1 & 0.99 & 1 & 1 \\ \hline Fuzzy & - & - & - & 1 & 1 & 1 & 0.99 & 0.99 & 1 & 1 & 1 \\ \hline Spoofing & - & - & - & 1 & 1 & 1 & 0.98 & 0.94 & 0.96 & 0.99 & 1 & 1 \\ \hline Replay & - & - & - & - & - & - & - & - & 0.99 & 0.88 & 0.93 \\ \hline Mixed (D, F, S) & - & - & - & - & - & - & 0.99 & 0.96 & 0.97 & 1 & 0.99 & 0.99 \\ \hline Mixed (D, F, S, R) & - & - & - & - & - & - & - & - & 0.98 & 1 & 0.99 \\ \hline \multicolumn{10}{l}{D means DoS, F means Fuzzy, S means Spoofing, R means Replay} \\ \end{tabular}
\end{table}
Table 5: When considering DoS and mixed attacks, the proposed methodology has better precision, recall, and F1 scores, respectively, than the state-of-the-art SVM classifier Tanksale (2019) and GBF Refat et al. (2022).
* Integrating the intrusion detection pipeline into a real-time system can be straightforward in principle, but there are various implementation challenges to consider, such as power consumption and unwanted messages from surrounding vehicles. These challenges present concrete obstacles to achieving a real-time implementation.
* One limitation of our study is that we have used a single real-world data set for our experiments. It may be valuable to validate our methodology on other data sets to see how it generalizes to different CAN bus scenarios.
* In addition, while GCN is a powerful machine-learning technique, it can be computationally expensive, particularly for large data sets. In this study, we have tried to speed up the computation, but it is still an area for improvement. Future work may explore ways to further optimize the GCN-based approach to improve its scalability and reduce computation time.
## 7 Conclusion
In conclusion, we have proposed a novel GCN-based model, a class of deep learning methods, for detecting various types of attacks on CAN networks by converting raw CAN data into graph data. Our experiments on a real CAN dataset show that our proposed methodology outperforms the state-of-the-art SVM classifier Tanksale (2019) and GBF Refat et al. (2022) in terms of precision, recall, and F1 scores. We have also analyzed the performance of our methodology on other types of attacks, including fuzzy, spoofing, replay, and mixed attacks. Our methodology provides an effective and efficient solution for detecting different types of attacks on CAN networks, which is crucial for ensuring the security and reliability of automotive systems. Further research may improve the real-time applicability of our methodology and explore its application in domains beyond automotive systems.
## Acknowledgement
We want to thank Dr. Busime Ngambeki for her collaboration on this project.
|
2309.03669 | Hyperfine structure and isotope shifts of the $^1P_1 \leftarrow{}
^{1}S_0$ transition in atomic zinc | We report absolute frequency, isotope shift, radiative lifetime and hyperfine
structure measurements of the $^1P_1 \leftarrow{} ^{1}S_0$ (213.8 nm)
transition in Zn I using a cryogenic buffer gas beam. Laser-induced
fluorescence is collected with two orthogonally oriented detectors to take
advantage of differences in the emission pattern of the isotopes. This enables
clear distinction between isotopes whose resonances are otherwise unresolved,
and a measurement of the fermion hyperfine structure parameters,
$A(^{67}$Zn)$=20(2)$ MHz and $B(^{67}$Zn)$=10(5)$ MHz. We reference our
frequency measurements to an ultralow expansion cavity and achieve an
uncertainty at the level of 1 MHz, about 1 percent of the natural linewidth of
the transition. | David Röser, J. Eduardo Padilla-Castillo, Ben Ohayon, Russell Thomas, Stefan Truppe, Gerard Meijer, Simon Stellmer, Sid C. Wright | 2023-09-07T12:18:02Z | http://arxiv.org/abs/2309.03669v1 | Hyperfine structure and isotope shifts of the \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) transition in atomic zinc
###### Abstract
We report absolute frequency, isotope shift, radiative lifetime and hyperfine structure measurements of the \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) (213.8 nm) transition in Zn I using a cryogenic buffer gas beam. Laser-induced fluorescence is collected with two orthogonally oriented detectors to take advantage of differences in the emission pattern of the isotopes. This enables clear distinction between isotopes whose resonances are otherwise unresolved, and a measurement of the fermion hyperfine structure parameters, \(A(^{67}\mathrm{Zn})\)= 20(2) MHz and \(B(^{67}\mathrm{Zn})\)= 10(5) MHz. We reference our frequency measurements to an ultralow expansion cavity and achieve an uncertainty at the level of 1 MHz, about 1 percent of the natural linewidth of the transition.
+
Footnote †: These authors contributed equally to this work.
+
Footnote †: These authors contributed equally to this work.
## I Introduction
The alkaline-earth-metal (AEM) elements are identified by two valence electrons and a \(J=0\) electronic ground state. These two features give rise to a number of unique properties. Firstly, the level structure decomposes into singlet and triplet states, with broad transitions within each system and narrow intercombination lines between them. Just as in the helium atom, the lowest triplet states are metastable. Second, states with zero electronic spin are free of hyperfine structure. In addition, bosonic isotopes have even proton and even neutron numbers, leading to zero nuclear spin and absence of hyperfine structure in all electronic states. These properties enable a wealth of applications, including optical clocks [1], precision metrology [2], quantum computing [3; 4; 5], and Rydberg physics [6; 7]. In recent years, AEM elements have played a major role in the search for yet undiscovered scalar gauge bosons through high-precision isotope shift spectroscopy [8], and various studies with neutral AEM atoms have been presented on this topic [9; 10; 11; 12; 13; 14].
Alongside the AEM elements, and sharing these attractive properties, are the so-called Group-IIB elements zinc, cadmium, and mercury. The broad singlet \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) transitions for these elements lie deep in the ultraviolet range of the spectrum, with natural linewidths in excess of 100 MHz. They possess a multitude of bosonic and fermionic isotopes, with the latter showing hyperfine structure. The resonance lines of the different isotopes are convoluted and often cannot be resolved in conventional Doppler-free spectroscopy. While Cd and Hg have already been employed for the development of optical clocks [15; 16; 17], there is very modest work towards this application with zinc thus far [18], mainly limited by the available laser technology. The wider chain of radioactive zinc isotopes is of interest for nuclear structure studies. For this reason, their isotope shifts have been measured in the triplet manifold [19; 20], with new experiments ongoing [21; 22].
Here, we present high-resolution spectroscopy of the \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) transition near 213.8 nm in neutral zinc. Experiments were conducted over a two week campaign in which an ultralow expansion cavity and required deep ultraviolet optics (University of Bonn) were transported to an atomic beam machine at the Fritz Haber Institute in Berlin. Our measurements are based on laser-induced fluorescence of a cryogenic beam of atoms extracted from a helium buffer gas cell. We employ a two-detector method to clearly separate the contributions from the spin-zero bosonic isotopes from the spin-5/2 fermionic isotope, enabling use of natural abundance Zn. Isotope shifts and hyperfine interaction constants are determined with an uncertainty of order 1 MHz. This method provides a blueprint for measurements of hyperfine structure in strong optical transitions, and a convenient and direct way to measure the true collection solid angle of a fluorescence detector. Our approach can be readily adapted to other species with several naturally occurring isotopes, e.g., Sn, Ni, and applied in the study of radioactive nuclei.
## II Experimental setup
Figure 1(a) illustrates our experimental apparatus and laser system. We use a cryogenic buffer gas source to produce a cold, slow atomic beam of zinc. The atoms are produced by laser ablation of a solid Zn target (natural abundance), are cooled in the cell by collisions with a He gas at a temperature of 3 K, and exit the cell with a typical velocity of 140 m/s along the \(z\)-axis. The ablation laser is fired at a rate of 1 Hz which sets the repetition
rate for the experiments. At a distance of 70 cm downstream of the cell exit, we excite the \({}^{1}P_{1}\)\(\leftarrow\)\({}^{1}S_{0}\) transition with a single probe laser beam near 213.8 nm, which intersects the atomic beam perpendicularly. A \(2\times 2\) mm square aperture restricts the range of transverse velocities in the atomic beam to below 1 m/s.
Continuous wave laser light is produced by twice frequency doubling the infrared light of a Ti:Sa laser near 855.2 nm. Each frequency doubling stage consists of an enhancement resonator containing a nonlinear crystal; to reach 213.8 nm from 427.6 nm we use beta barium borate (\(\beta\)-BBO). The 855.2 nm light from the Ti:Sa laser is frequency stabilised either by referencing to a commercial wavemeter (High Finesse WS8-10 calibrated with a temperature-stabilised HeNe laser), or via an ultralow expansion (ULE) optical cavity (Menlo ORC) with a measured free spectral range of 2.992184(30) GHz. We also record the intermediate 427.6 nm light on the wavemeter, since this light is immune from parasitic multi-mode content at the fundamental wavelength. The wavemeter option offers an absolute accuracy of about 30 MHz when measuring the 427.6 nm light, a resolution of about 1 MHz, and enables continuous scanning over the entire spectrum. Scanning via the reference cavity reduces the linewidth of the laser, and improves the linearity of the frequency axis. To do this, light at the fundamental wavelength of the Ti:Sa laser is coupled into a fiber phase modulator (EOM, Jenoptik) driven with two RF frequencies, \(\nu_{PDH}=18\) MHz and \(\nu_{scan}\sim\)1 GHz. The phase-modulated light reflected from the cavity is collected on a fast photodiode, demodulated at the frequency \(\nu_{PDH}\) which produces a Pound-Drever-Hall (PDH) signal with sharp zero crossing points when the laser frequency is at a cavity resonance \(\nu_{c}\), or at \(\nu_{c}\pm\nu_{scan}\). We lock the laser to the latter, and scan the laser frequency by varying \(\nu_{scan}\). A camera is used to monitor the light transmitted through the cavity and ensure locking to the TEM\({}_{00}\) mode. This locking scheme enables continuous scanning of the Ti:Sa laser frequency up to one half of the cavity free spectral range, corresponding to 6 GHz at the 213.8 nm detection wavelength.
At the atomic beam machine, we purify the laser polarisation with a polarising beam cube, and control its linear polarisation angle relative to the direction of the atomic beam with a \(\lambda/2\) plate. The probe light propagates along the \(x\)-axis and has a peak intensity \(I=10^{-3}I_{sat}\), where \(I_{sat}=\pi hc\Gamma/(3\lambda^{3})=1.5\) W/cm\({}^{2}\) is the two-level saturation intensity of the transition. We estimate that an atom travelling through the maximum intensity of the excitation light scatters five photons at resonance. The resulting laser-induced fluorescence (LIF) is collected and imaged onto two photomultiplier tubes (PMTs), whose photocurrents are delivered to separate transimpedance amplifiers and recorded as time-of-flight traces. The two PMTs are oriented to collect fluorescence emitted parallel and perpendicular to the direction of the atomic beam, as shown in figure 1(a). The angle \(\theta_{i}\) between the laser polarisation and the direction of detector \(i\), illustrated in the inset to the figure, determines the portion of the fluorescence emission pattern collected by the two detectors. This enables discriminating between fermionic and bosonic isotopes [13; 23; 24]. We record the laser power after the machine with a calibrated optical power meter and compensate for drifts in the probe intensity over a scan (typically \(5-10\%\)).
Figure 1(b) shows a typical time-of-flight trace observed in detector 1 when exciting the \({}^{64}\)Zn resonance. The signal comprises an initial intense peak from the buffer gas cooled atomic beam at roughly 5 ms, followed by an extended tail which appears for several tens of ms later. The extended tail consists of thermalised Zn atoms which leave the cell and collide with the vacuum walls; it persists even when direct line of sight from the source to the detector is blocked, and leads to a broad background signal in the fluorescence spectra, whose Doppler width is consistent with the laboratory temperature. Example spectra showing the two signal components are shown in Fig. 1(c).
## III Analysis of spectral lineshapes
In the following, we discuss the lineshape models used for the fermionic and bosonic isotopes, which are important in fitting the experimental spectra. We hereon use \(\nu_{L}\) to label the laser frequency and assume that the laser linewidth is much less than the natural linewidth of the transition.
_Boson lineshape -_ The bosonic isotopes, all with nuclear spin \(I_{N}=0\), exhibit no hyperfine structure and the total angular momenta of the ground and excited states are \(F=0\) and \(F^{\prime}=1\) respectively. The resonance line of a boson \(b\) can be simply described with the line function,
\[S^{(b)}=\frac{\Gamma^{2}/4}{\Gamma^{2}/4+\Delta_{b}^{2}}[1-P_{2}(\cos\theta)g( \theta_{C})]\ \ . \tag{1}\]
Here, \(\Gamma/(2\pi)\) is the Lorentzian linewidth of the transition, \(\Delta_{b}/(2\pi)=\nu_{L}-\nu_{b}\) is the detuning of the laser from the resonance frequency \(\nu_{b}\), and \(P_{2}(\cos\theta)=\frac{1}{2}(3\cos^{2}\theta-1)\) is the second Legendre polynomial, with \(\theta\) the angle between the detection direction and the electric field of the linearly polarised excitation light. The factor \(g(\theta_{C})=\cos(\theta_{C})\cos^{2}(\theta_{C}/2)\) corrects for the effect of the finite solid angle of the detection optics, with \(\theta_{C}\) the half angle of a circular collection lens. For \(\theta=0\), \(\mathcal{S}^{(b)}\to 0\) as \(\theta_{C}\to 0\), as would be expected from the well-known Hertzian dipole radiation pattern. Importantly, adjusting \(\theta\) or \(\theta_{C}\) changes the amplitude of the boson signal observed at the detector.
_Fermion lineshape -_ There exists a single naturally abundant fermionic isotope of Zn with nucleon number 67 and a nuclear spin \(I_{N}=5/2\). The nuclear spin couples with the electronic angular momentum \(J\) to give to
tal angular momentum \(F\), resulting in a single \({}^{1}S_{0},F=5/2\) hyperfine level and three \({}^{1}P_{1},F^{\prime}\) excited levels with \(F^{\prime}=3/2,5/2,7/2\). These energy levels are shown in figure 2(a). We assume the hyperfine energies \(E(F)\) are given by
\[E(F)=\frac{A}{2}C+B\frac{\frac{3}{4}C(C+1)-I_{N}(I_{N}+1)J(J+1)}{2I_{N}(2I_{N}- 1)J(2J-1)}\;, \tag{2}\]
with \(C=F(F+1)-I_{N}(I_{N}+1)-J(J+1)\). Here, \(A=A(^{1}P_{1})\) is the interaction between the electronic and nuclear angular momentum in the excited state, and \(B=B(^{1}P_{1})\) is the quadrupole interaction coefficient.
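For reference (a check we add; it follows directly from equation (2) with \(I_{N}=5/2\) and \(J=1\)), the three excited-state energies are

\[E(3/2)=-\tfrac{7}{2}A+\tfrac{7}{10}B\;,\qquad E(5/2)=-A-\tfrac{4}{5}B\;,\qquad E(7/2)=\tfrac{5}{2}A+\tfrac{1}{4}B\;,\]

and their degeneracy-weighted sum \(\sum_{F}(2F+1)E(F)\) vanishes, as it must.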
Following Brown et al. [25], the fluorescence spectrum of \({}^{67}\)Zn, \(S^{(f)}\) can be separated into three terms:
\[S^{(f)}=\frac{\Gamma^{2}}{4}\Big{(}\mathcal{A}+[\mathcal{B}+\mathcal{C}]P_{2}(\cos\theta)g(\theta_{C})\Big{)}\;\;\;,\] \[\mathcal{A}=\frac{1}{9}\Big{(}\frac{2}{\Gamma^{2}/4+\Delta_{3/2}^{2}}+\frac{3}{\Gamma^{2}/4+\Delta_{5/2}^{2}}+\frac{4}{\Gamma^{2}/4+\Delta_{7/2}^{2}}\Big{)}\;\;,\] \[\mathcal{B}=-\frac{1}{225}\frac{1}{\Gamma^{2}/4+\Delta_{3/2}^{2}}-\frac{64}{525}\frac{1}{\Gamma^{2}/4+\Delta_{5/2}^{2}}-\frac{2}{21}\frac{1}{\Gamma^{2}/4+\Delta_{7/2}^{2}}\;\;\;,\] \[\mathcal{C}=\Big{[}-\frac{8}{45}\frac{1}{(\Gamma/2+i\Delta_{7/2})(\Gamma/2-i\Delta_{3/2})}-\frac{6}{35}\frac{1}{(\Gamma/2+i\Delta_{7/2})(\Gamma/2-i\Delta_{5/2})}-\frac{1}{25}\frac{1}{(\Gamma/2+i\Delta_{5/2})(\Gamma/2-i\Delta_{3/2})}\Big{]}+c.c. \tag{3}\]
Figure 1: (a) Experimental setup for \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) laser induced fluorescence spectroscopy of Zn, showing the cryogenic buffer gas beam, and 213.8 nm laser system. We also show a side view of the detector geometry, showing the two photomultipliers used, and the angles \(\theta_{1}\) and \(\theta_{2}\) relative to linear polarisation angle of the excitation light. These angles determine the emission pattern observed at the two detectors. (b) A typical time-of-flight fluorescence trace observed at the \({}^{64}\)Zn resonance. The inset shows a zoom-in of the region \(0<t<25\) ms. Observation windows for the buffer gas cooled and thermal background components of the signal are shown by the shaded bars. (c) Fluorescence spectra for the observation windows in (b).
Here, \(\Delta_{F^{\prime}}/(2\pi)=\nu_{L}-\nu_{F^{\prime}}\) is the detuning of the laser from the excited state with total angular momentum \(F^{\prime}\). We assume all Zeeman sublevels in the \({}^{1}S_{0}\) state are equally populated in the source and neglect optical pumping during the interaction with the probe light.
Important for the experiments is the fact that when the hyperfine structure is barely resolved, the emission pattern and hyperfine structure become strongly coupled. This is illustrated by figure 2b, which shows simulated fluorescence spectra along \(\theta=0\) for different ratios of \(A/\Gamma\). Each panel compares equation (3) with the result when interference is removed from the model, i.e. \(\mathcal{C}\) is deliberately set to zero. The calculations show that as \(A/\Gamma\to 0\), interference between scattering paths is destructive, leading to complete suppression of the fluorescence along this direction. There is an intuitive explanation for this effect: when the hyperfine interaction with the nucleus becomes negligible, the emission pattern must converge to that of the (spin-less) bosonic isotopes. Conversely, one can produce the reverse effect in the bosonic isotopes by deliberately applying a magnetic field along \(\theta=90^{\circ}\). This is the so-called Hanle effect [26, 27] and, whilst understood for about a century, is often overlooked. The behaviour illustrated in figure 2b shows that interference in the emission pattern of barely resolved lines contains useful information which can be used to constrain the hyperfine structure. The central spectrum in the figure, where \(A/\Gamma=0.2\), is near the value observed in the experiments, and results in a total span of the \({}^{1}P_{1}\) levels of \(1.2~{}\Gamma\). This should largely prevent optical pumping to the \(m_{F}=\pm 5/2\) states when driving the \(F^{\prime}=3/2\gets F=5/2\) transition, which would be significant in the case of well-resolved lines.
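To make this concrete, the short Python sketch below evaluates the fermion line function of equation (3) along \(\theta=0\) (taking \(P_{2}(\cos\theta)g(\theta_{C})\to 1\) for a point detector), with the hyperfine energies of equation (2) and \(B=0\), and with the interference term \(\mathcal{C}\) optionally switched off; all frequencies are expressed in units of \(\Gamma\) and the function names are ours. At \(A=0\) the output vanishes identically when interference is included, reproducing the suppression discussed above.

```python
import numpy as np

def fermion_spectrum(nu, A, interference=True, Gamma=1.0):
    """Eq. (3) along theta = 0, with B = 0; nu and A in units of Gamma."""
    E = {1.5: -3.5 * A, 2.5: -1.0 * A, 3.5: 2.5 * A}  # hyperfine energies, Eq. (2)
    d = {F: nu - E[F] for F in E}                      # detunings Delta_{F'}
    L = {F: 1.0 / (Gamma**2 / 4 + d[F] ** 2) for F in E}
    calA = (2 * L[1.5] + 3 * L[2.5] + 4 * L[3.5]) / 9
    calB = -L[1.5] / 225 - 64 * L[2.5] / 525 - 2 * L[3.5] / 21
    calC = 0.0
    if interference:
        for (Fa, Fb), c in {(3.5, 1.5): -8 / 45, (3.5, 2.5): -6 / 35,
                            (2.5, 1.5): -1 / 25}.items():
            z = c / ((Gamma / 2 + 1j * d[Fa]) * (Gamma / 2 - 1j * d[Fb]))
            calC = calC + 2 * z.real                   # bracket plus its c.c.
    return (Gamma**2 / 4) * (calA + calB + calC)       # P2(cos 0) * g -> 1

nu = np.linspace(-3, 3, 1001)
for ratio in (0.05, 0.2, 1.0):                         # A / Gamma as in Fig. 2(b)
    s_with = fermion_spectrum(nu, A=ratio)
    s_without = fermion_spectrum(nu, A=ratio, interference=False)
```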
_Combined line function -_ The total fit function used in this study is given by,
\[\begin{split} S^{(tot)}=& a_{67}S^{(f)}+\sum_{b}a_{ b}S^{(b)}\\ &+a_{bg}e^{-(\nu_{L}-\nu_{bg})^{2}/(2w_{bg}^{2})}~{}~{}.\end{split} \tag{4}\]
Here, \(a_{67}\) and \(a_{b}\) represent the relative abundance of the fermionic and bosonic isotopes respectively. The final term in equation (4) approximates the residual thermal background in the spectrum, whose amplitude \(a_{bg}\) is typically 5 to 10 percent of the \({}^{64}\)Zn resonance peak. The centre frequency \(\nu_{bg}\) and width parameter \(w_{bg}\) can be either fitted as free parameters, or introduced as fixed parameters by first fitting the data at late arrival times when only the thermal background component is present. The fitted values for the isotope shifts in these two cases are consistent within the statistical error of the fits.
## IV Results
### Determination of the fermion hyperfine structure by a two-detector method
Figures 3(a) and (b) show two sets of spectra obtained using the High Finesse wavemeter as a frequency reference. The data constitutes two separate scans where the input polarisation of the laser is along the \(y\)-axis (panel (a)) and along the \(z\)-axis (panel (b)), and for each panel we show the fluorescence spectrum recorded by the two detectors 1 and 2. For clarity, each is labelled with a schematic showing the laser polarisation, the detector orientation and the dominant e
Figure 2: Simulated fluorescence spectra for \({}^{67}\)Zn. (a) Level scheme labelling the total angular momenta \(F\), \(F^{\prime}\) for the ground and excited states respectively. (b) Simulated spectra for detection along \(\theta=0\), for different values of the ratio \(A/\Gamma\) and with \(B=0\). We show the results with and without quantum interference included in the calculation. The sticks above the spectra correspond to the energies of the levels in (a). As \(A/\Gamma\) approaches zero (negligible hyperfine interaction), the emission pattern including interference converges to that of an ideal Hertzian dipole, meaning the emission is zero along \(\theta=0\).
mission pattern of the bosonic isotopes. The different emission pattern of the fermionic \({}^{67}\)Zn isotope (relative natural abundance 4.1%) dramatically increases its visibility in detector \(i\) when \(\theta_{i}=0\). We show the fitted fermion lineshape with a black dashed line in each panel to illustrate this effect.
We use these spectra to determine the solid angles of the collection optics and the hyperfine structure of the \({}^{67}\)Zn isotope. The four spectra in figure 3 were fitted as a single dataset, fixing the detection angles \(\theta_{i}\) to their values in the experiment, and enforcing the natural abundance of Zn isotopes [28]. This fixes the relative peak heights in each spectrum so that the detector solid angles \(\theta_{C,1},\theta_{C,2}\), and the hyperfine structure constants \(A\) and \(B\) of the fermionic isotope, can be determined. All resonance frequencies \(\nu_{b}\) for all bosons \(b\), \(\nu_{3/2,5/2,7/2}\) for the fermionic \({}^{67}\)Zn isotope, and a common Lorentzian linewidth \(\Gamma\) are shared fit parameters between the datasets. From this data, we conclude \(\theta_{C,1}=0.281\pm 0.005\), \(\theta_{C,2}=0.145\pm 0.005\) radians. The uncertainties are the range of values obtained when fitting the data with various reasonable assumptions, such as fixing the values of \(w_{bg}\) and \(\nu_{bg}\) in the fit function using the signal at late arrival times. The value of \(\theta_{C,2}\) is very close to the half angle subtended by the collection lens at the fluorescence region, 0.156(5) radians. The value of \(\theta_{C,1}\) is significantly below the half angle subtended by its in-vacuum collection lens, 0.43(1) radians, and consistent with this lens being placed about 5 mm too close to the atomic beam, a result of incorrectly extrapolating the focal length from the visible to the deep ultraviolet.
For the hyperfine interaction parameters, we obtain \(A(^{67}\)Zn\()=20\pm 2\) MHz and \(B(^{67}\)Zn\()=10\pm 5\) MHz. Fitting the data with \(B=0\) returns \(A=21\) MHz but noticeably reduces the goodness of the fit near the \({}^{67}\)Zn peak. Fitting to a model which ignores interference may be done by simply setting \(\mathcal{C}=0\) in equation (3); this gave the best fit values \(A=9.5\pm 1.8\) MHz, \(B=0.6\pm 2.2\) MHz, \(\theta_{C,1}=0.37\) and \(\theta_{C,2}=0.20\), and line centres consistent with the full interference model. In this case the fitted value of \(\theta_{C,2}\) is unphysically large, and the fit residuals clearly indicate that only the interference model can adequately describe the signal observed in both detectors.
### High resolution measurements with the cavity
Having constrained the fermion hyperfine structure and solid angle of the collection optics, we proceeded to scan the laser via the ULE reference cavity to more accurately determine the isotope shifts. Figure 4 shows spectra obtained when scanning \(\nu_{scan}\) with the laser locked to the ULE cavity, and with the probe laser horizontally polarised. The scan rate corresponds to approximately 0.8 MHz/s for the 213.8 nm probe light, where the frequency \(\nu_{scan}\) was measured near the time of the ablation laser pulse, and then stepped discretely after each measurement. By happenstance, the \({}^{66}\)Zn resonance appeared almost exactly at the midpoint between two cavity resonances, where the locking method fails. We therefore frequency-shifted the Ti:Sa laser light by 90 MHz before delivery to the cavity with an acousto-optic modulator, moving the unstable lock point by 360 MHz in the deep ultraviolet. The upper (lower) dataset in the figure is taken with (without) the frequency shifting method applied, on the same day but ablating different spots on the Zn target, which enabled measuring all isotopes. We fit the two spectra as a single dataset with shared resonance line positions in a Monte Carlo routine, where the values of \(\theta_{C,1},\theta_{C,2},A\) and \(B\) are drawn from uniform distributions whose ranges are given by the limits constrained in section IV.1. Enforcing the relative natural abundance of Zn in the fits leads to small changes in the best fit values compared to allowing the line intensities to float, and we include this when estimating the uncertainties. We combine the best-fit values and errors for the isotope shifts to give a weighted mean and statistical error for these parameters, and assume a 1 MHz systematic frequency uncertainty which derives from the \(\sim 200\) kHz uncertainty of \(\nu_{scan}\) and the frequency shifting AOM, considering the two successive stages of frequency doubling. Doppler shifts due to slight misalignment of the probe laser light contribute an uncertainty \(\sim 2\) MHz to the absolute resonance frequencies, and negligibly to the isotope shifts. Recoil from absorption of the probe laser light leads to a Doppler shift of roughly 0.6 MHz across the detection volume, and the differential shift across the range of isotopes is an order of magnitude smaller. We neglect this contribution to the isotope shift uncertainty. The ambient magnetic field in the detector was measured as below 0.3 Gauss, corresponding to an upper bound to the line shape broadening of 0.5 MHz.
## V Discussion
Table 1 summarises the results of our measurements, and compares them to the available literature values. Our final values for the isotope shifts are presented relative to the \({}^{64}\)Zn resonance, since this isotope is of highest abundance and its line centre has the smallest statistical uncertainty. We combine measurements from the cavity and spectra taken using the wavemeter, and include a systematic frequency error of 2.1 MHz for the wavemeter values, which derives from directly comparing frequency intervals measured by the cavity scan method with the wavemeter. Our values are two orders of magnitude more precise than previous measurements, also given in the table. Our value of \(A(^{67}\)Zn\()\) is consistent with the value measured by Kowalski and Trager [29], by level crossing spectroscopy of enriched \({}^{67}\)Zn. However, this study was unable to experimentally constrain the value of \(B(^{67}\)Zn\()\) and we therefore recommend the values from our measurements.
The absolute frequency of the \({}^{64}\)Zn resonance measured through our experiments is \(1,401,391.66(6)\) GHz (46745.394(2) cm\({}^{-1}\)). The isotope-averaged line cen
\begin{table}
\begin{tabular}{l c c c c c c} & This work & Ref.[30] & Ref.[31] & Ref.[29] & Ref.[32] & Ref.[33] \\ \hline \(\nu_{66}-\nu_{64}\) & 525.0(3.0) & 480(60) & 540(60) & & & \\ \(\nu_{67}^{(\rm CG)}-\nu_{64}\) & 835(4) & - & - & & & \\ \(\nu_{68}-\nu_{64}\) & 1039.8(1.7) & 989(60) & 960(85) & & & \\ \(\nu_{70}-\nu_{64}\) & 1495(4) & - & - & & & \\ \(A(^{67}{\rm Zn})\) & 20(2) & & & 17.7(5) & & \\ \(B(^{67}{\rm Zn})\) & 10(5) & & & - & & \\ \(\nu(^{1}P_{1}-{}^{1}S_{0})/{\rm cm}^{-1}\) & 46745.407(2) & & & & 46745.404(2) & \\ \(\tau/{\rm ns}\) & 1.440(18) & & & & 1.40(3) \\ \end{tabular}
\end{table}
Table 1: Summary of the results obtained for the \({}^{1}P_{1}\)\(\leftarrow\)\({}^{1}S_{0}\) transition in Zn. Hyperfine constants and the radiative lifetime refer to the excited state. Results are given in MHz unless otherwise stated. CG = centre of gravity.
Figure 3: Polarisation sensitive fluorescence detection of Zn isotopes. Each spectrum is labelled by the probe laser polarisation and detector configuration. The relative intensities of the boson and fermionic resonances are strong functions of the angle \(\theta\) between the laser polarisation and the detector direction, and the solid angle of the collection optics. Blue lines show experimental data, red solid lines are fits as described in the text, and black dashed lines show the fitted fermion lineshape. Underneath each spectrum the residuals are shown in a separate plot (’Res’). (a) Laser polarisation along the \(y\)-axis. (b) Laser polarisation along the \(z\)-axis.
tre, \(\nu(^{1}P_{1}-{}^{1}S_{0})\), is computed as the average of the line centres weighted by isotopic abundance and given in the table. The uncertainty in our value is dominated by the wavemeter accuracy specified by the manufacturer. We find excellent agreement with the results of (isotope-unresolved) hollow cathode lamp measurements presented in Ref. [32].
Our best fit value of the Lorentzian linewidth is \(\Gamma/(2\pi)=110.5(1.4)\) MHz, where the error derives from the standard deviation of the Monte Carlo fitted values combined with the frequency uncertainty from the cavity scanning method. Fitting with a Voigt lineshape did not change the value of \(\Gamma\) within the uncertainty of the fit. The radiative lifetime \(\tau=1/\Gamma\) given in the table is consistent with the weighted average of 5 measurements collated by Doidge [33], and is a factor of two more precise than previous measurements.
_King plot_ - We combine our isotope shift results with values reported for the \({}^{3}P_{1}\leftarrow{}^{1}S_{0}\) intercombination transition [34] on a King plot as follows. We calculate the reduced isotope shifts \(\delta\hat{\nu}^{A,A^{\prime}}=\delta\nu/\mu^{A,A^{\prime}}\) with \(\mu^{A,A^{\prime}}\) the inverse nuclear mass difference between isotopes \(A\) and \(A^{\prime}\), and present the data in figure 5. The data is fitted to a linear relationship according to the recipe described in Ref. [35]. Briefly, we first define a mixing matrix to shuffle the 308 nm reduced isotope shifts so that they are referred to \({}^{64}\)Zn. We then calculate the covariance matrices, taking into account the mixing matrix and the reported errors. The most-probable values of the adjusted parameters, the intercept and slope, are found by minimizing a generalized \(\chi^{2}\) test statistic. To assign confidence intervals to the fitted parameters, we perform a Monte-Carlo estimation procedure of repeated measurements drawn from a normal distribution centred at the most probable fitted values. The most probable value and 68% confidence interval of the fitted line is plotted in figure 5.
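For orientation, the linearity tested here is the standard King relation; in the usual notation (the field-shift constants \(F\) and mass-shift constants \(K\) below are not quoted in this work),

\[\delta\hat{\nu}^{A,A^{\prime}}_{213}=\Big{(}K_{213}-\frac{F_{213}}{F_{308}}K_{308}\Big{)}+\frac{F_{213}}{F_{308}}\,\delta\hat{\nu}^{A,A^{\prime}}_{308}\;,\]

so the slope of the fitted line measures the ratio of field-shift constants and the intercept a combination of mass-shift constants.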
An additional hyperfine interaction between the different electronic states in Zn would additionally shift the gravity centre of the \({}^{67}\)Zn resonance lines, \(\nu_{67}^{(\rm CG)}\), relative to the bosons, and is often referred to as an "off-diagonal" hyperfine interaction. Such shifts should be detectable in a King plot as deviations of the fermionic isotopes from linear fits of the bosonic isotope data (see for example ref. [9]). To test for this, we repeat the fitting procedure without the (64,67) pair, and calculate its predicted values in each iteration of the Monte-Carlo procedure to obtain its distribution. The resulting 68% confidence interval for the frequency difference \(\nu_{67}^{(\rm CG)}-\nu_{64}\) is from 818 MHz to 834 MHz, which agrees with our measured value (835(4) MHz) within its uncertainty. We infer that the difference in off-diagonal hyperfine shifts of the \({}^{67}\)Zn resonances, for this pair of transitions, is less than 10 MHz. This closely follows results of isotope shift measurements for the lowest lying, and analogous, transitions in cadmium [13], another Group-IIB element. It may be of interest considering recently observed strong hyperfine
Figure 4: Fluorescence spectra taken with the probe laser locked to the ULE cavity. The shaded box shows the region near the midpoint of the cavity resonances in which the lock fails. In the upper spectrum, shown with a deliberate offset on the \(y\)-axis, the light delivered to the cavity was first frequency-shifted by 90 MHz. This enables a continuous scan over the \({}^{66}\)Zn resonance. The two spectra are fitted as a single dataset to recover the isotope shifts as discussed in the text. Dashed lines show the fit functions where data is not present for each spectrum. The plot underneath shows the residual from the fit function for the two datasets.
Figure 5: A King plot of the \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) (213.8 nm, this work) and \({}^{3}P_{1}\leftarrow{}^{1}S_{0}\) (308 nm, ref. [34]) transitions in Zn. The black solid line shows a linear fit to the data as discussed in the text. Red dashed lines indicate the 68% confidence interval of the fit.
mixing effects for higher-lying transitions in zinc in the CRIS experiment [22].
## VI Summary and Outlook
We have reported isotope shifts, radiative lifetime and hyperfine structure measurements for the \({}^{1}P_{1}\leftarrow{}^{1}S_{0}\) transition in neutral Zn by cw laser-induced fluorescence spectroscopy of an atomic beam. Our measurements considerably improve upon the published literature for this transition, and contribute to the study of the \({}^{67}\)Zn nucleus, where unexpected isotope shifts have recently been observed in collinear laser spectroscopy at the ISOLDE facility [22]. With its multitude of spin-zero bosonic isotopes and various narrow optical transitions, zinc is a candidate for further isotope shift spectroscopy at the sub-kHz level.
The two-detector method and analysis procedure described here has several advantages. First, it enables reliably extracting hyperfine parameters from barely resolved peaks. Second, it enables tuning of the fermion contribution to spectra of mixed isotopes within a single measurement run. This approach allows disentangling otherwise overlapping lines and, in the case of atomic Zn studied here, enables a measurement of the isotope shifts and hyperfine structure at the \(\sim\)1 MHz level. The approach can readily be adapted to other elements which feature broad transitions and many isotopes, and may be particularly beneficial in accelerator-based experiments, where measurements typically come with a large overhead.
###### Acknowledgements.
We thank Sebastian Kray and the mechanical workshop of the Fritz Haber Institute for expert technical assistance. S.W. thanks Clara Bachorz for critical reading of the manuscript. We gratefully acknowledge financial support from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (CoCoMFun, Grant Agreement No. 949119, S.T.; "quMercury", GA No. 757386, S.S.), from the European Commission through project 101080164 "UVQuanT" (S.T. and S.S.), and from Deutsche Forschungsgemeinschaft (DFG) through grants 414709674 and 496941189 and through the Cluster of Excellence ML4Q (EXC 2004/1 - 390534769). B.O. is thankful for the support of the Council for Higher Education Program for Hiring Outstanding Faculty Members in Quantum Science and Technology.
|
2309.14315 | Structured random matrices and cyclic cumulants: A free probability
approach | We introduce a new class of large structured random matrices characterized by
four fundamental properties which we discuss. We prove that this class is
stable under matrix-valued and pointwise non-linear operations. We then
formulate an efficient method, based on an extremization problem, for computing
the spectrum of subblocks of such large structured random matrices. We present
different proofs -- combinatorial or algebraic -- of the validity of this
method, which all have some connection with free probability. We illustrate
this method with well known examples of unstructured matrices, including Haar
randomly rotated matrices, as well as with the example of structured random
matrices arising in the quantum symmetric simple exclusion process. | Denis Bernard, Ludwig Hruza | 2023-09-25T17:36:05Z | http://arxiv.org/abs/2309.14315v4 | # Spectrum of subblocks of structured random matrices: A free probability approach
###### Abstract
We present a new efficient method, based on an extremization problem, for computing the spectrum of subblocks of large structured random matrices. This method applies to ensembles of matrices satisfying three fundamental properties which we discuss. We present different proofs - combinatorial or algebraic - of the validity of this method, which all have some connection with free probability. We illustrate this method with well known examples of unstructured matrices, including Haar randomly rotated matrices, as well as with the example of structured random matrices arising in the quantum symmetric simple exclusion process.
###### Contents
* 1 Introduction and general statement
* 2 Proofs
* 2.1 Proof using a tree structure
* 2.2 Proof using Kreweras duality
* 2.3 Proof using operator valued free probability
* 2.4 No free multiplicative convolution
* 3 Applications
* 3.1 Wigner matrices
* 3.2 Haar-randomly rotated matrices
* 3.3 QSSEP
* A Free probability glossary
* B Local free cumulants for Haar-randomly rotated matrices
## 1 Introduction and general statement
The theory of large random matrices has a huge domain of applications, ranging from chaotic and complex systems to random geometry and machine learning [1, 2, 3]. Given a large random matrix \(M\) one might not only be interested in its spectrum but also in the spectrum of its subblocks (or submatrices). Moreover, the matrix \(M\) might have some "structure", in the sense that joint moments of its entries can depend on the location of these entries inside the matrix. In other words, a structured matrix is, in law, not invariant under permutations of its entries [4] - contrary to well-known matrix ensembles such as, say, Wigner matrices.
Finding the spectrum of structured matrices and their subblocks is a problem that can occur in many situations, e.g. in the study of random band matrices [5]. Our main original motivation, however, comes from the problem of calculating the entanglement entropy of some many-body quantum systems that are subjected to noise. In this case the system density matrix \(\rho\) is a large random matrix, and calculating the entanglement between a subregion \(I\) and the rest of the system \(I^{c}\) requires knowing the so-called reduced density matrix \(\rho_{I}=\operatorname{Tr}_{I^{c}}(\rho)\). More precisely, we encountered this problem in studying a one-dimensional chain of noisy free fermions named the "Quantum Symmetric Simple Exclusion Process" (QSSEP) [6, 7, 8]. Here, the quadratic (but noisy) Hamiltonian ensures that all properties of the system can be expressed in terms of the two-point function \(M_{ij}:=\operatorname{Tr}(\rho\,c_{i}^{\dagger}c_{j})\) where \(c_{i}^{\dagger}\) is a fermionic creation operator on site \(i\). Since the dynamics is noisy, \(M\) is a large random matrix and for the entanglement entropy of a region \(I\) we need to find the spectrum of any of its subblocks \(M_{I}=(M_{ij})_{ij\in I}\). The main physical output of the exact computation of [8] is that the mutual information in the driven out-of-equilibrium QSSEP fulfills a volume law1, in contrast with equilibrium systems for which the mutual information is sub-leading in the volume.
Footnote 1: That is, the mutual information between extensive sub-intervals scales proportionally to the volume.
Despite this specific motivation, the aim of this paper is to extract from these studies of noisy many-body quantum systems the results related to random matrices and to make them available to a larger audience interested in random matrix theory. They apply to a large class of ensembles of structured random matrices characterized by specifying the large size limit of so-called "loop expectation values". This class and its characterization in terms of loop expectation values are new, to the best of our knowledge. The class is stable under non-linear operations [9]. Loop expectation values are expectation values of products of entries of random matrices whose indices follow a cyclic order (see below). It has recently been recognized that these loop expectation values play a special role in abstract (structured) random matrix theory [10], but also in physical contexts [6, 7, 11, 12, 13] or in connection with machine learning [3, 14].
Specifying the ensemble of random matrices through the loop expectation values makes the connection with the combinatorics of partitions, and with free probability, transparent and efficient. An echo of this connection is a formula for the moments of the random matrix, see eq.(10) below, as a sum over non-crossing partitions and their Kreweras duals. This formula was proved in [7, sec. II.B]. The main theorem stated below reduces the computation of the spectrum of random matrices in such ensembles to a variational problem, which may be viewed as a variant of the Legendre transform, see eq.(3), or as a local version of the known R-transform in free probability, see eq.(11). Its proof is based on the combinatorial formula (10). Some elements of those proofs were only briefly sketched in our previous paper [8]. We nevertheless believe that it is useful to present these results and proofs synthetically in a separate publication devoted to random matrices - and not keep them hidden in articles devoted to quantum physics. We also believe that presenting different proofs may be useful depending on the background of the readers. We complement the two combinatorial proofs by describing a third proof using free probability techniques, notably operator valued free probability or free amalgamation. On the one hand, this makes the connection of the combinatorial proofs with free probability explicit and, on the other hand, it makes concrete the applications of free amalgamation in the present context of structured random matrices. The variational method for computing spectra of random matrices is illustrated with known examples of rotation invariant matrices and with a new application to the quantum symmetric simple exclusion process.
Let us start by specifying the random matrix ensembles we shall deal with, and then formulate the theorem about their spectrum.
_Random Matrix Ensemble._
In this article we consider ensembles of random matrices \(M\) with measure \(\mathbb{E}\) that satisfy, in the large \(N\) limit, the following three properties:
1. Local \(U(1)\)-invariance, meaning that, in distribution, \(M_{ij}\stackrel{d}{=}e^{-i\theta_{i}}M_{ij}e^{i\theta_{j}}\), for any angles \(\theta_{i}\), \(\theta_{j}\);
2. Expectation values of "loops" without repeated indices scale as \(N^{1-n}\), meaning that \(\mathbb{E}[M_{i_{1}i_{2}}M_{i_{2}i_{3}}\cdots M_{i_{n}i_{1}}]=\mathcal{O}(N^{1-n})\), for all \(i_{k}\) distinct;
3. Factorization of the expectations of products of "loops" at leading order, meaning that \(\mathbb{E}[M_{i_{1}i_{2}}\cdots M_{i_{m}i_{1}}\,M_{j_{1}j_{2}}\cdots M_{j_{n}j _{1}}]=\mathbb{E}[M_{i_{1}i_{2}}\cdots M_{i_{m}i_{1}}]\,\mathbb{E}[M_{j_{1}j_{2 }}\cdots M_{j_{n}j_{1}}](1+O(N^{-1}))\), even if \(i_{1}=j_{1}\).
The only information about the random matrix ensemble that we require is the joint cumulants of its entries when arranged in a loop (with \(x_{k}=i_{k}/N\in[0,1]\), \(i_{k}\) distinct):
\[g_{n}(x_{1},\cdots,x_{n}):=\lim_{N\to\infty}N^{n-1}\mathbb{E}[M_{i_{1}i_{2}}M _{i_{2}i_{3}}\cdots M_{i_{n}i_{1}}]^{c}. \tag{1}\]
They are continuous due to properties (ii) and (iii). Of course, not every sequence of numbers defines valid cumulants, so we cannot construct an ensemble of random matrices by an arbitrary choice of \(g_{n}\) subject to the three properties (i)-(iii). Rather, one has to start from a known random matrix ensemble and check that it satisfies (i)-(iii).
Some well known matrix ensembles that satisfy these properties are Wigner matrices and matrices rotated by Haar random unitaries (see subsections 3.1 and 3.2), for which the functions \(g_{n}\) are all constant, implying that these ensembles are "structureless". In particular, for \(M=UDU^{\dagger}\) with \(D\) a diagonal matrix and \(U\) a Haar random unitary, \(g_{n}(x_{1},\cdots,x_{n})=\kappa_{n}\) with \(\kappa_{n}\) the free cumulants of the spectral measure of \(D\) (see e.g. Thrm. 7.5 in [15]). For structured ensembles, where the functions \(g_{n}\) are no longer constant, this observation suggests that we could call \(g_{n}\) the "local free cumulants" of \(M\). We will comment on this name later, once the connection of our result to operator valued free probability has been made.
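For readers who wish to experiment with these notions, the following minimal Python sketch (our own illustration, not from the text; the matrix size, sample count and the uniform choice for the spectrum of \(D\) are arbitrary) estimates the \(n=2\) loop expectation of a Haar-randomly rotated matrix and compares \(N\,\mathbb{E}[M_{ij}M_{ji}]\), for \(i\neq j\), with the second free cumulant \(\kappa_{2}=m_{2}-m_{1}^{2}\) of the spectral measure of \(D\).

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(N, rng):
    # QR of a complex Ginibre matrix, with the phases of diag(R) fixed,
    # yields a Haar-distributed unitary.
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

N, samples, i = 200, 50, 7
d = rng.uniform(0.0, 1.0, N)                 # spectrum of D: uniform on [0, 1]
kappa2 = np.mean(d**2) - np.mean(d) ** 2     # second free cumulant of sigma

acc = 0.0
for _ in range(samples):
    U = haar_unitary(N, rng)
    M = (U * d) @ U.conj().T                 # M = U D U^dagger
    row = np.abs(M[i, :]) ** 2               # |M_ij|^2 = M_ij M_ji since M is Hermitian
    acc += (row.sum() - row[i]) / (N - 1)    # average over all j != i

print("N * E[M_ij M_ji] ~", N * acc / samples)   # should approach kappa_2
print("kappa_2          =", kappa2)              # = 1/12 for the uniform law
```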
_Statement._
To handle the case of an arbitrary number of subblocks we consider the slightly more general aim of finding the spectrum of \(M_{h}:=h^{1/2}Mh^{1/2}\), with \(h\) a diagonal matrix. Choosing \(h(x)=1_{x\in I}\) (here \(h_{ii}=h(i/N)\)) to be the indicator function on some interval \(I\subset[0,1]\), one recovers the case of subblocks \(M_{I}\subset M\). All the spectral information about \(M_{h}\) is contained in the generating function,
\[F[h](z):=\mathbb{E}\,\underline{\mathrm{tr}}\log(z-M_{h}), \tag{2}\]
where \(\underline{\mathrm{tr}}=\mathrm{tr}/N\) is the normalized \(N\)-dimensional trace. This function can be viewed as a (formal) power series in \(1/z\), whose coefficients are expectations of traces of powers of \(M_{h}\). Statements about the domain of convergence of this series can be made if extra global information about the spectrum is available, say about its compactness. The theorem below is formulated with \(F[h](z)\) viewed as power series in \(1/z\) (and we use extra analytic inputs in the illustrative examples).
Our main result is:
**Theorem 1**.: \(F[h](z)\) _is determined by the variational principle_
\[F[h](z)=\operatorname*{extremum}_{a_{z},b_{z}}\left[\int_{0}^{1}\left[\log(z- h(x)b_{z}(x))+a_{z}(x)b_{z}(x)\right]dx-F_{0}[a_{z}]\right] \tag{3}\]
_where the information about local free cumulants \(g_{n}\), specific to the random matrix ensemble, is contained in (with \(\vec{x}=(x_{1},\cdots,x_{n})\))_
\[F_{0}[p]:=\sum_{n\geq 1}\frac{1}{n}\int_{0}^{1}(\prod_{k=1}^{n}dx_{k}p(x_{k})) \,g_{n}(\vec{x}). \tag{4}\]
Note that \(F_{0}[p]\) contains less information than the local free cumulants, since it depends only on a symmetrized version of the family \(\{g_{n}\}_{n}\). Nevertheless, in the large \(N\) limit, it represents the minimal amount of information about the measure \(\mathbb{E}\) that is necessary for the spectrum.
To obtain the spectrum of \(M_{h}\) one takes the derivative \(\partial_{z}F[h](z)=:G[h](z)\) which is the resolvent
\[G[h](z)=\mathbb{E}\,\underline{\mathrm{tr}}(z-M_{h})^{-1}.\]
From Eq.(3), we get
\[G[h](z)=\int_{0}^{1}\frac{dx}{z-h(x)b_{z}(x)}, \tag{5}\]
with \(b_{z}\) solution of the extremization conditions,
\[a_{z}(x)=\frac{h(x)}{z-h(x)b_{z}(x)},\quad b_{z}(x)=R_{0}[a_{z}](x), \tag{6}\]
where
\[R_{0}[a_{z}](x):=\frac{\delta F_{0}[a_{z}]}{\delta a_{z}(x)}. \tag{7}\]
In the special case where \(h(x)=1_{x\in I}\) is the indicator function on an interval \(I\) (or on unions of intervals) of length \(\ell_{I}\), we recover the spectral density \(\sigma_{I}\) of the subblock \(M_{I}\) from its resolvent
\[G_{I}(z):=\frac{1}{\ell_{I}}\int_{I}\frac{dx}{z-b_{z}(x)}=\int\frac{d\sigma_{ I}(\lambda)}{z-\lambda} \tag{8}\]
as \(G_{I}(\lambda-i\epsilon)-G_{I}(\lambda+i\epsilon)=\ell_{I}2i\pi\sigma_{I}(\lambda)\). Writing the total resolvent (including the pole at the origin)
\[G_{I}^{\rm tot}(z):=G[1_{x\in I}](z)=\frac{1-\ell_{I}}{z}+\ell_{I}\int\frac{d \sigma_{I}(\lambda)}{z-\lambda},\]
we can relate the total spectral measure of \(M_{h}\) (including the zero-eigenvalues) to that of a subblock \(M_{I}\subset M\) by
\[d\sigma_{I}^{\rm tot}(\lambda)=(1-\ell_{I})\delta(\lambda)d\lambda+\ell_{I}\, d\sigma_{I}(\lambda). \tag{9}\]
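As a concrete illustration of how Eqs.(5)-(6) are used in practice, here is a minimal Python sketch (our own numerical illustration, not part of the original argument) that solves the extremization conditions by damped fixed-point iteration on a grid. It assumes an ensemble whose local free cumulants vanish for \(n\geq 3\), so that \(R_{0}[a](x)=g_{1}(x)+\int_{0}^{1}g_{2}(x,y)a(y)\,dy\); the profile \(g_{1}\) and kernel \(g_{2}\) below are arbitrary choices made only to produce a structured example.

```python
import numpy as np

n = 400
x = (np.arange(n) + 0.5) / n
dx = 1.0 / n

g1 = 0.2 * x                                          # local free cumulant g_1(x)
g2 = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))   # local free cumulant g_2(x,y)
h = np.ones(n)                                        # h = 1: full matrix M

def density(lam, eps=0.05, iters=400):
    # Solve a = h/(z - h b), b = R_0[a] at z = lam + i*eps, then use Eq.(5).
    z = lam + 1j * eps
    a = np.full(n, 1.0 / z, dtype=complex)
    for _ in range(iters):
        b = g1 + (g2 @ a) * dx                 # b = R_0[a], only g_1 and g_2 nonzero
        a = 0.5 * a + 0.5 * h / (z - h * b)    # damped update of the saddle point
    G = np.sum(1.0 / (z - h * b)) * dx         # resolvent, Eq.(5)
    return -G.imag / np.pi                     # Stieltjes inversion (eps-smoothed)

lams = np.linspace(-2.5, 2.5, 101)
rho = np.array([density(l) for l in lams])
print("total mass ~", rho.sum() * (lams[1] - lams[0]))   # close to 1
```

The broadening `eps` smooths the density; shrinking it (with more iterations) sharpens the estimate of the spectral measure.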
_Discussion._
Our result is very much related to the framework of free probability theory. A first piece of evidence comes from the fact that we can express the trace of powers of \(M_{h}\) as a sum over non-crossing partitions - a statement which relies solely on the three properties (i)-(iii). More precisely, we have shown in [7, sec. II.B] that
\[\phi_{n}[h]:=\mathbb{E}\,\underline{\mbox{tr}}(M_{h}^{n})=\sum_{\pi\in NC(n) }\int\!g_{\pi^{*}}(\vec{x})\,\delta_{\pi}(\vec{x})\,h(x_{1})\cdots h(x_{n})d \vec{x} \tag{10}\]
where \(g_{\pi}(\vec{x}):=\prod_{p\in\pi}g_{|p|}(\vec{x}_{p})\) with \(\vec{x}_{p}=(x_{i})_{i\in p}\) the collection of variables \(x_{i}\) belonging to the part \(p\) of the partition \(\pi\), and \(|p|\) the number of elements in this part. By \(\delta_{\pi}(\vec{x})\) we denote a product of delta functions \(\delta(x_{i}-x_{j})\) that equate all \(x_{i},x_{j}\) with \(i\) and \(j\) in the same part \(p\in\pi\). Here \(\pi^{*}\) is the Kreweras complement of \(\pi\) (see the proof for an example and [15] for the definition of the Kreweras dual), and \(NC(n)\) denotes the set of non-crossing partitions of order \(n\).
To prevent a confusion, note that \(\tilde{\kappa}_{\pi}:=\int g_{\pi^{*}}(\vec{x})\,\delta_{\pi}(\vec{x})\,h(x_{1 })\cdots h(x_{n})d\vec{x}\) are not the free cumulants of \(M_{h}\), because they fail to be multiplicative, i.e. \(\tilde{\kappa}_{\pi}\tilde{\kappa}_{\sigma}\neq\tilde{\kappa}_{\pi\cup\sigma}\)
with \(\pi\cup\sigma\) the union of parts of \(\pi\) and \(\sigma\). The reason for this is the contraction with the delta function2.
Footnote 2: Formally, the free cumulants of \(M_{h}\) are defined as a multiplicative family \(\kappa_{\pi}=\prod_{b\in\pi}\kappa_{|b|}\) with \(\kappa_{n}:=\kappa_{1_{n}}\) satisfying \(\phi_{n}[h]=\sum_{\pi\in NC(n)}\kappa_{\pi}\), and they can be related to the \(\tilde{\kappa}_{\pi}\) by Moebius inversion, \(\kappa_{n}=\sum_{\pi\in NC(n)}\mu(\pi,1_{n})\prod_{b\in\pi}\sum_{\sigma\in NC(|b|)}\tilde{\kappa}_{\sigma}\).
A second piece of evidence for the relation to free probability theory comes from Eq.(6), which can be rewritten as
\[zh(x)^{-1}=a_{z}(x)^{-1}+R_{0}[a_{z}](x). \tag{11}\]
This resembles a local version of the so-called R-transform of free probability theory - hence the choice for the name of "local free cumulants" for \(g_{n}\).
Finally, it turns out that our result can also be obtained from the general relation between the R-transform and the resolvent in the framework of operator-valued free probability theory (see section 2.3). However, the general form of this relation is quite abstract, and some work is necessary to see that it can be applied to the more practical problem of finding the spectrum of subblocks of a class of random matrices satisfying properties (i)-(iii). This is one of the main contributions of this article - a second being a direct proof of our result that does not use operator-valued free probability theory (see sections 2.1 and 2.2).
Besides the general statement, the application of our method to the QSSEP random matrix ensemble also constitutes a new result (see section 3.3). In contrast to this, the other applications we present (see sections 3.1 and 3.2) are rather illustrations on how to use our method in some well-known matrix ensembles in order to make contact with known results.
Recalling that we parametrize an arbitrary collection of subblocks of \(M\) by the action of a diagonal matrix \(h^{1/2}\), i.e. \(M_{h}=h^{1/2}Mh^{1/2}\), one might wonder whether the spectrum of \(M_{h}\) can be obtained (much faster) by free convolution. Indeed, as long as \(M\) and \(h\) are free in the sense of (scalar) free probability theory, the spectral measure of \(M_{h}\) can be obtained by free multiplicative convolution from the spectral measures of \(M\) and \(h\) (see e.g. Lecture 14 in [16]). But this hope turns out to be in vain (see section 2.4): the spectrum of \(M_{h}\) does not coincide with the spectrum obtained by free multiplicative convolution from \(M\) and \(h\). In turn, this means that structured random matrices satisfying (i)-(iii) are not free from diagonal deterministic ones - highlighting the special role of structured random matrices.
However, there are special cases of structureless matrix ensembles that are free from diagonal deterministic matrices, for example if \(M\) is a matrix rotated by Haar random unitaries (Theorem 7.5 in [15]). In this case we show in section 3.2 (for illustrative rather than original purposes) that the spectrum of \(M_{h}\) can indeed be obtained from free convolution. In the same section we also show that, for a single subblock, our result from Theorem 1 reduces to "free compression". For Haar-randomly rotated matrices, this is a well known result.
To conclude, let us also note that we can invert the variational principle: given a generating function \(F[h](z)\) that satisfies Eq.(3), we can retrieve the initial data \(F_{0}\) as the extremum of
\[F_{0}[a]=\operatorname*{extremum}_{h,b_{z}}\left[\int\left[\log(z-h(x)b_{z}(x)) +a_{z}(x)b_{z}(x)\right]dx-F[h](z)\right]. \tag{12}\]
This is very similar to the Legendre transformation, where the initial function can be retrieved by applying the transformation twice. Here the inversion works because in extremizing Eq.(3) we obtain \(a=a(h,z)\) and \(b=b(h,z)\) as functions of \(h\) (and \(z\)), while in extremizing Eq.(12) we obtain \(h=h(a,z)\) and \(b=b(a,z)\) as functions of \(a\) (and \(z\)). Through formal power series, the triple \((a,b,h)\) can be inverted, which ensures the variational principle for \(F_{0}\) above.
## 2 Proofs
We propose three proofs for Theorem 1. The first is a direct proof that uses a bijection between non-crossing partitions and trees. The second is also direct and is based on organizing the summation over partitions according to their cardinality or that of their Kreweras dual. The third relies on operator valued free probability theory and shows that our result is a special case of the relation between the operator-valued R-transform and the resolvent. Of course, the three proofs have some elements in common.
### Proof using a tree structure
Expanding the generating function (2) in terms of the moments \(\phi_{n}[h]\), defined in Eq.(10), one has
\[F[h](z)=\log(z)-\sum_{n\geq 1}\frac{z^{-n}}{n}\phi_{n}[h]. \tag{13}\]
The difficulty in this function lies in organising the sum over non-crossing partitions of any possible integer \(n\). To better understand this structure, we note that non-crossing partitions \(\pi\in NC(n)\) are in one-to-one correspondence with planar bipartite rooted trees with \(n\) edges, if one labels their black and white vertices by the parts of \(\pi\) and \(\pi^{*}\). Here is an example for \(\pi=\{\{1,3\},\{2\},\{4,5\},\{6\}\}\) (dotted lines) whose Kreweras complement is \(\pi^{*}=\{\{1,2\},\{3,5,6\},\{4\}\}\) (solid lines).
The parts of \(\pi\) are associated with black vertices and parts of \(\pi^{*}\) with white vertices. Two vertices are connected if the corresponding parts of \(\pi\) and \(\pi^{*}\) have an element in common (identifying numbers with and without bar, \(k\sim\bar{k}\)). The root is (by convention) chosen to be the part \(p\) containing \(1\).
However, applying this correspondence to Eq.(10) is not completely straightforward, because two partitions \(\pi\) and \(\pi^{\prime}\) that are related by a rotation of their elements (in the circle representation) give the same contribution to the sum and thereby complicate the counting of terms. This is due to the integration over \(x_{1},\cdots,x_{n}\). If, instead, we do not integrate over one of these variables, call it \(x\), then \(\pi\) and \(\pi^{\prime}\) will give rise to different contributions, because they now depend on \(x\).
This leads us to define
\[\phi_{n}[h](x):=\mathbb{E}\langle x|(M_{h})^{n}|x\rangle.\]
Note that \(\phi_{n}[h]=\int\!\phi_{n}[h](x)dx\). Associating the label \(x\) to the root of the tree \(T_{\bullet}\) that corresponds to a partition \(\pi\) we now have
\[z^{-n}\phi_{n}[h](x)=\sum_{T_{\bullet}\text{ with }n\text{ edges}}W(T_{\bullet}^{x}).\]
The weight \(W(T_{\bullet}^{x})\) of a tree \(T_{\bullet}\) with root label \(x\) is defined as follows: Assign an integration variable \(x_{i}\) to each black vertex, and assign \(x\) to the black vertex that constitutes the root. Then assign the value \(z^{-k}h(x_{1})\cdots h(x_{k})g_{k}(x_{1},\cdots,x_{k})\) to each white vertex whose neighbouring black vertices carry the variables \(x_{1},\cdots,x_{k}\). Finally, take the product over all vertices and integrate over all \(x_{i}\) (except for the root \(x\)). By definition, the tree consisting of a root without legs has weight one. Graphically, the rules for the weights \(W(T_{\bullet}^{x})\) are
\[\text{white vertex with black neighbours }x_{1},\ldots,x_{k}\;\longmapsto\;z^{-k}h(x_{1})\cdots h(x_{k})\,g_{k}(x_{1},\cdots,x_{k}),\]
\[\text{internal black vertex }x_{i}\;\longmapsto\;\int_{0}^{1}dx_{i}.\]
Doing the sum over all \(n\) is now easy: we just relax the condition on the sum over trees with \(n\) edges to trees of arbitrary size. We consider a generating function involving a sum over \(\phi_{n}[h](x)\),
\[a_{z}(x):=\mathbb{E}\langle x|\frac{h}{z-M_{h}}|x\rangle=\frac{h(x)}{z}\sum_{n\geq 0}\frac{\phi_{n}[h](x)}{z^{n}}\stackrel{!}{=}\frac{h(x)}{z}\sum_{T_{\bullet}}W(T_{\bullet}^{x}),\]
where the last equality is due to the correspondence with trees.
In order to establish the relation (6) satisfied by \(a_{z}(x)\) we consider the subset of trees \(T_{\circ}\) whose root (still a black vertex) has a single leg only. This defines
\[b_{z}(x):=\frac{z}{h(x)}\sum_{T_{\circ}}W(T_{\circ}^{x}). \tag{14}\]
Note that the weight \(W(T_{\bullet}^{x})\) of a tree whose root has \(l\) legs is equal to the product of weights \(W(T_{\circ,1}^{x})\cdots W(T_{\circ,l}^{x})\) of trees with a single leg on their root that arise by cutting the \(l\) legs of \(T_{\bullet}^{x}\). This implies
\[\sum_{T_{\bullet}}W(T_{\bullet}^{x})=\Big{(}1-\sum_{T_{\circ}}W(T_{\circ}^{x} )\Big{)}^{-1},\]
which yields the first relation in Eq.(6).
For the second relation, we start with \(T_{\circ}^{x}\) and cut the \(l\) outgoing legs of the first white vertex. This generates a product of \(l\) trees \(T_{\bullet,i}^{x_{i}}\) whose weights satisfy
\[W(T_{\circ}^{x})=\frac{h(x)}{z}\int dx_{1}\cdots dx_{l}\,g_{l+1}(x,x_{1}, \cdots,x_{l})W(T_{\bullet,1}^{x_{1}})\cdots W(T_{\bullet,l}^{x_{l}}).\]
Therefore, taking the sum over all trees \(T_{\circ}\),
\[b_{z}(x)=\sum_{l\geq 0}\!\int\!\left(\prod_{i=1}^{l}dx_{i}\frac{h(x_{i})}{z} \!\sum_{T_{\bullet}}W(T_{\bullet}^{x_{i}})\right)g_{l+1}(x,x_{1},\cdots,x_{l}). \tag{15}\]
One recognizes the definition of \(a_{z}(x_{i})\) in this expression, which then implies the second relation in Eq.(6).
Both relations in Eq. (6) are the extremization conditions of the variational principle (3). As a last step we should therefore verify that \(F[h]\) as defined in Eq. (13) coincides with the solution of the extremization problem from Eq. (3). Here we show that their first derivatives with respect to \(h\) coincide for any \(h\), as do their values at \(h=0\).
Since \(h(x)\,\delta\phi_{n}[h]/\delta h(x)=n\,\phi_{n}[h](x)\), one calculates from Eq.(13) that
\[-h(x)\frac{\delta F[h](z)}{\delta h(x)}=\sum_{n\geq 1}\frac{\phi_{n}[h](x)}{z^{n}}=\sum_{T_{\bullet}}W(T_{\bullet}^{x})-1. \tag{16}\]
Furthermore, one easily sees from our discussion of the multiplication of weights below Eq.(14) that \(a_{z}(x)b_{z}(x)=\sum_{T_{\bullet}}W(T_{\bullet}^{x})-1\). This leads to
\[-h(x)\frac{\delta F[h](z)}{\delta h(x)}=a_{z}(x)b_{z}(x)\]
On the other hand, starting from Eq.(3), one has
\[h(x)\frac{\delta F[h](z)}{\delta h(x)}=-\frac{h(x)b_{z}(x)}{z-h(x)b_{z}(x)}=- a_{z}(x)b_{z}(x),\]
where we used Eq. (6) in the last line. Since \(F[h=0](z)=\log(z)\) for both definitions (3) and (6), the two expressions for \(F[h](z)\) coincide.
### Proof using Kreweras duality
Let \(\tilde{a}_{z}\) be the resolvent with a marked variable \(x\), such that integrating over the marked variable recovers the resolvent, \(\int\tilde{a}_{z}(x)\,dx=G[h](z)\); that is:
\[\tilde{a}_{z}(x):=\mathbb{E}\langle x|\frac{1}{z-M_{h}}|x\rangle=\sum_{n\geq 0 }z^{-n-1}\,\mathbb{E}\langle x|(M_{h})^{n}|x\rangle\]
From Eq.(10), the moments with marked variable, \(\phi_{n}[h](x):=\mathbb{E}\langle x|(M_{h})^{n}|x\rangle\), are sums over non-crossing partitions, but without the integration over \(x\).
Let \(p_{x}\) be the part of \(\pi\) containing \(x\), and \(p_{x}^{*}\) be the part of \(\pi^{*}\) (the Kreweras dual) containing \(x\) (we choose one of the two labellings of the edges of \(\pi^{*}\) by naming them with their left (right) point when representing the partition as a loop connecting the points that belong to the same part). There are two ways to organise the sum over the number of points/edges and over their non-crossing partitions: either by the cardinality of \(p_{x}\) or by that of \(p_{x}^{*}\).
Let us first organise the sum by the cardinality of \(p_{x}\). If \(|p_{x}|=1\), then \(x\) is a singlet in \(\pi\). That is: we consider all marked partitions (with any number of points greater than one) such that \(x\), the marked point, is a singlet. Summing over such partitions defines a function that we denote \(\tilde{b}_{z}(x)\). That is,
\[\tilde{b}_{z}(x):=\mathbb{E}\langle x|\frac{z\,M_{h}}{z-M_{h}}|x\rangle^{[ \operatorname{no}x]}=\sum_{n\geq 1}z^{1-n}\,\mathbb{E}\langle x|M_{h}^{n}|x \rangle^{[\operatorname{no}x]},\]
where the "no \(x\)" upper script means that \(x\) is not used in any of the intermediate indices in the product of matrices (i.e. when inserting a resolution of the identity). If \(k:=|p_{x}|\geq 2\), then the contribution of this partition to the product \(\mathbb{E}\langle x|M_{h}^{n}|x\rangle\) splits into the product of \(k\) contributions of the form \(\mathbb{E}\langle x|M_{h}^{n_{j}}|x\rangle^{[\operatorname{no}x]}\), with \(\sum_{j}n_{j}=n\). (Here we implicitly use the tree structure underlying the lattice of NC partitions). Since each NC partition appears only once, we get that
\[\tilde{a}_{z}(x)=\sum_{k\geq 0}z^{-1-k}\,[\tilde{b}_{z}(x)]^{k}=\frac{1}{z- \tilde{b}_{z}(x)}.\]
This is the first relation in (6) if we define \(a_{z}=h\,\tilde{a}_{z}\) and \(b_{z}=\tilde{b}_{z}/h\).
Let us now organise the sum by the cardinality of \(p_{x}^{*}\). Actually we shall organise the sum involved in \(\tilde{b}_{z}(x)\) (with no repetition of \(x\)). Let \(k:=|p_{x}^{*}|\geq 1\). The contributions of the marked NC partitions with \(|p_{x}^{*}|=k\) will each involve a factor \(g_{k+1}(x,x_{1},\cdots,x_{k})\). Using again the tree structure underlying the lattice of NC partitions, we then deduce that3
Footnote 3: At the discrete level, because of the "no \(x\)" constraint in the definition of \(b(x)\), the indices to sum over, representing the integrals over \(x_{1},\cdots,x_{k}\), should be different from the marked point \(x\). Similarly, the functions \(\tilde{a}_{z}(x_{j})\) which arise in the relation \(\tilde{b}_{z}(x)=R_{0}[\tilde{a}_{z}](x)\) should actually be \(\tilde{a}_{z}(x_{j})^{\operatorname{no}x}\), involving the matrix element \(\langle x_{j}|M_{h}^{n}|x\rangle^{\operatorname{no}x_{j}}\). But this does not matter in the continuum limit, since all the functions to be integrated over are smooth and \(x\neq x_{j}\) in the integration.
\[\tilde{b}_{z}(x)=\sum_{k\geq 0}h(x)\int dx_{1}\cdots dx_{k}\,g_{k+1}(x,x_{1}, \cdots,x_{k})h(x_{1})\tilde{a}_{z}(x_{1})\cdots h(x_{k})\tilde{a}_{z}(x_{k}).\]
Using the definition for \(R_{0}[p](x)\) below Eq.(6) this reads
\[\tilde{b}_{z}(x)=h(x)\,R_{0}[h\,\tilde{a}_{z}](x).\]
With \(a_{z}=h\,\tilde{a}_{z}\) and \(b_{z}=\tilde{b}_{z}/h\), this is the second relation in Eq.(6).
Finally note that,
\[h(x)\frac{\delta F[h](z)}{\delta h(x)}=1-z\tilde{a}_{z}(x)=-\tilde{a}_{z}(x)\tilde {b}_{z}(x)=a_{z}(x)b_{z}(x).\]
Together with the boundary value, \(F[0](z)=\log z\), this fixes \(F[h](z)\) (at least as a formal power series in \(h\)).
### Proof using operator valued free probability
This section recalls some basic definitions of operator-valued free probability theory and shows how the relation between the R-transform and the Cauchy transform (Theorem 2) can be used to deduce our main result (Theorem 1). Of course, the relation between the \(R\)- and Cauchy transforms also implicitly uses the tree structure of non-crossing partitions. We closely follow [17, chpt. 9] and [18].
**Definition 1**.: Let \(\mathcal{D}\subset\mathcal{A}\) be a unital subalgebra. Then \(E^{\mathcal{D}}:\mathcal{A}\to\mathcal{D}\) is a _conditional expectation value_ if \(E^{\mathcal{D}}[d]=d\) and \(E^{\mathcal{D}}[dad^{\prime}]=dE^{\mathcal{D}}[a]d^{\prime}\) for all \(a\in\mathcal{A}\) and \(d,d^{\prime}\in\mathcal{D}\). Furthermore, the _operator-valued distribution_ of a random variable \(a\in\mathcal{A}\) is given by all _operator-valued_ moments \(E^{\mathcal{D}}[ad_{1}a\cdots ad_{n-1}a]\in\mathcal{D}\) where \(d_{1},\cdots,d_{n-1}\in\mathcal{D}\).
We will consider the case where \(\mathcal{A}\) is the ensemble of \(N\times N\) random matrices \(M\) satisfying properties (i)-(iii), and \(\mathcal{D}\) is the subalgebra of deterministic (bounded) diagonal matrices. In the large \(N\) limit we have \(\mathcal{D}\to L^{\infty}[0,1]\). Furthermore, for \(M\in\mathcal{A}\) we define the conditional expectation value to be
\[E^{\mathcal{D}}[M]:=\operatorname{diag}(\mathbb{E}[M_{11}],\cdots,\mathbb{E}[ M_{NN}]), \tag{17}\]
That is, one takes the usual expectation value of the matrix elements and sets all non-diagonal elements to zero. Since we are only interested in this concrete example, we will always denote elements of \(\mathcal{A}\) by \(M\) in the following definitions (instead of \(a\)).
As in the scalar case, one can define operator-valued free cumulants as follows.
**Definition 2**.: The \(\mathcal{D}\)_-valued free cumulants_\(\kappa_{n}^{\mathcal{D}}:\mathcal{A}^{n}\to\mathcal{D}\) are defined through the \(\mathcal{D}\)-valued moments by
\[E^{\mathcal{D}}[M_{1}\cdots M_{n}]=\sum_{\pi\in NC(n)}\kappa_{\pi}^{\mathcal{ D}}(M_{1},\cdots,M_{n}) \tag{18}\]
where the \(\kappa_{\pi}^{\mathcal{D}}\) are constructed from the family of linear functions \(\kappa_{n}^{\mathcal{D}}:=\kappa_{1_{n}}^{\mathcal{D}}\) respecting the nested structure of the parts appearing in \(\pi\) as explained in the following example.
**Example 1**.: For \(\pi=\{\{1,3\},\{2\},\{4,5\},\{6\}\}\) corresponding to the dotted lines in the following figure (solid lines are the Kreweras complement) \(\kappa_{\pi}^{\mathcal{D}}\) is defined as
\[\kappa_{\pi}^{\mathcal{D}}(M_{1},M_{2},M_{3},M_{4},M_{5},M_{6}):=\kappa_{2}^{ \mathcal{D}}(M_{1}\cdot\kappa_{1}^{\mathcal{D}}(M_{2}),M_{3})\cdot\kappa_{2}^{ \mathcal{D}}(M_{4},M_{5})\cdot\kappa_{1}^{\mathcal{D}}(M_{6}),\]
where \(\cdot\) is matrix multiplication.
Next we would like to relate \(\kappa_{n}^{\mathcal{D}}\) to the local free cumulants \(g_{n}\). In the large \(N\) limit with \(x=i/N\), we introduce the notation \(E^{\mathcal{D}}[M](x):=E^{\mathcal{D}}[M]_{ii}\in\mathbb{R}\) to denote a diagonal element of \(E^{\mathcal{D}}[M]\in\mathcal{D}\). Then, note that by Eq.(10), and due the choice for the conditional expectation value \(E^{\mathcal{D}}\), we have
\[E^{\mathcal{D}}[\underbrace{Md\cdots Md}_{n}M](x)=\sum_{\pi\in NC(n+1)}\int \mathrm{d}\vec{x}^{(n)}d(x_{1})\cdots d(x_{n})g_{\pi}(\vec{x}^{(n)},x)\delta_{ \pi^{*}}(\vec{x}^{(n)},x)\]
Here we interchanged the roles of \(\pi\) and \(\pi^{*}\), which does not change the sum. Comparing to Definition 2, this suggests the following identification.
**Proposition 1**.: \[\kappa_{\pi}^{\mathcal{D}}(Md,\cdots,Md,M)(x)=\int\mathrm{d}\vec{x}^{(n)}d(x_ {1})\cdots d(x_{n})g_{\pi}(\vec{x}^{(n)},x)\delta_{\pi^{*}}(\vec{x}^{(n)},x)\] (19)
Proof.: We must check that this identification can be consistently obtained from the case \(\pi=1_{n+1}\), i.e.
\[\kappa_{n+1}^{\mathcal{D}}(Md,\cdots,Md,M)(x)=\int\mathrm{d}\vec{x}^{(n)}d(x_{1})\cdots d(x_{n})g_{n+1}(\vec{x}^{(n)},x), \tag{20}\]
thereby respecting the nested structure of \(\kappa_{\pi}^{\mathcal{D}}\). In fact, one soon notices that Eq.(19) is precisely the definition of the nested structure introduced in Example 1. We show it explicitly for this example where \(n+1=6\). Using Eq.(20), the r.h.s. of Eq.(19) becomes
\[\int\mathrm{d}x_{1}\mathrm{d}x_{4}\,d(x_{1})^{2}\,d(x_{4})\,d(x)^{2}\,g_{2}(x_{1},x)\,g_{1}(x_{1})\,g_{2}(x_{4},x)\,g_{1}(x).\]
This corresponds to its l.h.s. where \(\delta_{\pi^{*}}(x_{1},\cdots,x_{6})=\delta(x_{1}-x_{2})\delta(x_{3}-x_{5}) \delta(x_{5}-x)\) and we identified \(x_{6}\equiv x\). An arbitrary \(\pi\in NC(n+1)\) can be tackled in the same way identifying \(x_{n+1}\equiv x\).
This result also explains how the structure of \(g_{\pi}\delta_{\pi^{*}}\) which we first encountered in Eq.(10) fits into the free probability picture. Earlier, we could only ascertain that the family \(\tilde{\kappa}_{\pi}:=\int g_{\pi^{*}}(\vec{x})\,\delta_{\pi}(\vec{x})d\vec{x}\) are not the (scalar) free cumulants of \(M\). Now we understand that they are equal to the normalized trace of operator-valued free
cumulants \(\kappa_{\pi}^{\mathcal{D}}(M,\cdots,M)\). This also suggests that "local free cumulants" is an appropriate name for the family of functions \(g_{n}\).
**Definition 3**.: The \(\mathcal{D}\)_-valued \(R\)-transform_, \(R_{M}:\mathcal{D}\rightarrow\mathcal{D}\) of an element \(M\in\mathcal{A}\) is defined by
\[R_{M}(d)=\sum_{n\geq 0}\kappa_{n+1}^{\mathcal{D}}(Md,\cdots,Md,M) \tag{21}\]
and the \(\mathcal{D}\)_-valued Cauchy transform_ (or _resolvent_) \(G_{M}:\mathcal{D}\rightarrow\mathcal{D}\) is defined by
\[G_{M}(d)=E^{\mathcal{D}}[\frac{1}{d-M}]=\sum_{n\geq 0}E^{\mathcal{D}}[d^{-1}(Md^{-1})^{n}] \tag{22}\]
**Theorem 2** (see Thrm. 11 in Chpt. 9 of [17]).: _As in the scalar-valued case, \(R\)- and Cauchy transforms satisfy_
\[G_{M}(d)=\frac{1}{d-R_{M}(G_{M}(d))}. \tag{23}\]
Note that the \(\mathcal{D}\)-valued Cauchy transform can be related to its scalar analogue \(G(z)\) by
\[G(z):=\underline{\mathrm{tr}}(G_{M}(z\mathbb{I}_{N}))=\underline{\mathrm{tr}} \,\mathbb{E}[\frac{1}{z-M}].\]
Let us now consider \(M_{h}=h^{1/2}Mh^{1/2}\in\mathcal{A}\) and define the diagonal elements of its \(\mathcal{D}\)-valued Cauchy transform as
\[\tilde{a}(x):=\lim_{N\rightarrow\infty}G_{M_{h}}(z\mathbb{I}_{N})_{ii} \tag{24}\]
with \(x=i/N\) in the large \(N\) limit. Then the scalar Cauchy transform of \(M_{h}\) is \(G(z)=\int_{0}^{1}\mathrm{d}x\,\tilde{a}(x).\) From Eq.(20) one sees that the diagonal elements of the \(\mathcal{D}\)-valued \(R\)-transform of \(M_{h}\) are given by
\[R_{M_{h}}(d)(x)=\sum_{n\geq 0}\int\mathrm{d}\vec{x}^{(n)}\,d(x_{1})h(x_{1}) \cdots d(x_{n})h(x_{n})h(x)g_{n+1}(\vec{x}^{(n)},x) \tag{25}\]
To conclude, using Eq.(25) together with Theorem 2 we obtain,
\[\tilde{a}(x)=\frac{1}{z-R_{M_{h}}[\tilde{a}](x)}=\frac{1}{z-h(x)R_{0}[h\tilde {a}](x)} \tag{26}\]
where we used \(R_{0}\) from Eq.(7). Redefining \(a(x)=h(x)\tilde{a}(x)\) we obtain our result in Eq.(6).
### No free multiplicative convolution
In the case of matrices rotated by Haar random unitaries we will see in subsection 3.2 that the spectral measure \(\sigma_{I}\) of \(M_{h}\) is related to that of \(h\) and \(M\), respectively
denoted by \(\nu\) and \(\sigma\), via a free multiplicative convolution as \(\sigma_{I}=\nu\boxtimes\sigma\). This is, in fact, always true if \(M\) and \(h\) are free in the sense of free probability. For \(M\) a Haar-randomly rotated matrix, this is the case: \(h\) and \(M\) are free. Here we will show that we cannot obtain \(\sigma_{I}\) by free multiplicative convolution of \(\nu\) and \(\sigma\) in the general case of structured random matrices where the local free cumulants \(g_{n}\) are not constant. In turn, this means that structured random matrices \(M\) are not free from deterministic diagonal matrices \(h\).
Let \(S_{I}\) (resp. \(S_{0}\)) be the \(S\)-transform of the measure \(\sigma_{I}\) (resp. \(\sigma\)). Recall that \(S_{I}(w)=\frac{w+1}{wz_{I}}\) with \(w+1=z_{I}G_{I}(z_{I})\) and similarly \(S_{0}(w)=\frac{w+1}{wz_{0}}\) with \(w+1=z_{0}G_{0}(z_{0})\) (see appendix A for a definition of the S-transform). Hence, if \(\sigma_{I}=\nu\boxtimes\sigma\) so that \(S_{I}(w)=S_{0}(w)S_{h}(w)\), we should have
\[\frac{z_{0}}{z_{I}}=S_{h}(w).\]
In particular, if \(\sigma_{I}=\nu\boxtimes\sigma\) then \(z_{0}/z_{I}\) is independent of the distribution of \(M\), that is, it is independent of the local free cumulants \(g_{n}\). We check below that \(z_{0}/z_{I}\) actually depends on the local free cumulants \(g_{n}\), so that \(\sigma_{I}\neq\nu\boxtimes\sigma\).
The equation for \(z_{I}\) reads
\[w=\int\!dx\frac{h(x)b_{z}(x)}{z_{I}-h(x)b_{z}(x)}=\int\!dx\left(\frac{1}{z_{I} }h(x)b_{z}(x)+\frac{1}{z_{I}^{2}}(h(x)b_{z}(x))^{2}+\cdots\right).\]
This is an equation for \(1/z_{I}\). We have to take into account that \(b_{z}(x)\) also depends on \(z\). Let us write \(b_{z}(x)=b_{1}(x)+\frac{1}{z_{I}}b_{2}(x)+\cdots\). We have \(b_{1}(x)=g_{1}(x)\) and \(b_{2}(x)=\int\!dy\,g_{2}(x,y)h(y)\). Solving for \(1/z_{I}\) we get
\[\frac{1}{z_{I}}=\frac{w}{[hb_{1}]}\left(1-\frac{[(hb_{1})^{2}]+[hb_{2}]}{[hb_{ 1}]^{2}}w+\cdots\right)\]
where \([\cdots]\) is short notation for integration over \(x\), i.e. \([f]=\int\!dxf(x)\). The formula for \(z_{0}\) is obtained from that of \(z_{I}\) by setting \(h=1\). Thus
\[\frac{z_{0}}{z_{I}}=\frac{[g_{1}]}{[hg_{1}]}(1+O(w)),\]
and \(z_{0}/z_{I}\) depends on the local free cumulants \(g_{n}\), and \(S_{I}(w)\neq S_{h}(w)S_{0}(w)\) or equivalently \(\sigma_{I}\neq\nu\boxtimes\sigma\).
Remark that, to next order in \(w\) we have
\[\frac{z_{0}}{z_{I}}=\frac{[g_{1}]}{[hg_{1}]}\left(1-(\frac{[(hg_{1})^{2}]}{[hg _{1}]^{2}}-\frac{[(g_{1})^{2}]}{[g_{1}]^{2}})w-(\frac{[[g_{2}(h\times h)]]}{[ hg_{1}]^{2}}-\frac{[[g_{2}]]}{[g_{1}]^{2}})w+O(w^{2})\right)\]
For constant local free cumulants (i.e. for unstructured matrices), \(g_{1}\rightsquigarrow\kappa_{1}\) and \(g_{2}\rightsquigarrow\kappa_{2}\), the r.h.s. of the previous equation is independent of those cumulants (it depends
only on \(h\)). More precisely the second term proportional to \(w\) vanishes since \([[g_{2}(h\times h)]]\rightsquigarrow\kappa_{2}[h]^{2}\) and \([hg_{1}]\rightsquigarrow\kappa_{1}[h]\), while the remaining terms become
\[\frac{z_{0}}{z_{I}}=\frac{1}{[h]}\left(1-\frac{[h^{2}]-[h]^{2}}{[h]^{2}}w+O(w^{ 2})\right)\]
This coincides with the expansion of \(S_{h}(w)\) as expected.
## 3 Applications
In this section we apply the formulae (5-6) to some explicit random matrix ensembles. For Wigner matrices we show that this reproduces the well known Wigner semi-circle law. We also show how to generalize this to the structured case where the variance of diagonal entries can vary. For matrices rotated by Haar random unitaries, we show that our method reduces to free multiplicative convolution when one is interested in the spectrum of subblocks. Finally we apply our method to the stationary distribution of the Quantum Symmetric Simple Exclusion Process (QSSEP), a structured random matrix ensemble for which the local free cumulants are known. This application to QSSEP is new and extends the results previously presented in [8].
### Wigner matrices
Wigner matrices are characterized by the vanishing of their associated free cumulants of order strictly greater than two. Thus, for Wigner matrices all \(g_{n}\) with \(n\geq 3\) vanish, and only \(g_{1}\) and \(g_{2}\) are non-vanishing; both are \(x\)-independent. Without loss of generality we can choose \(g_{1}=0\) and we set \(g_{2}=s^{2}\). Then \(F_{0}[p]=\frac{s^{2}}{2}\int\!dxdy\,p(x)p(y)\) and \(R_{0}[p]=s^{2}\int\!dx\,p(x)\). For the whole interval \(h(x)=1\) (considering a subset would be equivalent), the saddle point equations become
\[a=\frac{1}{z-b},\ b=s^{2}\,A,\]
with \(A=\int\!dx\,a(x)\). This yields a second order equation for \(A\), i.e. \(A^{-1}=z-s^{2}A\). Solving it, with the boundary condition \(A\sim\frac{1}{z}+\cdots\) at \(z\) large, gives
\[A=\frac{1}{2s^{2}}\left(z-\sqrt{z^{2}-4s^{2}}\right)\]
Thus the cut is on the interval \([-2s,+2s]\) and the spectral density is
\[d\sigma(\lambda)=\frac{d\lambda}{2\pi s^{2}}\,\sqrt{4s^{2}-\lambda^{2}}\ 1_{ \lambda\in[-2s,+2s]} \tag{27}\]
Of course, that's Wigner's semi-circle law.
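A quick Monte-Carlo sanity check of Eq.(27) (our own illustration; the value of \(s\) and the matrix size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, s = 1000, 0.7

# Hermitian Wigner matrix with E[|M_ij|^2] = s^2 / N
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = s * (A + A.conj().T) / (2 * np.sqrt(N))

eig = np.linalg.eigvalsh(M)
hist, edges = np.histogram(eig, bins=40, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.sqrt(np.clip(4 * s**2 - mid**2, 0, None)) / (2 * np.pi * s**2)
print("max deviation from Eq.(27):", np.abs(hist - semicircle).max())
```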
_Inhomogeneous Variance._
Next we consider \(N\times N\) Wigner matrices with zero mean and variance
\[\mathbb{E}[M_{ij}M_{kl}]=\frac{1}{N}\delta_{jk}\delta_{il}\,g_{2}(\frac{i}{N}, \frac{j}{N}),\]
with \(g_{2}(x,y)\) a (smooth) function. It is clear that the three fundamental properties (i)-(iii) are satisfied. We restrict to diagonal covariances \(g_{2}(x,y)=s^{2}(x)\delta(x-y)\), because otherwise we cannot find closed expressions for the spectrum. The saddle point equation is then a quadratic equation for \(a_{z}(x)\) which, in the case \(h(x)=1\), reads \(a_{z}(x)(z-s^{2}(x)a_{z}(x))=1\) so that
\[a_{z}(x)=\frac{1}{2s^{2}(x)}(z-\sqrt{z^{2}-4s^{2}(x)}).\]
The resolvent is \(G(z)=\int\!dx\,a_{z}(x)\). Its discontinuity at the cut is the sum of the discontinuities for each value of \(x\). This yields for the spectral density
\[d\sigma(\lambda)=\frac{d\lambda}{2\pi}\int\!\frac{dx}{s^{2}(x)}\,\sqrt{4s^{2}(x)-\lambda^{2}}\;1_{\lambda^{2}<4s^{2}(x)}. \tag{28}\]
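Eq.(28) is simply a superposition of semicircle laws of local width \(2s(x)\). A crude way to see this numerically (our own sketch: we realize the \(x\)-dependent variance by a block-diagonal caricature, \(K\) independent Wigner blocks with variances \(s^{2}(x_{k})\), whose total spectrum reproduces this superposition in the large-block limit; the profile \(s(x)\) is an arbitrary choice) is:

```python
import numpy as np

rng = np.random.default_rng(2)
K, Nb = 20, 200                       # K diagonal blocks of size Nb
s = lambda x: 0.5 + 0.5 * x           # local variance profile s(x)

eigs = []
for k in range(K):
    xk = (k + 0.5) / K
    A = rng.standard_normal((Nb, Nb)) + 1j * rng.standard_normal((Nb, Nb))
    W = s(xk) * (A + A.conj().T) / (2 * np.sqrt(Nb))
    eigs.append(np.linalg.eigvalsh(W))
eigs = np.concatenate(eigs)           # compare a histogram of these to Eq.(28)

# Eq.(28): average over x of the local semicircle densities
lam = np.linspace(-2.1, 2.1, 300)
rho = np.zeros_like(lam)
for k in range(K):
    xk = (k + 0.5) / K
    rho += np.sqrt(np.clip(4 * s(xk)**2 - lam**2, 0, None)) / (2 * np.pi * s(xk)**2)
rho /= K
print("mass of Eq.(28):", rho.sum() * (lam[1] - lam[0]))   # ~ 1
```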
### Haar-randomly rotated matrices
We consider matrices of the form \(M=UDU^{\dagger}\), with \(U\) Haar distributed over the unitary group and \(D\) a diagonal matrix with spectral density \(\sigma\) in the large \(N\) limit. For such matrices, it is known that the local free cumulants are constant and equal to the free cumulants of \(\sigma\), that is
\[g_{n}(\vec{x}):=\lim_{N\to\infty}N^{n-1}\mathbb{E}[M_{i_{1}i_{2}}M_{i_{2}i_{3} }\cdots M_{i_{n}i_{1}}]^{c}=\kappa_{n}(\sigma). \tag{29}\]
The proof resorts to the HCIZ integral and is outlined in appendix B.
_Spectrum of \(M\) (consistency check)._
Of course the spectrum of the whole matrix \(M\) is that of \(D\) with spectral density \(\sigma\). Let us check this within our approach via Theorem 1. With \(g_{n}=\kappa_{n}(\sigma)\) and \(h(x)=1\) (we consider the whole matrix \(M\)), Eqs.(6) become
\[A=\frac{1}{z-b_{z}(A)},\ b_{z}(A)=\sum_{k\geq 1}A^{k-1}\kappa_{k}(\sigma),\]
with \(A=\int\!dx\,a_{z}(x)\). Let us recall some basic definitions from free probability. For any measure \(\sigma\) of some random variable \(X\), let \(G_{\sigma}(z)=\mathbb{E}[\frac{1}{z-X}]=\sum_{n\geq 0}z^{-n-1}m_{n}(\sigma)\) and \(K_{\sigma}(z)=\sum_{n\geq 0}z^{n-1}\kappa_{n}(\sigma)\), with \(m_{n}\) and \(\kappa_{n}\) the \(n\)-th moments and free cumulants, respectively. As is well known from free probability, \(G_{\sigma}\) and \(K_{\sigma}\) are inverse functions, i.e. \(K_{\sigma}(G_{\sigma}(z))=z\). Comparing with the previous equation, we see that \(b_{z}(A)=K_{\sigma}(A)-A^{-1}\). The equation \(A=1/(z-b_{z}(A))\) can thus be written as \(z=b_{z}(A)+A^{-1}=K_{\sigma}(A)\), and hence
\[A=G_{\sigma}(z)\]
As a consequence, the resolvent of \(M\) is equal to \(G_{\sigma}(z)\) and the spectral density of \(M\) is indeed that of \(D,\) as it should be.
_Spectrum of a subblock reduces to free compression._
We now consider an interval \(I=[0,\ell]\subset[0,1]\) and compute the spectrum \(d\sigma_{I}(\lambda)\) of the subblock of \(M\) with \(\ell N\) rows and columns that corresponds to this interval, i.e. \(h(x)=1_{x\in I}\). The saddle point equations (6) impose \(b_{z}(x)\) to be independent of \(x\) and \(a_{z}(x)=0\) for \(x\not\in I\). They then read
\[b_{z}=\sum_{k\geq 1}A_{\ell}^{k-1}\kappa_{k}(\sigma),\quad A_{\ell}=\frac{ \ell}{z-b_{z}}\]
with \(A_{\ell}=\int_{I}dx\,a_{z}(x).\) These two equations imply \(z=K_{\sigma}(A_{\ell})-(1-\ell)/A_{\ell}.\)
Let us now introduce (following a remark by Ph. Biane) the freely-compressed measure \(\sigma^{(t)}\), obtained from \(\sigma\) by compressing its free cumulants by a factor \(1/t\), that is
\[\kappa_{k}(\sigma^{(t)}):=t^{-1}\kappa_{k}(\sigma).\]
We have \(K_{\sigma^{(t)}}(w)=\frac{1}{t}K_{\sigma}(w)+(\frac{t-1}{t})\frac{1}{w}.\) The equation for \(A_{\ell}\) thus reads \(K_{\sigma^{(t)}}(A_{\ell})=\frac{z}{\ell}\) and hence
\[A_{\ell}(z)=G_{\sigma^{(\ell)}}(\frac{z}{\ell}).\]
This implies, \(\int\frac{d\sigma_{I}(\lambda)}{z-\lambda}=\int\frac{d\sigma^{(\ell)}(X)}{z- \ell X},\) so that
\[d\sigma_{I}(\ell\lambda)=d\sigma^{(\ell)}(\lambda) \tag{30}\]
That is: the spectral measure of a subblock \(I=[0,\ell]\) of \(M\) of relative size \(\ell\) is that of the freely-compressed measure \(\sigma^{(\ell)}\), but for the compressed eigenvalue \(\lambda/\ell\).
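To see free compression at work numerically, here is a self-contained sketch (our own illustration) for the symmetric two-atom measure \(\sigma=\frac{1}{2}(\delta_{+1}+\delta_{-1})\), for which \(G_{\sigma}(z)=z/(z^{2}-1)\), so that the condition \(z=K_{\sigma}(A_{\ell})-(1-\ell)/A_{\ell}\) reduces, after eliminating the square root, to the quadratic \((z^{2}-1)A_{\ell}^{2}-z(2\ell-1)A_{\ell}-\ell(1-\ell)=0\). Since \(G_{I}=A_{\ell}/\ell\) here, the subblock density is \(-\operatorname{Im}A_{\ell}(\lambda+i0)/(\pi\ell)\), which can be compared with the eigenvalues of a sampled corner block:

```python
import numpy as np

rng = np.random.default_rng(3)
N, ell = 1200, 0.35
d = np.concatenate([np.ones(N // 2), -np.ones(N // 2)])   # sigma = (d_+1 + d_-1)/2

# Sample the corner block of M = U D U^dagger
Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))                 # Haar unitary
M = (U * d) @ U.conj().T
k = int(ell * N)
eig = np.linalg.eigvalsh(M[:k, :k])

def rho(lam, eps=1e-6):
    # Herglotz root (Im A < 0) of (z^2-1) A^2 - z(2l-1) A - l(1-l) = 0
    z = lam + 1j * eps
    disc = np.sqrt(z**2 - 4 * ell * (1 - ell))
    roots = [(z * (2 * ell - 1) + sgn * disc) / (2 * (z**2 - 1)) for sgn in (1, -1)]
    A = min(roots, key=lambda r: r.imag)
    return -A.imag / (np.pi * ell)

lam = np.linspace(-0.99, 0.99, 199)
dens = np.array([rho(l) for l in lam])
print("analytic mass:", dens.sum() * (lam[1] - lam[0]))   # ~ 1
print("sampled eigenvalues in [", eig.min(), ",", eig.max(), "]")
```

For \(\ell=1/2\) the analytic branch reduces to \(1/(\pi\sqrt{1-\lambda^{2}})\), the arcsine law, which is a standard consistency check of the formulas above.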
_Spectrum via free multiplicative convolution._
For illustrative purposes, we present an explicit derivation of the well-known result that for any initial spectral measure \(\sigma\) - that of the diagonal matrix \(D\) - and any function \(h\) - defining the total spectral measure \(\nu\) - the spectral measure of \(h^{1/2}Mh^{1/2}\) is the free multiplicative convolution of \(\nu\) and \(\sigma\), that is
\[\sigma_{I}^{\rm tot}=\nu\boxtimes\sigma. \tag{31}\]
Note that \(\sigma_{I}^{\rm tot}\) includes potential zero eigenvalues in the case where \(h(x)=1_{x\in I}\) is the indicator function on an interval. In contrast to this, \(\sigma_{I},\) which we obtained in the last subsection via free compression as the spectrum of a subblock \(M_{I}\subset M,\) does not contain these zero eigenvalues. The two spectra are related by Eq.(9).
Let \(S_{0}\), \(S_{h}\), \(S_{I}\) (resp. \(G_{0}\), \(G_{h}\), \(G_{I}\)) denote the S-transforms (resp. the resolvents) of the spectral measures \(\sigma\), \(\nu\), \(\sigma_{I}^{\rm tot}\). Recall the relation between the \(S\)-transform and the \(R\)-transform, \(C(zS(z))=z\) with \(C(z)=zR(z)\). Recall also the relation between the resolvent and the \(S\)-transform: \(S(zG(z)-1)=G(z)/(zG(z)-1)\). Setting \(w=zG(z)-1\), it can be written as \(S(w)=\frac{w+1}{zw}\) with \(z\) implicitly depending on \(w\) via \(w+1=zG(z)\).
We have to prove \(S_{I}(w)=S_{h}(w)S_{0}(w)\).
Let \(w:=zG_{I}(z)-1\), so that \(S_{I}(w)=\frac{w+1}{zw}\) with \(zG_{I}(z)=w+1\). The resolvent \(G_{I}\) is defined by the saddle point equations
\[G_{I}(z)=\int\frac{dx}{z-h(x)R_{0}(A)},\quad\mbox{with }A=\int\frac{h(x)\,dx}{z-h( x)R_{0}(A)}.\]
Using these relations we have \(w=R_{0}(A)\int\frac{h(x)\,dx}{z-h(x)R_{0}(A)}=AR_{0}(A)\) thus \(w=C_{0}(A)\), and hence \(A=wS_{0}(w)\). As a consequence, the relation \(A=\int\frac{h(x)\,dx}{z-h(x)R_{0}(A)}\) can be written as (using \(A=wS_{0}(w)\))
\[w=\int\frac{h(x)dx}{zS_{0}(w)-h(x)}\]
Let \(u=zS_{0}(w)\), then (using \(S_{I}(w)=\frac{w+1}{zw}\))
\[S_{I}(w)=S_{0}(w)\,\frac{w+1}{uw},\quad\mbox{with }w+1=\int\frac{u\,dx}{u-h( x)}\]
Now \(\frac{w+1}{uw}=S_{h}(w)\), because for \(h\) diagonal, \(G_{h}(u)=\int\frac{dx}{u-h(x)}\) and hence \(S_{h}(w)=\frac{w+1}{uw}\) with \(w+1=\int\frac{u\,dx}{u-h(x)}\). Thus \(S_{I}(w)=S_{0}(w)S_{h}(w)\) or equivalently \(\sigma_{I}^{\rm tot}=\nu\boxtimes\sigma\).
### QSSEP
The open Quantum Symmetric Simple Exclusion Process (QSSEP) is a quantum stochastic process that is supposed to model diffusive transport in one dimensional chaotic many-body quantum systems in the mesoscopic regime [6, 7]. Mathematically it is a one-dimensional chain with \(N\) sites occupied by spinless free fermions \(c_{j}^{\dagger}\) with noisy hopping rates and coupled to boundary reservoirs at \(j=1,N\) that inject and extract fermions with rates \(\alpha_{1,N}\) and \(\beta_{1,N}\), respectively. The key quantity of interest is the matrix of coherences with elements \(M_{ij}(t):=\mbox{Tr}(\rho_{t}\,c_{i}^{\dagger}c_{j})\) which contains all information about the system since the evolution of QSSEP preserves Gaussian fermionic states [6]. The \(N\times N\) matrix \(M\) undergoes a stochastic evolution of the form
\[dM(t)=i[dh_{t},M(t)]-\frac{1}{2}[dh_{t},[dh_{t},M(t)]]+\mathcal{L}[M]dt \tag{32}\]
with
\[dh_{t}=\begin{pmatrix}0&dW_{t}^{1}&&\\ d\overline{W}_{t}^{1}&\ddots&\ddots&\\ &\ddots&\ddots&dW_{t}^{N-1}\\ &&d\overline{W}_{t}^{N-1}&0\end{pmatrix}\]
where \(dW_{t}^{j}:=W_{t+dt}^{j}-W_{t}^{j}\) are the increments of complex Brownian motions, independent for each site \(j\), and
\[\mathcal{L}[M]_{ij}=\sum_{p\in\{1,N\}}(\delta_{pi}\delta_{pj}\alpha_{p}-\frac{1}{2}(\delta_{ip}+\delta_{jp})(\alpha_{p}+\beta_{p})M_{ij})\]
The stochastic evolution has a unique stationary distribution [19] that is characterized by its local free cumulants as [13] (again with the notation \(\vec{x}=(x_{1},\cdots,x_{n})\))
\[\sum_{\pi\in NC(n)}g_{\pi}(\vec{x})=\min(\vec{x})=:\varphi_{n}(\vec{x}). \tag{33}\]
The functions \(g_{n}\) can be viewed as the free cumulants of the indicator functions \(\mathbb{I}_{x}(y):=1_{y<x}\) with respect to the Lebesgue measure, since the moments of these functions are precisely \(\mathbb{E}[\mathbb{I}_{x_{1}}\cdots\mathbb{I}_{x_{n}}]=\min(\vec{x})\).
From the point of view of physics we are interested in the spectra of \(M\) and of its subblocks in order to compute the entanglement entropy in the QSSEP [8].
_Explicit expression for \(F_{0}[p]\)._
As a first step, we compute the function \(F_{0}[p]\) that contains the initial data through the functions \(g_{n}\). Defining \(\mathbb{I}_{[p]}(y):=\int_{y}^{1}dx\,p(x)\), we shall prove that for QSSEP,
\[F_{0}[p]=w-1-\int_{0}^{1}dx\log[w-\mathbb{I}_{[p]}(x)],\text{ with }\int_{0}^{1}\frac{dx}{w-\mathbb{I}_{[p]}(x)}=1. \tag{34}\]
We define the free cumulant and the moment generating function,
\[K_{[p]}(w)=\sum_{n\geq 0}w^{n-1}g_{n}[p],\quad G_{[p]}(w)=\sum_{n\geq 0}w^{-n-1} \varphi_{n}[p],\]
where \(\varphi_{n}[p]:=\int d\vec{x}\,\varphi_{n}(\vec{x})p(x_{1})\cdots p(x_{n})\) and \(g_{n}[p]:=\int d\vec{x}\,g_{n}(\vec{x})p(x_{1})\cdots p(x_{n})\). By convention, we set \(g_{0}[p]=\varphi_{0}[p]\equiv 1\). We have \(\varphi_{n}[p]=\sum_{\pi\in NC(n)}g_{\pi}[p]\). By results from free probability theory, these two functions are inverses of each other, \(K_{[p]}(G_{[p]}(w))=w\). Integrating Eq.(33) and using \(\mathbb{I}_{[p]}(y)=\int_{0}^{1}dx\,p(x)\mathbb{I}_{x}(y)\), the Cauchy transform \(G_{[p]}\) can be written as
\[G_{[p]}(w)=\int_{0}^{1}\frac{dx}{w-\mathbb{I}_{[p]}(x)}. \tag{35}\]
Since the initial data function \(F_{0}\) is such that \(F_{0}[vp]=\sum_{n\geq 1}\frac{v^{n}}{n}g_{n}[p],\) we have \(1+v\partial_{v}F_{0}[vp]=vK_{[p]}(v).\) Define now a new variable \(w,\) depending on \(v\) and \(p,\) such that \(v=G_{[p]}(w).\) Using \(K_{[p]}(G_{[p]}(w))=w,\) the equation \(1+v\partial_{v}F_{0}[vp]=vK_{[p]}(v)\) then becomes
\[1+v\partial_{v}F_{0}[vp]=vw\]
Integrating w.r.t. \(v\) yields (with the appropriate boundary condition \(F_{0}[0]=0\))
\[F_{0}[vp]=vw-1-\int_{0}^{1}dx\log[v(w-\mathbb{I}_{[p]}(x))]\]
Indeed, computing the \(v\)-derivative of the l.h.s. gives \(v\partial_{v}F_{0}[vp]=vw-1+(\frac{\partial w}{\partial v})[v-\int_{0}^{1}\frac{dx}{w-\mathbb{I}_{[p]}(x)}]\) which, using equation (35), becomes \(v\partial_{v}F_{0}[vp]=vw-1\). Setting \(v=1\) one obtains Eq.(34).
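The defining property \(v\partial_{v}F_{0}[vp]=vw(v)-1\) can also be verified by direct numerics; a minimal sketch (our own illustration; the profile \(p(x)\) is an arbitrary positive choice, and \(w(v)\) is obtained by root finding from the constraint \(G_{[p]}(w)=v\)):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

p = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x)   # arbitrary positive profile
Ip = lambda y: quad(p, y, 1.0)[0]                  # I_[p](y) = int_y^1 p(x) dx

def w_of_v(v):
    # solve G_[p](w) = int_0^1 dx/(w - I_[p](x)) = v, with w > max I_[p] = I_[p](0)
    g = lambda w: quad(lambda x: 1.0 / (w - Ip(x)), 0.0, 1.0)[0] - v
    lo = Ip(0.0)
    return brentq(g, lo + 1e-3, lo + 100.0)

def F0(v):
    w = w_of_v(v)
    integral = quad(lambda x: np.log(v * (w - Ip(x))), 0.0, 1.0)[0]
    return v * w - 1.0 - integral, w

# check v d/dv F0[vp] = v*w(v) - 1 by a central finite difference
v, dv = 1.0, 1e-4
lhs = v * (F0(v + dv)[0] - F0(v - dv)[0]) / (2 * dv)
rhs = v * F0(v)[1] - 1.0
print(lhs, rhs)   # the two numbers should agree
```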
_Differential equation for \(b_{z}(x)\)._
Next we derive a differential equation for \(b_{z}(x),\) see Eq.(36) below.
Using Eq.(34), the relation \(b_{z}(x)=\frac{\delta F_{0}[a_{z}]}{\delta a_{z}(x)}\) becomes
\[b_{z}(x)=\int_{0}^{x}\frac{dy}{w-\mathbb{I}_{[a_{z}]}(y)},\text{ with }\int_{0}^{1}\frac{dx}{w-\mathbb{I}_{[a_{z}]}(x)}=1.\]
Thus, \(b_{z}(x)\) satisfies the boundary conditions \(b_{z}(0)=0\) and \(b_{z}(1)=1.\) Furthermore, \(1/b_{z}^{\prime}(x)=w-\mathbb{I}_{[a_{z}]}(x)\) and \((1/b_{z}^{\prime}(x))^{\prime}=a_{z}(x).\) Using \(a_{z}(x)=\frac{h(x)}{z-h(x)b_{z}(x)}\) from the saddle point equation gives, after some algebraic manipulation,
\[zb^{\prime\prime}(x)+h(x)(b^{\prime}(x)^{2}-b(x)b^{\prime\prime}(x))=0. \tag{36}\]
For \(h(x)=1_{x\in I},\) that is \(h(x)=0\) for \(x\not\in I\) and \(h(x)=1\) for \(x\in I,\) this yields
\[\begin{cases}[\log(z-b_{z}(x))]^{\prime\prime}=0,&\text{ if }x\in I\\ b_{z}(x)^{\prime\prime}=0,&\text{ if }x\notin I\end{cases} \tag{37}\]
with boundary conditions \(b_{z}(0)=0\) and \(b_{z}(1)=1.\)
_Spectrum of \(M\)._
First we present the derivation of the easier case where \(I=[0,1].\) In this case a solution of Eq.(37) with correct boundary conditions is \(b_{z}(x)=z-z\left(\frac{z-1}{z}\right)^{x}\). Via Eq.(5) the resolvent becomes
\[G(z)=\int_{0}^{1}\!dx\,z^{x-1}(z-1)^{-x} \tag{38}\]
and has a branch cut at \(z\in[0,1]\). Cauchy's identity yields the spectral density as \(G(\lambda-i\epsilon)-G(\lambda+i\epsilon)=2i\pi\sigma_{[0,1]}(\lambda)\), from which we find \(d\sigma_{[0,1]}(\lambda)=\frac{d\lambda}{\pi}\int_{0}^{1}dx\,\sin(\pi x)\,\lambda^{x-1}(1-\lambda)^{-x}\). Integrating over \(x\) leads to
\[d\sigma_{[0,1]}(\lambda)=\frac{d\lambda}{\lambda(1-\lambda)}\,\frac{1}{\pi^{2}+ \log^{2}(\frac{1-\lambda}{\lambda})}1_{\lambda\in[0,1]}. \tag{39}\]
By a change of variable, this is actually a Cauchy-Lorentz distribution. Defining \(\nu:=\log(\frac{\lambda}{1-\lambda})\in(-\infty,+\infty)\), or \(\lambda=\frac{e^{\nu}}{1+e^{\nu}}\), we have
\[d\sigma_{[0,1]}(\lambda)=\frac{d\nu}{\pi^{2}+\nu^{2}}.\]
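The intermediate representation \(\sigma_{[0,1]}(\lambda)=\frac{1}{\pi}\int_{0}^{1}dx\,\sin(\pi x)\,\lambda^{x-1}(1-\lambda)^{-x}\) obtained above can be checked against the closed form (39) by direct quadrature; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

def rho_integral(lam):
    # branch-cut discontinuity of Eq.(38), integrated over x
    f = lambda x: np.sin(np.pi * x) * lam ** (x - 1) * (1 - lam) ** (-x)
    return quad(f, 0.0, 1.0)[0] / np.pi

def rho_closed(lam):
    # closed form, Eq.(39)
    return 1.0 / (lam * (1 - lam) * (np.pi**2 + np.log((1 - lam) / lam) ** 2))

for lam in (0.1, 0.3, 0.5, 0.8):
    print(lam, rho_integral(lam), rho_closed(lam))   # the two columns agree
```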
_Spectrum of a subblock of \(M\)._
The spectrum of an arbitrary subblock \(M_{I}\subset M\) can become quite complicated, as the following proposition shows. Since we are dealing with a structured matrix, the spectrum will depend on the position of the subblock, and not only on its size. Figure 1 shows that the analytical result from the following proposition indeed agrees with a numerical simulation of the spectrum of \(M_{I}\).
Proposition 2: _The spectrum of the subblock \(M_{I}\) of \(M\) restricted to the interval \(I=[c,d]\) of size \(\ell=d-c\) is_
\[d\sigma_{[c,d]}(\lambda)=\frac{d\lambda}{\pi\lambda(1-\lambda)}\,\frac{\theta }{\theta^{2}+(\log r)^{2}}\,1_{\lambda\in[z_{*}^{-},z_{*}^{+}]} \tag{40}\]
Figure 1: Comparison between the analytical solution and a numerical simulation for the spectrum \(\sigma_{I}\) of \(M_{I}\) for \(I=[0.4,0.7]\). The numerics comes from a simulation of the QSSEP time evolution of \(M\) in Eq.(32) on N = 100 sites with discretization time step \(dt=0.1\). Instead of averaging over many noisy realizations, we exploit the ergodicity of QSSEP and perform a time average over a single realization between t = 0.25 and t = 0.4. The QSSEP dynamics reaches its steady state at approximately t = 0.2.
_where \(r\) and \(\theta\) are functions of \(\lambda\) determined by the complex transcendental equation_
\[\frac{\lambda}{1-\lambda}re^{i\theta}=\frac{c(\log r+i\theta)-\ell}{(1-d)(\log r +i\theta)+\ell}. \tag{41}\]
_The support of the spectrum is \([z_{*}^{-},z_{*}^{+}]\) with_
\[z_{*}^{\pm}=\frac{c(1-c)+d(1-d)\pm\sqrt{\Delta}}{c(1-c)+d(1-d)\pm\sqrt{\Delta} +2(1-d)^{2}\,e^{-\delta_{\pm}}}. \tag{42}\]
_with \(\Delta=\ell(1-\ell)[\ell(1-\ell)+4c(1-d)]\) and \(\ell+c\delta^{\pm}=\frac{1}{2(1-d)}[\ell(1-\ell)\pm\sqrt{\Delta}]\). Note that \(z_{*}^{-}<c\) and \(d<z_{*}^{+}\) so that the support of the spectrum is larger than the interval \([c,d]\)._
Proof.: An ansatz for the differential equation Eq.(37), satisfying \(b_{z}(0)=0\) and \(b_{z}(1)=1\), is
\[b_{z}(x)=\begin{cases}\alpha x,&\text{if }0<x<c,\\ z+\gamma\,Q(z)^{y},&\text{if }c<x<d,\\ 1+\beta(x-1),&\text{if }d<x<1,\end{cases} \tag{43}\]
where instead of \(x\in[c,d]\) we use \(y\in[0,1]\) to parametrize the interval, \(x=c+y\ell\). The complex function \(Q(z)\) parametrises the exponential growth of \(z-b_{z}(x)\) in the interval \([c,d]\). The coefficients \(\alpha\), \(\beta\), \(\gamma\) are determined, as a function of \(Q\), by the continuity of \(b_{z}\) and \(b_{z}^{\prime}\) at the boundaries of the interval \(y=0\) and \(y=1\). One finds (we don't need the explicit expression for \(\alpha\) and \(\beta\))
\[b_{z}(x)=z+\frac{c(1-z)-(1-d)zQ(z)}{(1-\ell)Q(z)}\,Q(z)^{y}, \tag{44}\]
for \(x=c+y\ell\in[c,d]\). The continuity of \(b_{z}\) and \(b_{z}^{\prime}\) at the boundaries of the interval yields four equations. Three are used to determined \(\alpha\), \(\beta,\gamma\). The fourth one yields a constraint on \(Q\),
\[(1-z+zQ)(\ell-c\log Q)=z(\ell-1)Q\log Q. \tag{45}\]
This specifies the analytical structure of \(Q(z)\), as a complex function of \(z\), from which we deduce the spectral density.
We first determine the support of the spectrum, by looking at the position of the cut of the function \(Q(z)\), and then the spectral measure on that support, by computing the jump of \(Q(z)\) on its cut.
The cut of \(Q\) is on the real axis. To find it, we write Eq.(45) as,
\[f(\hat{Q})=g(\hat{Q}),\quad\text{with }f(q):=1+\eta q,\ g(q):=\frac{(1-\ell) \log q}{\ell+c\log q},\]
where \(\hat{Q}:=Q^{-1}\) and \(\eta:=\frac{1-z}{z}\). We have \(g^{\prime}(q)=\frac{\ell(1-\ell)}{q(\ell+c\log q)^{2}}>0\), and \(g(q)\) diverges for \(q=e^{-\ell/c}<1\), while \(g(0)=g(\infty)=\frac{1-\ell}{c}>1\). On \(\mathbb{R}_{+}\), the function \(g(q)\) becomes tangent to the straight line \(f(q)\) for two critical values \(\eta_{*}^{\pm}\) with \(\eta_{*}^{+}<\eta_{*}^{-}\). This corresponds to two critical values for \(z\), i.e. \(z_{*}^{\pm}=1/(1+\eta_{*}^{\pm})\) with \(0<z_{*}^{-}<z_{*}^{+}<1\). The cut, and hence the support of the eigenvalues, is thus on \([z_{*}^{-},z_{*}^{+}]\subset[0,1]\).
The two critical values for \(\eta\) are solutions of (with \(q=e^{\delta}\))
\[1+\eta e^{\delta}=\frac{(1-\ell)\delta}{\ell+c\delta},\ \eta e^{\delta}=\frac{ \ell(1-\ell)}{(\ell+c\delta)^{2}}.\]
This leads to a second order equation for \(\delta\),
\[c(1-d)\delta^{2}+\ell(1-d-c)\delta-\ell=0\]
The discriminant is \(\Delta=\ell(1-\ell)[\ell(1-\ell)+4c(1-d)]\) and the two solutions \(\delta^{\pm}:=\frac{1}{2c(1-d)}[\ell(d+c-1)\pm\sqrt{\Delta}]\). We let \(\kappa_{\pm}:=\ell(1-\ell)\pm\sqrt{\Delta}\), which is symmetric by \([c,d]\rightarrow[1-d,1-c]\). Then, \(\eta_{\pm}=4(1-d)^{2}\ell(1-\ell)e^{-\delta_{\pm}}/\kappa_{\pm}^{2}\), so that
\[z_{*}^{\pm}=\frac{c(1-c)+d(1-d)\pm\sqrt{\Delta}}{c(1-c)+d(1-d)\pm\sqrt{\Delta} +2(1-d)^{2}e^{-\delta_{\pm}}}. \tag{46}\]
as proposed in Eq.(42).
Let us now compute the spectral density. The latter is obtained by integrating the branch cut discontinuity of the resolvent (5) (with the pole at the origin discarded), so that
\[d\sigma_{[c,d]}(\lambda)=\Im m\,\frac{d\lambda}{\pi}\int_{c}^{d}\frac{dx}{ \ell}\frac{1}{z-b_{z}(x)}\]
with \(z-b_{z}(x)\) given by Eq.(44) for \(x=c+y\ell\) in \([c,d]\). Recall Eq.(45) for \(Q\), which can alternatively be written as \((c(1-z)-(1-d)zQ)\log Q=\ell(1-z+zQ)\) so that
\[z-b_{z}(x)=-(\frac{\ell}{1-\ell})(\frac{1-z+zQ}{Q})\,\frac{Q^{y}}{\log Q},\]
Using \(Q^{-y}\log Q=-\partial_{y}Q^{-y}\), we can explicitly integrate the discontinuity to get
\[d\sigma_{[c,d]}(\lambda)=\frac{d\lambda}{\pi}(\frac{1-\ell}{\ell})\,\Im m \left[\frac{1-Q}{1-\lambda+\lambda Q}\right]\,1_{\lambda\in[z_{*}^{-},z_{*}^{+ }]} \tag{47}\]
Eq.(45) also allows one to express \(Q\) as a function of \(\log Q\) as
\[\frac{z}{1-z}Q=\frac{c\log Q-\ell}{(1-d)\log Q+\ell}. \tag{48}\]
This can then be used to further simplify the expression (47) for the spectral density as
\[d\sigma_{[c,d]}(\lambda)=\frac{d\lambda}{\pi\lambda(1-\lambda)}\Im m\left[ \frac{1}{\log Q}\right]\,1_{\lambda\in[z_{*}^{-},z_{*}^{+}]} \tag{49}\]
Parametrising \(Q\) as \(Q=re^{i\theta}\) in Eqs.(48,49) yields the claim.
To end this section, let us note that the explicit expression of the spectral density satisfies the expected symmetry \(d\sigma_{[c,d]}(\lambda)=d\sigma_{[1-d,1-c]}(1-\lambda)\). In particular, one can verify the symmetry of the support, that is \(z_{*}^{\pm}(1-d,1-c)=1-z_{*}^{\mp}(c,d)\). Furthermore, for \(d=1^{-}\), \(\ell=1-c-0^{+}\), we have \(\delta_{+}=+\infty\) and \(\delta_{-}=-1/c\) so that \(z_{*}^{+}(c,1)=1\) and \(z_{*}^{-}(c,1)=\frac{c}{c+(1-c)e^{-1/c}}\). By symmetry, for \(c=0^{+}\), \(\ell=1-d-0^{+}\), we have \(\delta_{-}=-\infty\) and \(\delta_{+}=1/(1-d)\) so that \(z_{*}^{-}(0,d)=0\) and \(z_{*}^{+}(0,d)=\frac{d}{d+(1-d)e^{-1/(1-d)}}\). For \(c=0^{+}\), \(d=1^{-}\), we get \(z_{*}^{-}=0\), \(z_{*}^{+}=1\) and \(\theta=\pi\), \(r=\frac{1-\lambda}{\lambda}\), and we recover Eq.(39).
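The closed-form support in Eq.(42) is straightforward to evaluate numerically. The following minimal Python sketch (ours, not part of the original text; the interval values are arbitrary) computes \(z_{*}^{\pm}\) and checks the symmetry \(z_{*}^{\pm}(1-d,1-c)=1-z_{*}^{\mp}(c,d)\) discussed above.

```
import numpy as np

def support_endpoints(c, d):
    """Support [z-, z+] of the spectrum of M_I for I = [c, d], from Eq.(42)."""
    ell = d - c
    Delta = ell * (1 - ell) * (ell * (1 - ell) + 4 * c * (1 - d))
    sq = np.sqrt(Delta)
    # delta_pm solve the quadratic c(1-d) delta^2 + ell(1-d-c) delta - ell = 0
    delta_p = (ell * (d + c - 1) + sq) / (2 * c * (1 - d))
    delta_m = (ell * (d + c - 1) - sq) / (2 * c * (1 - d))
    num_p = c * (1 - c) + d * (1 - d) + sq
    num_m = c * (1 - c) + d * (1 - d) - sq
    z_p = num_p / (num_p + 2 * (1 - d) ** 2 * np.exp(-delta_p))
    z_m = num_m / (num_m + 2 * (1 - d) ** 2 * np.exp(-delta_m))
    return z_m, z_p

zm, zp = support_endpoints(0.4, 0.7)    # z- < 0.4 and 0.7 < z+, as stated
zm2, zp2 = support_endpoints(0.3, 0.6)  # the mirror interval [1-d, 1-c]
print(abs(zp2 - (1 - zm)), abs(zm2 - (1 - zp)))  # both vanish (symmetry)
```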
Acknowledgments.We thank Philippe Biane for discussions and Roland Speicher for suggesting to look at non-linear operations on these matrix ensembles. This work was in part supported by the CNRS, the ENS and the ANR project ESQuisses under contract number ANR-20-CE47-0014-01.
## Declarations
Conflict of interest.On behalf of all authors, the corresponding author states that there is no conflict of interest.
Availability of data.Data sharing is not applicable to this article, as no external datasets were analysed during the current study.
## Appendix A Free probability glossary
We try to use notation as close as possible to that used by Speicher in [15]. Let \(\sigma\) be a (classical) measure for a random variable \(X\).
* The resolvent \(G_{\sigma}(z):=\mathbb{E}_{\sigma}[\frac{1}{z-X}]\) is a generating function of the moments \(m_{n}:=\mathbb{E}_{\sigma}[X^{n}]\): \[G_{\sigma}(z) :=\sum_{n\geq 0}m_{n}\,z^{-n-1}=z^{-1}+m_{1}z^{-2}+m_{2}z^{-3}+\cdots\] \[\hat{M}_{\sigma}(z) :=z^{-1}G_{\sigma}(z^{-1})=\sum_{n\geq 0}m_{n}\,z^{n}=1+m_{1}z+m_ {2}z^{2}+\cdots\]
(We put a hat on \(\hat{M}\) to distinguish it from the matrix \(M\); otherwise it is the same definition as in Speicher et al.)
* The \(R\)-transform is a generating function of the free cumulants \(\kappa_{p}:=\kappa_{p}(\sigma)\): \[R_{\sigma}(z) :=\sum_{p\geq 1}\kappa_{p}\,z^{p-1}=\kappa_{1}+\kappa_{2}z+ \kappa_{3}z^{2}+\cdots\] \[K_{\sigma}(z) :=z^{-1}+R_{\sigma}(z)=\sum_{p\geq 0}\kappa_{p}\,z^{p-1}=z^{-1}+ \kappa_{1}+\kappa_{2}z+\kappa_{3}z^{2}+\cdots\] \[C_{\sigma}(z) :=zR_{\sigma}(z)=\sum_{p\geq 1}\kappa_{p}\,z^{p}=\kappa_{1}z+ \kappa_{2}z^{2}+\kappa_{3}z^{3}+\cdots\]
(There is a shift of 1 in this definition of \(C_{\sigma}\) compared to that of Speicher et al). We of course have \(zK_{\sigma}(z)=1+C_{\sigma}(z)\).
* The functions \(G_{\sigma}\) and \(K_{\sigma}\) are inverses of each other, thus \[K_{\sigma}(G_{\sigma}(z))=z,\quad G_{\sigma}(K_{\sigma}(z))=z.\] The previous relation then reads \[zG_{\sigma}(z)=1+C_{\sigma}(G_{\sigma}(z)),\quad\hat{M}_{\sigma}(z)=1+C_{\sigma}(z\hat{M}_{\sigma}(z)).\]
* The \(S\)-transform can be defined by \[C_{\sigma}(zS_{\sigma}(z))=z,\quad C_{\sigma}(z)\,S_{\sigma}(C_{\sigma}(z))=z\] The function \(S_{\sigma}\) exists, as a formal power series in \(z\), whenever \(\kappa_{1}\neq 0\): \(S_{\sigma}(z)=\frac{1}{\kappa_{1}}-\frac{\kappa_{2}}{\kappa_{1}^{3}}z+\cdots\). Using \(G_{\sigma}(K_{\sigma}(z))=z\), this relation can alternatively be written as \[G_{\sigma}\left(\frac{1+z}{zS_{\sigma}(z)}\right)=zS_{\sigma}(z),\quad S_{\sigma}(zG_{\sigma}(z)-1)=\frac{G_{\sigma}(z)}{zG_{\sigma}(z)-1}.\] Setting \(w=zG_{\sigma}(z)-1\), the above formula can be written as \(S_{\sigma}(w)=\frac{w+1}{zw}\) with \(z(w)\) determined by solving \(zG_{\sigma}(z)=w+1\).
* For two measures \(\sigma\) and \(\nu\), the additive free convolution is defined \[R_{\sigma\boxplus\nu}(z)=R_{\sigma}(z)+R_{\nu}(z),\] that is, we add the free cumulants. Thus if \(a\) and \(b\) are (relatively) free then \(R_{a+b}(z)=R_{a}(z)+R_{b}(z)\).
* For two measures \(\sigma\) and \(\nu\), the free multiplicative convolution \(\sigma\boxtimes\nu\) is defined via their \(S\)-transform \[S_{\sigma\boxtimes\nu}(z)=S_{\sigma}(z)S_{\nu}(z),\] that is, we multiply the \(S\)-transforms. Thus, if \(a\) and \(b\) are (relatively) free, then \(S_{ab}(z)=S_{a}(z)S_{b}(z)\) (instead of \(ab\) we could have considered \(a^{1/2}ba^{1/2}\)).
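As an illustration of these definitions, the relation \(\hat{M}_{\sigma}(z)=1+C_{\sigma}(z\hat{M}_{\sigma}(z))\) stated above can be used to extract the free cumulants from the moments order by order. The following sympy sketch is our own minimal example, using the Bernoulli measure \(\sigma=\frac{1}{2}(\delta_{0}+\delta_{1})\) with moments \(m_{n}=1/2\) for \(n\geq 1\).

```
import sympy as sp

z = sp.symbols('z')
kappa = sp.symbols('k1:5')                    # unknown free cumulants k1..k4
m = [1] + [sp.Rational(1, 2)] * 4             # moments m_0 = 1, m_n = 1/2

M = sum(mn * z**n for n, mn in enumerate(m))  # \hat{M}_sigma(z), truncated
C = sum(k * (z * M)**(p + 1) for p, k in enumerate(kappa))
# match coefficients of z^1..z^4 in  \hat{M}(z) - 1 - C(z \hat{M}(z)) = 0
eqs = sp.Poly(sp.expand(M - 1 - C), z).all_coeffs()[::-1][1:5]
print(sp.solve(eqs, kappa))   # k1 = 1/2, k2 = 1/4, k3 = 0, k4 = -1/16
```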
## Appendix B Local free cumulants for Haar-randomly rotated matrices
Let \(M=UDU^{\dagger}\), with \(U\) Haar distributed over the unitary group and \(D\) a diagonal matrix with spectral density \(\sigma\) in the large \(N\) limit. From the HCIZ integral, it is known that the generating function \(\mathbb{E}[e^{zN\mathrm{tr}(QM)}]\) can be written in terms of the free cumulants \(\kappa_{n}(\sigma)\) of the density \(\sigma\) as
\[\mathbb{E}[e^{zN\mathrm{tr}(QM)}]\asymp_{N\to\infty}\exp\left(N\sum_{k\geq 1 }\frac{z^{k}}{k}\mathrm{tr}(Q^{k})\,\kappa_{k}(\sigma)\right)\]
for any finite rank matrix \(Q\).
Let us prove that this implies that the local free cumulants are \(g_{n}=\kappa_{n}(\sigma)\), that is
\[\mathbb{E}[M_{12}M_{23}\cdots M_{n1}]=N^{1-n}\,\kappa_{n}(\sigma)\,(1+O(N^{-1}))\] (B1)
Note that due to \(U(N)\) invariance (which in particular includes permutations), all sets of distinct indices \(i_{1},i_{2},\cdots,i_{n}\) are equivalent.
Choose \(Q=P_{n}\) the cyclic permutation \((12\cdots n)\), so that \(\operatorname{tr}(P_{n}M)=M_{12}+M_{23}+\cdots+M_{n1}\). It is easy to see (using \(U(1)^{N}\subset U(N)\) invariance) that the first non-vanishing term beyond the constant in \(\mathbb{E}[e^{zN\operatorname{tr}(P_{n}M)}]\) is of order \(z^{n}\) and given by \(\frac{z^{n}N^{n}}{n!}\mathbb{E}[(\operatorname{tr}(P_{n}M))^{n}]\). Furthermore (this can be proved, say, by recurrence),
\[\mathbb{E}[(\operatorname{tr}(P_{n}M))^{n}]=\mathbb{E}[(M_{12}+M_{23}+\cdots+M_{n1})^{n}]=n!\,\mathbb{E}[M_{12}M_{23}\cdots M_{n1}]\]
Thus
\[\mathbb{E}[e^{zN\operatorname{tr}(P_{n}M)}]=1+z^{n}N^{n}\,\mathbb{E}[M_{12}M_{23}\cdots M_{n1}]+O(z^{n+1})\]
Since \(\operatorname{tr}(P_{n}^{k})=0\) for \(k<n\) and \(\operatorname{tr}(P_{n}^{n})=n\), we have
\[e^{N\sum_{k\geq 1}\frac{z^{k}}{k}\operatorname{tr}(P_{n}^{k})\,\kappa_{k}(\sigma)}=1+Nz^{n}\kappa_{n}(\sigma)+O(z^{n+1})\]
Comparing the two last equations proves Eq.(B1).
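As a numerical illustration (ours, not from the original text), Eq.(B1) can be checked directly by sampling Haar-rotated matrices. For the Bernoulli density \(\sigma=(1-p)\delta_{0}+p\delta_{1}\), the third free cumulant is \(\kappa_{3}=p(1-p)(1-2p)\), and an inclusion-exclusion over traces gives a low-variance estimator of \(\mathbb{E}[M_{12}M_{23}M_{31}]\).

```
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(N, rng):
    # QR of a complex Ginibre matrix, with phases fixed to get the Haar measure
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

N, p, samples = 300, 0.2, 20
D = np.diag((np.arange(N) < int(p * N)).astype(float))  # spectral density sigma
kappa3 = p * (1 - p) * (1 - 2 * p)
est = []
for _ in range(samples):
    U = haar_unitary(N, rng)
    M = U @ D @ U.conj().T
    M2 = M @ M
    dM, dM2 = np.diagonal(M), np.diagonal(M2)
    # sum of M_ij M_jk M_ki over pairwise-distinct indices, by inclusion-exclusion
    s3 = np.trace(M2 @ M) - 3 * np.sum(dM * dM2) + 2 * np.sum(dM**3)
    est.append(N**2 * s3.real / (N * (N - 1) * (N - 2)))
print(np.mean(est), "vs kappa3 =", kappa3)   # both close to 0.096
```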
|
2309.04106 | Meta predictive learning model of languages in neural circuits | Large language models based on self-attention mechanisms have achieved
astonishing performances not only in natural language itself, but also in a
variety of tasks of different nature. However, regarding processing language,
our human brain may not operate using the same principle. Then, a debate is
established on the connection between brain computation and artificial
self-supervision adopted in large language models. One of the most influential
hypotheses in brain computation is the predictive coding framework, which
proposes to minimize the prediction error by local learning. However, the role
of predictive coding and the associated credit assignment in language
processing remains unknown. Here, we propose a mean-field learning model within
the predictive coding framework, assuming that the synaptic weight of each
connection follows a spike and slab distribution, and only the distribution,
rather than specific weights, is trained. This meta predictive learning is
successfully validated on classifying handwritten digits where pixels are input
to the network in sequence, and moreover on the toy and real language corpus.
Our model reveals that most of the connections become deterministic after
learning, while the output connections have a higher level of variability. The
performance of the resulting network ensemble changes continuously with data
load, further improving with more training data, in analogy with the emergent
behavior of large language models. Therefore, our model provides a starting
point to investigate the connection among brain computation, next-token
prediction and general intelligence. | Chan Li, Junbin Qiu, Haiping Huang | 2023-09-08T03:58:05Z | http://arxiv.org/abs/2309.04106v2 | # Meta predictive learning model of languages in neural circuits
###### Abstract
Large language models based on self-attention mechanisms have achieved astonishing performances not only in natural language itself, but also in a variety of tasks of different nature. However, regarding processing language, our human brain may not operate using the same principle. Then, a debate is established on the connection between brain computation and artificial self-supervision adopted in large language models. One of the most influential hypotheses in brain computation is the predictive coding framework, which proposes to minimize the prediction error by local learning. However, the role of predictive coding and the associated credit assignment in language processing remains unknown. Here, we propose a mean-field learning model within the predictive coding framework, assuming that the synaptic weight of each connection follows a spike and slab distribution, and only the distribution, rather than specific weights, is trained. This meta predictive learning is successfully validated on classifying handwritten digits where pixels are input to the network in sequence, and moreover on the toy and real language corpus. Our model reveals that most of the connections become deterministic after learning, while the output connections have a higher level of variability. The performance of the resulting network ensemble changes continuously with data load, further improving with more training data, in analogy with the emergent behavior of large language models. Therefore, our model provides a starting point to investigate the connection among brain computation, next-token prediction and general intelligence.
## I Introduction
Large language models (LLMs) based on transformer structures greatly boost both industrial and academic interests in artificial general intelligence [1]. LLMs are able to achieve state-of-the-art performances in a variety of different tasks, trained only by next-token prediction. The transformer structure computes self-attention scores to capture statistical correlations among input tokens in parallel, which is in stark contrast to brain-like recurrent computation based on synaptic feedback in temporal depth (e.g., a short working memory). In addition, LLMs typically require a sizable corpus to trigger the emergence of intelligence, in contrast to the fact that much less data is needed for a child to acquire linguistic ability. Therefore, it is necessary to establish a mechanistic model of language processing to understand the biologically plausible mechanism and the underlying physical laws governing phase transitions, through statistical patterns of model hyperparameters [2].
In brain science, predictive coding is one of the most influential hypotheses for implementing hierarchical information processing [3; 4]. Predictive coding derives the neuroplasticity rule from a local error signal [5], whose goal is to minimize the surprise between the prediction and the belief of a generative model of the outside world [6]. The framework of predictive coding has several benefits for theoretical research. First, the framework can be derived from the first principle that the brain is a biological machine optimizing neural dynamics and synaptic connections to maximize the evidence of its internal model of the outside world [7]. Second, this principle shares exactly the same spirit as the variational free energy framework [6]. Recently, there has been intense interest in studying the biological implementation of this hypothesis [8; 9; 10], in developing algorithmic applications [11; 12; 13], and in studying the trade-off between energy minimization and information robustness in a linear model of lateral predictive coding [14].
The predictive coding postulates that the cortex implements a predictive model, where the incoming sensory signals are predicted by using prediction-error driven learning and inference. In this sense, predictive coding is a nice framework to model language processing. However, weight uncertainty is commonly observed in neural circuits [15; 16], e.g., synaptic transmission is stochastic, and spine size is subject to fluctuation. But these effects were not taken into account in previous studies of predictive coding, as remarked in a recent review [17]. In addition, the weight uncertainty was recently studied in recurrent neural
networks [18], inspiring fluctuation-driven synaptic plasticity. Therefore, exploring how the weight uncertainty affects predictive coding in language processing will help to establish a mechanistic model of language processing, revealing the biologically plausible mechanism and the underlying physical laws governing phase transitions through the associated statistical patterns of model hyperparameters. In this work, we derive a mean-field learning rule for predictive coding in recurrent neural networks (RNNs), which are a fundamental structure for natural language processing [19; 20; 21; 22; 23; 24; 25], and we assume that each direction of connection follows a weight distribution incorporating weight sparsity and variance. We thus call this rule meta predictive learning (MPL). This framework is tested first on the classification of the MNIST dataset [26], where pixels in an image are divided into groups, then on a toy language dataset, where we can have a thorough exploration of algorithmic capabilities, and finally on a real-world language corpus (the Penn Treebank corpus [27]).
Our proposed MPL achieves equal or even better performance compared with traditional methods in all three tasks, showing the advantage of _ensemble_ predictive coding, since examples of single networks can be readily sampled from the trained distribution [18; 28]. By analyzing the distribution of hyperparameters, we find that most connections are deterministic in the input and recurrent layers, while the output layer has a higher level of variability. The observation that the output connections bear a higher level of variability is a universal result in all three tasks, which may particularly connect to the generative function of the language processing model. The network performance changes non-linearly and continuously with the data load \(\alpha=\frac{M}{N}\), where \(M\) is the training data size and \(N\) is the number of neurons in the circuit, and we find that the critical point is given by \(\alpha_{c}\approx 0.02\), beyond which a chance level of prediction is absent. With an increasing amount of training data, the performance further improves until perfect learning is achieved. We can then test the resulting network to generate text of arbitrary length (to create something is a first step to understand that thing), and the generated text perfectly follows the grammatical rule set before training. In addition, our MPL is able to accomplish performance on the Penn Treebank corpus comparable to other training methods for RNNs, although the framework is less accurate than the transformer structure, which thereby calls for further studies of the mechanistic difference between biological learning and non-biological transformer learning, and of how the latter can inspire the discovery of new fundamental elements of computation that can realize logical and mathematical reasoning in many different tasks [29; 30].
## II Method
Here we consider meta predictive learning in a vanilla recurrent neural network (RNN), which processes a time-dependent sequence \(\mathbf{x}\) with time length \(T\). The input signal of \(N_{\text{in}}\) dimension is first mapped to the recurrent reservoir of \(N\) neurons by an input weight matrix \(\mathbf{w}^{\text{in}}\in\mathbb{R}^{N\times N_{\text{in}}}\), whose element \(w_{ij}^{\text{in}}\) indicates the connection weight value from neuron \(j\) in the input to the reservoir neuron \(i\). The neurons in the reservoir interact with each other with reciprocal connections \(\mathbf{w}\in\mathbb{R}^{N\times N}\), where elements \(w_{ij}\) specify the directional coupling from neuron \(j\) to neuron \(i\), and moreover \(w_{ij}\neq w_{ji}\). The self-connectivity \(w_{ii}\) is also included, and can be learned without imposing any prior knowledge [18]. The internal neural dynamics \(r_{i}(t)\) is read out via the output weight \(\mathbf{w}^{\text{out}}\in\mathbb{R}^{N_{\text{out}}\times N}\). In the predictive learning setting, \(\mathbf{r}\) is interpreted as a _belief_ state when \(\mathbf{x}\) is observed as a sensory input, which can be continuously updated to match the actual prediction whose dynamics reads as follows,
\[\begin{split} h_{i}(t)&=\sum_{j=1}^{N}w_{ij}f\left( r_{j}(t-1)\right)+\sum_{j=1}^{N_{\text{in}}}w_{ij}^{\text{in}}x_{j}(t),\\ y_{i}(t)&=\phi\left(\sum_{j=1}^{N}w_{ij}^{\text{ out}}f\left(r_{j}(t)\right)\right),\end{split} \tag{1}\]
where \(y_{i}(t)\) is the \(i\)-th component of the network output, \(f(\cdot)\) denotes the non-linear activation function, and we use the ReLU function for all tasks. \(\phi(\cdot)\) is the output nonlinear function, and we use the softmax function to specify the probability over all classes, defined as \(\phi(z_{k}(t))=\frac{e^{z_{k}(t)}}{\sum_{j}e^{z_{j}(t)}}\). The belief state \(\mathbf{r}(t)\) is updated for all time steps up to the sequence length to minimize the prediction error between \(\mathbf{r}\) and \(\mathbf{h}\), which will be detailed below. Note that \(\mathbf{r}(0)=0\), and we fix the belief of the output node \(\mathbf{r}_{y}=\hat{\mathbf{y}}\), where \(\hat{\mathbf{y}}\) denotes the label of input \(\mathbf{x}\). Generally speaking, all beliefs can be initialized to random values.
The core idea of the proposed MPL is to assume that the distribution of the network parameters takes the following spike and slab (SaS) form [18; 28],
\[\begin{split} P\left(w_{ij}^{\text{in}}\right)&=\pi _{ij}^{\text{in}}\delta\left(w_{ij}^{\text{in}}\right)+\left(1-\pi_{ij}^{ \text{in}}\right)\mathcal{N}\left(\frac{m_{ij}^{\text{in}}}{N_{\text{in}} \left(1-\pi_{ij}^{\text{in}}\right)},\frac{\Xi_{ij}^{\text{in}}}{N_{\text{in} }\left(1-\pi_{ij}^{\text{in}}\right)}\right),\\ P\left(w_{ij}\right)&=\pi_{ij}\delta\left(w_{ij} \right)+\left(1-\pi_{ij}\right)\mathcal{N}\left(\frac{m_{ij}}{N\left(1-\pi_{ ij}\right)},\frac{\Xi_{ij}}{N\left(1-\pi_{ij}\right)}\right),\\ P\left(w_{ki}^{\text{out}}\right)&=\pi_{ki}^{\text{ out}}\;\delta\left(w_{ki}^{\text{out}}\right)+\left(1-\pi_{ki}^{\text{out}} \right)\mathcal{N}\left(\frac{m_{ki}^{\text{out}}}{N\left(1-\pi_{ki}^{\text {out}}\right)},\frac{\Xi_{ki}^{\text{out}}}{N\left(1-\pi_{ki}^{\text{out}} \right)}\right).\end{split} \tag{2}\]
Note that \(N(1-\pi)\) specifies the mean degree of each neuron in the reservoir, as \(1-\pi\) specifies the synaptic connection probability, which is biologically plausible due to unreliable, stochastic synaptic transmission [31]. The first and second moments of the elements \(w_{ij}^{\ell}\) (\(\ell\) is in, out, or recurrent depending on the context; for the recurrent context, the item has no super- or subscript) can be derived as \(\mu_{ij}^{\ell}=\frac{m_{ij}^{\ell}}{N_{\ell}},\varrho_{ij}^{\ell}=\frac{\left(m_{ij}^{\ell}\right)^{2}}{N_{\ell}^{2}\left(1-\pi_{ij}^{\ell}\right)}+\frac{\Xi_{ij}^{\ell}}{N_{\ell}}\), respectively. Note that the mean and variance of the Gaussian slab are scaled by the mean number of synaptic connections, such that the prediction of each neuron is of order one.
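To make the ensemble concrete, a single network instance can be drawn from Eq. (2) as follows; this is a minimal numpy sketch of ours, with hypothetical hyperparameter values.

```
import numpy as np

rng = np.random.default_rng(1)

def sample_sas(m, pi, Xi, fan_in, rng):
    """Draw one weight matrix from the spike-and-slab ensemble of Eq. (2);
    m, pi, Xi are hyperparameter arrays of shape (n_out, fan_in)."""
    slab = rng.normal(m / (fan_in * (1 - pi)),
                      np.sqrt(Xi / (fan_in * (1 - pi))))
    mask = rng.random(m.shape) >= pi    # a connection is present with prob. 1 - pi
    return slab * mask

n_out, fan_in = 4, 6                    # toy sizes, not the ones used in the paper
m = rng.normal(size=(n_out, fan_in))
pi = np.full((n_out, fan_in), 0.3)
Xi = np.full((n_out, fan_in), 0.05)
w = sample_sas(m, pi, Xi, fan_in, rng)
# the first two moments of w reproduce mu = m/N and rho = m^2/(N^2(1-pi)) + Xi/N
```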
Considering the statistics of synaptic weights and the large number of afferent projections for each neuron in Eq. (1), which is true in real neural circuits [32], we can reasonably assume that the prediction \(h_{i}(t)\) (\(\forall i\)) follows an evolving Gaussian distribution whose mean and variance are defined by \(G_{i}=G_{i}^{\mathrm{rec}}+G_{i}^{\mathrm{in}}\) and \(\Delta_{i}^{1}=(\Delta_{i}^{\mathrm{in}})^{2}+(\Delta_{i}^{\mathrm{rec}})^{2}\), respectively. This is intuitively the result of the central limit theorem. The statistics of the readout neural currents can be derived in a similar way. Therefore, the mean-field dynamics of this model can be written as
\[\begin{split}& h_{i}(t+1)=G_{i}^{\mathrm{rec}}(t)+G_{i}^{ \mathrm{in}}(t+1)+\epsilon_{i}^{1}(t+1)\sqrt{\left(\Delta_{i}^{\mathrm{in}}(t+ 1)\right)^{2}+(\Delta_{i}^{\mathrm{rec}}(t))^{2}},\\ & y_{k}(t)=\phi\left(G_{k}^{\mathrm{out}}\left(t\right)+\epsilon _{k}^{2}(t)\Delta_{k}^{\mathrm{out}}\left(t\right)\right),\end{split} \tag{3}\]
where the superscript in \(\epsilon\) indicates different types of standard Gaussian random variables: one for reservoir neurons (with superscript 1) and the other for readout neurons (with superscript 2). By definition, \(\{\epsilon_{i}^{1,2}(t)\}\) are both time and neuron index dependent. Given \(\mu_{ij}^{\ell}\) and \(\varrho_{ij}^{\ell}\), the mean currents together with the associated fluctuations are derived below,
\[\begin{split}& G_{i}^{\mathrm{in}}(t+1)=\sum_{j}\mu_{ij}^{ \mathrm{in}}x_{j}(t+1)\\ & G_{i}^{\mathrm{rec}}(t+1)=\sum_{j}\mu_{ij}f\left(r_{j}(t+1) \right)\\ & G_{k}^{\mathrm{out}}(t+1)=\sum_{j}\mu_{kj}^{\mathrm{out}}\;f \left(r_{j}(t+1)\right)\\ &\left(\Delta_{i}^{\mathrm{in}}(t+1)\right)^{2}=\sum_{j}\left( \varrho_{ij}^{\mathrm{in}}\;-\left(\mu_{ij}^{\mathrm{in}}\right)^{2}\right) \left(x_{j}(t+1)\right)^{2}\\ &\left(\Delta_{i}^{\mathrm{rec}}(t+1)\right)^{2}=\sum_{j}\left( \varrho_{ij}-\left(\mu_{ij}\right)^{2}\right)\left(f\left(r_{j}(t+1)\right) \right)^{2}\\ &\left(\Delta_{k}^{\mathrm{out}}\left(t+1\right)\right)^{2}=\sum_{j }\left(\varrho_{kj}^{\mathrm{out}}\;-\left(\mu_{kj}^{\mathrm{out}}\;\right)^{2 }\right)\left(f\left(r_{j}(t+1)\right)\right)^{2}.\end{split} \tag{4}\]
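In code, Eqs. (3) and (4) amount to propagating a mean and a variance instead of a single weight matrix. The following is a minimal sketch of ours (generic array names, one layer's contribution):

```
import numpy as np

def layer_statistics(mu, rho, xi_vec):
    """Mean current G_i and squared fluctuation (Delta_i)^2 of Eq. (4),
    for one layer with weight moments (mu, rho) and layer input xi_vec."""
    G = mu @ xi_vec                      # G_i = sum_j mu_ij xi_j
    Delta2 = (rho - mu**2) @ xi_vec**2   # (Delta_i)^2 = sum_j (rho - mu^2) xi_j^2
    return G, Delta2

def prediction_step(G_rec, D2_rec, G_in, D2_in, rng):
    """One step of the mean-field dynamics of Eq. (3)."""
    eps = rng.standard_normal(G_rec.shape)
    return G_rec + G_in + eps * np.sqrt(D2_in + D2_rec)
```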
The prediction dynamics (Eq. (1) or, in the meta-learning context, Eq. (3)) can be interpreted as perceptual inference, widely used in energy-based optimization of brain dynamics [7], while the learning given below is called neuroplasticity. Both processes minimize exactly the same energy (or variational free energy in general [5]).
Predictive learning can be derived from a temporally hierarchical Gaussian probabilistic principle [4; 5], where the objective function is given by the negative log-likelihood of the joint neural-state distribution. To optimize this objective function, we apply a mean-field approximation of the joint distribution and additionally a Laplace approximation that leads to Gaussian forms [17]. We give a brief interpretation in appendix A. In essence, predictive learning maximizes this log-likelihood by minimizing the following energy cost [12; 33],
\[\mathcal{F}=\sum_{j=1}^{2}\sum_{t=1}^{T}\mathscr{E}_{j}(t). \tag{5}\]
This energy function is exactly the variational free energy in the above Gaussian probabilistic principle. The choice of \(\mathscr{E}_{j}(t)\) depends on the problem at hand. If the network produces an output at every time step as in the language model, \(\mathscr{E}_{1}(t)=\frac{1}{2}\|\mathbf{r}(t)-\mathbf{h}(t)\|^{2}\), and \(\mathscr{E}_{2}(t)=-\sum_{i}\hat{y}_{i}(t)\ln\left(y_{i}(t)\right)\). However, if the network only makes the final decision in the last time step as in the classification task, i.e., \(y_{k}(T)=\phi\left(G_{k}^{\text{out}}\left(T\right)+\epsilon_{k}^{2}(T)\Delta_{k}^{\text{out}}\left(T\right)\right)\), we then have the energy terms \(\mathscr{E}_{1}(t)=\frac{1}{2}\|\mathbf{r}(t)-\mathbf{h}(t)\|^{2}\) for \(t=1,\ldots,T-1\), \(\mathscr{E}_{1}(T)=0\), and \(\mathscr{E}_{2}(t)=0\) for \(t<T\), \(\mathscr{E}_{2}(T)=-\sum_{i}\hat{y}_{i}\ln\left(y_{i}(T)\right)\). Moreover, we define the prediction errors \(\mathscr{E}_{1}^{\prime}(t)=\mathbf{r}(t)-\mathbf{h}(t)\) and \(\mathscr{E}_{2}^{\prime}(t)=\mathbf{r}_{\mathbf{y}}(t)-\mathbf{y}(t)\). These errors can be propagated along the dendritic connections in neural circuits [34]. In a mathematical sense, the prediction errors can be interpreted as the gradients of the above energy cost.
In essence, predictive learning consists of three phases: an inference phase, a learning phase, and a prediction phase (see Eq. (1); in the current meta-learning setting, Eq. (3) is used). We next show the predictive learning details for language processing, while other applications can be readily adapted. First of all, during the inference phase, the belief \(\mathbf{r}(t)\) is
updated to minimize the energy function \(\mathcal{F}\) with the following increment,
\[\begin{split}\Delta r_{i}(t^{\prime})&=-\gamma\frac{ \partial\mathcal{F}}{\partial r_{i}(t^{\prime})}\\ &=-\gamma\frac{\partial\mathscr{E}_{1}(t^{\prime})}{\partial r_{i }(t^{\prime})}-\gamma\frac{\partial\sum_{t\neq t^{\prime}}\mathscr{E}_{1}(t) }{\partial r_{i}(t^{\prime})}-\gamma\frac{\partial\mathscr{E}_{2}(t^{\prime}) }{\partial r_{i}(t^{\prime})}\\ &=-\gamma\mathscr{E}_{1,i}^{\prime}(t^{\prime})+\gamma\sum_{j} \mathscr{E}_{1,j}^{\prime}(t^{\prime}+1)\frac{\partial h_{j}(t^{\prime}+1)}{ \partial r_{i}(t^{\prime})}\\ &+\gamma\sum_{j}\mathscr{E}_{2,j}^{\prime}(t^{\prime})\frac{ \partial[G_{j}^{\text{out}}\left(t^{\prime}\right)+\varepsilon_{j}^{2}(t^{ \prime})\Delta_{j}^{\text{out}}\left(t^{\prime}\right)]}{\partial r_{i}(t^{ \prime})}\\ &=-\gamma\mathscr{E}_{1,i}^{\prime}(t^{\prime})+\gamma f^{\prime }\left(r_{i}(t^{\prime})\right)\sum_{j}\mathscr{E}_{1,j}^{\prime}(t^{\prime}+ 1)\mu_{ji}\\ &+\gamma\sum_{j}\mathscr{E}_{2,j}^{\prime}(t^{\prime})\mu_{ji}^{ \text{out}}f^{\prime}(r_{i}(t^{\prime}))+\gamma\sum_{j}\mathscr{E}_{1,j}^{ \prime}(t^{\prime}+1)\hat{\epsilon}_{ji}^{1}+\gamma\sum_{j}\mathscr{E}_{2,j}^ {\prime}(t^{\prime})\hat{\epsilon}_{ji}^{2},\end{split} \tag{6}\]
where \(\gamma\) indicates the learning rate for the inference phase (we choose \(\gamma=0.1\) for all tasks), \(\hat{\epsilon}_{ji}^{1}=\epsilon_{j}^{1}\left(t^{\prime}+1\right)\frac{\left(\varrho_{ji}-\left(\mu_{ji}\right)^{2}\right)f^{\prime}\left(r_{i}(t^{\prime})\right)f\left(r_{i}(t^{\prime})\right)}{\sqrt{\left(\Delta_{j}^{\text{in}}\left(t^{\prime}+1\right)\right)^{2}+\left(\Delta_{j}^{\text{rec}}\left(t^{\prime}\right)\right)^{2}}}\), and \(\hat{\epsilon}_{ji}^{2}=\epsilon_{j}^{2}\left(t^{\prime}\right)\frac{\left(\varrho_{ji}^{\text{out}}-\left(\mu_{ji}^{\text{out}}\right)^{2}\right)f^{\prime}\left(r_{i}(t^{\prime})\right)f\left(r_{i}(t^{\prime})\right)}{\Delta_{j}^{\text{out}}\left(t^{\prime}\right)}\). It is evident that the last two terms in Eq. (6) are related to the fluctuations caused by the network statistics. The interplay between the network statistics and the prediction errors governs the belief dynamics, which was not considered in previous studies. We emphasize that this intrinsic property of neural dynamics is due to ongoing fluctuations of synaptic weights in the presence of circuit noise [31]. Equation (6) thus addresses how the neural belief is shaped under the fluctuating circuit environment.
The goal of this inference process is to find the best configuration of belief for synaptic weight modifications (aforementioned neuroplasticity). When the decrease of the energy \(\mathcal{F}\) becomes stable, e.g., \(\left|\mathcal{F}^{t}-\mathcal{F}^{t-1}\right|<0.1\), or when a maximal number of iterations (\(n\) in our algorithm 1) is approached, the learning phase starts, i.e., the hyperparameters \(\left[\text{m}^{\ell},\mathbf{\pi}^{\ell},\mathbf{\Xi}^{\ell}\right]\) are updated based on the local error signal \(\mathscr{E}_{j}^{\prime}(t)\) with the following increments,
\[\begin{split}\Delta m_{ij}^{\ell}&=-\eta\frac{\partial\mathcal{F}}{\partial m_{ij}^{\ell}}=-\eta\sum_{t}\mathscr{E}_{\ell^{\prime},i}^{\prime}(t)\left[-\frac{1}{N_{\ell}}\xi_{j}^{\ell}-\epsilon_{i}^{\ell^{\prime}}(t)\frac{m_{ij}^{\ell}\pi_{ij}^{\ell}\left(\xi_{j}^{\ell}\right)^{2}}{(N_{\ell})^{2}(1-\pi_{ij}^{\ell})\sqrt{\Delta_{i}^{\ell^{\prime}}}}\right],\\ \Delta\pi_{ij}^{\ell}&=-\eta\frac{\partial\mathcal{F}}{\partial\pi_{ij}^{\ell}}=-\eta\sum_{t}\mathscr{E}_{\ell^{\prime},i}^{\prime}(t)\left[-\epsilon_{i}^{\ell^{\prime}}(t)\frac{\left(m_{ij}^{\ell}\right)^{2}\left(\xi_{j}^{\ell}\right)^{2}}{2(N_{\ell})^{2}\left(1-\pi_{ij}^{\ell}\right)^{2}\sqrt{\Delta_{i}^{\ell^{\prime}}}}\right],\\ \Delta\Xi_{ij}^{\ell}&=-\eta\frac{\partial\mathcal{F}}{\partial\Xi_{ij}^{\ell}}=-\eta\sum_{t}\mathscr{E}_{\ell^{\prime},i}^{\prime}(t)\left[-\epsilon_{i}^{\ell^{\prime}}(t)\frac{\left(\xi_{j}^{\ell}\right)^{2}}{2N_{\ell}\sqrt{\Delta_{i}^{\ell^{\prime}}}}\right],\end{split} \tag{7}\]
where \(\eta\) denotes the learning rate for the learning phase, \(\Delta_{i}^{1}=\left(\Delta_{i}^{\text{in}}(t)\right)^{2}+\left(\Delta_{i}^{ \text{rec}}(t-1)\right)^{2}\)
and \(\Delta_{i}^{2}=(\Delta_{i}^{\rm out}(t))^{2}\). To derive Eq. (7), the chain rule and the mean-field dynamics [Eq. (3)] are used. The meaning of the superscripts depends on the network structure where the computation is carried out. If \(\ell=\) in, \(\ell^{\prime}=1\), \(\xi_{j}^{\ell}=x_{j}(t)\), \(N_{\ell}=N_{\rm in}\); if \(\ell\) indicates the recurrent reservoir, \(\ell^{\prime}=1\), \(\xi_{j}^{\ell}=f(r_{j}(t-1))\), \(N_{\ell}=N\); if \(\ell=\) out, \(\ell^{\prime}=2\), \(\xi_{j}^{\ell}=f(r_{j}(t))\), \(N_{\ell}=N\). For easy comprehension, we summarize all mathematical items and the associated explanations in appendix D. The dynamics of \(\pi\) and \(\Xi\) is purely driven by the synaptic fluctuation, while the \(m\) dynamics is driven by both the activity (belief or sensory observation) and the synaptic fluctuation. The value of \(m\) affects the updates of \(\pi\) and \(\Xi\) as well. Note that the vanilla predictive coding does not take into account synaptic fluctuations (see also appendix C), which are indeed ubiquitous in neural circuits [16]. One typical source is the synaptic noise resulting from noisy biochemical processes underlying synaptic transmission; the other source is the fluctuation of spine sizes in the neocortex, together with the existence of silent synapses [35; 36].
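For concreteness, the increments in Eq. (7) are simple outer-product-like updates. The sketch below is ours; it shows only the readout layer at a single time step, without the sum over \(t\).

```
import numpy as np

def update_output_layer(m, pi, Xi, err2, eps2, f_r, Delta2, eta, N):
    """Increments of (m, pi, Xi) following Eq. (7) with l = out;
    err2 = r_y - y, eps2 = the sampled unit Gaussians of Eq. (3),
    f_r = f(r(t)), Delta2 = (Delta_out)^2."""
    f2 = (f_r**2)[None, :]                  # xi_j^2 with xi_j = f(r_j(t))
    e = err2[:, None]                       # prediction error, broadcast over j
    gn = (eps2 / np.sqrt(Delta2))[:, None]  # eps_k / sqrt(Delta^2_k)
    m_new = m + eta * e * (f_r[None, :] / N
                           + gn * m * pi * f2 / (N**2 * (1 - pi)))
    pi_new = pi + eta * e * gn * m**2 * f2 / (2 * N**2 * (1 - pi)**2)
    Xi_new = Xi + eta * e * gn * f2 / (2 * N)
    return m_new, pi_new, Xi_new
```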
In practice, the implementation of the meta learning rule in Eq. (7) immediately follows the inference phase, where the update of the belief \(\mathbf{r}\) has made \(\mathcal{F}\) converge. To improve the prediction performance, the inference and learning phases are repeated a number of times. The prediction phase is carried out after a round of the inference-learning loop, to test the model's generalization performance. The three phases can be concisely represented by the pseudocode in Alg. 1. Codes to reproduce the numerical results provided in the next section are available in our GitHub [37].
```
1:# Inference
2: Given: input \(\mathbf{x}\), label \(\hat{\mathbf{y}}\), randomly initialized belief \(\mathbf{r},\mathbf{r}_{y}=\hat{\mathbf{y}}\), standard Gaussian variables \(\boldsymbol{\epsilon}^{1}\) and \(\boldsymbol{\epsilon}^{2}\)
3:for\(\text{iter}=1,\ldots,n\)do
4:for\(t=1,\ldots,T\)do
5:\(h_{i}(t+1)=G_{i}^{\text{rec}}(t)+G_{i}^{\text{in}}(t+1)+\epsilon_{i}^{1}(t+1) \sqrt{\left(\Delta_{i}^{\text{in}}(t+1)\right)^{2}+(\Delta_{i}^{\text{rec}}(t) )^{2}}\);
6:\(y_{k}(t)=\phi\left(G_{k}^{\text{out}}\left(t\right)+\epsilon_{k}^{2}(t)\Delta_{ k}^{\text{out}}\left(t\right)\right)\);
7:\(\mathbf{r}(t)=\mathbf{r}(t)+\Delta\mathbf{r}(t)\).
8:endfor
9:endfor
10:# Learning
11:for\(\ell=\text{in},\text{out},\text{recurrent}\)do
12:for\(t=1,\ldots,T\)do
13:\(\mathbf{m}^{\ell}=\mathbf{m}^{\ell}+\Delta\mathbf{m}^{\ell}\);
14:\(\boldsymbol{\pi}^{\ell}=\boldsymbol{\pi}^{\ell}+\Delta\boldsymbol{\pi}^{\ell}\);
15:\(\boldsymbol{\Xi}^{\ell}=\boldsymbol{\Xi}^{\ell}+\Delta\boldsymbol{\Xi}^{ \ell}\).
16:endfor
17:endfor
18: Output: \(\mathbf{r}\).
19:# Prediction
20: Given: test data \(\mathbf{x}\), converged belief \(\mathbf{r}\), another set of standard Gaussian variables \(\boldsymbol{\epsilon}^{1}\) and \(\boldsymbol{\epsilon}^{2}\)
21:for\(t=1,\ldots,T\)do
22:\(h_{i}(t+1)=G_{i}^{\text{rec}}(t)+G_{i}^{\text{in}}(t+1)+\epsilon_{i}^{1}(t+1) \sqrt{\left(\Delta_{i}^{\text{in}}(t+1)\right)^{2}+(\Delta_{i}^{\text{rec}}(t ))^{2}}\);
23:\(y_{k}(t)=\phi\left(G_{k}^{\text{out}}\left(t\right)+\epsilon_{k}^{2}(t)\Delta_{ k}^{\text{out}}\left(t\right)\right)\);
24:endfor
25: Output: \(\mathbf{y}\).
```
**Algorithm 1** Meta predictive learning algorithm
## III Results and Discussion
In this section, we first apply the MPL to the digit classification task, where an MNIST digit of 784 pixels is divided into a sequence of pixels, and subgroups of 28 pixels are input to the network at each time step. As a proof of concept, this first example shows that our framework can be applied to any computational task with temporal structure. Then, we extend the application to two language processing tasks: one at the toy level and the other on a real corpus.
### MNIST digit classification
The recurrent neural network is trained to classify an input image after 28 time steps, seeing 28 pixels at each time step. This task requires long-term memory, because the recurrent neural network makes the final decision only after seeing all the pixels, and the information in the previous time steps (up to 28 steps before) must be stored and processed in the last step. To carry out this task, we use a vanilla RNN with \(N=100\) recurrent neurons, \(N_{\text{in}}=28\) input units and \(N_{\text{out}}=10\) output nodes indicating the output class in one-hot form. The entire dataset is divided into several batches, and we use stochastic gradient descent (SGD) with the Adam optimizer [38] in the learning phase to update the hyperparameters \([\mathbf{m}^{\ell},\mathbf{\pi}^{\ell},\mathbf{\Xi}^{\ell}]\). Despite working at the network ensemble level, where weight uncertainty must be taken into account during the inference, learning and prediction phases, our model achieves better and more stable performances than predictive coding without any distribution training [Fig. 1 (a)]. As expected, the overall energy \(\mathcal{F}\) consistently decreases during training and reaches a point near zero in the late stage of training.
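For reference, the input pipeline of this task is just a reshaping of each 784-pixel image into a 28-step sequence of 28-dimensional inputs (a sketch of ours; we assume a row-wise grouping of pixels).

```
import numpy as np

batch = np.random.rand(64, 784)         # a stand-in for a batch of MNIST images
sequences = batch.reshape(64, 28, 28)   # sequences[:, t] is the input x(t)
# the RNN reads one 28-pixel group per time step and outputs a class at t = 28
```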
The macroscopic behavior of the network is corroborated by the statistical pattern of the model hyperparameters underlying the synaptic weights. The weight uncertainty characterized by the hyperparameters \([\mathbf{\Xi}^{\ell},\mathbf{\pi}^{\ell}]\) decreases over training, showing that the weights are becoming more deterministic, and we use the average value, e.g., \(\langle\Xi_{\text{in}}\rangle=\frac{1}{N\times N_{\text{in}}}\sum_{ij}\Xi_{ij}^{\text{in}}\), to compute the average uncertainty level (for the mean \(\mathbf{m}\), we take its absolute value before the average is carried out). Interestingly, the uncertainty level is highest in the output layer, which is in striking contrast to the results obtained by a generalized backpropagation through
time (rather than local learning guided by the prediction error, as considered in the current work) at the ensemble level [18], where the uncertainty is highest in the recurrent layer. From the predictive coding perspective, the readout weight has more flexibility to extract the information in the reservoir, which may be due to the local nature of learning driven by minimizing the prediction error. This suggests that a more biologically plausible training may lead to different interpretations of the same computational tasks as implemented in neural circuits. Therefore, to reveal biological mechanisms, a biologically plausible training is a necessary ingredient.
### Toy language model
A real language corpus is typically complicated, and not simple to handle in theoretical studies. To build a metaphor of complicated natural language, we set up a generative process where a text (a stream of tokens) is generated through a fixed rule (similar to a grammar).
Figure 1: The performance of meta predictive learning on the 28 by 28 MNIST classification task. (a) Test accuracy as a function of epoch. The network with \(N=100\) recurrent neurons, \(N_{\text{in}}=28\) input units and \(N_{\text{out}}=10\) output nodes is trained on the full MNIST dataset with 60k training images (handwritten digits), and validated on another unseen 10k test handwritten digits. Predictive coding indicates learning directly in the weight space rather than in the distribution space. If the epoch is less than 40, the number of inference steps is set to \(n=100\), and \(n=200\) otherwise. The inset shows how \(\ln\mathcal{F}\) changes with training in the first 60 training epochs (this log-energy becomes stable in the late training stage, and is thus not shown). Five independent runs are considered for the fluctuation of the result. (b) The logarithmic average value of \([\mathbf{\Xi}^{\ell},\mathbf{\pi}^{\ell},\mathbf{\mathrm{m}}^{\ell}]\) versus epoch in all layers; the log denotes the natural logarithm (base \(e\)). Only the first twenty epochs are considered (the result remains stable in the later training stage), and the fluctuation is computed from five independent runs.
Following this setting, the artificial corpus consists of \(M\) texts of length \(T\) each, and each text is composed of letters from \(a,b,c,\ldots,z\). A periodic boundary is applied. For example, a single sample \(x=\{a,c,g,i,...\}\) is generated according to the grammatical rule that, starting from the letter \({}^{\prime}a^{\prime}\), only the letter \({}^{\prime}c^{\prime}\) or \({}^{\prime}e^{\prime}\), located two or four letters after \({}^{\prime}a^{\prime}\), can follow \({}^{\prime}a^{\prime}\) (with equal probabilities), and the case of two consecutive \({}^{\prime}c^{\prime}\) is not allowed. This rule for generating the toy language is just a simple model of a real corpus, but non-trivial enough for a neural network to learn the embedded rule. The generated examples (letter sequences) are shown to the neural network, which is required to discover the rule via our meta predictive learning working on next-letter prediction. After training, the network is tested by generating a sequence of arbitrary length following the same rule. A hierarchical compositional structure can also be incorporated into the generation process, but we leave this more interesting case to future studies based on this toy setting.
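The full artificial corpus can be enumerated explicitly, since each text is fixed by its starting letter and the \(T-1\) binary choices of the step size. Below is a minimal Python sketch of ours; the cyclic +2/+4 steps implement the rule above and automatically exclude immediate letter repetitions.

```
from itertools import product

letters = [chr(ord('a') + i) for i in range(26)]

def text_from_choices(start, steps):
    """One text: from `start`, each next letter sits 2 or 4 positions
    further along the cyclic alphabet, following the given step choices."""
    seq = [start]
    for s in steps:
        seq.append((seq[-1] + s) % 26)
    return ''.join(letters[i] for i in seq)

T = 11
corpus = [text_from_choices(s, steps)
          for s in range(26)
          for steps in product((2, 4), repeat=T - 1)]
print(len(corpus))   # 26 * 2**(T-1) = 26624
print(corpus[0])     # 'acegikmoqsu' (all +2 steps starting from 'a')
```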
An RNN with \(N=100,N_{\text{in}}=26,N_{\text{out}}=26\) is trained on the full dataset following the above rule, with a total of 26624 (calculated as \(26\times 2^{T-1}\)) sequences of length \(T=11\) (other values of \(T\) can be studied similarly), and SGD with the Adam optimizer is applied [38]. To detect a possible phase transition with increasing data size, we can use an increasing portion of the entire dataset (i.e., \(M<26624\)). Each letter is encoded into one-hot form before being input to the network, while the readout implements a decoding of the neural activity into one-hot form as well. Because of the simplicity of our letter space, we do not need the word embedding commonly used in language processing [39]. In Fig. 2 (a), we can easily generate a sequence of arbitrary length, by supplying the network with the letter generated in the previous time step, and the trained network (in the ensemble sense) successfully generates sequences following the ground-truth rule. Interestingly, the well-trained network also generates sequences with length \(T>11\) following the same rule, suggesting that the network output could be creative, generating new grammatically correct sequences.
To study the emergent behavior of this simplified language model, we define the correct letter ratio to characterize the language generating ability of our model. After training, the network instance (sampled from the ensemble) is required to generate 26 sequences of length \(T=11\) whose first letters run over all 26 letters of the alphabet, and the correct letter ratio is defined as the average ratio of correctly predicted letters. For example, the sequence \([^{\prime}a^{\prime},^{\prime}c^{\prime},^{\prime}e^{\prime},^{\prime}g^{\prime},^{\prime}k^{\prime},^{\prime}m^{\prime},^{\prime}o^{\prime},^{\prime}s^{\prime},^{\prime}w^{\prime},^{\prime}a^{\prime},^{\prime}z^{\prime}]\) has 9 correct predictions, with ratio 0.9 (in total the
network has to predict 10 letters) for this single sequence. Therefore, the correct letter ratio indicates the language generating ability of the network ensemble, with a maximal value of 1 (100%). In Fig. 2 (b), we can easily see that the correct letter ratio first remains at a very low level (close to the chance level) if the data load \(\alpha=\frac{M}{N}\) is small, i.e., the generated sequences are random when \(\alpha<0.02\). Beyond this threshold, the performance continuously improves, exhibiting the phenomenon of a second-order phase transition, which coincides qualitatively with empirical results on emergence discovered in large language models [40; 41]. The scaling exponent of the correct letter ratio (the order parameter in statistical mechanics [42]) around the transition point is about 1.14. A rigorous derivation of this critical exponent is left for future analytic works. Training RNNs with different network sizes yields qualitatively the same behavior, but a larger network size makes the transition sharper. After the transition, the network assigns the correctly predicted letter a larger probability than the other letter candidates, while the probabilities for the other letters are significantly suppressed (see Fig. 3). Another important characteristic is that the learning with increasing data occurs rapidly at first, followed by a slow period, and finally the performance saturates at the perfect
Figure 2: The properties of meta predictive learning on the simplified language prediction task. The grammatical rule is designed as follows: starting from a random letter (\({}^{\prime}a^{\prime}\) here), only the candidates located two letters or four letters after \({}^{\prime}a^{\prime}\) can follow the starting letter with equal probability, and each letter only repeats once in this next-letter generation. All letters in the alphabet form a cyclic structure. \(T=11\) is considered, and the full size of the dataset is 26624. RNN with \(N=100,N_{\text{in}}=26,N_{\text{out}}=26\) is trained, and two instances of networks are randomly sampled from the (trained or untrained) network ensemble. (a) Starting from the letter a, the network generates the next letter which serves as the input at the next time step, until a sequence with the desired length is generated. (b) The correct letter ratio as a function of data load \(\alpha=\frac{M}{N}\), and five independent runs are considered. \(M\) examples of sequences are used for training. A chance level of \(\frac{1}{13}\) is marked. The inset shows the correct letter ratio in the range of \(\alpha\in[0.02,0.1]\). (c) The log-energy \(\ln\mathcal{F}\) changes with training epochs and decreases to near zero. The inset shows how the correct letter ratio changes with the length of generated sequence after a full dataset is used for training. The error bar is computed with five independent networks.
generalization of the language rule. This may be interpreted as a hierarchical decoding of the information embedded in the noisy (stochasticity in the generation process) sequences. We further remark that, after training on a full dataset of sequences with fixed length (e.g., \(T=11\)), the network is able to generate grammatically correct letter sequences of _arbitrary_ length [see the inset of Fig. 2 (c)]. The energy of the language model also decreases with training until getting stationary, which emphasizes the important role of energy-based models in understanding recurrent language processing.
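The correct letter ratio defined above is easy to compute from a generated sequence, by checking each transition against the cyclic +2/+4 rule (a short sketch of ours; the example string is the sequence discussed in the text).

```
def correct_letter_ratio(seq):
    """Fraction of transitions obeying the toy grammar (cyclic step of 2 or 4)."""
    steps = [(ord(b) - ord(a)) % 26 for a, b in zip(seq, seq[1:])]
    return sum(s in (2, 4) for s in steps) / len(steps)

print(correct_letter_ratio('acegkmoswaz'))   # 9 correct out of 10 -> 0.9
```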
Figure 3: Softmax values of the output units for different data load \(\alpha\). Panels (a,b), (c,d), (e,f) and (g,h) show two typical patterns for each data load \(\alpha=0\), \(\alpha=0.01\), \(\alpha=0.03\), and \(\alpha=0.05\), respectively. Only predictions following the designed language rule are displayed, and the text shown in the panel \({}^{\prime\prime}a\to c^{\prime\prime}\) means inputting the letter \({}^{\prime}a^{\prime}\) and the network predicts the immediate following letter \({}^{\prime}c^{\prime}\) (corresponding to the largest softmax output). The training conditions are the same as in Fig. 2.
A further extension of meta predictive learning to the transformer structure is possible, as the Gaussian assumption used in standard predictive coding has been relaxed in a recent work [43].
To study the properties of this simplified language model, we plot the distribution of the hyperparameters \([\pi,m,\Xi]\) for the input layer, output layer, and recurrent layer, respectively. The distribution of \([\pi,\Xi]\) has an L-shape in all layers, while the output layer allows for more variability in both the sparsity and the variance of the Gaussian slab, which is characterized
Figure 4: Illustration of hyperparameters \([\pi,m,\Xi]\) in meta predictive learning on the simplified language task. The training conditions are the same as in Fig. 2. In (c-d), we show statistical properties of bidirectional connections, and \(i<j\) is considered.
by a slightly broader distribution of \([\pi,\Xi]\). The extremes \(\pi=0\), \(\pi=1\) and \(\Xi=0\) have particular physical significance. \(\pi=0\) indicates that the connection has no sparsity, and thus carries important information for the task. The spike mass at \(\pi=1\) implies that the connection is always zero, and thus is not important for the task, but none of the connections of our model belong to this case. \(\Xi=0\) shows that the corresponding connection is deterministic, because the corresponding Gaussian distribution reduces to a Dirac delta peak. This result is also observed in the 28 by 28 MNIST classification task. The distribution of the hyperparameter \(m\) is broadest in the output layer, ranging from \(-200\) to \(200\), showing the higher level of variability in the connection weights of the output layer. This phenomenon may be closely related to the fact that the embedded rule can only be retrieved by using a highly heterogeneous weighting of each neuron's activity in the reservoir, which is particularly interesting from the perspective of neural decoding of language information and probabilistic computation in a biologically plausible setting [10; 15; 30], since our embedded rule is actually a probabilistic generative rule mixed with a predefined grammatical structure.
### Experiments on natural language
In this section, we apply our MPL algorithm to a more complex language corpus, i.e., the Penn Treebank (PTB) corpus [27], which contains nearly 50000 sentences collected from the Wall Street Journal. The PTB is one of the best-known and most widely used corpora for word-level language modeling.
Due to the large size of the corresponding vocabulary, the corpus needs to be pre-processed using word embedding techniques before the sentences are sent into the network [39]. Here we describe the main steps. The first step is to use a tokenizer tool to split the sentences into tokens and replace useless words or characters with a special token named \(<\)unk\(>\), indicating an unknown token. In addition, tokens that appear fewer than five times in the whole corpus are replaced with \(<\)unk\(>\) to help the network concentrate on the major tokens of high frequency. Next, we collect all tokens to generate a vocabulary storing all the distinct tokens. However, directly inputting the tokens (treated as one-hot vectors) into the network is inconvenient when the size of the vocabulary is large. Hence, we set up a look-up table, called the embedding layer, to transform every token into a vector in a low-dimensional feature space via neural networks. The training goal is to
learn word vector representations that are able to predict the nearby words. Rows of the trained encoding matrix give the distributed representations of words. The embedding layer is trained by the traditional back-propagation algorithm [39], while both the recurrent reservoir and the readout layer are trained by our MPL as described above (or other alternatives if a comparison among algorithms is made).
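A minimal sketch of this preprocessing pipeline is given below (ours; the tokenizer here is a plain whitespace split, whereas the actual pipeline may use a dedicated tokenizer tool).

```
from collections import Counter

def build_vocab(sentences, min_count=5):
    """Tokenize, map rare tokens to <unk>, and build the vocabulary."""
    tokenized = [s.lower().split() for s in sentences]
    counts = Counter(tok for sent in tokenized for tok in sent)
    vocab = {'<unk>': 0}
    for tok, c in counts.items():
        if c >= min_count:
            vocab[tok] = len(vocab)
    ids = [[vocab.get(tok, 0) for tok in sent] for sent in tokenized]
    return vocab, ids

# the integer ids then index the rows of a trainable embedding matrix of shape
# (vocabulary size, d), mapping each token to a d-dimensional vector
```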
The overall energy for the predictive learning is given below,
\[\mathcal{F}=\frac{1}{2}\sum_{t}||\mathscr{E}_{\text{rec}}^{\prime}(t)||^{2}+ \mathcal{L}, \tag{8}\]
where \(\mathscr{E}_{\text{rec}}^{\prime}\) denotes the prediction error for the recurrent reservoir, and \(\mathcal{L}(\mathbf{y},\mathbf{r}_{y})=-\sum_{t}\sum_{i}(\mathbf{r}_{y})_{i}(t)\ln y_{i}(t)\) is related to the readout error. In order to measure the accuracy of the language model, we use the perplexity metric, which measures how well the model predicts the next token, i.e., the uncertainty about the prediction, precisely given by [20]
\[\text{ppl}=\left[\prod_{i=1}^{T}p(w_{i}|w_{i-1},\cdots,w_{0})\right]^{-\frac{1 }{T}}. \tag{9}\]
It is intuitive that minimizing the perplexity is equivalent to maximizing the probability of a corpus composed of \(T\) words indicated by \(\{w_{0},w_{1},\ldots,w_{T}\}\). Because the output of our model \(y_{i}(t)\) is actually the prediction probability in the non-linear softmax form, we can recast the perplexity as \(\text{ppl}=e^{\mathcal{L}}\), where \(\mathcal{L}\) represents the per-token cross-entropy objective used to train the network.
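Numerically, Eq. (9) is just the exponential of the mean negative log-probability assigned to the true tokens, as the following one-liner (ours) shows.

```
import numpy as np

def perplexity(probs):
    """probs[t] = model probability assigned to the true token w_t, cf. Eq. (9)."""
    return np.exp(-np.mean(np.log(probs)))

print(perplexity(np.array([0.25, 0.5, 0.125, 0.5])))   # ~ 3.36
```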
We apply our MPL to model this real corpus, with a comparison among other competitive algorithms (see Fig. 5). The test perplexity obtained by backpropagation through time with/without meta learning reaches a similar level to those obtained by standard predictive coding and our MPL method, after tens of epochs. A salient feature is that those trainings without the SaS premise easily overfit at later stages of training. For comparison, the transformer network with self-attention blocks (see appendix B for details) achieves the lowest perplexity among all considered methods, which demonstrates that the current biological recurrent computation still has a large gap to the artificial transformer computation, where the input tokens are not shown to the network in sequence, but in the form of a single block, such that the self-attention is able to integrate information from different parts of the input. This also implies that new elements, e.g., attention mechanisms, possibly in
to-be-revealed biological forms, might be added to our current framework, to minimize the gap on the one hand, and on the other hand to develop a biological computational model of intelligent systems that can handle natural language, particularly without relying on a long working memory and an astonishingly large corpus.
To study the network behavior, we plot the distribution of hyperparameters \(m\), \(\pi\), \(\Xi\) when the RNN is trained with the MPL method, as shown in Fig. 6. We find that the mean weight \(m\) for all layers is symmetrically distributed around zero, with a relatively narrow distribution. The distribution of \(\pi\) for all layers is of an L-shape and peaks at \(\pi=0\), indicating that a dense network is favored and formed after learning. The distribution of \(\Xi\) is of a U-shape and has two peaks. One peak is at \(\Xi=0\), indicating that these weights are deterministic and can only take the single value of \(m\), and the other peak is at \(\Xi\simeq 0.01\), indicating that the corresponding connection can carry a range of candidate values. Currently, it remains unknown how to relate these microscopic details of the network structure to the decoding of the semantic information in the corpus. It is thus important in future works to design analytically tractable models of language processing bridging neurophysiological plausibility and the superior performance observed in state-of-the-art architectures, which would help to uncover key neuron, synapse, and circuit motif types in the human brain.
## IV Conclusion
Predictive coding is a prediction-error driven learning scheme with local updates, performing a joint process of both inference and learning, and is thereby a potential candidate for how the brain builds an internal generative model of the complex evolving outside world [2]. We consider predictive coding within the language processing context, which is currently attracting intense research interest due to ChatGPT [1]. We address a meta predictive learning mechanism in recurrent neural networks encoding and predicting tokens in text sequences, in the presence of uncertainty. A continuous phase transition is revealed in our model, and perfect generation can be achieved after a sufficient number of training sequences is provided. Therefore, our toy model provides a good starting point to dissect the mechanism of learning in language processing [2].
Our MPL framework is relatively robust to overfitting when training on a real corpus. In
addition, there emerge intriguing statistical patterns of hyperparameters in our networks. However, it remains unclear how these statistical properties explain the performance (e.g., accuracy in next-token predictions) of the recurrent computation, which highly resembles what occurs in human brains. In contrast, the self-attention leveraged in transformer networks is not biological (e.g., it is not recurrent and uses non-local learning). Nevertheless, the transformer structure leads to the emergence of intelligence to some extent, and in particular the phenomenon of in-context learning, where the trained network can perform novel tasks given a prompt of example demonstrations without any further learning. The ability of in-context learning emerges by only scaling models and computation costs [41]. The deviation from known brain computation for language processing triggers a hot debate on what the nature of intelligence is [44], and whether intelligence can be achieved by next-token prediction [30]. More precisely, how a compressed representation of the hierarchical compositional structure in linguistic data can be achieved by biological learning (or can result in multi-task performance beyond the transformer) remains largely mysterious. Our current study shows that meta predictive learning for language processing may be a fruitful route towards this goal.
A recent work demonstrated that weight uncertainty with the form of the SaS structure can also be incorporated into the transformer [45]. In addition, gated recurrent neural networks with multiplicative mechanisms were recently shown to be able to learn to implement linear self-attention [46]. Furthermore, the relationship between linear transformers, which allow for faster autoregressive learning, and RNNs was established in a recent work [47]. Taken together, our current work is a starting point for building a bridge between biological learning (towards a science of specialized brain circuits) and transformer learning within the seminal predictive coding hypothesis, which can be placed in the theoretically solid conceptual framework of variational free energy minimization.
###### Acknowledgements.
This research was supported by the National Natural Science Foundation of China under Grant No. 12122515 (H.H.), the Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices (No. 2022B1212010008), and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023B1515040023).
## Appendix A Interpretation of predictive coding as variational free energy minimization
For recurrent dynamics, we write the neural activity at each step as a latent variable \(\mathbf{r}(t)\); it is then reasonable to assume that the joint probability of a trajectory can be written in the following Markovian form,
\[P(\mathbf{r}(0),\ldots,\mathbf{r}(T))=P(\mathbf{r}(0))\prod_{t=1}^{T}P(\mathbf{ r}(t)|\mathbf{r}(t-1)). \tag{10}\]
We further assume a Gaussian form for the transition probability, \(P(\mathbf{r}(t)|\mathbf{r}(t-1))=\mathcal{N}(\mathbf{r}(t);\mathbf{h}(t),\sigma_{t}^{2}\mathbf{I})\), where \(\mathbf{h}(t)=\mathbf{w}f(\mathbf{r}(t-1))\) and \(\sigma_{t}^{2}\) is a time-dependent variance; for simplicity, we set the variance to one without loss of generality, as its only effect is a rescaling of the cost function below. The Gaussian form can also be obtained as an approximation, via the Laplace method, even if the transition probability takes another form. The goal is to optimize the negative log-likelihood of the joint distribution, defined by
\[\mathcal{F}=-\ln P(\mathbf{r}(0),\ldots,\mathbf{r}(T))=\frac{1}{2}\sum_{t} \frac{\|\mathbf{r}(t)-\mathbf{h}(t)\|^{2}}{\sigma_{t}^{2}}+\text{const}, \tag{11}\]
which corresponds exactly to the cost function of predictive coding if we treat \(\sigma_{t}^{2}=1\) and neglect the constant term.
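To make the correspondence concrete, the following minimal numpy sketch (our own illustration, assuming a tanh transfer function \(f\) and unit transition variances) evaluates the cost of Eq. (11) for a given trajectory:

```python
import numpy as np

def predictive_coding_cost(r, w, f=np.tanh):
    """Negative log-likelihood of Eq. (11) for a trajectory r of shape
    (T+1, N), with unit variances and the constant term dropped."""
    cost = 0.0
    for t in range(1, r.shape[0]):
        h_t = w @ f(r[t - 1])  # predicted mean h(t) = w f(r(t-1))
        cost += 0.5 * np.sum((r[t] - h_t) ** 2)
    return cost

# toy usage on a random trajectory of T = 10 steps with N = 5 neurons
rng = np.random.default_rng(0)
r = rng.standard_normal((11, 5))
w = rng.standard_normal((5, 5)) / np.sqrt(5)
print(predictive_coding_cost(r, w))
```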
## Appendix B Transformer model
A transformer network consists of an embedding layer, encoder blocks, and decoder blocks [48]. In analogy to the RNN model, all tokens (one-hot vectors) are transformed into representations \(\mathbf{X}\in\mathbb{R}^{d\times T}\) by an embedding layer, where \(d\) denotes the dimension of the embedding space and \(T\) denotes the sequence length. In clear contrast to RNN training, the input to the transformer is the entire matrix \(\mathbf{X}\), rather than one column per step as for the RNN. Note that we have not considered a position encoding scheme (e.g., adding a vector of sinusoids of different frequencies and phases to encode the position of a word in a sentence) in our model.
An encoder block includes two parts. The first part is the self-attention mechanism,
aiming to evaluate the correlations among words in the input block \(\mathbf{X}\). To this end, we introduce three trainable matrices, namely the query \(Q\), key \(K\), and value \(V\), obtained by a linear transformation of the input:
\[Q= W_{Q}\cdot\mathbf{X}, \tag{10}\] \[K= W_{K}\cdot\mathbf{X},\] \[V= W_{V}\cdot\mathbf{X},\]
where \(W_{Q},W_{K}\in\mathbb{R}^{d_{h}\times d}\) and \(W_{V}\in\mathbb{R}^{d\times d}\) are transformation matrices, and \(d_{h}\) is the internal size of the attention operation. We define \(X_{t}\) as the \(t\)-th column of \(\mathbf{X}\) and introduce the three vectors \(k_{t}=W_{K}X_{t}\), \(v_{t}=W_{V}X_{t}\), and \(q_{t}=W_{Q}X_{t}\). The \(t\)-th column of the self-attention matrix \(\text{SA}(\mathbf{X})\) is then given by
\[\begin{split}\text{attn}(t)&=\sum_{i=1}^{T}\alpha_{i}(t)v_{i},\\ \alpha_{i}(t)&=\frac{e^{k_{i}^{\top}q_{t}/\sqrt{d_{h}}}}{\sum_{j=1}^{T}e^{k_{j}^{\top}q_{t}/\sqrt{d_{h}}}},\end{split} \tag{11}\]
where \(\alpha_{i}(t)\) is a softmax weight encoding the pairwise interactions between tokens. The normalization factor \(\sqrt{d_{h}}\) is required to keep the argument of the exponential of order one.
The second part is two feed-forward layers with skip connection, i.e.,
\[\begin{split}\mathbf{z}_{1}=&\,\text{SA}(\mathbf{X })+\mathbf{X}\hskip 56.905512pt\text{(residual layer 1)}\\ \mathbf{z}_{2}=&\,\text{ReLU}\left(W_{1}\cdot \mathbf{z}_{1}+b_{1}\right)\hskip 56.905512pt\text{(feed-forward layer 1)}\\ \mathbf{z}_{3}=& W_{2}\cdot\mathbf{z}_{2}+b_{2} \hskip 56.905512pt\text{(feed-forward layer 2)}\\ \mathbf{z}^{\text{out}}=&\mathbf{z}_{1}+\mathbf{z}_ {3}\hskip 56.905512pt\text{(residual layer 2)}\end{split} \tag{12}\]
where \(W_{1},W_{2}\) and \(b_{1},b_{2}\) are the weights and biases of the two feed-forward layers. The output representations \(\mathbf{z}^{\text{out}}\) can be considered the input of the next encoder block. Here, we use a single-headed attention transformer and do not use layer normalization, which rescales each element of a vector using the mean and variance of all elements in that vector.
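For illustration only (this is not the training code used in our experiments), the encoder block defined by Eqs. (10)-(12) can be written in a few lines of numpy; the feed-forward width \(d_{\rm ff}\) and the bias shapes are assumptions stated in the comments:

```python
import numpy as np

def encoder_block(X, Wq, Wk, Wv, W1, b1, W2, b2):
    """Single-head encoder block of Eqs. (10)-(12), without positional
    encoding or layer normalization. Shapes: X is (d, T); Wq, Wk are
    (d_h, d); Wv is (d, d); W1 is (d_ff, d) with b1 (d_ff, 1);
    W2 is (d, d_ff) with b2 (d, 1)."""
    d_h = Wq.shape[0]
    Q, K, V = Wq @ X, Wk @ X, Wv @ X              # Eq. (10)
    scores = (K.T @ Q) / np.sqrt(d_h)             # scores[i, t] = k_i^T q_t / sqrt(d_h)
    scores -= scores.max(axis=0, keepdims=True)   # stabilize the softmax numerically
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=0, keepdims=True)     # Eq. (11): softmax over i for each t
    SA = V @ alpha                                # attn(t) = sum_i alpha_i(t) v_i
    z1 = SA + X                                   # residual layer 1
    z2 = np.maximum(W1 @ z1 + b1, 0.0)            # feed-forward layer 1 (ReLU)
    z3 = W2 @ z2 + b2                             # feed-forward layer 2
    return z1 + z3                                # residual layer 2, Eq. (12)
```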
Our transformer model has only one encoder block and one decoder layer. The decoder layer is a linear layer (the readout layer) that translates the output representations \(\mathbf{z}^{\text{out}}\) into the probability of the next token, serving the same function as the readout layer of the RNN model. The dimension of the representations is \(d=300\) for all models in Figure 5. For the four RNN models, the number of neurons in the recurrent reservoir is \(N=512\). For the transformer model, it is convenient to set the hidden dimension \(d_{h}=d=300\). The training parameters for all models are set to be the same: the batch size is 128 and the learning rate is 0.001. We have chosen the Adam algorithm as our training optimizer [38].
## Appendix C The vanilla predictive learning algorithm
The vanilla predictive learning algorithm is a simplified version of our meta-predictive learning algorithm that does not consider weight uncertainty. Setting \(\mathbf{\pi}=0\) and \(\Xi=0\) in Eq. (6) and Eq. (7) of the main text leads to the following update equations for the beliefs and weights.
\[\Delta r_{i}(t^{\prime})=-\gamma\mathscr{E}^{\prime}_{1,i}(t^{\prime})+\gamma f ^{\prime}(r_{i}(t^{\prime}))\sum_{j}\mathscr{E}^{\prime}_{1,j}(t^{\prime}+1)w _{ji}+\gamma f^{\prime}(r_{i}(t^{\prime}))\sum_{j}\mathscr{E}^{\prime}_{2,j}( t^{\prime})w^{\text{out}}_{ji}, \tag{10}\]
and
\[\Delta w^{\ell}_{ij}=\frac{\eta}{N_{\ell}}\sum_{t}\mathscr{E}^{\prime}_{\ell^ {\prime},i}(t)\xi^{\ell}_{j}, \tag{11}\]
where \(\ell\), \(\ell^{\prime}\), \(\xi^{\ell}_{j}\), and \(N_{\ell}\) bear the same meanings as in the main text (see Table 1). We present the pseudocode of the vanilla predictive learning algorithm in Alg. 2.
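A minimal numpy sketch of these updates is given below; it assumes that err1[t] and err2[t] hold the hidden-state and readout prediction errors \(\mathscr{E}^{\prime}_{1}(t)\) and \(\mathscr{E}^{\prime}_{2}(t)\), whose precise definitions follow Eqs. (6) and (7) of the main text and are not reproduced in this excerpt:

```python
import numpy as np

def belief_update(r, w, w_out, err1, err2, gamma, f_prime):
    """Belief update given above. r has shape (T, N); w is the (N, N)
    recurrent matrix and w_out the readout matrix."""
    dr = np.zeros_like(r)
    for t in range(r.shape[0] - 1):
        dr[t] = (-gamma * err1[t]
                 + gamma * f_prime(r[t]) * (w.T @ err1[t + 1])
                 + gamma * f_prime(r[t]) * (w_out.T @ err2[t]))
    return dr

def weight_update(errs, xis, eta, N):
    """Weight update given above: a time-summed outer product of the
    layer's error with its input xi, scaled by eta / N."""
    return (eta / N) * sum(np.outer(e, x) for e, x in zip(errs, xis))
```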
## Appendix D Mathematical items used in the main text and associated explanations
We list the mathematical items used in the main text, together with their explanations, to help readers go through the paper smoothly, as shown in Table I. |
2302.00012 | Hidden Little Monsters: Spectroscopic Identification of Low-Mass, Broad-Line AGN at $z>5$ with CEERS | We report on the discovery of two low-luminosity, broad-line AGN at $z>5$ identified using JWST NIRSpec spectroscopy from the CEERS Survey. We detect broad H$\alpha$ emission from both sources, with FWHM of $2038\pm286$ and $1807\pm207$ km s$^{-1}$, resulting in black hole (BH) masses that are 1-2 dex below that of existing samples of luminous quasars at $z>5$. The first source, CEERS 1670 at $z=5.242$, is 2-3 dex fainter than known quasars at similar redshifts and was previously identified as a candidate low-luminosity AGN based on its rest-frame optical SED. We measure a BH mass of $M_{\rm BH}=1.3\pm0.4\times 10^{7}~M_{\odot}$, confirming that this AGN is powered by the least-massive BH known in the universe at the end of cosmic reionization. The second source, CEERS 3210 at $z=5.624$, is inferred to be a heavily obscured, broad-line AGN caught in a transition phase between a dust-obscured starburst and an unobscured quasar. We estimate its BH mass to be $M_{\rm BH}\simeq 0.9-4.7 \times 10^{7}~M_{\odot}$, depending on the level of dust obscuration assumed. We derive host stellar masses, $M_\star$, allowing us to place constraints on the BH-galaxy mass relationship in the lowest mass range yet probed in the early universe. The $M_{\rm BH}/M_\star$ ratio for CEERS 1670, in particular, is consistent with or higher than the empirical relationship seen in massive galaxies at $z=0$. We examine the emission-line ratios of both sources and find that their location on the BPT and OHNO diagrams is consistent with model predictions for low-metallicity AGN with $Z/Z_\odot \simeq 0.2-0.4$. The spectroscopic identification of low-luminosity, broad-line AGN at $z>5$ with $M_{\rm BH}\simeq 10^{7}~M_{\odot}$ demonstrates the capability of JWST to push BH masses closer to the range predicted for the BH seed population and provides a unique opportunity to study the early stages of BH-galaxy assembly. | Dale D. Kocevski, Masafusa Onoue, Kohei Inayoshi, Jonathan R. Trump, Pablo Arrabal Haro, Andrea Grazian, Mark Dickinson, Steven L. Finkelstein, Jeyhan S. Kartaltepe, Michaela Hirschmann, Seiji Fujimoto, Stephanie Juneau, Ricardo O. Amorin, Micaela B. Bagley, Guillermo Barro, Eric F. Bell, Laura Bisigello, Antonello Calabro, Nikko J. Cleri, M. C. Cooper, Xuheng Ding, Norman A. Grogin, Luis C. Ho, Akio K. Inoue, Linhua Jiang, Brenda Jones, Anton M. Koekemoer, Wenxiu Li, Zhengrong Li, Elizabeth J. McGrath, Juan Molina, Casey Papovich, Pablo G. Perez-Gonzalez, Nor Pirzkal, Stephen M. Wilkins, Guang Yang, L. Y. Aaron Yung | 2023-01-31T19:00:00Z | http://arxiv.org/abs/2302.00012v1 | # Hidden Little Monsters: Spectroscopic Identification of Low-Mass, Broad-Line AGN at \(z>5\) with CEERS
###### Abstract
We report on the discovery of two low-luminosity, broad-line active galactic nuclei (AGN) at \(z>5\) identified using JWST NIRSpec spectroscopy from the Cosmic Evolution Early Release Science (CEERS) Survey. We detect broad H\(\alpha\) emission in the spectra of both sources, with FWHM of \(2038\pm 286\) and \(1807\pm 207\) km s\({}^{-1}\), resulting in virial black hole (BH) masses that are 1-2 dex below that of existing samples of luminous quasars at \(z>5\). The first source, CEERS 1670 at \(z=5.242\), is 2-3 dex fainter than known quasars at similar redshifts and was previously identified as a candidate low-luminosity AGN based on its morphology and rest-frame optical spectral energy distribution (SED). We measure a BH mass of \(M_{\rm BH}=1.3\pm 0.4\times 10^{7}\,M_{\odot}\), confirming that this AGN is powered by the least-massive BH known in the universe at the end of cosmic reionization. The second source, CEERS 3210 at \(z=5.624\), is inferred to be a heavily obscured, broad-line AGN caught in a transition phase between a dust-obscured starburst and an unobscured quasar. We estimate its BH mass to be in the range of \(M_{\rm BH}\simeq 0.9-4.7\times 10^{7}\,M_{\odot}\), depending on the level of dust obscuration assumed. We perform SED fitting to derive host stellar masses, \(M_{*}\), allowing us to place constraints on the BH-galaxy mass relationship in the lowest mass range yet probed in the early universe. The \(M_{\rm BH}/M_{*}\) ratio for CEERS 1670, in particular, is consistent with or higher than the empirical relationship seen in massive galaxies at \(z=0\). We examine the narrow emission-line ratios of both sources and find that their location on the BPT and OHNO diagrams is consistent with model predictions for moderately low-metallicity AGN with \(Z/Z_{\odot}\simeq 0.2-0.4\). The spectroscopic identification of low-luminosity, broad-line AGN at \(z>5\) with \(M_{\rm BH}\simeq 10^{7}\,M_{\odot}\) demonstrates the capability of JWST to push BH masses closer to the range predicted for the BH seed population and provides a unique opportunity to study the early stages of BH-galaxy assembly.
High-redshift galaxies (734); Quasars (1319); Supermassive black holes (1663)

Footnote †: Kavli Astrophysics Fellow

Dale D. Kocevski (0000-0002-8820-788X)
## 1 Introduction
With the advent of wide-field quasar surveys such as those carried out by the Sloan Digital Sky Survey (SDSS; Fan et al., 2001; Jiang et al., 2016) and the Panoramic Survey Telescope & Rapid Response System 1 (Pan-STARRS1; Banados et al., 2016; Mazzucchelli et al., 2017), hundreds of quasars have been discovered and characterized at \(z>5\)(Inayoshi et al., 2020; Fan et al., 2022), with the most distant found a mere 670 million years after the Big Bang (Wang et al., 2021). The supermassive black holes (SMBHs) that power these sources have masses of order \(\sim 10^{9}\,M_{\odot}\), raising the question of how such systems were built in such a short amount of cosmic time. Most theories involve Eddington-limited or possibly super-Eddington accretion onto seed BHs that are predicted to form at \(10<z<30\) and have masses that range from \(\sim 10^{2}\,M_{\odot}\) (so-called "light seeds") to over \(\sim 10^{5}\,M_{\odot}\) ("heavy seeds") with a continuous distribution (e.g., Inayoshi et al., 2020; Volonteri et al., 2021). The relative contribution of each seed type remains largely unconstrained by observations (Miller et al., 2015; Trump et al., 2015).
Most quasar surveys, which observe \(\gtrsim 1,000\) deg\({}^{2}\) down to \(\sim 20\) mag, are sensitive to only the most luminous quasar populations (\(\sim 10^{47}\) erg s\({}^{-1}\) in bolometric luminosity; \(L_{\rm bol}\)). These ultra-rare systems, which formed in biased regions of the early universe, place limited constraints on the BH seed population, as they would have undergone sustained episodes of exponential growth, even for the most massive predicted seeds, thereby erasing the imprint of the initial seed mass distribution (e.g., Tanaka & Haiman, 2009; Volonteri, 2010). A complementary approach is to search for lower-luminosity quasars hosting SMBHs with masses closer to the predicted seed mass range at the earliest epochs possible (Somerville et al., 2008; Valiante et al., 2016; Ricarte & Natarajan, 2018; Yung et al., 2021; Li et al., 2022). Several deep optical surveys have attempted to do this by reaching a dex fainter in luminosity (e.g., Willott et al., 2007, 2010; Matsuoka et al., 2016, 2022; Kim et al., 2018, 2020; Fujimoto et al., 2022); however, these samples are still far more luminous than what is observed in the nearby universe (\(L_{\rm bol}\sim 10^{43-44}\) erg s\({}^{-1}\); e.g., Greene & Ho, 2007; Liu et al., 2018, 2019), biasing our understanding of early SMBHs toward the most massive and active populations (however, see also Mezcua et al., 2018).
Additional constraints on the seed mass distribution can be obtained by comparing the masses of high-redshift SMBHs to that of their host galaxies. In the local universe, well established scaling relationships exist between the mass of SMBHs and the bulge properties of their hosts (e.g., Magorrian et al., 1998; Gebhardt et al., 2000; Ferrarese & Merritt, 2000; McConnell & Ma, 2013; Sun et al., 2015). However, offsets from this relationship at higher redshift can help constrain models of early BH growth and their co-evolution with galaxies (Hirschmann et al., 2010; Habouzit et al., 2022; Hu et al., 2022). Observational studies have produced mixed results in this regard, with several reporting that SMBHs become increasingly overmassive relative to their hosts with increasing redshift (e.g., Trakhtenbrot & Netzer, 2010; Bennert et al., 2011; Park et al., 2015; Shimasaku & Izumi, 2019; Ding et al., 2020; Neeleman et al., 2021), while other studies report no evolution in the local scaling relationship (e.g., Willott et al., 2017; Izumi et al., 2019; Suh et al., 2020). Pushing such studies to lower SMBH and host masses at high redshifts is expected to provide additional insight into the earliest seeds. Not only are lower-luminosity AGN more representative of the normal BH population (Habouzit et al., 2022), lower mass hosts have a relatively quiet merger history and so represent a robust "fossil record" of the initial BH-seed mass distribution (Volonteri et al., 2008; Volonteri & Natarajan, 2009).
JWST is expected to be a game changer on both fronts, allowing for the detection of lower-luminosity quasars and the light of their host galaxies out to the epoch of cosmic reionization. Since its launch, JWST has already revealed the host morphologies of X-ray and optically selected AGN out to \(z\sim 4\)(Kocevski et al., 2022; Ding et al., 2022), detected the host light of a quasar at \(z\simeq 6\) for the first time (Ding et al., 2022), and identified a candidate faint quasar at \(z\simeq 7.7\)(Furtak et al., 2022). Recently, Onoue et al. (2023, hereafter O23) reported a candidate low-luminosity AGN at \(z\sim 5\) by exploiting the first NIRCam images of the CEERS program. This AGN candidate, CEERS-AGN-z5-1, has a compact morphology and shows a rest-frame UV-to-optical spectral energy distribution (SED) that can be well explained by an unobscured quasar with \(L_{\rm bol}=2.5\pm 0.3\times 10^{44}\) erg s\({}^{-1}\) and strong Balmer and [O iii] emission lines. In addition, Carnall et al. (2023) recently reported the detection of broad H\(\alpha\) emission from a quiescent galaxy at \(z=4.658\) using JWST, from which they measure a central SMBH mass of \(M_{\rm BH}=10^{8.7\pm 0.1}M_{\odot}\).
Here we report on the detection of broad H\(\alpha\) emission from two \(z>5\) galaxies, including CEERS-AGN-z5-1, using NIRSpec data obtained as part of the second epoch of CEERS observations. The first source, CEERS 1670 at \(z=5.242\), was identified as a result of targeted follow-up of CEERS-AGN-z5-1, while the second source, CEERS 3210 at \(z=5.624\), was found serendipitously while inspecting the spectra of galaxies with photometric redshifts of \(z>8\) in the literature.
We show that the SMBHs at the heart of these low-luminosity AGN have masses 1-2 dex lower than existing samples of luminous quasars with BH mass estimates at \(z>5\). We also examine the emission line ratios of both sources and place constraints on the relationship between SMBH and host mass in the lowest mass range yet probed in the early universe. Our analysis is presented as follows: in Section 2, we describe the near-infrared imaging and spectroscopy used for this study, while in Section 3, we discuss the properties of our sample. In Section 4, we outline our methodology for measuring the emission line properties of our sample. Section 5 describes our results, and the implications of our findings are discussed in Section 6. We use vacuum wavelengths for all emission-line features and, when necessary, the following cosmological parameters are used: \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}=0.7\), and \(\Omega_{\rm m}=0.3\).
## 2 Observations & Data Reduction
The Cosmic Evolution Early Release Science Survey (CEERS) is an early release science program that covers 100 arcmin\({}^{2}\) of the Extended Groth Strip (EGS) with imaging and spectroscopy using coordinated, overlapping parallel observations by most of the JWST instrument suite (Finkelstein et al., in prep). CEERS is based around a mosaic of 10 NIRCam pointings, with six NIRSpec and eight MIRI pointings observed in parallel. Here we make use of NIRCam pointings 3 and 6 obtained on 21 June 2022, as well as NIRSpec pointing 4, obtained on 21 December 2022. In each NIRCam pointing, data were obtained in the short-wavelength (SW) channel F115W, F150W, and F200W filters, and long-wavelength (LW) channel F277W, F356W, F410M, and F444W filters. The total exposure time for pixels observed in all three dithers was typically 2835 s per filter.
The NIRSpec observations were taken with the G140M/F100LP, G235M/F170LP and G395M/F290LP \(R\simeq 1000\) grating/filter pairs as well as with the \(R\simeq 30-300\) prism, providing a complete coverage of the \(1-5~{}\mu\)m range with both configurations. The observation adopted a three-nod pattern, each of the nods consisting of a single integration of 14 groups (1036 s). The coadded spectra have a total exposure time of 3107 s in each spectral configuration. Targets for the microshutter array (MSA) configuration included sources selected using the NIRCam imaging in the field from CEERS epoch one (June 2022), especially prioritizing targets with photometric redshifts of \(z>6\). Each target was observed using a "slitlet" aperture of three microshutters, and the design also included empty shutters for background subtraction. The shutter configuration for observations taken with the medium resolution gratings and the prism are identical.
We performed an initial reduction of the NIRCam images in all four pointings, using version 1.5.3 of the JWST Calibration Pipeline1 with some custom modifications. We used the current (15 July 2022) set of NIRCam reference files2, though we note that the majority were created pre-flight, including the flats and photometric calibration references. We describe our reduction steps in greater detail in Finkelstein et al. (2022) and Bagley et al. (2022). Coadding the reduced observations into a single mosaic was performed using the drizzle algorithm with an inverse variance map weighting (Fruchter & Hook, 2002; Casertano et al., 2000) via the Resample step in the pipeline. The output mosaics have pixel scales of 0\(\farcs\)03/pixel.
Footnote 1: [http://jwst-pipeline.readthedocs.io/en/latest/](http://jwst-pipeline.readthedocs.io/en/latest/)
Footnote 2: [https://jwst-crds.stsci.edu](https://jwst-crds.stsci.edu), context file jwst_nircam_0214.imap
Photometry was computed on PSF-matched images using SExtractor (Bertin & Arnouts, 1996) v2.25.0 in two-image mode, with an inverse-variance weighted combination of the PSF-matched F277W and F356W images as the detection image. Photometry was measured in all seven of the NIRCam bands observed by CEERS, as well as the F606W, F814W, F105W, F125W, F140W, and F160W HST bands using data obtained by the CANDELS and 3D-HST surveys (Grogin et al., 2011; Koekemoer et al., 2011; Brammer et al., 2012; Momcheva et al., 2016).
The CEERS NIRSpec observations (Arrabal Haro et al., in prep.) were reduced using version 1.8.5 of the JWST Science Calibration Pipeline with the Calibration Reference Data System (CRDS) mapping 1027, starting from the Level 0 uncalibrated data products ("_uncal.fits" files) available on MAST. Custom parameters were used for the jump step at the detector-level calibration for a better treatment of the "snowballs"3 produced by high-energy cosmic ray events, and a nodded background subtraction was adopted.
Footnote 3: [https://jwst-docs.stsci.edu/data-artifacts-and-features/snowballs-and-shower-artifacts](https://jwst-docs.stsci.edu/data-artifacts-and-features/snowballs-and-shower-artifacts)
The reduced two-dimensional (2D) spectra ("s2d") have a rectified trace with a flat slope. The current version (1.8.5) of the pipeline does not correctly identify source locations in the 2D spectra for one-dimensional (1D) spectra extraction. For the sources presented in this work, the 1D spectra were extracted using custom boxcar apertures centered on the visually identified continuum trace. Any remaining artifacts in the extracted spectra were masked after a detailed visual analysis. The flux uncertainties of the reduced 1D spectra appear to be underestimated by a factor of \(\sim\)2, as estimated from the normalized median absolute deviation (NMAD) of the flux in line-free regions, and so we rescale the flux uncertainty of each spectrum by a factor equal to the ratio of the line-free NMAD to the median pipeline uncertainty.
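A compact sketch of this rescaling (our own illustration, assuming an approximately flat continuum within the chosen line-free windows) is:

```python
import numpy as np

def rescale_errors(flux, err, line_free):
    """Scale pipeline flux errors by the ratio of the empirical noise,
    estimated as the NMAD (1.4826 x the median absolute deviation) of
    the flux in line-free pixels, to the median pipeline uncertainty."""
    resid = flux[line_free] - np.median(flux[line_free])
    nmad = 1.4826 * np.median(np.abs(resid))
    return err * (nmad / np.median(err[line_free]))
```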
The current version (1.8.5) of the NIRSpec MSA data reduction uses a flux calibration that relies on pre-flight knowledge of the instrument, which is known to differ from the post-launch performance (see Figure 20 of Rigby et al., 2022). The pipeline applies a correction for "slit losses" outside the MSA aperture using a _pathloss_ reference file based on a pre-launch model for point sources that has not yet been fully verified on orbit. This correction may be inaccurate for extended sources or non-default spectral extraction apertures, and indeed, by comparing spectroscopic fluxes to NIRCam photometry, we find some evidence that further corrections are required (see, e.g., Section 3). While this may impact our interpretation of individual line fluxes or luminosities, the _relative_ spectrophotometry of the reduced spectra is measured to be reliable, with line ratios of doublets ([O iii] \(\lambda\lambda\)4960, 5008; Storey & Zeippen, 2000) and Balmer lines (Osterbrock, 1989) that match physical expectations (see additional discussion in Trump et al., 2022, Arrabal Haro et al., in prep.).
## 3 Sample Description
During the initial inspection of our reduced NIRSpec data, we identified two sources with broad H\(\alpha\) emission. Information on these sources, referred to as CEERS 1670 and CEERS 3210, is listed in Table 1. CEERS 1670 was observed as a result of targeted follow-up of the AGN candidate CEERS-AGN-z5-1 identified by O23. CEERS 3210 was selected for observation as it was previously identified as a candidate massive galaxy at \(z=8.13\) by Labbe et al. (2022) and a potential strong-line emitter at \(z=5.72\) by Perez-Gonzalez et al. (2022). NIRCam images of both sources are shown in Figure 1, while their 1D and 2D spectra from the G395M grating are shown in Figure 2. Our derived redshifts, based on the [O iii] \(\lambda\lambda\)4960, 5008 narrow lines, for CEERS 1670 and CEERS 3210 are \(z=5.242\) and \(z=5.624\), respectively.

Table 1: AGN Sample

| Source Name | R.A. (deg) | Dec. (deg) | \(z\) | \(m_{F356W}\) (mag) |
| --- | --- | --- | --- | --- |
| CEERS 1670 | 214.823453 | 52.830281 | 5.242 | \(25.8\pm 0.01\) |
| CEERS 3210 | 214.809142 | 52.868484 | 5.624 | \(26.9\pm 0.04\) |

Note. – CEERS 1670 is the same source as CEERS-AGN-z5-1 in O23.
Neither source is directly detected in the deep (800 ksec) Chandra X-ray observations of the CEERS field from the AEGIS-XD survey (Nandra et al., 2015). However, the shape of their SEDs, coupled with the existence of broad-line emission in their spectra, suggests that both sources host low-luminosity AGN.
In Figure 3, we show the NIRCam photometry and NIRSpec prism spectrum of both CEERS 1670 and CEERS 3210. In the case of CEERS 1670, we find the prism spectrum must be scaled by a factor of 2\(\times\) to match the NIRCam broad-band photometry. This may be due to potential slit losses, as CEERS 1670 sits near the edge of its microshutter slit, the outline of which can be seen in Figure 1. We find no such correction is needed for the CEERS 3210 prism spectrum.
As discussed by O23, the broad-band photometry of CEERS 1670 is well reproduced by a continuum model with a single power-law function, with the exception of filters that are affected by strong line emission, namely F277W, F410M, and F444W. A single power-law fit to the other four filters yields a best-fit power-law slope of \(\alpha_{\lambda}=-1.14\pm 0.03\) (\(\equiv\) d ln \(F_{\lambda}/\) d ln \(\lambda\)), which is consistent with typical values for unobscured quasars (e.g., Fan et al., 2001; Vanden Berk et al., 2001). This power-law model yields an absolute magnitude at rest-frame 1450 A of \(M_{1450}=-19.44\pm 0.05\) mag. Likewise, the monochromatic luminosity at rest-frame 3000 A and 5100 A is \(L_{3000}=(4.83\pm 0.09)\times 10^{43}\) erg s\({}^{-1}\) and \(L_{5100}=(4.48\pm 0.08)\times 10^{43}\) erg s\({}^{-1}\), respectively. We find that a low-redshift composite spectrum of quasars (the blue model in Figure 3a) from Vanden Berk et al. (2001, hereafter VB01), scaled to match the photometry, can explain the observed spectral shape of CEERS 1670 well.
The SED of CEERS 3210 shows more complexity. The source has a blue continuum spectrum with a UV slope of \(\alpha_{\lambda}=-3.0\pm 0.3\) at \(\lambda_{\rm obs}\simeq 1-2\)\(\mu\)m and a very steep continuum spectrum (\(\alpha_{\lambda}=1.8\pm 0.2\)) with strong Balmer and [O iii] emission lines at longer wavelengths. This steep spectral slope, coupled with the broad H\(\alpha\) emission we detect, suggests that this source is a heavily obscured, broad-line AGN (e.g., Gregg et al., 2002). In Figure 3b, we overlay the composite SED of low-redshift broad-line AGN (VB01) reddened assuming a color excess of \(E(B-V)=0.9\) and the extinction law discussed in Calzetti et al. (2000). Note that this model shown with the cyan curve is essentially the same as the QSO2 SED template provided in Polletta et al. (2006). This model traces the observed prism continuum at \(\lambda_{\rm obs}\gtrsim 3\)\(\mu\)m well; however, the obscured broad-line AGN model does not explain the blue side of the observed spectrum, requiring additional components at these shorter wavelengths. We discuss more complex SED models, including fits using hybrid galaxy plus AGN models, in Section 6.3.
## 4 Line Fitting Analysis
The NIRSpec spectra of CEERS 1670 and CEERS 3210 include several prominent emission lines. The G395M/F290LP spectrum of both sources includes strong H\(\alpha\), H\(\beta\), and [O iii] \(\lambda\lambda\)4960, 5008 emission, and CEERS 3210 also features a He I \(\lambda\)5877.25 line. Both sources exhibit a weak line near the expected wavelength of the [Fe x] \(\lambda\)6376 coronal emission line. The G235M/F170LP spectrum of both sources includes the [Ne iii] \(\lambda\)3870.86 line, while CEERS 3210 also exhibits the H\(\gamma\)\(\lambda\)4341.69 and auroral [O iii] \(\lambda\)4364.44 lines.

Figure 1: JWST NIRCam images of our broad-line AGN sample at \(z>5\) taken in the short-wavelength (F150W and F200W) and long-wavelength (F277W, F356W, and F444W) filters. The RGB images are composed of images in the F150W, F277W, and F444W filters. All images are \(2\arcsec\times\,2\arcsec\) in size. The alignment of the NIRSpec microshutter aperture relative to each source is shown in red over the F444W image.

Figure 2: NIRSpec spectra of sources CEERS 1670 and CEERS 3210 taken in the G395M grating with \(R\sim 1000\). The 2D spectra are shown above with extraction windows highlighted in red. Grey regions in both the 1D and 2D spectra indicate regions masked due to artifacts identified via visual inspection. The locations of several prominent emission lines are noted.
We measure line fluxes and uncertainties with a Levenberg-Marquardt least-squares method implemented by the mpfit IDL code4. We fit isolated lines with single Gaussians and simultaneously fit multiple Gaussians for features in the Balmer line regions. The results of our line fits are shown in Figure 4.
Footnote 4: [https://pages.physics.wisc.edu/](https://pages.physics.wisc.edu/)\(\sim\)craigm/idl/fitting.html
To account for potential broad components, we fit the H\(\alpha\) line with two Gaussians: one narrow with width \(\sigma<\) 350 km s\({}^{-1}\) and one broad with width \(\sigma>\) 350 km s\({}^{-1}\). We also attempted to include additional Gaussian components for the [N ii] \(\lambda\lambda\)6550, 6585 doublet, constraining the line widths and relative line centers to match those of narrow H\(\alpha\), but found that the [N ii] lines are not significantly (\(>\)3\(\sigma\)) detected and their inclusion does not improve the \(\chi^{2}_{0}\) of the fit. We report the 1\(\sigma\) upper limit for [N ii] \(\lambda\)6585 but do not include it in the fits for broad and narrow H\(\alpha\).
We also performed a simultaneous fit of the H\(\beta\) emission-line region with components for narrow H\(\beta\) and the [O iii] \(\lambda\lambda\)4960, 5008 doublet. In both systems we tested a fit that included an additional broad (\(\sigma>\) 350 km s\({}^{-1}\)) H\(\beta\) component but found that this component is only marginally (\(<\)1\(\sigma\)) detected and including it increases the \(\chi^{2}_{0}\) of the fit. We report 1\(\sigma\) upper limits for putative broad H\(\beta\) emission that assume the same width as the broad H\(\alpha\) component applied to the local noise of the H\(\beta\) region.
Finally, we fit single narrow Gaussians for the [O ii] \(\lambda\)3728.48 (the 3727+3729 doublet is blended in the \(R\simeq\) 1000 medium-resolution NIRSpec grating), [Ne iii] \(\lambda\)3870.86, H\(\gamma\)\(\lambda\)4341.69, and [O iii] \(\lambda\)4364.44. The [Ne iii] line is significantly (\(>\)3\(\sigma\)) detected in CEERS 1670 and all the other lines are only marginally (\(<\)3\(\sigma\)) detected.
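The fits themselves were performed with mpfit in IDL; the sketch below is an equivalent Python illustration of the narrow+broad decomposition, with the 350 km s\({}^{-1}\) boundary imposed through parameter bounds and a synthetic spectrum standing in for the real extraction:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5   # speed of light in km/s
LAM0 = 6564.6     # vacuum H-alpha wavelength in Angstrom

def gauss(wave, flux, center, sigma_kms):
    """Gaussian profile with integrated flux `flux` and velocity width
    `sigma_kms`, converted to wavelength units at `center`."""
    sig = center * sigma_kms / C_KMS
    return flux / (sig * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((wave - center) / sig) ** 2)

def halpha_model(wave, f_n, c_n, s_n, f_b, c_b, s_b):
    """Narrow plus broad Gaussian components of H-alpha."""
    return gauss(wave, f_n, c_n, s_n) + gauss(wave, f_b, c_b, s_b)

# synthetic rest-frame spectrum standing in for the data
rng = np.random.default_rng(1)
wave = np.linspace(LAM0 - 60, LAM0 + 60, 400)
flux = halpha_model(wave, 5.0, LAM0, 135.0, 8.0, LAM0, 840.0)
flux += rng.normal(0.0, 0.02, wave.size)

# bounds force sigma < 350 km/s (narrow) and sigma > 350 km/s (broad)
p0 = [1.0, LAM0, 150.0, 1.0, LAM0, 800.0]
lo = [0.0, LAM0 - 10, 0.0, 0.0, LAM0 - 10, 350.0]
hi = [np.inf, LAM0 + 10, 350.0, np.inf, LAM0 + 10, np.inf]
popt, _ = curve_fit(halpha_model, wave, flux, p0=p0,
                    sigma=np.full(wave.size, 0.02), bounds=(lo, hi))
print("broad FWHM [km/s]:", 2.355 * popt[5])  # FWHM = 2 sqrt(2 ln 2) sigma
```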
## 5 Results
### Emission Line Properties
Our two AGN are identified from their broad H\(\alpha\) emission. As described above, we use a two-component fit with both narrow and broad Gaussian components in which the line centers, widths, and fluxes are free parameters. These broad+narrow fits have significantly lower \(\chi^{2}_{0}\) than single-Gaussian fits for the H\(\alpha\) lines. Both objects have best-fit narrow H\(\alpha\) components that are unresolved in the \(R\sim 1000\) NIRSpec spectra, with narrow-H\(\alpha\) widths of \(\sigma\) = \(135\pm 9\) km s\({}^{-1}\) and \(\sigma=131\pm 24\) km s\({}^{-1}\) for CEERS 1670 and CEERS 3210, respectively. The best-fit broad H\(\alpha\) components have \(\sigma=840\pm 120\) km s\({}^{-1}\) and FWHM = \(2060\pm 290\) km s\({}^{-1}\) for CEERS 1670 and \(\sigma=720\pm 87\) km s\({}^{-1}\) and FWHM = \(1800\pm 200\) km s\({}^{-1}\) for CEERS 3210 (fitting \(\sigma\) and FWHM independently).
Figure 3: The SEDs of the two low-luminosity AGN (CEERS 1670 and CEERS 3210) obtained with JWST NIRSpec and NIRCam. Left panel (a): the continuum spectral shape is explained by the composite quasar spectrum of VB01 scaled to match the photometry of CEERS 1670 (blue), and is fitted well with a single power law with an index of \(\alpha_{\lambda}=-1.14\) (dashed). The galaxy SED model with \(M_{*}\simeq 6.0\times 10^{9}\)\(M_{\odot}\) is overlaid (red), where the stellar continuum in the F356W filter becomes comparable to the observed F356W flux density. This gives a robust upper bound on the underlying stellar population. Right panel (b): the source has a blue continuum spectrum with a UV slope of \(\alpha_{\lambda}<-3.0\) at \(\lambda_{\rm obs}\simeq 1-2\)\(\mu\)m and a very steep continuum spectrum (\(\alpha_{\lambda}\simeq 2.0\)) at longer wavelengths. The redder part can be explained either by a heavily obscured quasar (cyan) or a dusty starburst galaxy (red). As a possible explanation of the blue excess in the spectrum, an unobscured broad-line AGN contribution is added to the dusty starburst galaxy (blue). In the dusty galaxy model, the stellar mass is set to \(M_{*}\simeq 6\times 10^{10}\)\(M_{\odot}\) (see the text in Section 6.3).
In contrast, the H\(\beta\) emission lines of both objects are best-fit by single narrow Gaussians, with no statistical improvement from including a broad component. Both H\(\beta\) lines appear to be unresolved, with best-fit single-Gaussian widths of \(\sigma=145\pm 17\) km s\({}^{-1}\) for CEERS 1670 and \(\sigma=108\pm 33\) km s\({}^{-1}\) for CEERS 3210. We compute upper limits for a potential (undetected) broad H\(\beta\) component by assuming a Gaussian of the same width as the measured H\(\alpha\) broad lines with the noise properties of the H\(\beta\) region in the spectrum. In both cases the upper limit for potential H\(\beta\) broad emission is statistically consistent with a broad H\(\alpha\)/H\(\beta\) = 3.1 (Osterbrock, 1989): CEERS 1670 has a lower limit of broad H\(\alpha\)/H\(\beta>2.4\) (3\(\sigma\)) and CEERS 3210 has a lower limit of H\(\alpha\)/H\(\beta>3.0\) (3\(\sigma\)). In other words, both CEERS 1670 and CEERS 3210 are consistent with (undetected) broad H\(\beta\) emission that matches typical Type 1 AGN H\(\alpha\)/H\(\beta\) ratios, and the lack of observed broad H\(\beta\) in CEERS 1670 and CEERS 3210 cannot be used to classify them as intrinsic Type 1.5 AGN.
The narrow Balmer emission lines imply modest dust attenuation in both objects. CEERS 1670 has a measured narrow-line Balmer decrement of H\(\alpha\)/H\(\beta\) = \(3.9\pm 0.5\) and CEERS 3210 has a narrow-line H\(\alpha\)/H\(\beta\) = \(5.3\pm 2.1\). We use these Balmer decrements as priors to inform the SED fitting in Sections 6.2 and 6.3.
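For reference, the mapping from a Balmer decrement to a color excess under the Calzetti et al. (2000) curve can be sketched as follows; the intrinsic ratio is an input assumption (2.86 for Case B is a common default, with somewhat higher values often adopted for AGN narrow-line regions), and the stellar color excess used in our SED fits additionally involves the Calzetti stellar-to-gas scaling:

```python
import numpy as np

K_HA, K_HB = 3.33, 4.60   # Calzetti et al. (2000) curve at H-alpha, H-beta

def ebv_gas(ratio_obs, ratio_int=2.86):
    """Gas-phase color excess from an observed Balmer decrement:
    E(B-V) = 2.5 / (k_Hbeta - k_Halpha) * log10(R_obs / R_int)."""
    return 2.5 / (K_HB - K_HA) * np.log10(ratio_obs / ratio_int)

print(ebv_gas(3.9))  # CEERS 1670 narrow-line decrement
print(ebv_gas(5.3))  # CEERS 3210
```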
Intriguingly, both AGN have weak emission-line features that are consistent with marginally-detected [Fe x] \(\lambda\)6376, as seen in Figures 2 and 4. [Fe x] is a coronal emission line with an ionization potential of 262 eV that is observed in low-mass AGN in the local universe (e.g., Molina et al. 2021). The putative [Fe x] emission lines are marginally detected with SNR=2.4 for CEERS 1670 and only SNR=1.5 for CEERS 3210. Both lines are best-fit to be slightly redder than the other narrow-line features: if the marginal detections represent genuine emission lines then they may indicate a kinematic offset between the extreme-ionization coronal gas and the narrow-line region.

Figure 4: The rest-frame spectra (black histograms) and associated uncertainty (gray error bars) of both sources in regions with emission-line features. Red lines show the best-fit Gaussians for narrow emission lines and the blue line shows the best-fit broad component for H\(\alpha\), which have FWHM of \(2060\pm 286\) and \(1802\pm 204\) km s\({}^{-1}\) for CEERS 1670 and CEERS 3210, respectively.
Finally, in Figure 5 we plot the narrow emission-line ratios of both sources on the BPT ([O iii]/H\(\beta\) versus [N ii]/H\(\alpha\); Baldwin et al. 1981) and OHNO ([Ne iii]/[O ii] versus [O iii]/H\(\beta\); Backhaus et al. 2022) line-ratio diagnostics that are commonly used to classify galaxies as dominated by emission from AGN or star formation. The colored curves in Figure 5 indicate MAPPINGS V photoionization models from Kewley et al. (2019), with different colored curves for different ionization (log(\(Q/[\mathrm{cm\ s^{-1}}]\)) = [7, 8, 9] increasing left to right), metallicity along each curve (\(Z/Z_{\odot}=[1,0.4,0.2,0.05]\) as indicated in the legend), and curves shown for each of three thermal pressures (log(\(Pk_{B}^{-1}/[\mathrm{K\ cm^{-3}}]\)) = [7, 8, 9]). The MAPPINGS V models use \(\alpha\)-enhanced abundances as described in Nicholls et al. (2017), such that low metallicities include enhanced relative abundances of O and Ne (and a lower relative abundance of N). Figure 5 also includes comparison samples of high-redshift galaxy line ratios from early JWST spectroscopy: stacked CEERS measurements from Sanders et al. (2023) in the BPT and SMACS ERO galaxies from Trump et al. (2022) in the OHNO diagram.
At low redshift (\(z\lesssim 2\)), AGN typically have higher [N ii]/H\(\alpha\), [O iii]/H\(\beta\), and [Ne iii]/[O ii] ratios due to harder ionizing radiation from the AGN accretion disk, and line-ratio diagnostics shown in Figure 5 can be used to separate AGN from star-forming galaxies. However, high-redshift galaxies show systematic offsets relative to galaxies and AGN at \(z=0\), with higher ionization and lower metallicity in both AGN and from star-forming H ii regions (Shapley et al. 2005; Erb et al. 2006; Liu et al. 2008; Kewley et al. 2013, 2023; Sanders et al. 2023). Both CEERS 1670 and CEERS 3210 have high [O iii]/H\(\beta\), low [N ii]/H\(\alpha\), and high [Ne iii]/[O ii] line ratios that are consistent with MAPPINGS V photoionization models for high ionization (log(\(Q/[\mathrm{cm\ s^{-1}}]\)) \(\simeq 8\)) and moderately low metallicity (\(Z/Z_{\odot}\simeq 0.2-0.4\)).
The AGN line ratios and interstellar medium conditions implied in Figure 5 are virtually indistinguishable from star-forming galaxies observed at similar redshifts, since high-redshift H ii regions have similarly high ionization and low metallicity to these \(z\sim 5\) AGN narrow-line regions. Photoionization models show that low-metallicity AGN can have similar [O iii]/H\(\beta\) and [N ii]/H\(\alpha\) ratios and lie within or even below the star-forming branch (Groves et al. 2006; Feltre et al., 2016). Although low-metallicity AGN are rare in the local universe (e.g., Storchi-Bergmann et al., 1998; Groves et al., 2006), recent simulations that make use of the AGN photoionization models presented in Feltre et al. (2016) predict that high-redshift, low-metallicity AGN should primarily occupy the top portion of the local star-forming branch (Hirschmann et al., 2019, 2022), in agreement with our findings. The fact that neither source is X-ray detected and that their BPT line ratios are similar to those of star-forming galaxies observed at the same redshift means that their broad-line emission may be one of the few ways to detect these high-redshift, low-luminosity AGN. Other possible approaches include diagnostics with high-ionization and extreme-ionization lines (e.g., He ii and [Ne v]; Feltre et al., 2016; Nakajima & Maiolino, 2022; Cleri et al., 2023). Preselection with photometric colors may also be useful to select fast-growing BHs with \(M_{\rm BH}\sim 10^{6-7}\)\(M_{\odot}\) in metal-poor environments (Inayoshi et al., 2022).

Figure 5: Left Panel (a): The BPT emission-line diagnostic diagram. The gray contours denote the distribution of local star-forming galaxies and AGN as measured by the SDSS survey (York et al. 2000). Black diamonds denote stacked line ratios of CEERS galaxies at \(z\sim 5.6\), \(z\sim 4.5\), and \(z\sim 3.3\)(Sanders et al. 2023). The black long and short-dashed lines denote the \(z=0\) and \(z=2.3\) boundary between the star-forming and AGN regions of the diagram defined by Kauffmann et al. (2003) and Kewley et al. (2013), respectively. Right Panel (b): the OHNO diagnostic diagram. Black squares denote line ratios of SMACS ERO galaxies at \(5.3<z<8.5\)(Trump et al. 2022) and gray contours denote the distribution of \(z\sim 0\) SDSS galaxies. The dashed line denotes the boundary between star-forming and AGN regions as defined in Backhaus et al. (2022). Colored curves in both panels show MAPPINGS V photoionization models (Kewley et al. 2019). The three color-coded sets of curves and points along those curves correspond to different ionization parameters and metallicities, as indicated by the legends, with three curves for each color corresponding to different gas pressures as described in the text. Both of our \(z\sim 5\) AGN have narrow-line ratios that are consistent with low metallicity and high ionization, with little difference from the emission-line ratios observed for other populations of high-redshift galaxies.
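As an illustration, the \(z=0\) demarcation of Kauffmann et al. (2003) shown in Figure 5a can be encoded in a few lines; as emphasized above, at \(z\sim 5\) low-metallicity AGN can fall on the star-forming side of this boundary, so it should not be applied in isolation at high redshift:

```python
def bpt_class_z0(log_nii_ha, log_oiii_hb):
    """Classify a point on the BPT diagram against the z=0 boundary of
    Kauffmann et al. (2003): star-forming galaxies lie below
    log([OIII]/Hb) = 0.61 / (log([NII]/Ha) - 0.05) + 1.3."""
    if log_nii_ha >= 0.05:
        return "AGN"  # beyond the asymptote of the boundary curve
    boundary = 0.61 / (log_nii_ha - 0.05) + 1.3
    return "AGN" if log_oiii_hb > boundary else "star-forming"

print(bpt_class_z0(-1.0, 0.8))  # illustrative ratios, not our measurements
```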
### Virial BH Mass Estimates
In this section, we estimate the virial BH masses of the two broad-line AGN assuming that their broad H\(\alpha\) emission traces the kinematics of gas in the broad-line region. The single-epoch BH mass estimation method is best calibrated against the width of the broad H\(\beta\) emission line and the rest-frame 5100 A continuum luminosity (\(L_{5100}\)) using the reverberation mapping technique (e.g., Kaspi et al., 2000). However, since we do not detect a broad H\(\beta\) component in our spectra, we instead employ the BH mass relationship proposed by Greene & Ho (2005, hereafter GH05), which relies entirely on H\(\alpha\) emission. This method has been widely used in, for example, BH mass estimates for AGN in dwarf galaxies (e.g., Reines et al., 2013; Baldassare et al., 2015). The recipe is based on empirical correlations between the Balmer emission-line luminosities and \(L_{5100}\), and between the line widths of H\(\beta\) and H\(\alpha\).
In terms of the broad H\(\alpha\) line width and \(L_{5100}\), the BH mass formula is expressed as:
\[M_{\rm BH}=5.04\times 10^{6}\,M_{\odot}\left(\frac{L_{5100}}{10^{44}\ {\rm erg \ s^{-1}}}\right)^{0.64}\left(\frac{{\rm FWHM}_{\rm H\alpha}}{10^{3}\ {\rm km\ s^{-1}}}\right)^{2.06}. \tag{1}\]
This equation is based on the formula of Kaspi et al. (2000) for H\(\beta\) with the H\(\beta\) line width substituted with that of H\(\alpha\) (Equation 3 of GH05). It is important to note that this equation assumes that the 5100 A continuum luminosity is dominated by light from the AGN. Alternatively, we can directly apply the virial BH mass recipe of GH05, which is based on the broad H\(\alpha\) line width and luminosity:
\[M_{\rm BH}=2.0\times 10^{6}\left(\frac{L_{\rm H\alpha}}{10^{42}\ {\rm erg\ s^{-1}}} \right)^{0.55}\left(\frac{{\rm FWHM}_{\rm H\alpha}}{10^{3}\ {\rm km\ s^{-1}}}\right)^{2.06}M_{\odot}. \tag{2}\]
First, we use the line width of the broad H\(\alpha\) component detected in our NIRSpec spectroscopy, corrected for the \(R\sim 1000\) instrumental resolution, together with \(L_{5100}\) derived from the photometric SED, to estimate the virial BH mass of CEERS 1670. Using Equation 1 results in a BH mass of \(M_{\rm BH}=1.3\pm 0.4\times 10^{7}\)\(M_{\odot}\), with an Eddington ratio of \(L_{\rm bol}/L_{\rm Edd}=0.15\pm 0.04\). We use the bolometric luminosity inferred from \(L_{3000}\) to be consistent with other \(z>5\) BH mass estimates in the literature, applying a bolometric correction of \(L_{\rm bol}=5.15L_{3000}\)(Richards et al., 2006) to derive \(L_{\rm bol}=2.49\pm 0.04\times 10^{44}\ {\rm erg\ s^{-1}}\). Using instead the H\(\alpha\) line width and luminosity, Equation 2 yields \(M_{\rm BH}=1.1\pm 0.3\times 10^{7}\)\(M_{\odot}\). This value is more systematically uncertain than our first estimate, owing to potential slit losses (see Section 2), although it is consistent within the \(1\sigma\) error.
The BH mass estimate for CEERS 3210 is complicated because of its potentially obscured nature. Taking the observed H\(\alpha\) luminosity at face value and applying Equation 2, we derive a mass of \(M_{\rm BH}=9.0\pm 2.2\times 10^{6}\)\(M_{\odot}\). We caution that this value is likely a lower limit since the H\(\alpha\) emission is likely affected by dust extinction. If we assume that a dust-reddened AGN continuum dominates the observed rest-optical spectrum with \(A_{V}=4\) (see Section 6.3), the inferred BH mass could be as high as \(M_{\rm BH}=4.7\pm 1.2\times 10^{7}\)\(M_{\odot}\). A careful decomposition of the AGN/host components, and, if the AGN is dust-reddened, measurements of AGN continuum luminosity at rest-frame infrared wavelengths (Greene et al., 2014; Kim et al., 2015) are required to better estimate the intrinsic continuum luminosity and subsequently the virial mass for this AGN.
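Both recipes, and the numbers quoted above, can be reproduced directly; in the sketch below, the Eddington luminosity coefficient of \(1.26\times 10^{38}\) erg s\({}^{-1}\) per solar mass is the standard value for ionized hydrogen, an assumption we add for the Eddington-ratio estimate:

```python
def mbh_from_l5100(l5100, fwhm_ha):
    """Equation 1: virial BH mass (Msun) from the 5100 A continuum
    luminosity (erg/s) and the broad H-alpha FWHM (km/s)."""
    return 5.04e6 * (l5100 / 1e44) ** 0.64 * (fwhm_ha / 1e3) ** 2.06

def mbh_from_lha(l_ha, fwhm_ha):
    """Equation 2: virial BH mass (Msun) from the broad H-alpha
    luminosity (erg/s) and FWHM (km/s)."""
    return 2.0e6 * (l_ha / 1e42) ** 0.55 * (fwhm_ha / 1e3) ** 2.06

def eddington_ratio(l_bol, m_bh):
    """L_bol / L_Edd, with L_Edd = 1.26e38 (M_BH / Msun) erg/s."""
    return l_bol / (1.26e38 * m_bh)

# CEERS 1670: L5100 = 4.48e43 erg/s, FWHM = 2060 km/s  ->  ~1.3e7 Msun
m1670 = mbh_from_l5100(4.48e43, 2060.0)
print(f"CEERS 1670: {m1670:.2e} Msun, "
      f"lambda_Edd = {eddington_ratio(5.15 * 4.83e43, m1670):.2f}")
# CEERS 3210, no extinction correction: L_Ha = 1.67e42 erg/s, FWHM = 1800 km/s
print(f"CEERS 3210: {mbh_from_lha(1.67e42, 1800.0):.2e} Msun")
```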
## 6 Discussion
### The \(M_{\rm BH}-L_{\rm bol}\) Distribution
The successful spectroscopic identification of two low-luminosity broad-line AGN at \(z>5\) opens up a new parameter space for high-redshift AGN studies, thanks to the unprecedented infrared sensitivity of JWST and the multi-wavelength photometric dataset available in the EGS field. Figure 6 shows the distribution of \(z\gtrsim 5\) AGN in the BH mass - bolometric luminosity plane with the two new low-luminosity AGN shown in red and orange.
As discussed in O23, CEERS 1670 is 2-3 dex fainter than known quasars at \(z\gtrsim 5\)(e.g., Willott et al., 2010; Trakhtenbrot et al., 2011; Shen et al., 2019; Onoue et al., 2019; Matsuoka et al., 2019; Kato et al., 2020), with a luminosity more comparable to those of typical nearby AGN (e.g., Liu et al., 2019). The virial BH mass estimate we present above now shows that this low-luminosity AGN hosts by far the least-massive BH known in the universe at the end of cosmic reionization. The modest Eddington ratio of CEERS 1670 suggests that this AGN has been identified after its rapid accretion mode has
ended, although it is possible the system will experience future bursts of heavy accretion (Li et al., 2022).
For CEERS 3210, if we use the observed H\(\alpha\) luminosity without an extinction correction, then the BH powering this AGN may be as low-mass as that of CEERS 1670. However, if we assume heavy dust attenuation (\(A_{V}=4\)), it becomes a BH accreting at a rate above the Eddington limit. In Figure 6, we show our results assuming both no extinction for the H\(\alpha\) luminosity and \(A_{V}=4\), with the bolometric luminosity converted from \(L_{5100}\) estimated from the H\(\alpha\) luminosity. Adopting a more moderate level of dust extinction inferred from the observed Balmer decrement in the NIRSpec spectrum (H\(\alpha\)/H\(\beta=5.3\); \(A_{V}=1.9\)) brings the bolometric luminosity of the source closer to the Eddington value. Thus, CEERS 3210 is likely in its most active mode of accretion and on the way to expelling the material that currently obscures it. Fujimoto et al. (2022) report a dust-reddened AGN at \(z=7.19\), the BH mass of which is estimated to be \(M_{\rm BH}\lesssim 10^{8}\)\(M_{\odot}\) based on the upper limit of its X-ray luminosity. Although not confirmed, their AGN and CEERS 3210 may be drawn from the same population of high-redshift dust-reddened AGN. We discuss this scenario in greater detail in Section 6.3 below.
### Constraints on the Host Galaxy Mass of CEERS 1670
Figure 3a shows the prism spectrum and NIRCam photometric flux densities of CEERS 1670. As discussed in Section 3, the continuum spectral shape can be explained by the low-redshift composite quasar spectrum of VB01. Since the observed spectrum is dominated by the central AGN contribution, it is challenging to estimate the stellar mass of the host galaxy in a plausible way. O23 conducted an SED fitting analysis of the photometric data using templates of metal-poor galaxies (Inoue, 2011). The best-fit model with pure galaxy SEDs, in which the quasar contribution is neglected, suggests a metallicity of \(Z=0.2\)\(Z_{\odot}\), a stellar age of 500 Myr, a star formation rate (SFR) of 3.6 \(M_{\odot}\) yr\({}^{-1}\), and a stellar mass of \(1.8\times 10^{9}\)\(M_{\odot}\). This value is considered an upper bound on the stellar mass among the SED templates O23 explored, but the true upper bound depends sensitively on the properties of the assumed stellar population. In the following, we give a robust upper bound on the stellar mass built up in the host galaxy at \(z\gtrsim 5\), assuming SED model parameters that yield a high mass for the given stellar luminosity.
One advantage of focusing on \(z>5\) galaxies is that the stellar age is limited to the age of the Universe, e.g., \(t\simeq 1\) Gyr at \(z=5.7\). Although the star formation history (SFH) in the galaxy is unconstrained, the mass-to-light ratio (\(M_{\star}/L_{\star}\)) in the rest-frame optical and near-infrared band tends to increase with time (e.g., Bell & de Jong, 2001); for instance, the \(M_{\star}/L_{\star}\) ratio in the \(B\)-band can be approximated as \(\propto t\)
Table 2: Derived AGN Properties

| ID | \(M_{1450}\) (mag) | \(L_{5100}\) (\(10^{43}\) erg s\({}^{-1}\)) | \(L_{\rm H\alpha}\)(broad) (\(10^{42}\) erg s\({}^{-1}\)) | FWHM\({}_{\rm H\alpha,broad}\) (km s\({}^{-1}\)) | \(M_{\rm BH}\) (\(10^{7}\,M_{\odot}\)) | \(\lambda_{\rm Edd}\) | \(M_{\star}\) (\(10^{9}\,M_{\odot}\)) | (H\(\alpha\)/H\(\beta\))\({}_{\rm obs}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1670 | \(-19.4\pm 0.05\) | \(4.48\pm 0.08\) | \(1.64\pm 0.21\) | \(2060\pm 290\) | \(1.3\pm 0.4\) | \(0.15\pm 0.04\) | \(<6.0\) | \(3.9\pm 0.5\) |
| 3210 | See text | – | \(1.67\pm 0.16\) | \(1800\pm 200\) | \(0.90\pm 0.22\) | \(0.29\pm 0.08\) | \(<60.0\) | \(5.3\pm 2.1\) |
| 3210 (\(A_{V}=4\)) | See text | – | \(34.4\pm 3.4\) | \(1800\pm 200\) | \(4.7\pm 1.2\) | \(3.5\pm 0.9\) | \(<60.0\) | \(5.3\pm 2.1\) |

Note. – The BH mass for CEERS 1670 uses \(L_{5100}\) estimated from the photometric SED and the line width of broad H\(\alpha\) (FWHM\({}_{\rm H\alpha,broad}\)) (Equation 1), while for CEERS 3210 we use FWHM\({}_{\rm H\alpha,broad}\) and the line luminosity of broad H\(\alpha\) (Equation 2). The bolometric luminosity is also converted from \(L_{\rm H\alpha}\) for CEERS 3210. In the third row, we show the case where CEERS 3210 is heavily dust-reddened with \(A_{V}=4\). The H\(\alpha\) luminosities are reported as observed, with no correction for potential slit losses.
Figure 6: The BH mass - bolometric luminosity plane. Quasar samples at \(z\geq 5\) are shown as blue and green symbols and contours, while low redshift AGN are shown in black. CEERS 1670 and CEERS 3210 have BH masses 1-2 dex below that of known high redshift quasars and more comparable to those of typical nearby AGN.
at \(t\sim 1\) Gyr when a constant star formation rate (or decaying with a delay time)5 is assumed (Into & Portinari, 2013). Therefore, for the purpose of deriving an upper bound of the stellar mass, we adopt a characteristic time of \(t=1\) Gyr. We use the population synthesis code STARBURST99 version v7.0.1 (Leitherer et al., 1999) to generate stellar SEDs of galaxies. Here, we assume the Kroupa IMF (Kroupa, 2001; \(0.1-100\)\(M_{\odot}\)), the Padova isochrone models, and constant star formation with a duration of 1 Gyr. We consider two values of stellar metallicity (\(Z=Z_{\odot}\) and 0.2 \(Z_{\odot}\)) to show the metallicity dependence, while we note that the solar-metallicity case gives a higher upper bound of the stellar mass. We take into account dust attenuation by the extinction law of starburst galaxies (Calzetti et al., 2000). The color excess of the stellar continuum is fixed to \(E_{\rm s}(B-V)=0.09\), which is calculated from the Balmer decrement of the narrow emission lines we detect in the NIRSpec spectra (see Section 5.1).
Footnote 5: One of the most extreme SFHs is the case where a galaxy forms in a single burst episode at \(z\to\infty\). The \(M_{*}/L_{*}\) ratio then continuously decreases with time due to the death of massive stars, which are not the dominant population by mass for a standard initial mass function (IMF) (e.g., Kroupa, 2001; Chabrier, 2003). However, such an SFH is not applicable to our targets, whose strong emission lines indicate active star formation.
This model, when scaled to the flux density in the F356W filter, results in a host mass of \(M_{*}=6.0\times 10^{9}\)\(M_{\odot}\). This galaxy SED model is shown in Figure 3a as the red curve. Therefore, we argue that the stellar mass of the host galaxy is limited to \(M_{*}<6.0\times 10^{9}\)\(M_{\odot}\) for CEERS 1670 so that the stellar continuum flux density does not exceed the observed continuum level. We note that the upper bound depends significantly on the low-mass end (\(m_{*,\rm min}\)) of the stellar IMF; for instance, the upper bound is reduced by a factor of \(\sim 3\) for \(m_{*,\rm min}=1.0\)\(M_{\odot}\).
### The Obscured Nature of CEERS 3210
Figure 3b shows the prism spectrum and NIRCam photometric flux densities of CEERS 3210. The red spectral shape with an index of \(\alpha_{\lambda}\simeq 2.0\) at longer wavelengths can be explained either by a heavily obscured quasar (cyan) or a dusty starburst galaxy (red). Both models require the existence of obscuring material along the line of sight: a typical visual extinction of \(A_{V}\simeq 3.65\) and 4.0 for the obscured quasar and dusty galaxy model, respectively. We note that this dusty-galaxy SED is calculated with the same galaxy model as discussed in Section 6.2, but assuming a stellar mass of \(M_{*}=6\times 10^{10}\)\(M_{\odot}\) and a different level of extinction. However, neither of the SED models explains the excess of the observed spectrum at \(\lambda_{\rm obs}\lesssim 2\)\(\mu\)m, requiring additional blue components.
One possible explanation for the blue component is dust (and electron) scattering, which preserves the spectral shape of the intrinsic broad-line AGN component (e.g., Zakamska et al., 2005). In fact, obscured quasars at low redshifts (\(z<2.5\)) tend to show optical polarization levels higher than those of unobscured populations (Alexandroff et al., 2018). The fraction of the scattered flux relative to the primary component depends on the covering factor of the scattering medium and our viewing angle. For instance, assuming that 0.6% of the radiation flux of the intrinsic spectrum is scattered to our line of sight (see the Torus model in Polletta et al., 2006), the total SED is consistent with the photometric flux densities. Alternatively, the spectral shape of CEERS 3210 could be explained by the combination of quasar emission at short wavelengths and light from a heavily obscured starburst galaxy dominating at long wavelengths. This combination of AGN+galaxy light is shown as the blue curve in Figure 3b. If this is the case, CEERS 3210 would be caught in a transition stage, moving from a dust-obscured starburst to an unobscured luminous quasar by expelling gas and dust. This model hypothesis is consistent with the dust-reddened AGN at \(z=7.19\) reported by Fujimoto et al. (2022), the BH mass of which is similar to that of CEERS 3210. This would make CEERS 3210 a dusty progenitor of the luminous, unobscured quasars observed by ground-based quasar surveys.
We can place a constraint on the host galaxy mass of CEERS 3210 following the same arguments used for CEERS 1670. Assuming the light at longer wavelengths is entirely dominated by stellar emission and using a dust-obscured (\(A_{V}=4.0\)) version of the stellar population described in Section 6.2, we obtain an upper limit on the host mass of \(M_{*}\lesssim 6.0\times 10^{10}\)\(M_{\odot}\). It is worth noting that the unobscured galaxy SED is modeled so that it has the highest \(M_{*}/L_{*}\) ratio, and thus our estimate gives a conservative upper bound. Using the hybrid quasar + dusty galaxy model does not appreciably change this upper limit, as the steep spectral slope at \(\lambda_{\rm obs}>3\)\(\mu\)m is dominated by the galaxy component in the second scenario.
Nevertheless, it is difficult to distinguish these two scenarios using the current data. Thus, multi-wavelength follow-up observations such as rest-frame infrared and far-infrared imaging are needed to further constrain the nature of CEERS 3210. We leave a more detailed SED analysis of this source to future work.
### BH-Galaxy Coevolution at \(z\simeq 5\)
The empirical relation between the masses of SMBHs and their host galaxies is considered to be one of the most important outcomes of their mutual evolution over the cosmic timescale (e.g., Kormendy & Ho, 2013). To constrain how and when the BH-galaxy correlations were established, the \(M_{*}-M_{\rm BH}\) distribution at the earliest epoch of the universe needs to be unveiled. The apparent location of high-\(z\) quasars and their hosts also gives us more information on the BH growth mechanisms and their seeding processes (Inayoshi et al., 2022; Hu et al., 2022; Scoggins et al., 2023).
Our first source, CEERS 1670, is a broad-line AGN with a BH mass of \(M_{\rm BH}\simeq 1.3\times 10^{7}\,M_{\odot}\) hosted in a star-forming galaxy with a stellar mass limited below \(M_{\star}<6.0\times 10^{9}\,M_{\odot}\). Our second source, CEERS 3210, is inferred to be a heavily obscured broad-line AGN with a BH mass of \(M_{\rm BH}\simeq 4.7\times 10^{7}\,M_{\odot}\) (or \(9.0\times 10^{6}\,M_{\odot}\) if it is unobscured). The host stellar mass is possibly as high as \(M_{\star}\lesssim 6.0\times 10^{10}\,M_{\odot}\) in the case of the hybrid quasar + dusty galaxy model.
Figure 7 shows the \(M_{\star}-M_{\rm BH}\) distribution of \(z\gtrsim 6\) quasars compiled in Izumi et al. (2021) (circles), for which the stellar mass is assumed to be the [C ii]-based dynamical mass. CEERS 1670 is located at the bottom-left corner of the plane, clearly separated from the known \(z\gtrsim 6\) quasar population (e.g., Wang et al., 2013; Venemans et al., 2016; Izumi et al., 2021). The mass ratio of \(M_{\rm BH}/M_{\star}>2.4\times 10^{-3}\) for CEERS 1670 is consistent with or higher than that expected from the empirical relation seen in massive galaxies at \(z=0\) (black line; Kormendy & Ho, 2013), but is overmassive compared to the BH-to-galaxy mass ratio measured for nearby broad-line AGN whose virial BH masses are estimated to be as low as that of CEERS 1670 (Reines & Volonteri, 2015). On the other hand, adopting the dust-corrected BH mass and dusty-galaxy SED model, CEERS 3210 is located well below the empirical relation at \(z\simeq 0\). An important caveat is that the upper bound of the stellar mass can be reduced by a factor of \(\simeq 3-5\) with a different stellar population and star formation history (see discussion in Section 6.2). Further follow-up observations would give a better estimate of the stellar mass. The existence of such an overmassive BH, if confirmed, provides us with a unique opportunity to study the early stage of the BH-galaxy assembly.
### Update of \(z\sim 5\) AGN Luminosity Function
We update the UV luminosity function of \(z=5\) AGN from O23, based on the spectroscopic redshift of CEERS 1670. We do not include CEERS 3210 in our discussion, because of its unconstrained intrinsic UV luminosity. Following O23, we do not aim to provide statistical constraints on the luminosity function based on our small and incomplete sample, but rather quantify how serendipitous the discovery of our low-luminosity AGN at \(z>5\) is within the 34.5 arcmin\({}^{2}\) of the first NIRCam pointings of the CEERS survey. Adopting the spectroscopic redshift of \(z=5.24\) and a redshift interval of \(\Delta z=\pm 0.5\), we derive an AGN number density of \(\Phi=1.07\times 10^{-5}\) Mpc\({}^{-3}\) mag\({}^{-1}\) at the UV magnitude of \(M_{1450}=-19.4\) mag. The difference from O23 (\(\Phi=1.03\times 10^{-5}\) Mpc\({}^{-3}\) mag\({}^{-1}\)) is tiny, because the central redshift of \(z=5.24\) differs only slightly from their work (\(z=5.15\)). The updated luminosity function is presented in Figure 8.
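The number density follows from dividing the single detection by the comoving volume subtended by the survey area within the adopted redshift slice, per 1-mag bin and without a completeness correction. A sketch of this estimate, assuming the Planck 2018 cosmology (O23 may adopt slightly different cosmological parameters, hence a small offset from the quoted value):

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18 as cosmo

area = 34.5 * u.arcmin**2
sky_fraction = (area / (4 * np.pi * u.sr)).decompose()   # ~2.3e-7 of the full sky

# Comoving volume of the slice z = 5.24 +/- 0.5 covered by the survey area
V = (sky_fraction * (cosmo.comoving_volume(5.74)
                     - cosmo.comoving_volume(4.74))).to(u.Mpc**3)

phi = 1.0 / V.value   # one object per volume, per 1-mag bin
print(f"Phi ~ {phi:.2e} Mpc^-3 mag^-1")   # ~1e-5, close to the quoted 1.07e-5
```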
The faint end of the \(z>5\) AGN/quasar luminosity function is a matter of debate, because low-luminosity AGN produce more ionizing photons in a given cosmic volume than do much rarer luminous AGN, and thus the steepness of the faint end is critical for inferring the relative role of AGN in cosmic reionization with respect to star-forming galaxies (e.g., Giallongo et al., 2015; Onoue et al., 2017; McGreer et al., 2018; Matsuoka et al., 2018; Giallongo et al., 2019; Finkelstein et al., 2019; Grazian et al., 2020, 2022; Niida et al., 2020; Kim et al., 2020; Kim & Im, 2021; Jiang et al., 2022; Finkelstein & Bagley, 2022; Yung et al., 2021). The space density that we infer suggests that low-luminosity AGN such as CEERS 1670 may in fact be common, in agreement with previous studies that have identified candidate faint quasars in relatively small survey areas (e.g., Fujimoto et al., 2022). However, a complete survey of low-luminosity AGN with a well-defined selection function, as well as a careful analysis of the host galaxy contribution to the UV magnitudes (Bowler et al., 2021; Adams et al., 2022; Harikane et al., 2022), is required to place statistical constraints on the AGN abundance at the faint end, and subsequently on the relative contribution of AGN to the cosmic hydrogen/helium reionization.
## 7 Conclusions
We make use of JWST NIRSpec spectroscopy from the CEERS Survey to identify two low-luminosity AGN at \(z>5\) with broad H\(\alpha\) emission in their spectra. The first source, CEERS 1670 at \(z=5.242\), has a UV magnitude of \(M_{1450}=-19.4\pm 0.05\), making it 2-3 dex fainter than known quasars at similar redshifts. The source was previously identified as a candidate low-luminosity AGN based on broad-band photometry by O23. We measure a FWHM of \(2038\pm 286\) km s\({}^{-1}\) for the broad H\(\alpha\) component, resulting in a BH mass of \(M_{\rm BH}=1.3\pm 0.4\times 10^{7}\,M_{\odot}\), making this the least-massive BH known in the universe at the end of cosmic reionization.

Figure 7: The BH mass versus stellar mass relation of CEERS 1670 (red) and CEERS 3210 (orange; \(A_{V}=4\)). Circle symbols show the \(z>6\) quasar samples compiled by Izumi et al. (2021): brighter ones with \(M_{1450}<-25\) mag (blue) and fainter ones with \(M_{1450}>-25\) mag (cyan). The gray and green cross symbols are the observational samples in the local universe provided by Kormendy & Ho (2013) and Reines & Volonteri (2015), respectively. The diagonal dashed lines represent \(M_{\rm BH}/M_{\star}=0.1\), 0.01, and \(10^{-3}\).
The second source, CEERS 3210 at \(z=5.624\), has a blue continuum spectrum at short wavelengths (\(\lambda_{\rm obs}<3\,\mu\)m) and a steep spectral slope at longer wavelengths. The SED shape suggests that this source is a broad-line AGN possibly caught in a transition phase between a dust-obscured starburst and an unobscured quasar. We measure a FWHM of \(1807\pm 207\) km s\({}^{-1}\) for the broad H\(\alpha\) component, resulting in a BH mass in the range of \(M_{\rm BH}\simeq 0.9-4.7\times 10^{7}\,M_{\odot}\), depending on the level of dust obscuration assumed.
We derive upper limits on the host mass of each AGN and place constraints on the \(M_{*}\)-\(M_{\rm BH}\) relationship in the lowest mass range yet probed in the early universe. We find the host of CEERS 1670 is limited to \(M_{*}<6.0\times 10^{9}\,M_{\odot}\), while the host mass of CEERS 3210 can be an order of magnitude larger (\(6.0\times 10^{10}\,M_{\odot}\)) if we assume a visual extinction of \(A_{V}=4.0\), as inferred from our SED fitting. The \(M_{\rm BH}/M_{*}\) ratio for CEERS 1670, in particular, is consistent with or higher than the empirical relationship seen in massive galaxies at \(z=0\), but is overmassive compared to the BH-to-galaxy mass ratio measured for nearby broad-line AGN whose virial BH masses are estimated to be as low as that of CEERS 1670.
We examine the narrow emission-line ratios of both sources and find that their location on the BPT and OHNO diagrams is consistent with model predictions for moderately low-metallicity AGN with \(Z/Z_{\odot}\simeq 0.2-0.4\). The fact that neither source is X-ray detected and their emission line ratios in the BPT diagram are virtually indistinguishable from star-forming galaxies observed at similar redshifts means that their broad-line emission may be one of the few ways to detect these AGN. Other possible approaches include diagnostics with high-ionization lines (e.g., He and Ne) (Feltre et al., 2016; Nakajima and Maiolino, 2022). Preselection with photometric colors may also be useful to select fast-growing BHs with \(M_{\rm BH}\sim 10^{6-7}\,\,M_{\odot}\) in metal-poor environments (Inayoshi et al., 2022).
The spectroscopic discovery of two low-luminosity, low-mass AGN at \(z>5\) demonstrates the capabilities of JWST to push the BH mass limit closer to the range predicted for the BH seed population. Future work to uncover these low-luminosity AGN, which are the dominant BH population at high redshift, will be the key to further constraining their abundance and the early growth history of SMBHs and their host galaxies.
## 8 Acknowledgments
This work is supported by NASA grants JWST-ERS-01345 and JWST-AR-02446 and based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. This work also made use of the Rainbow Cosmological Surveys Database, which is operated by the Centro de Astrobiologia (CAB/INTA), partnered with the University of California Observatories at Santa Cruz (UCO/Lick, UCSC).
We also acknowledge support from the National Natural Science Foundation of China (12073003, 12150410307, 12003003, 11721303, 11991052, 11950410493), and the China Manned Space Project Nos. CMS-CSST-2021-A04 and CMS-CSST-2021-A06. PGP-G acknowledges support from Spanish Ministerio de Ciencia e Innovacion MCIN/AEI/10.13039/501100011033 through grant PGC2018-093499-B-I00.
AG acknowledges financial contribution from the grant PRIN INAF 2019 (RIC) 1.05.01.85.09: "New light on the Intergalactic Medium (NewIGM)" and support from PRIN MIUR project "Black Hole winds and the Baryon Life Cycle of Galaxies: the stone-guest at the galaxy evolution supper", contract 2017-PH3WAT.
Figure 8: The AGN luminosity function at \(z\sim 5\) based on CEERS 1670 (red). The 1-\(\sigma\) errors have been derived using the low-number count statistics of Gehrels (1986). The binned luminosity functions from the literature are shown for AGN (McGreer et al., 2018; Giallongo et al., 2019; Grazian et al., 2020; Niida et al., 2020) and Lyman break galaxies (Harikane et al., 2022; Bouwens et al., 2021). The short-dashed line represents the parametric luminosity function of Niida et al. (2020) and the long-dashed line is from Finkelstein and Bagley (2022) without a correction term from a double-power law function (i.e., \(\delta=0\) in their Equation 1). |
2309.12002 | Lateral Solid Phase Epitaxy of Yttrium Iron Garnet | Solid phase epitaxy is a crystallization technique used to produce high quality thin films. Lateral solid phase epitaxy furthermore enables the realization of non-planar structures, which are interesting, e.g., in the field of spintronics. Here, we demonstrate lateral solid phase epitaxy of yttrium iron garnet over an artificial edge, such that the crystallization direction is perpendicular to the initial seed. We use single crystalline garnet seed substrates partially covered by a SiOx film to study the lateral crystallization over the SiOx mesa. The yttrium iron garnet layer retains the crystal orientation of the substrate not only when in direct contact with the substrate, but also across the edge on top of the SiOx mesa. By controlling the crystallization dynamics it is possible to almost completely suppress the formation of polycrystals and to enable epitaxial growth of single crystalline yttrium iron garnet on top of mesas made from arbitrary materials. From a series of annealing experiments, we extract an activation energy of 3.0 eV and a velocity prefactor of $6.5 \times 10^{14}$ nm/s for the lateral epitaxial crystallization along the <100> direction. Our results pave the way to engineer single crystalline non-planar yttrium iron garnet structures with controlled crystal orientation. | Sebastian Sailler, Darius Pohl, Heike Schlörb, Bernd Rellinghaus, Andy Thomas, Sebastian T. B. Goennenwein, Michaela Lammel | 2023-09-21T12:17:34Z | http://arxiv.org/abs/2309.12002v2 | # Lateral Solid Phase Epitaxy of Yttrium Iron Garnet
###### Abstract
Solid phase epitaxy is a crystallization technique used to produce high quality thin films. Lateral solid phase epitaxy furthermore enables the realization of non-planar structures, which are interesting, e.g., in the field of spintronics. Here, we demonstrate lateral solid phase epitaxy of yttrium iron garnet over an artificial edge, such that the crystallization direction is perpendicular to the initial seed. We use single crystalline garnet seed substrates partially covered by a SiO\({}_{x}\) film to study the lateral crystallization over the SiO\({}_{x}\) mesa. The yttrium iron garnet layer retains the crystal orientation of the substrate not only when in direct contact with the substrate, but also across the edge on top of the SiO\({}_{x}\) mesa. By controlling the crystallization dynamics it is possible to almost completely suppress the formation of polycrystals and to enable epitaxial growth of single crystalline yttrium iron garnet on top of mesas made from arbitrary materials. From a series of annealing experiments, we extract an activation energy of 2.8 eV and a velocity prefactor of \(5.1\times 10^{13}\) nm/s for the lateral epitaxial crystallization along the <100> direction. Our results pave the way to engineer single crystalline non-planar yttrium iron garnet structures with controlled crystal orientation.
## I Introduction
Epitaxy is one of the most commonly used techniques for obtaining single crystalline thin films.[1; 2] As a subset, solid phase epitaxy (SPE) describes the phase transition of an amorphous solid to its crystalline form while in contact with a crystalline seed of a similar or identical lattice parameter.[3] This causes the crystallization to start from the interface with the seed material and results in a single crystalline thin film of the same crystal orientation as the seed.[3]
A special type of SPE is lateral solid phase epitaxy (LSPE), where the crystallization direction is perpendicular to the initial seed surface normal.[4; 5] Initially, lateral solid phase epitaxy was developed for the fabrication of silicon-on-insulator structures and has been an important technological step for the semiconductor industry.[4; 6; 7] Therefore, the SPE of silicon and germanium has been studied most comprehensively.[8; 9; 10; 11] Recently, the lateral crystallization of oxide thin films has gained increasing interest and has been shown for Ba\({}_{0.6}\)Sr\({}_{0.4}\)TiO\({}_{3}\),[12] Nb:TiO\({}_{2}\)[13] and SrTiO\({}_{3}\).[14]
In this paper we investigate the oxide compound yttrium iron garnet (Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), YIG). Its ferrimagnetic properties,[15] combined with a long spin diffusion length[16; 17] as well as an exceptionally low Gilbert damping and a low coercive field[18; 19] make it a prototypical material in the field of magnetism and spintronics.[20]
In these areas the focus in recent years was expanded towards non-planar, three-dimensional and curved magnetic structures,[21; 22; 23; 24] as curvature was reported to induce novel phenomena like curvature-induced anisotropy[25] or the Dzyaloshinskii-Moriya interaction.[26] These phenomena in turn are predicted to lead to a variety of resulting effects, for example to spin-wave nonreciprocities[27; 28] or magnetochiral effects.[29; 30]
Realizing non-planar magnetic structures is therefore highly desirable. However, the deposition techniques commonly used for the fabrication of YIG, like pulsed laser deposition and magnetron sputtering, typically yield planar thin films.
In this work, we report the lateral solid phase epitaxy of YIG over an artificial mesa on top of crystalline seed substrates. From systematic annealing experiments, we extract the activation energy as well as the crystallization velocity which allow for a full description of the lateral solid phase epitaxy. Our results pave the way for experiments on more sophisticated non-planar structures of one of the prototypical magnetic materials for spintronics.
## II Methods
All films discussed in this publication were deposited using radio-frequency magnetron sputtering at room temperature in an AJA International sputtering system.
For the lateral crystallization experiments, we used yttrium aluminum garnet (Y\({}_{3}\)Al\({}_{5}\)O\({}_{12}\), YAG, _CrysTec_) substrates with the <111> crystal orientation being parallel to the surface normal as well as two types of gadolinium gallium garnet (Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\), GGG, _SurfaceNet_) substrates, where the crystal orientation along the surface normal is either <111> or <001>. Since GGG and YAG crystallize in the same space group Ia\(\bar{3}\)d as YIG and their lattice parameters are comparable to those of YIG (a\({}_{GGG}=1.2376\) nm,[31] a\({}_{YAG}=1.2009\) nm,[32] a\({}_{YIG}=1.2380\) nm[33]), they are considered closely lattice matched. Before the sputtering process all substrates were cleaned for five minutes in acetone and isopropanol, and one minute in deionized water in an ultrasonic bath.
To create an artificial mesa on the substrate surface, we first sputter a nominally 20 nm thick SiO\({}_{x}\) layer either from a SiO\({}_{2}\) sinter target or by reactive sputtering from a silicon target onto one of the garnet substrates (cp. Fig. 1(a)). The SiO\({}_{x}\) layer was deposited at a sputtering pressure of \(2.7\times 10^{-3}\) mbar: in a pure argon atmosphere at 150 W and a rate of 0.0208 nm/s when sputtered from the SiO\({}_{2}\) target, and in an argon-to-oxygen (13:4) atmosphere at 100 W and a rate of 0.0172 nm/s when reactively sputtered from the silicon target.
Several ways were utilized to fabricate the mesa from the SiO\({}_{\mathrm{x}}\) layer (cp. Fig. 1(b)). Some of the substrates were partially covered with Kapton tape before the SiO\({}_{\mathrm{x}}\) sputtering process, which yielded the desired structure by simply removing the tape afterwards. While this path is simple, it leads to comparably rough edges. To improve the edge quality of the SiO\({}_{\mathrm{x}}\) mesa, optical lithography and subsequent etching of the SiO\({}_{\mathrm{x}}\) were utilized. The form of the mesa was first defined in photoresist by a Smart Print (_Smartforce Technologies_) and, after developing, transferred into the SiO\({}_{\mathrm{x}}\) by physical etching with an SF\({}_{\mathrm{6}}\) plasma in an Oxford Instruments RIE system. To counteract any potential damage to the substrate from the etching step, it was annealed at 700 \({}^{\circ}\)C for 4 h. We also used chemical etching of the SiO\({}_{\mathrm{x}}\) stripe in buffered HF.
In a subsequent fabrication step, we then deposit the YIG film on top of the predefined SiO\({}_{\mathrm{x}}\) mesas as sketched in Fig. 1(c). As we want to investigate lateral crystallization, the nominal YIG thickness was chosen to be at least twice that of the SiO\({}_{\mathrm{x}}\) to ensure continuity of the YIG layer across the mesa edge. YIG was sputtered from a stoichiometric sinter target at \(2.7\times 10^{-3}\) mbar argon pressure and 80 W power, at a rate of 0.0135 nm\(/\)s. This results in a complete coverage of the SiO\({}_{\mathrm{x}}\) mesas with YIG, where the YIG film within the predefined trench is still in contact with the lattice matched substrate (cp. Fig. 1(c)).
To induce and observe crystallization of the YIG layer, the complete stack was then annealed at temperatures between 600 \({}^{\circ}\)C and 650 \({}^{\circ}\)C multiple times in a tube furnace under air. The expected crystallization behavior upon annealing is shown in Fig. 1(d-f). First, YIG starts crystallizing vertically from the lattice matched substrate via solid phase epitaxy (cp. Fig. 1(d)). After reaching the top edge of the film, the epitaxial, single crystalline YIG now acts as a seed for the amorphous YIG on SiO\({}_{\mathrm{x}}\) (cp. Fig. 1(e)). Starting from the edge of the mesa a lateral crystallization front is expected to move with a constant velocity (cp. Fig. 1(f)). The evolution of crystalline YIG across the mesa was observed in a scanning electron microscope (SEM, _Zeiss GeminiSEM_), where the crystalline region was analyzed via electron backscatter diffraction (EBSD).
Transmission electron microscopy was conducted using a JEOL JEM F200 operated at 200 kV acceleration voltage, equipped with a GATAN OneView CMOS camera for fast imaging. Local EDS analysis was performed using a dual 100 mm\({}^{2}\) window-less silicon drift detector.
## III Results and discussion
The selection of the ideal annealing temperature is crucial for the observation of lateral crystallization. In our previous work [34] we determined the parameters describing the vertical crystallization of YIG for different time and temperature pairs depending on the substrate. We demonstrated that epitaxial crystallization from a lattice matched seed becomes possible at temperatures below those required for the formation of polycrystals. Vice versa, the formation of polycrystalline YIG can be suppressed by using sufficiently low annealing temperatures. From our results we estimated that for \(T<660\)\({}^{\circ}\)C the formation of a fully polycrystalline film on SiO\({}_{\mathrm{x}}\) would take about 100 h. Avoiding the formation of polycrystalline grains is of great importance, as those would hinder the epitaxial crystallization. Please note that nucleation is a thermally activated process and therefore statistically also possible at lower temperatures. During our study we found that nucleation was more likely to occur if there were external nucleation sites in the form of particles on the respective sample, demonstrating the need for clean surfaces. However, the annealing temperature has to be sufficiently high, since the crystallization rate depends exponentially on the temperature (cp. Eq. (1)). Below 540 \({}^{\circ}\)C the crystallization via solid phase epitaxy of YIG does not occur in reasonable times (t > 100 h). Therefore, we are confined to a temperature range of about 120 \({}^{\circ}\)C (540 \({}^{\circ}\)C - 660 \({}^{\circ}\)C) to study the lateral crystallization of YIG.
Heyroth et al. [23] showcased a single crystalline YIG bridge fabricated by coating a resist template with YIG deposited via pulsed laser deposition. After lift-off, the bridge was annealed at 800 \({}^{\circ}\)C for 3 h. However, we do not expect their annealing process to be easily transferable to larger, sputtered YIG structures, as we see significant formation of polycrystalline YIG above 660 \({}^{\circ}\)C.
To mathematically describe the lateral solid phase epitaxy of YIG from a YIG seed we use a modified Arrhenius equation. [3, 35] This description assumes a homogeneous crystallization front that moves through an amorphous material starting from a crystalline seed of the same material. At a given temperature \(T\), this crystallization front is expected to move with a constant velocity (\(v=\frac{l}{t}\)). Here \(v\) is the lateral crystallization velocity, \(l\) the lateral crystallization distance and \(t\) the annealing time. The crystallization velocity itself depends exponentially on the temperature as well as the activation energy \(E_{A}\) and can be described by Eq. (1), [3, 35]
\[v=v_{0}\cdot e^{\frac{-E_{A}}{k_{B}T}} \tag{1}\]
Fig. 1: Sample preparation scheme for lateral crystallization. After coating the single crystalline substrate with SiO\({}_{\mathrm{x}}\) (a), a stripe is defined by optical lithography and subsequent etching (b). YIG is afterwards sputtered on top (c) and annealed at 600 \({}^{\circ}\)C - 650 \({}^{\circ}\)C, which induces the crystallization at the substrate interface (d). After vertically crystallizing (e) the crystallization front propagates laterally, over the edge of the SiO\({}_{\mathrm{x}}\) mesa (f).
where \(k_{B}\) is the Boltzmann constant and the prefactor \(v_{0}\) represents a maximal velocity.
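For orientation, a short sketch evaluating Eq. (1) with the parameters derived below for the <100> direction (\(E_{A}=2.8\) eV, \(v_{0}=5.1\times 10^{13}\) nm/s). Note the strong temperature sensitivity: the ratio between 650 \({}^{\circ}\)C and 600 \({}^{\circ}\)C reproduces the factor-of-seven speed-up reported in the conclusions, and the quoted \(\pm 0.2\) eV uncertainty in \(E_{A}\) alone corresponds to an order-of-magnitude spread in \(v\).

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant [eV/K]

def v_lspe(T_celsius, E_A=2.8, v0=5.1e13):
    """Lateral crystallization velocity from Eq. (1), in nm/s."""
    T = T_celsius + 273.15
    return v0 * np.exp(-E_A / (k_B * T))

for T in (600, 650):
    print(f"{T} C: v = {3600 * v_lspe(T):.1f} nm/h")

print(f"ratio v(650)/v(600) = {v_lspe(650) / v_lspe(600):.1f}")  # ~7.5
```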
To confirm LSPE of YIG and to determine the crystallization velocities \(v(T)\), we analyze our samples with electron backscatter diffraction (EBSD) for a sequence of annealing times. EBSD also allows us to exclude polycrystalline YIG on top of the SiO\({}_{\mathrm{x}}\) mesa. A secondary electron (SE) image taken across the edge of the mesa is depicted in Fig. 2(a). On the right, crystalline YIG is on top of the YAG substrate as depicted in the schematics above the SE image. The lateral crystallization front moves from the right across the mesa's edge, which can be seen in the middle of the image, towards the left. The area of crystallized YIG on top of the SiO\({}_{\mathrm{x}}\) mesa can be discerned as a change in gray level, which we ascribe to the height and density change upon crystallization.
To verify the formation of single crystalline YIG on YAG as well as across the mesa on SiO\({}_{\mathrm{x}}\), the SE image is superimposed with the results from the EBSD mapping (cp. Fig. 2(b)). The monochrome color confirms a single crystalline YIG crystallizing in the same out-of-plane direction as the substrate. This demonstrates that we achieved lateral solid phase epitaxy of 1 \(\upmu\)m YIG over an 18 nm high mesa. Furthermore, unlike in similar studies on Si and other oxides,[4, 5, 13, 14] we do not find any polycrystalline YIG seeds on SiO\({}_{\mathrm{x}}\) after 96 h at 600 \({}^{\circ}\)C.
To further investigate the single crystalline nature of the laterally crystallized YIG, transmission electron microscopy (TEM) was performed. Fig. 2(c) depicts a side view of the mesa structure as illustrated schematically in Fig. 1(f) and Fig. 2(a + b). The TEM images show a sharp edge of the deposited SiO\({}_{\mathrm{x}}\) and a rounder YIG edge on top (Fig. 2(c)). The TEM image supports the results of the SEM investigation, in that the YIG layer crystallizes epitaxially from the substrate and also laterally on top of the SiO\({}_{\mathrm{x}}\) layer. However, close to the SiO\({}_{\mathrm{x}}\) edge in Fig. 2(c), some imperfections in the single crystal can be resolved in the TEM image. Additionally, a rotation of the crystal is visible, which results in a slightly tilted crystal on top of the SiO\({}_{\mathrm{x}}\) as the tilted YIG crystallizing from the bottom acts as the new seed for the lateral crystallization. This rotation is most clearly visible in samples where the \([111]\) direction is parallel to the surface normal and takes place exclusively in the planes perpendicular to the \([1\overline{1}0]\) direction.
The distance up to which the YIG crystallizes laterally on top of the SiO\({}_{\mathrm{x}}\) can be extracted from both techniques, TEM and SEM. As the SEM allows for a fast measurement of the crystallization front, the lateral crystallization data is extracted from the SE images and EBSD data.
To determine the lateral crystallization velocities, each sample was annealed multiple times and the distance covered by the crystallization front was measured after each annealing step. At each time and temperature pair we analyze multiple images across the mesa edge and thereby obtain the respective average crystallization distance and its standard deviation. We then determine the lateral crystallization velocity from the slope of a linear fit to the data. In addition to the rate, a temporal offset can be extracted. This offset occurs due to a delayed start of the lateral crystallization, which we assume is the time the YIG layer needs to crystallize vertically before being able to act as a seed. Furthermore, this temporal offset can stem from an initial induction period after which the velocity follows a linear rate, as has been reported for other materials.[36]
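The extraction of rate and onset delay thus reduces to a linear regression of distance versus cumulative annealing time. A minimal sketch with made-up data points (chosen here to resemble the GGG series in Fig. 3, not the measured values):

```python
import numpy as np

t = np.array([24.0, 48.0, 72.0, 96.0])        # cumulative annealing time [h]
l = np.array([330.0, 900.0, 1460.0, 2030.0])  # mean crystallized distance [nm]

slope, intercept = np.polyfit(t, l, 1)
v = slope                        # lateral crystallization velocity [nm/h]
t_delay = -intercept / slope     # delay before lateral growth sets in [h]
print(f"v = {v:.1f} nm/h, delay = {t_delay:.1f} h")   # ~23.6 nm/h, ~10 h
```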
Fig. 2: Observation of lateral crystallization of YIG on a patterned YAG/SiO\({}_{\mathrm{x}}\) substrate after annealing for 96 h at 600 \({}^{\circ}\)C. (a) Secondary electron (SE) image of the mesa etched into the SiO\({}_{\mathrm{x}}\), where the left side is elevated as sketched. First, the YIG layer on top of the YAG substrate crystallizes vertically towards the top of the sample and thereby changes from the as-deposited state (a.d.) into a single crystalline (sc) YIG (right side of the image). Once the YIG reaches the top edge of the sample, it crystallizes laterally via LSPE from right to left onto the SiO\({}_{\mathrm{x}}\) layer. The formation of crystalline YIG is accompanied by a change in contrast. Superimposing the SE image from (a) with the EBSD mapping (b) confirms the single crystalline nature of the YIG layer also in the lateral crystallization regime. The TEM investigation in (c) further verifies the lateral solid phase epitaxy of YIG on top of the SiO\({}_{\mathrm{x}}\). Directly next to the sharp edge of the SiO\({}_{\mathrm{x}}\) mesa a slight rotation in the YIG crystal orientation can be seen, which transfers to the single crystalline YIG on top of the SiO\({}_{\mathrm{x}}\), as it grows epitaxially from the YIG seed.
Figure 3 shows the lateral crystallization velocity of YIG at 600 \({}^{\circ}\)C when using <111> oriented YAG and GGG as seed substrates. We extract a lateral crystallization velocity of \(v_{\rm{YAG}}\) = 10.3 nm/h (0.003 nm/s) for YIG on YAG and \(v_{\rm{GGG}}\) = 23.7 nm/h (0.007 nm/s) for YIG on GGG. On YAG the time delay before lateral crystallization is 3 h and on GGG 10.2 h.
Since the lateral crystallization starts from a YIG seed for either of the substrates, one might expect the same lateral crystallization velocity. However, the different velocities suggest that the substrate indeed influences the maximal crystallization velocity. On the one hand, this behavior might originate from a different crystalline quality of the vertically crystallized YIG on the two substrates. Since GGG exhibits a lower lattice mismatch than YAG, it is expected to lead to a higher quality YIG film by epitaxy. On the other hand, the crystallization was not perfectly epitaxial near the SiO\({}_{\rm{x}}\) mesa (cp. Fig. 2(c)), which could influence the initial crystallization as well as the final velocity. In the course of this work we observed that the final velocity depends on surface and mesa edge quality.
To further substantiate our results, we compare our lateral crystallization velocity to the vertical crystallization velocity of YIG, which we reported in earlier work [34]. Compared to the vertical crystallization velocity of 50.4 nm/h (0.014 nm/s) for a YIG thin film on GGG, the lateral crystallization of YIG is five times slower on YAG and two times slower on GGG. Since the vertical crystallization of YIG on GGG was confirmed to be epitaxial, it allows for a good comparison with the LSPE of YIG. This behavior, that the lateral crystallization is slower than the vertical one, is naively unexpected, but was also reported for silicon, where the lateral crystallization was four to eight times slower compared to the vertical direction [4]. In silicon this behavior was ascribed to the formation of facets and defects in the laterally crystallized silicon.
Another reason for the slower lateral crystallization velocity could also be a dependence on the crystal direction along which the YIG crystallizes. From studies on silicon it is known that differences in vertical crystallization velocity [8, 9] are transferred into the lateral crystallization [6]. Such a crystal orientation dependence of the crystallization velocities has also been reported for bulk YIG [37, 38, 39].
Hence, to compare the lateral crystallization velocities extracted here with the ones from our previous work, which describe the vertical crystallization, the direction along which the crystallization takes place needs to be taken into account [40]. Together with the different seed substrates, this direction dependence could play a role in the factor-of-two difference between the lateral and the vertical crystallization velocity for YIG on GGG.
In addition to the possibility of a direction dependence, the lateral solid phase epitaxy of YIG is expected to depend exponentially on the temperature, as described in Eq. (1).
To investigate both the crystal orientation dependence and the temperature dependence, multiple samples were prepared on GGG substrates with the [001] and [111] directions parallel to the surface normal. These orientations allow for the investigation of the lateral crystallization velocity along the [010], [100], [1\(\overline{1}\)0] and [11\(\overline{2}\)] directions. As YIG is a cubic system, these will be referred to as their equivalents <100>, <110> and <112>. The crystallization along these directions was evaluated for samples annealed at different temperatures of 600 \({}^{\circ}\)C, 625 \({}^{\circ}\)C, 637 \({}^{\circ}\)C and 650 \({}^{\circ}\)C.
Although the formation of some polycrystalline grains could be seen with increasing temperature, this did not hinder the lateral crystallization via LSPE up to a distance of 2.2 \(\upmu\)m.
Fig. 4 shows the results of these series over the annealing temperature. Each velocity shown in Fig. 4 is extracted from a series like the one shown in Fig. 3. From the semi-logarithmic plot, a linear dependence of the lateral crystallization velocity on the inverse temperature can be seen. As all the samples are made of the same material, i.e., YIG, we expect one single activation energy for all directions,[9; 10; 11; 35; 42] which we extract from the slope of a linear fit to all velocities. This results in an activation energy of \(E_{A}\) = 2.8 \(\pm\) 0.2 eV for the lateral crystallization of YIG.

Fig. 3: Lateral crystallization velocities for YIG at 600 \({}^{\circ}\)C on two different seed substrates. After annealing for 24 h the lateral crystallization distance was extracted from multiple SEM images of different areas over the mesa edge, and the sample was annealed again. The average lateral crystallization distance with the standard deviation is depicted over the annealing time. A linear fit to the data was used to extract the lateral crystallization rate and the delay before the onset of lateral crystallization. While YIG starts to grow earlier on YAG, it shows a slower rate of 10.3 nm/h compared to the 23.7 nm/h on GGG.
In contrast to the reports for the formation of bulk YIG,[39] however, we find no significant difference in the maximal lateral crystallization velocity depending on crystal orientation. For the formation of bulk YIG from the liquid phase it was reported that facets along the <110> and <112> directions are the thermodynamically most stable, while the <111> direction was described to grow fastest.[38; 39; 43] There, YIG was found to crystallize up to 10 times faster along the <111> than along the <110> direction,[38] while the crystallization velocities along the <110> and <112> directions were found to behave very similarly.[39]
For the LSPE of our sputtered thin films with an activation energy of 2.8 eV we find prefactors of \(v_{0}\)(<100>) = \(5.1\times 10^{13}\) nm/s, \(v_{0}\)(<110>) = \(4.9\times 10^{13}\) nm/s and \(v_{0}\)(<112>) = \(5.2\times 10^{13}\) nm/s. Tolksdorf et al. reported a very similar growth behavior of LPE-grown YIG for facets along the <110> and <112> directions, with the <112> direction being slightly faster, which we find here as well.[39] No quantitative literature data could be found for the crystallization along the <100> direction. Further studies involving lateral growth along the faster crystallizing <111> direction[38; 39] could help to verify an orientation dependence of lateral YIG growth.
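The Arrhenius analysis behind these numbers is a linear fit of \(\ln v\) versus \(1/T\), whose slope yields \(-E_{A}/k_{B}\) and whose intercept yields \(\ln v_{0}\). A sketch with illustrative velocities (the two endpoint values are taken from the text; the two intermediate ones are hypothetical):

```python
import numpy as np

k_B = 8.617e-5  # [eV/K]

T = np.array([600.0, 625.0, 637.0, 650.0]) + 273.15  # annealing temperatures [K]
v = np.array([23.7, 66.0, 105.0, 173.1]) / 3600.0    # velocities [nm/s]

slope, ln_v0 = np.polyfit(1.0 / T, np.log(v), 1)
E_A = -slope * k_B
v0 = np.exp(ln_v0)
print(f"E_A = {E_A:.2f} eV, v0 = {v0:.1e} nm/s")  # ~2.8 eV, ~6e13 nm/s
```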
Both the activation energies \(E_{A}\) and the prefactors \(v_{0}\) are in good agreement with the literature for solid phase epitaxy, see Tab. 1. Compared to the model systems of silicon, germanium and SrTiO\({}_{3}\), the activation energy for YIG is higher, while the crystallization velocities are in a similar order of magnitude as for silicon and germanium. The epitaxial crystallization process of YIG thus seems to be more similar to that of elemental Si and Ge than to that of the oxide SrTiO\({}_{3}\).
Additionally, our activation energy of \(E_{A}\) = 2.8 \(\pm\) 0.2 eV for epitaxial YIG compares well with previously reported values. Specifically, investigations of YIG thin films on GGG revealed an activation energy of 3.93 eV.[34] For the formation of bulk, polycrystalline YIG from oxide powders, a value of 5.08 eV was reported.[44] Chen et al.[41] report that the activation energy for polycrystalline SrTiO\({}_{3}\) is half of that of epitaxial SrTiO\({}_{3}\), which is in good agreement with our findings for the YIG thin films. The activation energy for solid phase epitaxy of \(E_{A}\) = 2.8 eV is also roughly half of the 5.08 eV found for the oxide powders, and reduced compared to the value for vertical crystallization on GGG substrates. We therefore conclude that the lateral solid phase epitaxy of YIG is described by an activation energy of \(E_{A}\) = 2.8 eV and, for the directions <100>, <110> and <112>, by \(v_{0}\) values of \(5.1\times 10^{13}\) nm/s, \(4.9\times 10^{13}\) nm/s and \(5.2\times 10^{13}\) nm/s, respectively.
## IV Conclusion
To assess the lateral solid phase epitaxy of YIG, we defined SiO\({}_{\mathrm{x}}\) mesa structures on top of single crystalline garnet substrates, which were subsequently covered by an amorphous YIG layer by room temperature sputtering. By carefully choosing the annealing temperature we were able to laterally crystallize up to 2.2 \(\upmu\)m of single crystalline YIG on top of an amorphous SiO\({}_{\mathrm{x}}\) layer. At 600 \({}^{\circ}\)C on GGG a crystallization velocity of 23.7 nm/h was found, which increased by a factor of seven to 173.1 nm/h at 650 \({}^{\circ}\)C. By extracting multiple lateral crystallization velocities at different temperatures and along different crystal orientations, we confirmed an exponential dependence on temperature, as expected for LSPE. The resulting crystallization parameters are summarized in Tab. 1; the crystallization velocity we derive is independent of the crystal orientation of the seed. The understanding of these dynamics allows for the controlled and precise manufacturing of single crystalline YIG thin films of micrometer length scales on arbitrary substrates and therefore paves the way for sophisticated non-planar structures.
## V Acknowledgments
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 446571927 and via the SFB 1432 - Project-ID 425217212. We gratefully acknowledge technical support and advice by the nano.lab facility of the University of Konstanz. We acknowledge the use of the facilities in the Dresden Center for Nanoanalysis (DCN) at the Technische Universität Dresden and the support of Alexander Tahn.
|
2309.10615 | To clump or not to clump: The impact of wind inhomogeneities on the optical and NIR spectroscopic analysis of massive OB stars | Winds of massive stars have density inhomogeneities (clumping) that may affect the formation of spectral lines in different ways, depending on their formation region. Most of previous and current spectroscopic analyses have been performed in the optical or ultraviolet domain. However, massive stars are often hidden behind dense clouds rendering near-infrared observations necessary. Our objective is to investigate whether a spectroscopic analysis using either optical or infrared observations results in the same stellar parameters with comparable accuracy, and whether clumping affects them in different ways. We analyzed optical and near-infrared observations of a set of massive O stars with spectral types O4-O9.5 and all luminosity classes. We obtain similar stellar parameters in the optical and the infrared, although with larger uncertainties in the near-infrared, both with and without clumping, albeit with some individual deviating cases. We find that the inclusion of clumping improves the fit to H$_\alpha$ or HeII 4686 in the optical for supergiants, as well as that of Br$_\gamma$ in the near-infrared, but it sometimes worsens the fit to HeII 2.18$\mu$m. Globally, there are no significant differences when using the clumping laws tested in this work. The infrared can be used for spectroscopic analyses, giving similar parameters as from the optical, though with larger uncertainties. The best fits to different lines are obtained with different (linear) clumping laws, indicating that the wind structure may be more complex than adopted in the present work. No clumping law results in a better global fit, or improves the consistency between optical and infrared stellar parameters. Our work shows that the optical and infrared lines are not sufficient to break the dichotomy between the mass-loss rate and clumping factor. | K. Rübke, A. Herrero, J. Puls | 2023-09-19T13:45:39Z | http://arxiv.org/abs/2309.10615v1 | # To clump or not to clump
###### Abstract
Context: Winds of massive stars have density inhomogeneities (clumping) that may affect the formation of spectral lines in different ways, depending on their formation region. Most previous and current spectroscopic analyses have been performed in the optical or ultraviolet domain. However, massive stars are often hidden behind dense clouds, rendering near-infrared observations necessary. It is thus important to compare the results of such analyses and the effects of clumping in the optical and the near-infrared, where lines share most of the line formation region.
Aims: Our objective is to investigate whether a spectroscopic analysis using either optical or infrared observations results in the same stellar parameters with comparable accuracy, and whether clumping affects them in different ways.
Methods: We analyzed optical and near-infrared observations of a set of massive O stars with spectral types O4-O9.5 and all luminosity classes. We used Fastwind model atmospheres with and without optically thin clumping. We first studied the differences in the stellar parameters derived from the optical and the infrared using unclumped models. Based on a coarse model grid, different clumping stratifications were tested. A subset of four linear clumping laws was selected to study the differences in the stellar parameters derived from clumped and unclumped models, and from the optical and the infrared wavelength regions.
Results: We obtain similar stellar parameters in the optical and the infrared, although with larger uncertainties in the near-infrared, both with and without clumping, albeit with some individual deviating cases. We find that the inclusion of clumping improves the fit to H\({}_{\alpha}\) or He ii 4686 in the optical for supergiants, as well as that of Br\({}_{\gamma}\) in the near-infrared, but it sometimes worsens the fit to He ii 2.18 \(\mu\)m. Globally, there are no significant differences when using the clumping laws tested in this work. We also find that the high-lying Br lines in the infrared should be studied in more detail in the future.
Conclusions: The infrared can be used for spectroscopic analyses, giving similar parameters to those from the optical, though with larger uncertainties. The best fits to different lines are obtained with different (linear) clumping laws, indicating that the wind structure may be more complex than adopted in the present work. No clumping law results in a better global fit, or improves the consistency between optical and infrared stellar parameters. Our work shows that the optical and infrared lines are not sufficient to break the dichotomy between the mass-loss rate and clumping factor.
## 1 Introduction
The evolution of massive stars is an intricate subject. These relatively scarce objects evolve through various and sometimes extreme stages such as blue super- and hypergiants, luminous blue variables, Wolf-Rayet stars, and red supergiants, reaching (in most cases) their maximum luminosity when dying as supernovae before becoming compact objects such as neutron stars and black holes, or just a diffuse remnant evidencing the explosion (Langer, 2012). Moreover, they are usually born in double or multiple systems (Sana et al., 2012) whose components may interact along their evolution, adding new possibilities to the evolutionary zoo: stars that have been spun up, stars stripped from their outer layers, stars that have been violently ejected from their system and travel through space as walk- or runaways, high-mass X-ray and \(\gamma\)-ray binaries, or combinations of neutron stars and black holes in binary systems that may emit gravitational waves (e.g., de Mink et al., 2013; Gotberg et al., 2018; Renzo et al., 2019; Langer et al., 2020, 2020; Sander, 2019; Abbott et al., 2022)
Being powerful sources of energy and matter, these stars have a strong impact on their surroundings and even on their host galaxy, whose chemical and mechanical evolution is affected. Moreover, our interpretation of the spectra or the population diagrams of the host galaxy depends on our correct understanding of its present and past massive star population (Wang et al., 2020; Menon et al., 2021).
Advances in our modeling of the different evolutionary stages require that the physical parameters of the stars are accurately known, which means correctly modeling the main relevant processes that dominate the evolution is necessary. It has long been realized that the process of mass loss has a strong impact on the evolution of these stars from the early phases onward (Chiosi & Maeder, 1986). Thus accurate knowledge of their mass-loss rates is crucial. For hot stars, the dominant mechanism producing the stellar wind is the scattering and absorption of energetic photons via spectral line transitions, and the corresponding momentum transfer onto the stellar plasma. The line-radiation-driven wind theory (Castor et al., 1975; Pauldrach et al., 1986) has been quite successful in explaining how mass is driven away from the stellar surface by the radiation field. The actual size of the mass-loss rate, however, is still debated to date, and there might be uncertainties within a factor of about three, with significant discrepancies regarding the derived values when using different diagnostic tools (e.g., Fullerton et al., 2006).
The main reason for these uncertainties (at least in the earlier phases of massive stellar evolution) is the wind structure. Because of the intrinsic instability of the line-driving process, the so-called line deshadowing instability (LDI; e.g., Owocki & Rybicki, 1984; Feldmeier, 1995; Sundqvist & Owocki, 2013; and already Lucy & Solomon, 1970), the stellar wind is predicted to deviate from homogeneity. Most likely, it is strongly structured, forming clumps of high density separated by an inter-clump medium which is rarefied or even almost void. The effect of this structure on the line profiles used as diagnostic tools is different for resonance lines (usually observed in the ultraviolet, and with an opacity that depends linearly on density) and for recombination lines (usually observed in the optical or near-infrared, with an opacity that depends on density quadratically). In addition, and due to the Doppler effect, the spatial distribution of the velocity also plays a role in allowing photons to escape ("vorosity" effect, Owocki, 2008).
This density structure, or clumping, is currently modeled within two flavors of approximation. In the first one, known as micro- or optically thin clumping, and firstly implemented (in its current description) into a non-local thermodynamic equilibrium (NLTE) atmosphere code by Schmutz (1995), the light interacts with the wind-plasma only within the overdense clumps, which are adopted to be optically thin for all considered processes. This assumption is usually justified for recombination line processes such as H\({}_{\alpha}\) in not too dense winds. In the alternative approximation, known as macro- or optically thick clumping (see Owocki et al., 2004; Oskinova et al., 2007; Owocki, 2008; Surlan et al., 2013; Sundqvist et al., 2010, 2011, 2014), the actual optical depth of the clumps for the considered process is (or needs to be) accounted for; for example, even if a clump may be optically thin in H\({}_{\alpha}\), it is most likely optically thick for a (UV) resonance line.
In the optically thick case, the light is affected by porosity effects (both in physical space for continua and in velocity space for lines), which usually allow for increased photon escape through the interclump medium\({}^{1}\). Compared to the average opacity resulting from the assumption of optically thin clumping, the effective opacity in optically thick clumps decreases\({}^{2}\), leading to potential de-saturation effects, particularly in UV resonance lines (Oskinova et al., 2007). Moreover, in such a situation, a non-void interclump medium also plays a decisive role, not only for opening porosity channels, but also for providing additional opacity to allow for saturated UV resonance lines which would otherwise become (in contrast to observations) desaturated (Sundqvist et al., 2010).
Footnote 1: a very instructive illustration can be found in Brands et al. (2022)
Footnote 2: though on an absolute scale, the effective opacity also increases with increasing absorber density, until a certain saturation threshold is reached (Owocki et al., 2004; Sundqvist et al., 2010, 2011)
Clumping has a severe effect on the derived mass-loss rates. When recombination lines are used as diagnostics, their emission (and absorption) increases in the clumps with the square of the density. In addition, since the average of the square is larger than the square of the average, the actual mass-loss rate is lower than the one obtained when adopting a homogeneous medium. When resonance lines are used, the effect of over- and under-densities (almost) cancels out within the micro-clumping approach, and the derived mass-loss rate remains unaffected. When, for resonance lines, optically thick clumping is accounted for, the actual mass-loss rate may be larger than the one obtained from both a micro-clumped and a homogeneous medium.
The distribution of clumping as a function of distance from the star or velocity (which is usually adopted to increase monotonically, but see Sander et al., 2023) has been studied by several authors using different diagnostics that probe different wind regions, broadly moving to longer wavelengths to probe outer regions (e.g., Puls et al., 2006; Najarro et al., 2011; Bouret et al., 2012; Rubio-Diez et al., 2022). They agree that clumping starts close to the photosphere and increases up to a maximum, remaining constant or decreasing in the intermediate and outermost regions. The degree of clumping, that is, the maximum contrast between the density in the clumps and the density in a homogeneous medium with the same mean density, has also been studied by these and other authors (e.g., Hawcroft et al., 2021; Brands et al., 2022), with values that range from three to 20 for Galactic stars, or at least for stars with high mass-loss rates (for lower-metallicity stars, see Brands et al., 2022).
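The practical consequence for recombination-line diagnostics can be summarized in one line: with optically thin clumping, \(\langle\rho^{2}\rangle=f_{\rm cl}\,\langle\rho\rangle^{2}\), so a smooth-wind fit to, e.g., H\({}_{\alpha}\) overestimates the true mass-loss rate by a factor \(\sqrt{f_{\rm cl}}\). A minimal sketch with a hypothetical smooth-wind value:

```python
import numpy as np

def mdot_true(mdot_smooth, f_cl):
    """Mass-loss rate implied by a smooth-wind recombination-line fit,
    once a clumping factor f_cl (optically thin clumps) is assumed."""
    return mdot_smooth / np.sqrt(f_cl)

mdot_smooth = 2.0e-6           # hypothetical smooth-wind result [M_sun/yr]
for f_cl in (1, 4, 10, 20):    # spans the Galactic range quoted above
    print(f"f_cl = {f_cl:2d}: Mdot = {mdot_true(mdot_smooth, f_cl):.1e} M_sun/yr")
```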
Massive stars are often hidden behind dense clouds of gas and dust, either local to them and their star-forming regions, or as a result of the accumulated matter in their direction. Therefore, it is often necessary to observe them in the near-infrared (NIR), where extinction is less severe than in the optical. This is particularly true for our Galaxy, where the high extinction in the Galactic Plane hides a significant number of massive stars, rendering NIR observations a key tool for their study.
In this paper we aim to study the effect of clumping on the stellar parameter determination when using optical or NIR diagnostic lines, as well as the consistency of the parameters obtained from the two wavelength domains. Our study has been done in the approximation of micro-clumping, as recent studies have shown that macro-clumping has no significant effect on the recombination lines (Sundqvist & Puls, 2018; Hawcroft et al., 2021; Brands et al., 2022). Moreover, we used different clumping distributions that have been proposed in the literature. To this end, we analyzed Galactic O-type stars with spectral types O4-O9.5 and luminosity classes from I to V. The stars have been observed in the optical and infrared with a high resolving power and high signal-to-noise ratio (S/N).
We present the data used for our study in Sect. 2, and our methodology in Sect. 3. In Sect. 4, we explain how we derived the stellar parameters when adopting a homogeneous wind, both in the optical and the NIR. In Sect. 5, we explore the effects of clumping on the stellar parameters, using different clumping distributions on a test model grid. In Sect. 6, we analyze the observed stars again, now with clumping. Sect. 7 discusses the impact of clumping on the analysis results. Conclusions are presented in Sect. 8.
## 2 The data
For our work, we selected those O stars from the NIR catalog by Hanson et al. (2005) that were also present in the IACOB Spectroscopic Database (Simon-Diaz et al., 2011) at the beginning of our project (see Table 1). The Hanson et al. spectra were obtained with the Infra-Red Camera and Spectrograph (IRCS) mounted at the Cassegrain focus of the 8.2 m Subaru Telescope at Mauna Kea, Hawaii, in the H and K bands with a resolving power R\(\sim\)12 000 and signal-to-noise S/N\(\sim\)200-300. They cover specific regions of the H and K bands: 1.618-1.661, 1.667-1.711, 1.720-1.765, 2.072-2.123, 2.152-2.205, 2.238-2.293 and 2.331-2.388 \(\mu\)m. Although the wavelength coverage is not complete, the main H and He lines in the NIR are present. IACOB spectra were obtained with the Fibre-fed Echelle Spectrograph (FIES) attached to the Nordic Optical Telescope (NOT) with S/N \(\geq\) 150 and \(R\sim 46\,000\), covering the full range from 3710 to 7270 A. Details on the observations and data reduction can be found in the references above. The sample covers the range of O spectral types from O4 to O9.5 and all luminosity classes. According to Holgado et al. (2022), most stars show line profile variations, but only one is classified as SB1 (HD 30614, \(\alpha\) Cam). Thus we assume that the spectra are not significantly contaminated by companions. Although some spectral types are under-represented (like mid-type supergiants or cool late-type dwarfs), the sample as a whole provides a good testbed for the global behavior of O-type stars (see Fig. 1 for an example of the available data).
## 3 Methodology
To determine optical and NIR parameters, we use two main tools: a full grid of synthetic optical and near-infrared spectra, and an automatic tool that allows us to determine the parameters for a large sample of stars. We generate the first one using the code Fastwind (Puls et al., 2005, version 10.1), covering the range of massive OB star parameters, with a grid of \(\sim\)100 000 models detailed below. To create this grid of models, we used the distributed computation system HTCondor\({}^{3}\). The second ingredient is iacob_gbat (Simon-Diaz et al., 2011; Sabin-Sanjulian et al., 2014, and Holgado et al., 2018, Appendix A), an automatic tool that allows us to fit the observed spectrum, returning the stellar/wind parameters corresponding to the best-fitting model (as defined by the methodology described in Sect. 3.2.1). Since our version of this algorithm has been designed for the optical range, we needed to expand it to the NIR.
Footnote 3: [http://research.cs.wisc.edu/htcondor/](http://research.cs.wisc.edu/htcondor/). The supercomputer facility HTCondor@Instituto de Astrofísica de Canarias consists of a cluster of 914 cores, each capable of running in parallel, enabling us to create a full grid of models within roughly one week.
### A model grid for optical/NIR FASTWIND analyses
The NLTE, line-blanketed and unified model atmosphere code Fastwind requires the atmospheric parameters as input. That is, for the description of the photosphere, we have to provide the effective temperature, \(T_{\rm eff}\), gravity, \(\log g\), radius, \(R_{*}\), microturbulent velocity, \(v_{\rm mic}\), and surface abundances. Wind parameters are the mass-loss rate, \(\dot{M}\), terminal velocity, \(v_{\infty}\), and the exponent \(\beta\) of the canonical \(\beta\)-velocity law, as well as a description of the inhomogeneous wind structure ("clumps"). Since for the considered parameter space all investigated features remain optically thin in the clumps (Sundqvist & Puls, 2018), we need to provide "only" the spatial stratification of the clumping factor4, \(f_{\rm cl}\), which describes the overdensities of the clumps with respect to the average wind density. Setting \(f_{\rm cl}\) to unity everywhere results in a smooth wind model.
Footnote 4: under the simplifying assumption of a void interclump medium, the inverse of the volume-filling factor
It is obvious that the combination of all these parameters would result in a huge number of models. To reduce that number, we constrain the stellar radius and the terminal velocity (from \(v_{\rm esc}\), see Kudritzki & Puls, 2000) using prototypical values (see Holgado et al., 2018), and calculate the mass-loss rate from the condition that the wind strength parameter (or optical depth invariant), \(Q=\dot{M}/(R_{*}v_{\infty})^{3/2}\), equals one of the grid values denoted in Table 2, where the units are \(M_{\odot}\) a\({}^{-1}\) for \(\dot{M}\), \(\rm km\,s^{-1}\) for \(v_{\infty}\), and \(R_{\odot}\) for \(R_{*}\). The quantity \(Q\) combines mass-loss rate, stellar radius, and wind terminal velocity in such a way that the emission in H\({}_{\alpha}\) (and other wind diagnostic lines, as long as they are recombination-dominated) can be shown to vary (almost) as a function of \(Q\) alone (see Puls et al., 1996; Repolust et al., 2005, Fig. 12; and Holgado et al., 2018, Appendix B).
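To make this definition concrete, the following minimal Python sketch (our illustration, not part of iacob_gbat; the stellar values are invented) shows how \(\dot{M}\) follows from a grid value of \(Q\) once \(R_{*}\) and \(v_{\infty}\) are fixed:

```python
import numpy as np

def log_Q(mdot, r_star, v_inf):
    """log10 of the wind-strength parameter Q = Mdot / (R_* v_inf)^(3/2),
    with Mdot in Msun/yr, R_* in Rsun, v_inf in km/s (units of Table 2)."""
    return np.log10(mdot) - 1.5 * np.log10(r_star * v_inf)

def mdot_from_logQ(logq, r_star, v_inf):
    """Invert Q to recover the mass-loss rate for prototypical R_*, v_inf."""
    return 10.0 ** (logq + 1.5 * np.log10(r_star * v_inf))

# Illustrative values only: a supergiant-like star with R_* = 20 Rsun and
# v_inf = 2000 km/s at the grid value log Q = -12.5
print(mdot_from_logQ(-12.5, 20.0, 2000.0))   # ~2.5e-6 Msun/yr
```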
Table 2 displays more information about our model grid (here for the case of unclumped models), where \(Y_{\rm He}\) denotes the He-abundance as \(N_{\rm He}/N_{\rm H}\), with \(N\) the corresponding particle density. Figure 2 illustrates the distribution of grid models in the \(\log g\) vs. \(T_{\rm eff}\) (Kiel) diagram, together with the Geneva evolutionary tracks for 5, 10, 15, 20, 25, 40, 50, 60, 85 and 120 M\({}_{\odot}\), and for "solar" conditions (\(Z=0.014\)), as published by Ekstrom et al. (2012). The final grid contains a total of 107 547 models. In Sect. 5 we will calculate additional grids, with various clumping laws as described there.
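As a cross-check on these numbers, note that the raw Cartesian product of the Table 2 values exceeds the quoted grid size; we assume the difference arises because combinations outside the O-star domain of Fig. 2 (e.g., unphysical \(T_{\rm eff}\)-\(\log g\) pairs) are not computed. A quick sketch:

```python
from math import prod

# Parameter values from Table 2 (unclumped grid)
teff = list(range(22000, 56000, 1000))               # 34 values
logg = [round(2.6 + 0.1 * i, 1) for i in range(18)]  # 2.6 ... 4.3
vmic = [5, 10, 15, 20]
yhe  = [0.06, 0.10, 0.15, 0.20, 0.25, 0.30]
logq = [-15.0, -14.0, -13.0, -12.7, -12.5, -12.3, -12.1, -11.9, -11.7]
beta = [0.8, 1.0, 1.2, 1.5]

# Full Cartesian product: 528 768 combinations, versus the 107 547 models
# actually contained in the final grid
print(prod(map(len, (teff, logg, vmic, yhe, logq, beta))))
```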
Previous model-grid spectra used by our working group have been calculated for the optical range (e.g., Sabin-Sanjulian et al., 2017; Holgado et al., 2018). For our current study, we needed to extend them to the near infrared. Table 3 lists the H and He lines included in our synthetic spectra. This list refers only to the diagnostic lines covered in the formal solution; for the solution of the rate-equation system, all decisive lines are considered. The table also indicates additional blends of the major component. For example, the total Br\({}_{\gamma}\) complex comprises four different transitions. Blends from additional elements, such as nitrogen, have
\begin{table}
\begin{tabular}{r l l l} \hline \hline \# & STAR ID & Spectral Type & Variability \\ \hline
1 & HD46223 & O4 V((f)) & LPV \\
2 & HD15629 & O4.5 V((f)) & LPV \\
3 & HD46150 & O5 V((f))z & LPV \\
4 & HD217086 & O7 Vnn((f))z & – \\
5 & HD149757 & O9.2 IVnn & LPV \\
6 & HD190864 & O6.5 III(f) & – \\
7 & HD203064 & O7.5 IIIn((f)) & LPV \\
8 & HD15570 & O4 If & – \\
9 & HD14947 & O4.5 If & LPV \\
10 & HD30614 & O9 Ia & SB1 \\
11 & HD210809 & O9 Iab & LPV \\
12 & HD209975 & O9.5 Ib & LPV \\ \hline \hline \end{tabular}
\end{table}
Table 1: O stars selected for the analysis. Spectral types are from the Galactic O Star Catalog (GOSC, Maíz Apellániz et al. (2013), accessible at [https://gosc.caib.inta-csic.es/gosc.php](https://gosc.caib.inta-csic.es/gosc.php)). The last column displays the variability classification by Holgado et al. (2022): line profile variations (LPV), single-lined spectroscopic binary (SB1), or no evidence of radial velocity variations (–)
been neglected. Likewise, Br\({}_{12}\) was ultimately not included among the diagnostics (see comments in Sect. 3.2.2).
### Automatic fitting and extension to the NIR
#### 3.2.1 iacob_gbat
iacob_gbat is a grid-based automatic tool (Simon-Diaz et al. 2011; Sabin-Sanjulian et al. 2014; and Holgado et al. 2018, Appendix A) developed to compare a large number of synthetic spectra with the observed ones. It calculates the fitness of the individual synthetic spectra and provides us with the best fit (following specific criteria, see below) and the corresponding stellar parameters, including appropriate error bars. Before running the tool, one needs to determine the rotational and macroturbulent velocities (\(V\sin i\) and \(V_{\rm mac}\), respectively). A wrong determination of these velocities can result in erroneous values for all stellar parameters (Sabin-Sanjulian 2014, Fig. 2.13). Rotational and macroturbulent velocities are obtained with the iacob_broad tool, developed by Simon-Diaz & Herrero (2014). Details can be found in Sections 4.1 and 4.2.1.
In the next steps, we interactively define the wavelength range of the considered lines, correct for radial velocity, renormalize the continuum if necessary, and/or clip nebular lines. Finally, we run iacob_gbat to determine the six stellar and wind parameters (see Sect. 3.1). The basic strategy of iacob_gbat is to find the minimum \(\chi^{2}\) from the sum of the corresponding individual \(\chi^{2}_{i}\) for each considered line \(i\), i.e., the optimal solution.
The weight given to each line, \(\frac{1}{\sigma_{i}}\), is iteratively determined, either from the photon noise in the neighboring continuum of the line, or, if larger, from the minimum average deviation between
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Range of values \\ \hline \(T_{\rm eff}\) [K] & [22000–55000], stepsize 1000 K \\ \(\log g\) [\(g\) in cgs] & [2.6–4.3], stepsize 0.1 dex \\ \(v_{\rm mic}\) [km s\({}^{-1}\)] & 5, 10, 15, 20 \\ \(Y_{\rm He}\) & 0.06, 0.10, 0.15, 0.20, 0.25, 0.30 \\ \(\log Q\) & -15.0, -14.0, -13.0, -12.7, -12.5, -12.3, \\ & -12.1, -11.9, -11.7 \\ \(\beta\) & 0.8, 1.0, 1.2, 1.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameter ranges for the grid models. The metallicity composition follows the solar values provided by Asplund et al. (2009), and \(Q\) is calculated with \(\dot{M}\) in \(M_{\odot}\) a\({}^{-1}\), \(v_{\infty}\) in km s\({}^{-1}\), and \(R_{*}\) in \(R_{\odot}\).
Figure 1: Example for the spectra available in our sample (HD 46150). Upper panel: optical spectrum from the IACOB database; lower panels: NIR spectra from the Hanson et al. (2005) catalog.
Figure 2: Location of models from the Fastwind grid in the \(\log g\) vs. \(\log T_{\rm eff}\) plane. Nonrotating Geneva evolutionary tracks (Ekström et al. 2012) are plotted in green, and the blue line defines the corresponding Zero-Age Main Sequence (ZAMS). The numbers indicate the tracks’ initial stellar masses in units of \(M_{\odot}\).
the synthetic and the observed line \(i\), for the overall best-fitting model5.
Footnote 5: Since the best-fitting model is not known in advance, an iterative procedure needs to be invoked.
This strategy ensures that systematic errors are accounted for (in cases where the synthetic profiles deviate from the observed ones beyond the noise level), and that such lines obtain a low weight in the overall \(\chi^{2}\). A detailed description of the complete procedure can be found in Holgado et al. (2018, Appendix A)6.
Footnote 6: In this appendix, Holgado et al. provide relations based on a reduced \(\chi^{2}\), though all previous and current versions of iacob_gbat apply the standard, non-reduced quantity.
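A minimal sketch of this weighting scheme (our paraphrase, not the actual iacob_gbat implementation; function names are invented):

```python
import numpy as np

def line_sigma(obs, syn_best, continuum_noise):
    """sigma_i for line i: the photon noise in the neighboring continuum,
    or, if larger, the mean deviation between the observed profile and the
    (current) best-fitting synthetic one, downweighting poorly matched lines."""
    return max(continuum_noise, np.abs(obs - syn_best).mean())

def total_chi2(obs_lines, syn_lines, sigmas):
    """Global chi^2 as the sum of the individual chi^2_i over all lines."""
    return sum(np.sum(((o - s) / sig) ** 2)
               for o, s, sig in zip(obs_lines, syn_lines, sigmas))
```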
As a result, a distribution of \(\chi^{2}\) values is obtained that can be used to identify the best-fitting model and the corresponding values/uncertainties for each of the stellar and wind parameters. In Fig. 3, we plot the distribution of \(\chi^{2}\) versus \(T_{\rm eff}\) for HD 15 629. The minimum \(\chi^{2}\) value (resulting from an interpolation of the lower envelope) provides us with the appropriate value for \(T_{\rm eff}\), and the 1-\(\sigma\) uncertainty is estimated from the range where \(\chi^{2}=\chi^{2}_{\rm min}+1\).
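Schematically, this lower-envelope procedure can be emulated as follows (a simplified sketch with invented function names; the actual tool interpolates the envelope more carefully):

```python
import numpy as np

def param_and_1sigma(par, chi2, n_bins=40):
    """Best value and 1-sigma range of one parameter from a chi^2 cloud
    (cf. Fig. 3): bin the parameter axis, take the minimum chi^2 per bin
    as the lower envelope, and intersect it with chi2_min + 1."""
    edges = np.linspace(par.min(), par.max(), n_bins + 1)
    centers, envelope = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (par >= lo) & (par <= hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            envelope.append(chi2[sel].min())
    centers, envelope = np.array(centers), np.array(envelope)
    best = centers[envelope.argmin()]
    ok = centers[envelope <= envelope.min() + 1.0]   # Delta chi^2 = 1
    return best, ok.min(), ok.max()
```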
Sometimes, the distributions present specific difficulties: cases in which we cannot determine a given parameter with sufficient accuracy, or values that lie at the border of the grid parameter range. Thus, the final output always has to be examined individually, to identify these cases and, at least, to minimize the corresponding problems. A more detailed discussion of the different problems can be found in Sabin-Sanjulian (2014).
#### 3.2.2 Extension to the near infrared
To extend the iacob_gbat tool toward the NIR, we added several modules to the code. In addition to including all the NIR lines from Table 3 for the determination of the best fit model, we performed several tests to check the extended version.
The ratio between the strengths of He i 4471 and He ii 4541 is a good temperature diagnostic in the optical range. As shown in Fig. 4, the ratio between He i 1.70 and He ii 1.69 in the NIR yields a similar diagnostic. Here we show their equivalent width ratio for a series of models ranging from 25 000 to 55 000 K, and for three values of \(\log g\). Evidently, these H-band lines can be as sensitive to the temperature as the optical ones, with closely analogous behavior.
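Conceptually, such a monotonic ratio-temperature relation can be inverted by simple interpolation. The sketch below uses invented ratio values, not our actual grid results:

```python
import numpy as np

# Hypothetical model EW ratios EW(He I 1.70)/EW(He II 1.69), assumed to
# decrease monotonically with Teff (cf. Fig. 4); numbers are illustrative
teff_grid  = np.array([30000., 35000., 40000., 45000., 50000.])
ratio_grid = np.array([8.0, 3.0, 1.2, 0.5, 0.2])

def teff_from_ratio(observed_ratio):
    """Invert the relation by linear interpolation; np.interp needs
    increasing abscissae, hence the reversed arrays."""
    return np.interp(observed_ratio, ratio_grid[::-1], teff_grid[::-1])

print(teff_from_ratio(1.0))   # ~41 kK for these made-up numbers
```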
\begin{table}
\begin{tabular}{|l|l|l|} \hline Line & Wavelength [Å] & Number of H/He components \& identification \\ \hline \multicolumn{3}{|c|}{Optical} \\
H\({}_{\alpha}\) & 6562 & 2 - H i (2-3) \& He ii (4-6) \\
H\({}_{\beta}\) & 4861 & 2 - H i (2-4) \& He ii (4-8) \\
H\({}_{\gamma}\) & 4340 & 2 - H i (2-5) \& He ii (4-10) \\
H\({}_{\delta}\) & 4101 & 2 - H i (2-6) \& He ii (4-12) \\
He i 4387 & 4387 & 1 - He i (2p1-5d1) \\
He i 4922 & 4922 & 1 - He i (2p1-4d1) \\
He i 4026 & 4026 & 2 - He i (2p3-5d3) \& He ii (4-13) \\
He i 4471 & 4471 & 1 - He i (2p3-4d3) \\
He i 6678 & 6678 & 2 - He i (2p1-3d1) \& He ii (5-13) \\
He ii 4200 & 4200 & 1 - He ii (4-11) \\
He ii 4541 & 4541 & 1 - He ii (4-9) \\
He ii 4686 & 4686 & 1 - He ii (3-4) \\
\multicolumn{3}{|c|}{H-band} \\
Br\({}_{10}\) & 17362 & 1 - H i (4-10) \\
Br\({}_{11}\) & 16807 & 1 - H i (4-11) \\
Br\({}_{12}\) & 16407 & 1 - H i (4-12) \\
He i 1.70 & 17000 & 1 - He i (3p3-4d3) \\
He ii 1.69 & 16900 & 1 - He ii (7-12) \\
\multicolumn{3}{|c|}{K-band} \\
Br\({}_{\gamma}\) & 21660 & 4 - H i (4-7), He i (4d1-7f1), He i (4d3-7f3) \& He ii (8-14) \\
He ii 2.18 & 21880 & 1 - He ii (7-10) \\ \hline \end{tabular}
\end{table}
Table 3: Diagnostic H/He optical and NIR lines used in the current work (regarding Br\({}_{12}\) see Sect. 3.2.2). The He i line at 2.11 \(\mu\)m is severely contaminated by N iii 2.1155 \(\mu\)m, and He i 2.05 \(\mu\)m is not present in the Hanson et al. (2005) spectra. These two lines are not included in our analysis. Wavelengths are given in air.
Figure 3: An example of the distribution of \(\chi^{2}\) (\(y\)-axis) versus effective temperature (HD 15 629). The minimum of \(\chi^{2}\) is indicated by a red dot, and the 1- and 2-\(\sigma\) ranges are found from the intersection between the dashed lines and the distribution.
Similar to the Balmer lines in the optical, the shape and wings of the Brackett lines in the NIR are sensitive to gravity7. However, during our test calculations, we noticed a peculiar behavior of the different Brackett lines, making it difficult or even impossible to fit the observed spectra simultaneously. Indeed, particularly the higher members of the Brackett series (starting around Br\({}_{12}\)) are only poorly represented by our synthetic profiles. We carried out a series of tests, grouping the lines in pairs (Br\({}_{10}\) & Br\({}_{11}\), Br\({}_{11}\) & Br\({}_{12}\), Br\({}_{10}\) & Br\({}_{12}\)), i.e., always skipping one of the lines in our parameter determination. This way, we checked which pair was more consistent with the rest of the NIR lines. Our tests indicated that the highest member considered here, Br\({}_{12}\), gave the poorest agreement.
Footnote 7: A discussion of specific dependencies which are different from the behavior of the optical lines can be found in Repolust et al. (2005)
Currently, the origin of this disagreement remains unclear, but it might be related to insufficient accuracy of line-broadening data, collision strengths for hydrogen transitions with higher upper levels, difficulties in the reduction process, or a combination of them all (see also Repolust et al. 2005 and Sect. 7). Forthcoming work needs to identify the region in stellar parameter space where the problem appears most strongly, its physical origin, and potential solutions. Meanwhile, and since this problem becomes particularly worrisome only from Br\({}_{12}\) on, we decided to drop this line from our line list when applying the iacob_gbat tool for our IR analysis.
## 4 First results: Parameter determinations adopting smooth winds
We divide our stellar sample into three groups according to the luminosity class of the stars (i.e., [I-II], [III], and [IV-V]). Each of the three groups presents a particular behavior with respect to the fits obtained. Dwarf stars show the best fits to the observed spectrum, whereas fit difficulties increase for giants and are usually largest for the luminosity class I stars, those with the strongest winds.
### Stellar parameters from the optical spectrum
We first determine the stellar parameters using only the optical spectra secured in the IACOB database. We determine the \(V\sin i\) and \(V_{\rm mac}\) values using the iacob_broad package (Simon-Diaz & Herrero 2014). Our values for the optical, presented in Table 4 together with their NIR counterparts8, agree with (and have errors similar to) those from Simon-Diaz & Herrero (2014) within 20 km s\({}^{-1}\) or \(\pm\)20%, except for the \(V_{\rm mac}\) of the fast rotators.
Footnote 8: we only discuss here the results for the optical. For a further discussion, see Sect. 4.2.1
However, because of the high rotational velocities, this has no impact on the final results (within the uncertainties). Updated values have recently been presented by Holgado et al. (2022). For most stars, the differences are well within the adopted uncertainties. Only HD 149 757 and HD 15 570 show a larger difference. For the first object, Holgado et al. (2022) estimate 385 and 94 km s\({}^{-1}\) for \(V\sin i\) and \(V_{\rm mac}\), respectively, compared to a value of 290 km s\({}^{-1}\) for both quantities as derived here. This is a consequence of the degeneracy between rotational and macroturbulent velocities when both reach high values. For the second star, we find 38 and 120 km s\({}^{-1}\), whereas Holgado et al. (2022) estimate 81 and 115 km s\({}^{-1}\). We attribute this large difference to the use of different spectra and different lines. Holgado et al. have used the N v 4605 line, which lies in a region of complicated normalization due to the nearby strong N iii emission, whereas we have used the O iii 5592 line. To ensure that these differences do not affect our results, we have repeated the optical and infrared analyses described below with the values from Holgado et al. (2022), finding no significant differences. This results from the combined rotational and macroturbulent broadening, which produces similar profiles in these cases.
Table 5 summarizes the parameters obtained from our optical analysis after running the iacob_gbat tool. Here and in the following similar tables, upper and lower limits refer to the corresponding parameter ranges of our model grid(s) only. As an
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \# & Star ID & Type & \(V\sin i\) & \(V_{\rm mac}\) & \(V\sin i\) & \(V_{\rm mac}\) \\ & & & OP & OP & NIR & NIR \\ \hline
1 & HD46223 & O4 V((f)) & 52 & 97 & 70 & 100 \\
2 & HD15629 & O4.5 V((f)) & 70 & 69 & 68 & 96 \\
3 & HD46150 & O5 V((f))z & 69 & 107 & 107 & 114 \\
4 & HD217086 & O7 Vnn((f))z & 382 & 104 & 372 & 18 \\
5 & HD149757 & O9.2 IVnn & 290 & 290 & 366 & 165 \\
6 & HD190864 & O6.5 III(f) & 65 & 90 & 73 & 113 \\
7 & HD203064 & O7.5 IIIn((f)) & 315 & 98 & 344 & 103 \\
8 & HD15570 & O4 If & 38 & 120 & 74 & 92 \\
9 & HD14947 & O4.5 If & 117 & 49 & 132 & 25 \\
10 & HD30614 & O9 Ia & 115 & 72 & 78 & 213 \\
11 & HD210809 & O9 Iab & 76 & 79 & 72 & 167 \\
12 & HD209975 & O9.5 Ib & 52 & 95 & 73 & 113 \\ \hline \end{tabular}
\end{table}
Table 4: Comparison between \(V\sin i\) and \(V_{\rm mac}\) values obtained from optical (“OP”) metal lines and from the NIR He i\(\lambda\)1.70\(\mu\)m line. Typical uncertainties are \(\pm\)10% in the optical and \(\pm\)15% in the infrared. All velocities are given in km s\({}^{-1}\).
Figure 4: Equivalent width (EW) ratios for selected optical and NIR He i/He ii lines, as a function of \(\log T_{\rm eff}\) and \(\log g\) (see legend), resulting from our model-grid calculations.
example, \(\beta>1.0\) would mean that \(\beta\) ranges, within its 1-\(\sigma\) uncertainties, from 1.0 to 1.5, when consulting Table 2. In Table 5, such limits frequently occur for the parameters \(\beta\) and \(v_{\rm mic}\). For the strong H\({}_{\alpha}\) and/or He ii 4686 wind emission from our supergiants (which actually should allow for quite a precise determination of \(\beta\)), this simply means that the contribution of these lines to the global \(\chi^{2}\) is low when counted with equal weights as done here. The additional information contained in the other optical H and He lines is usually not sufficient to constrain these parameters more accurately. The inclusion of information from UV P Cygni lines would be very helpful in these cases. On the other hand, more precise values for the microturbulent velocity can only be obtained from the analyses of metal lines from species with more than one visible ionization stage (e.g., Markova & Puls 2008); in addition, however, such values might depend on the chosen atom.
In Table 5, gravities are not corrected for the effects of centrifugal acceleration, as here we are only interested in the formal fits and do not compare with evolutionary models. Errors were obtained from iacob_gbat as described above, but following the arguments from Sabin-Sanjulian et al. (2017) we set a lower limit of 0.1 in \(\log Q\) and 0.03 in \(Y_{\rm He}\) for these uncertainties whenever the automatically derived formal errors turned out to be lower9. Fig. 5, left side, displays a comparison between selected observed optical profiles and the synthetic lines from the best-fit model for each star.
Footnote 9: sometimes, the iacob_gbat tool may deliver unrealistically low errors, as it does not take into account uncertainties like those from the continuum normalization
From the fits shown in Fig. 5 (left side) we draw the following conclusions:
* Except for one object (see below), all dwarfs show excellent fits. Even the fast rotators do not show any significant problems, despite potential effects not considered here, like gravity darkening or geometrical deformation; the fit for HD 149 757 is poorer, as the model yields too broad wings in some of the Balmer lines.
* The two giants within our sample are mid-types. HD 190 864 shows small differences in the cores of the He i lines, with slightly too shallow theoretical profiles for He ii 4200 and 4541 complemented by a slightly too deep profile for He ii 4686. HD 203 064, a fast rotator, displays a poor fit to H\({}_{\alpha}\) and, to a lesser extent, to He ii 4686.
Figure 5: Spectral fits for selected optical (left) and NIR (right) lines using unclumped models. Observations are shown in black, and best fit model profiles in red. We stress that the individual model parameters for the best fitting optical and NIR profiles differ (to various extents) since the analyses have been performed separately for both ranges (cf. Table 5 vs. Table 6). The horizontal bar gives the wavelength scale for each range, and the scale of the ordinate axis is given by the vertical bar (at the bottom of the H\({}_{\alpha}\) column for the optical range, and at the bottom of the Br\({}_{10}\) column for the NIR.)
* The supergiants display the largest fitting problems, particularly in H\({}_{\alpha}\), sometimes together with (much smaller) problems in H\({}_{\beta}\) and He ii 4686, which points to some wind influence. This agrees with the findings by Holgado et al. (2018). The largest difficulties are found for the H\({}_{\alpha}\) P-Cygni-like profiles of the late types, HD 30 614 (of luminosity class Ia) and HD 210 809. In both stars the He ii 4686 core shows a shift to the red. The best fit in this group is obtained for the less luminous supergiant, HD 209 975 (Ib). Early-type supergiants show an intermediate behavior in H\({}_{\alpha}\) (despite showing emission), although they present some difficulties for the red wing of H\({}_{\beta}\) that are not seen in late-type supergiants.
We compare our parameters with those recently quoted by Holgado (2019) (most of the values used here have already been published in Holgado et al. 2018), see Fig. 6. All temperatures agree well within the errors given here and by Holgado (2019). For the (uncorrected) gravity, we find significant differences for the rapidly rotating dwarfs, particularly HD 149 757, for which we obtain \(\log g=3.84\pm 0.17\), whilst Holgado (2019) inferred 3.50\(\pm\)0.05. Although marginally within the uncertainties, HD 217 086 also shows differences in \(\log g\) (3.60\(\pm\)0.11 versus 3.81\(\pm\)0.12). We attribute these differences to the difficulties with the normalization and radial velocity correction in fast rotators. As the line wings are very extended and reach the continuum rather smoothly, a small difference in the data treatment
Figure 6: Comparison between the stellar parameters obtained by Holgado (2019) (see also Holgado et al. 2018) and our work. Upper left panel: effective temperature. The dashed lines represent \(\pm\) 1000 K; upper right panel: logarithmic gravity (\(\pm\)0.1 dex); lower left panel: log \(Q\) (\(\pm\)0.2 dex); lower right panel: helium abundance \(Y_{\rm{He}}\)(\(\pm\)0.03). Numbers indicate the stars as listed in Tab. 1.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline Star & Spectral type & \(T_{\rm eff}\) (kK) & \(\log g\) (dex) & \(\log Q\) & \(Y_{\rm He}\) & \(v_{\rm mic}\) (km s\({}^{-1}\)) & \(\beta\) \\ \hline
HD46223 & O4 V((f)) & 43.0 \(\pm\) 1.2 & 3.76 \(\pm\) 0.07 & -12.8 \(\pm\) 0.2 & 0.10 \(\pm\) 0.03 & \(>\) 9.1 & 1.0 \(\pm\) 0.2 \\
HD15629 & O4.5 V((fc)) & 41.4 \(\pm\) 1.4 & 3.71 \(\pm\) 0.11 & -12.7 \(\pm\) 0.2 & 0.12 \(\pm\) 0.03 & \(<\) 19.9 & 1.0 \(\pm\) 0.2 \\
HD46150 & O5 V((f))z & 41.2 \(\pm\) 1.0 & 3.78 \(\pm\) 0.07 & -13.0 \(\pm\) 0.3 & 0.09 \(\pm\) 0.03 & \(>\) 5.0 & \(>\) 0.8 \\
HD217086 & O7 Vnn((f))z & 37.0 \(\pm\) 1.0 & 3.60 \(\pm\) 0.11 & -13.9 \(\pm\) 1.1 & 0.11 \(\pm\) 0.03 & 12.4 \(\pm\) 7.4 & \(<\) 1.2 \\
HD149757 & O9.2 IVnn & 32.5 \(\pm\) 0.9 & 3.84 \(\pm\) 0.17 & -14.1 \(\pm\) 0.9 & 0.11 \(\pm\) 0.03 & 12.2 \(\pm\) 7.2 & \(<\) 1.2 \\
HD190864 & O6.5 III(f) & 37.1 \(\pm\) 0.7 & 3.58 \(\pm\) 0.05 & -12.7 \(\pm\) 0.1 & 0.12 \(\pm\) 0.03 & 15.1 \(\pm\) 3.4 & 0.9 \(\pm\) 0.1 \\
HD203064 & O7.5 IIIn((f)) & 34.9 \(\pm\) 0.7 & 3.54 \(\pm\) 0.11 & -12.7 \(\pm\) 0.1 & 0.10 \(\pm\) 0.03 & \(>\) 15.2 & 0.9 \(\pm\) 0.1 \\
HD15570 & O4 If & 40.1 \(\pm\) 0.9 & 3.75 \(\pm\) 0.18 & -12.0 \(\pm\) 0.1 & 0.11 \(\pm\) 0.03 & \(>\) 5.0 & \(>\) 1.0 \\
HD14947 & O4.5 If & 38.1 \(\pm\) 0.5 & 3.61 \(\pm\) 0.05 & -12.0 \(\pm\) 0.1 & 0.15 \(\pm\) 0.03 & \(>\) 9.5 & \(>\) 1.2 \\
HD30614 & O9 Ia & 29.4 \(\pm\) 0.8 & 2.96 \(\pm\) 0.09 & -12.2 \(\pm\) 0.1 & 0.14 \(\pm\) 0.03 & \(>\) 15.9 & \(>\) 0.8 \\
HD210809 & O9 Iab & 31.1 \(\pm\) 0.3 & 3.17 \(\pm\) 0.07 & -12.4 \(\pm\) 0.1 & 0.12 \(\pm\) 0.03 & \(>\) 16.2 & \(>\) 1.0 \\
HD209975 & O9.5 Ib & 31.3 \(\pm\) 0.4 & 3.23 \(\pm\) 0.05 & -12.7 \(\pm\) 0.1 & 0.10 \(\pm\) 0.03 & \(>\) 12.2 & \(>\) 1.1 \\ \hline \end{tabular}
\end{table}
Table 5: Stellar parameters obtained from the optical analysis using unclumped models. Gravities do not include a centrifugal correction. Upper and lower limits refer to the corresponding parameter ranges of our model grid only (see Table 2).
may result in a relatively large difference in gravity. In addition, in the case of HD 149 757, variability also plays a role10.
Footnote 10: We have analyzed a different spectrum than Holgado (2019), and the Balmer lines are slightly broader in our case.
For \(\log Q\), the agreement is excellent11, except again for the fast rotating dwarfs. This is basically due to the lack of sensitivity of the diagnostics (mainly, the H\({}_{\alpha}\) line) at these low values of \(Q\), combined with high rotational velocities. The helium abundances also agree well12.
Footnote 11: stars 1, 2, 6, 7 and 12 cluster around the same locus in the figure
Footnote 12: here, stars 1, 2, 3, 7, 8, and 12 overlap in the figure, as do 6 and 11
### Analysis in the near infrared
In this subsection, we derive the stellar parameters solely from the near infrared, following a methodology similar to that of the previous subsection. This will tell us how far results obtained for stars in heavily obscured clusters can be compared to those provided in the extensive literature of optical analyses. While this exercise has already been carried out by other authors (e.g., Repolust et al. 2005, or more recently within investigations fitting optical and infrared spectra simultaneously, e.g., Najarro et al. 2011 or Bestenlehner et al. 2014), we have to check whether our automatic procedure, extended to the near infrared, results in reliable stellar parameters.
#### 4.2.1 Determination of \(V\sin i\) and \(V_{\rm mac}\) in the NIR
We start again by deriving \(V\sin i\) and \(V_{\rm mac}\) using iacob_broad. In the optical, these values were derived using metal lines, whose broadening is dominated by the processes determining these quantities. However, the metal NIR lines are too weak in our spectra and are not available for all stars. For this reason, we are forced to use He i lines, which are affected by the Stark effect, limiting our ability to measure the rotational velocity for slow rotators (or the macroturbulent velocity when this is low). H i lines are even less well suited, since they are dominated by the strong linear Stark effect. Thus we decided to use the He i \(\lambda\)1.70 \(\mu\)m line, which is strong enough in all the stars. Ramirez-Agudelo et al. (2013) have shown that it is possible to derive accurate rotational velocities from the (quadratically) Stark-broadened optical He i lines. However, the Stark broadening increases toward the infrared, and thus it could place a lower limit (see below) on the derived \(V\sin i\) values.
Figure 7 compares the projected rotational velocities obtained from both wavelength ranges (filled circles), whereas Table 4 gives the numerical values. In the figure, dashed lines indicate the region that departs by \(\pm\)30 km s\({}^{-1}\) or \(\pm\)30% (whichever is larger) from the 1:1 relationship. This band marks the region where stellar parameters are not affected beyond the errors by changes in the adopted rotational velocity (Sabin-Sanjulian 2014). It does not indicate the uncertainties in the determinations, which sometimes are larger than the difference between the values obtained from the optical and the NIR spectra, as discussed below. We see that the \(V\sin i\) pairs are always located within these bands, and that most values agree reasonably well. Therefore, we do not expect a significant impact on our results due to these differences.
We also see that there might be a limit to the lowest rotational velocities determined with He i 1.70 \(\mu\)m (around 80 km s\({}^{-1}\)), although confirming this would require more slowly rotating stars (the points cluster close to the 1:1 relation). The only really departing point, at \(V\sin i\) (opt) = 115 and \(V\sin i\) (NIR) = 78 km s\({}^{-1}\), corresponds to HD 30 614, with a strong He i \(\lambda\)1.70 \(\mu\)m line in absorption. This discrepancy is related to the large value found for \(V_{\rm mac}\) (see Tab. 4 and open diamonds in Fig. 7). As expected, \(V_{\rm mac}\) departs strongly from the 1:1 relation for some objects, especially for fast rotators. However, the tests we performed for HD 149 757 and HD 15 570 indicate that no significant changes in the stellar/wind parameters are expected for these stars.
We conclude that it is possible to derive the rotational and macroturbulent velocities from the NIR spectrum alone, although with larger uncertainties than from the optical spectra, and with a presumable lower limit on the derived \(V\sin i\).
#### 4.2.2 Stellar parameters from the NIR spectrum
We now derive the stellar parameters for the same stars as in Sect. 4.1, using the NIR spectra secured and reduced by Hanson et al. (2005). The results of the NIR analysis are presented in Table 6. Again, \(\beta\) and microturbulent velocities could not be well constrained, indicating that for these stars the near infrared is not better suited than the optical for this task. This suggests that the difference in line formation depth between the optical and the H- and K-band spectra is not sufficient to provide new information, at least at the resolution and S/N of the spectra analyzed here. The comparison of the observed profiles with those from the best fit model is presented in Fig. 5, right side. Inspection of these profile fits leads to the following summary:
* The best fit quality is again obtained for the dwarfs, though now not without significant problems. The best-fitted lines are the He ones, especially He i \(\lambda\)1.70 \(\mu\)m. Br\({}_{10}\) and Br\({}_{11}\) also fit reasonably well, but Br\({}_{\gamma}\) does not. For this line, the fast rotator HD 217 086 shows a profile different from the other dwarfs, with a strong and relatively narrow absorption in the blue half-line (presumably because of a narrow emission component, see Fig. 8), and a broad absorption redward of line center.
* The O7.5 III fast rotator HD 203 064 displays a similar Br\({}_{\gamma}\) profile as the O7 V fast rotator HD 217 086, and a similarly poor fit (see also Fig. 8), pointing to some
Figure 7: \(V\sin i\) (filled circles) and \(V_{\rm mac}\) (open diamonds) values obtained from the optical metal lines and from He i \(\lambda\)1.70\(\mu\)m in the NIR. The dashed lines give the band \(\pm\) 30 km s\({}^{-1}\)(for low \(V\sin i\)) or 30% of \(V\sin i\) (optical), whatever is larger.
process(es) not considered in our models, presumably related to differential rotation (see Petrenz & Puls, 1996 for a discussion of similar line shapes of H\({}_{\alpha}\)). The fit to Br\({}_{10}\) and Br\({}_{11}\), however, is much better. The more slowly rotating giant, HD 190 864, also shows a good fit for Br\({}_{10}\) and Br\({}_{11}\), and a poor fit for Br\({}_{\gamma}\), although without the characteristic shape of the fast rotators. For both giants, the fit to the He lines is of varying quality. Globally, the fits are again acceptable, except for Br\({}_{\gamma}\).
* For almost all supergiants, the Brackett lines, particularly Br\({}_{\gamma}\), show a poor fit quality, except for, surprisingly, HD 30 614 (which had the largest problems in the optical) and, to a lesser extent, the low-luminosity object, HD 209 975. The early-type supergiants show the poorest fits to the Brackett spectrum, with the models predicting an absorption profile for Br\({}_{\gamma}\) while the observations show emission instead. The only exception with a reasonable fit is Br\({}_{11}\) from HD 15 570. The He lines are also poorly fitted in the late-type supergiants. Within a given spectral subtype, He i \(\lambda\)1.70 \(\mu\)m departs more and more from a good fit with increasing luminosity. Still, for the cooler supergiants, He ii \(\lambda\)2.18 \(\mu\)m is always stronger than predicted, and He ii \(\lambda\)1.69 \(\mu\)m (not shown) is only modestly reproduced. The situation is different for the early-type supergiants, where the fits to the He ii lines are acceptable, though far from perfect.
We compare again with previous results in the literature, namely those from Repolust et al. (2005) (Fig. 9). Globally, there is fair agreement13 for all stars, except for stars #9 and #11 (HD 14 947 and HD 210 809). Here, we obtain a higher \(T_{\rm eff}\) and \(\log g\), which relates to the fact that in both stars the shallow Br\({}_{10}\) and Br\({}_{11}\) lines are well fitted in our approach, whilst in the fits by Repolust et al. they appear as too strong. Details about the consequences of such shallow Br\({}_{10/11}\) lines are discussed in Sect. 7. In the case of the first star, the high temperature forces an increase in the He abundance to fit the He i line at 1.70 \(\mu\)m (our best model has \(Y_{\rm He}\) = 0.30).
Footnote 13: gravities given in Repolust et al. (2005) are corrected for centrifugal acceleration. Using their data, we have removed this correction and have also calculated the corresponding \(\log Q\) for the comparison here.
Moreover, the \(\log Q\) of star #12 (HD 209 975) shows a large discrepancy, with a much lower value in our work, due to the reaction of the He ii lines to mass loss. While the He i line and the Brackett lines show only a small response to an increased mass-loss rate, the He ii lines (already too shallow in our fit) would become even shallower. Indeed, grid models calculated with a \(\log Q\) similar to that of Repolust et al. (2005) lie just beyond our 1-\(\sigma\) uncertainty from the best-fit model. Finally, the helium abundances agree well, although many upper or lower limits are present.
Part of the larger dispersion (compared to the optical analysis, see Fig. 6) is attributed not to the improvements in Fastwind since the Repolust et al. (2005) analyses were carried out (indeed, test calculations by J.P. have shown that the impact of such improvements on the IR signatures is marginal), but to the differences between the by-eye technique (as used by Repolust et al.) and the automatic one. When the line fits are poorer, the subjective weight given to a particular fit increases, pushing the result in a given direction, whereas the automatic procedure still forces a compromise for all considered profiles.
An extreme example is given by star #9 (HD 14 947). By means of our automatic fitting procedure, we find acceptable models (those that contribute to the final parameter values) that extend up to effective temperatures of 47 000 K, because of the uncertainties introduced by a very weak He i line, biasing the final parameters toward hotter temperatures. As pointed out, the corresponding values by Repolust et al. are much lower, mainly because they neglected the deviations between synthetic and observed Br\({}_{10}\) and Br\({}_{11}\) lines.
The final comparison is that of the parameters derived from the optical versus the NIR (Fig. 10), as this will indicate their reliability when derived from the infrared alone. Globally, there is
\begin{table}
\begin{tabular}{l c c c c c c} \hline Star & \(T_{\rm eff}\) (kK) & \(\log g\) (dex) & \(\log Q\) & \(Y_{\rm He}\) & \(v_{\rm mic}\) (km s\({}^{-1}\)) & \(\beta\) \\ \hline
HD46223 & 41.2 \(\pm\) 1.4 & 3.79 \(\pm\) 0.10 & -12.7 \(\pm\) 0.2 & \(<\) 0.10 & \(>\) 5.0 & \(>\) 0.9 \\
HD15629 & 39.5 \(\pm\) 1.7 & 3.66 \(\pm\) 0.17 & -13.2 \(\pm\) 0.7 & \(<\) 0.09 & 12.4 \(\pm\) 7.4 & \(>\) 0.8 \\
HD46150 & 39.6 \(\pm\) 1.0 & 3.85 \(\pm\) 0.12 & -12.9 \(\pm\) 0.3 & \(<\) 0.08 & \(<\) 18.5 & \(>\) 0.8 \\
HD217086 & 36.9 \(\pm\) 1.1 & 3.86 \(\pm\) 0.15 & -13.8 \(\pm\) 1.2 & 0.15 \(\pm\) 0.07 & \(>\) 5.0 & \(>\) 0.8 \\
HD149757 & 32.3 \(\pm\) 1.7 & 3.58 \(\pm\) 0.31 & -13.7 \(\pm\) 1.3 & 0.19 \(\pm\) 0.10 & \(>\) 5.0 & \(>\) 0.8 \\
HD190864 & 36.8 \(\pm\) 1.0 & 3.64 \(\pm\) 0.14 & -12.7 \(\pm\) 0.3 & 0.21 \(\pm\) 0.09 & 12.4 \(\pm\) 7.4 & \(>\) 0.9 \\
HD203064 & 34.3 \(\pm\) 1.5 & 3.70 \(\pm\) 0.32 & -12.5 \(\pm\) 0.2 & 0.20 \(\pm\) 0.10 & \(>\) 5.0 & \(>\) 1.0 \\
HD15570 & 38.8 \(\pm\) 1.8 & 3.55 \(\pm\) 0.15 & -11.9 \(\pm\) 0.1 & 0.10 \(\pm\) 0.03 & \(<\) 19.9 & \(<\) 1.0 \\
HD14947 & 43.6 \(\pm\) 2.8 & 4.03 \(\pm\) 0.36 & -12.5 \(\pm\) 0.5 & \(>\) 0.17 & 12.4 \(\pm\) 7.4 & \(>\) 0.9 \\
HD30614 & 27.6 \(\pm\) 0.8 & 2.78 \(\pm\) 0.08 & -12.0 \(\pm\) 0.1 & \(<\) 0.17 & \(>\) 5.0 & \(<\) 1.2 \\
HD210809 & 35.4 \(\pm\) 1.2 & \(>\) 3.80 & -12.8 \(\pm\) 0.3 & \(>\) 0.23 & \(>\) 10.4 & \(>\) 0.8 \\
HD209975 & 32.1 \(\pm\) 1.3 & 3.33 \(\pm\) 0.18 & \(<\) -13.4 & \(>\) 0.12 & 15.9 \(\pm\) 3.9 & \(<\) 1.2 \\ \hline \end{tabular}
\end{table}
Table 6: Stellar parameters obtained from the NIR analysis using unclumped models. Gravities do not include a centrifugal correction. For upper and lower limits see the caption of Table 5.
Figure 8: Br\({}_{\gamma}\) for the rapidly rotating dwarf and giant stars HD 217 086 and HD 203 064. Both line profiles show a small blue emission peak close to the core of the line, resulting in a distorted blue wing. Red profiles are from best fitting models.
a fair global agreement within the errors, as shown by the mean values of the differences \(<\Delta T_{\rm eff}>\) = \(<T_{\rm eff}({\rm Opt})-T_{\rm eff}({\rm NIR})>\) = \(-83\pm 697\) K, \(<\Delta\log{g}>\) = \(-0.08\pm 0.07\) dex, and \(<\Delta\log{Q}>\) = \(+0.08\pm 0.10\) dex. Again, stars #9 (HD 14 947) and #11 (HD 210 809) show large differences, produced by the higher gravities and their impact on nearly all other parameters, and star #12 (HD 209 975) shows a too low \(\log Q\) value in the infrared. With more He i lines, this effect does not appear in the optical. Finally, for the helium abundances, the average agreement is poorer than for the other parameters (\(<\Delta Y_{\rm He}>\) = \(-0.04\pm 0.01\)), but in this case the statistics are not as good, due to the large number of upper or lower limits present in the results. Nevertheless, a certain trend toward higher abundances derived in the near infrared might become visible.
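For transparency, the quoted mean temperature difference can be reproduced directly from Tables 5 and 6; the \(\pm\)697 K then corresponds to the standard error of the mean (our reading of the quoted numbers):

```python
import numpy as np

# Teff (kK) from the optical (Table 5) and NIR (Table 6) analyses
teff_opt = np.array([43.0, 41.4, 41.2, 37.0, 32.5, 37.1,
                     34.9, 40.1, 38.1, 29.4, 31.1, 31.3])
teff_nir = np.array([41.2, 39.5, 39.6, 36.9, 32.3, 36.8,
                     34.3, 38.8, 43.6, 27.6, 35.4, 32.1])

d = teff_opt - teff_nir
print(round(d.mean() * 1e3))                          # -83 K
print(round(d.std(ddof=1) / np.sqrt(d.size) * 1e3))   # ~697 K
```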
Globally, the infrared fits are worse than the optical ones, which is reflected in larger systematic uncertainties (partly related to a few dominating objects). Moreover, inspection of the \(\chi^{2}\) distributions from iacob_gbat and the fits from Fig. 5 indicates that this is not due to the differences in resolution and S/N between the optical and infrared spectra, but a consequence of a less accurate reproduction of the infrared lines given the model-inherent assumptions (e.g., a smooth wind until now). This finding differs from the results quoted by Repolust et al. (2005), who found comparable errors in both wavelength ranges, and reflects the different approach of error determination and also the different fitting procedure itself. Finally, there is a relatively large number of objects for which only upper or lower limits for the helium abundance could be derived, suggesting a lack of sensitivity of the infrared spectrum to that parameter (or a degeneracy because of the larger uncertainties involved). In our case, the problem lies partly in the lack of a sufficient number of suitable He lines, particularly from He i.
Overall, however, we may conclude that we can use the infrared spectra to determine stellar parameters in a similar way as with the optical ones, but we observe specific trends and larger uncertainties that have to be taken into account.
## 5 Clumping
The line-driven winds from massive stars are prone to instabilities, in particular the line-deshadowing instability (LDI, e.g., Owocki & Rybicki 1984), which result in an inhomogeneous outflow (e.g., Owocki et al. 1988; Owocki 1991; Feldmeier 1995). These density inhomogeneities (clumps) modify the shape and strength of spectral lines formed in the wind, and need to be accounted for in corresponding wind diagnostics (e.g., Hillier 1991; Schmutz 1995; Hillier & Miller 1998; Crowther et al. 2002; Hillier et al. 2003; Bouret et al. 2003; Puls et al. 2006b, 2008). Particularly affected by these inhomogeneities is the emission in lines formed through recombination processes such as H\({}_{\alpha}\) or the NIR lines used as wind diagnostics.
In these processes, the emission is proportional to \(\rho^{2}\), and it is the difference between the averaged quantity \(\langle\rho^{2}\rangle\) (integrated over the optical path length) and the corresponding smooth wind quantity \(\langle\rho\rangle^{2}\) that leads to more emission in an inhomogeneous
Figure 9: Comparison between the stellar parameters obtained by Repolust et al. (2005) and our work, both from the NIR alone. Upper left panel: effective temperature. The dashed lines represent \(\pm\) 1000 K; upper right panel: logarithmic gravity (uncorrected, \(\pm\) 0.1 dex). Star number 9 has been slightly shifted in both axes for clarity; lower left panel: \(\log Q\) (\(\pm\)0.2 dex); lower right panel: \(Y_{\rm He}\) (\(\pm\)0.03); stars #7, 9, 10, and 11 have been slightly shifted upward from their value in Repolust et al. (2005) (\(Y_{\rm He}\)= 0.20) to avoid overlap, as has star #4 (\(Y_{\rm He}\)=0.15). Numbers indicate the stars as listed in Tab. 1.
structure for the same mean density \(\langle\rho\rangle\), since \(\langle\rho^{2}\rangle\geq\langle\rho\rangle^{2}\) always.
Alternatively, for an observed emission, one derives a lower mass-loss rate when adopting a clumped wind. Moreover, as clumping may be radially dependent, it may affect lines formed in different layers in the atmosphere in a different way, which may help (at least partially) to explain the discrepancies found in the previous sections when fitting either optical or NIR lines.
In the conventional approach considering optically thin clumps (which is appropriate for the diagnostics investigated in the current work, e.g., Sundqvist & Puls 2018), the wind structure is characterized by the so-called clumping factor, defined as
\[f_{\rm cl}=\frac{\langle\rho^{2}\rangle}{\langle\rho\rangle^{2}}\geq 1. \tag{1}\]
Under the simplifying assumption that the interclump matter is void, this clumping factor describes the clump overdensity \(\rho_{\rm cl}=f_{\rm cl}\langle\rho\rangle\).
As long as \(f_{\rm cl}\) is spatially constant, the wind emission in lines like H\({}_{\alpha}\) will be the same when adopting either a smooth wind with \(\dot{M}_{\rm unclumped}\) or an inhomogeneous one with \(\dot{M}_{\rm clumped}\), if both mass-loss rates are related via
\[\dot{M}_{\rm clumped}=\frac{\dot{M}_{\rm unclumped}}{\sqrt{f_{\rm cl}}}. \tag{2}\]
Thus, neglecting wind-clumping might lead to overestimated mass-loss rates, at least if, as adopted, the clumps remain optically thin at all considered wavelengths.
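A minimal sketch of Eqs. (1) and (2) for a two-phase wind with a void interclump medium (all numbers are illustrative):

```python
import numpy as np

def clumping_factor(rho_cl, f_vol):
    """Eq. (1) via explicit volume averages: clumps of density rho_cl fill
    a fraction f_vol of the volume, the rest is void."""
    mean_rho  = f_vol * rho_cl            # <rho>
    mean_rho2 = f_vol * rho_cl ** 2       # <rho^2>
    return mean_rho2 / mean_rho ** 2      # = 1/f_vol, independent of rho_cl

def clumped_mdot(mdot_unclumped, f_cl):
    """Eq. (2): the same recombination-line emission implies a mass-loss
    rate lower by sqrt(f_cl) once clumping is accounted for."""
    return mdot_unclumped / np.sqrt(f_cl)

f_cl = clumping_factor(rho_cl=1.0, f_vol=0.1)    # -> 10
print(f_cl, clumped_mdot(2.5e-6, f_cl))          # 10.0, ~7.9e-7 Msun/yr
```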
Optically thick clumping (also called "macro-clumping" or "porosity", including porosity in velocity space, e.g., Owocki et al. 2004; Oskinova et al. 2007; Owocki 2008; Surlan et al. 2013; Sundqvist et al. 2010, 2011, 2014) can lead to additional changes, even when the clumps remain optically thin for the majority of diagnostics/wavelengths. This is because important transitions such as the Lyman ionization continuum and/or the Lyman lines become optically thick much more easily than other processes (whenever neutral hydrogen is non-negligible), and are then desaturated because of porosity effects (for an instructive visualization of such effects, see Brands et al. 2022). Consequently, the hydrogen ionization and excitation may change, leading to a change in the global radiation field and (wind) plasma conditions14. Potentially affected are, in particular, the winds from massive late-type B and A stars, where this effect might explain certain shortcomings in the current modeling of important wind diagnostics such as H\({}_{\alpha}\) from such objects. Test calculations for O-type stars (Sundqvist & Puls 2018), on the other hand, indicate that in their parameter domain this should pose no problem, since hydrogen remains highly ionized. Thus, in the following, we will consider exclusively optically thin clumping.
Footnote 14: As long as clumps are optically thick only for specific transitions from trace ions or less abundant atomic species, porosity will affect the corresponding diagnostics (e.g., the UV PV-diagnostics, see Oskinova et al. 2007; Surlan et al. 2013; Sundqvist et al. 2010, 2011), but not the global atmospheric model and radiation field.
To this end, we compare three different clumping laws, \(f_{\rm cl}(r)\). First, we consider a linear increase of the clumping factor from
Figure 10: Comparison between stellar parameters obtained in the optical and the infrared. Upper left panel: effective temperature. The dashed lines represent \(\pm\)1000 K; upper right panel: logarithmic gravity (\(\pm\)0.1 dex). Star #9 has been slightly shifted from its value in Tab. 6; lower left panel: \(\log Q\) (\(\pm\)0.2 dex); lower right panel: \(Y_{\rm He}\) (\(\pm\)0.03); stars #10 and 8 have been slightly displaced from their values in Tab. 6. Numbers indicate the stars as listed in Tab. 1.
unity (smooth density in the photosphere/lowermost wind) to a maximum value between two points in the wind. After reaching this maximum, the clumping factor is assumed to remain constant. We call this the "linear law". The second law is the one suggested by Hillier et al. (2003) (Hillier law), where the clumping factor15 follows an exponential increase (as a function of velocity) until it reaches a maximum (and then stays constant). Finally, our third law is based on Najarro et al. (2011) (Najarro law) and is similar to the Hillier law in the lower wind, but includes an exponentially decreasing \(f_{\rm cl}(r)\) beyond its maximum. Fig. 11 illustrates the different laws. The "Najarro law" is motivated by results from a combined analysis of UV, optical, NIR and L-band (including Br\({}_{\alpha}\)) spectra for a small O-star sample, as well as an NIR analysis of massive stars in the Quintuplet Cluster (Najarro et al. 2009), and turns out to be quite similar to theoretical predictions from radiation-hydrodynamic simulations including the LDI (e.g., Runacres & Owocki 2002, 2005).
Footnote 15: in fact, Hillier and coworkers adopt the volume filling factor as the basic quantity
The considered clumping laws are, among others, implemented in Fastwind, and require specific input parameters, as detailed in the following (a numerical sketch of all three laws is given after the list):
* the linear law is characterized by three parameters, \(f_{\rm cl}^{\rm max}\), \(v_{1}\), and \(v_{2}\), \[f_{\rm cl}(v)=\begin{cases}1&\mbox{for}\quad v(r)<v_{1}\\ 1+(f_{\rm cl}^{\rm max}-1)\left(\dfrac{v(r)-v_{1}}{v_{2}-v_{1}}\right)&\mbox{for}\quad v_{1}\leq v(r)\leq v_{2}\\ f_{\rm cl}^{\rm max}&\mbox{for}\quad v_{2}<v(r)\end{cases}\tag{3}\] where \(f_{\rm cl}^{\rm max}\) is the maximum value of the clumping factor, \(v_{1}\) is the wind velocity at clumping onset (restricted to be larger than or equal to the speed of sound), and \(v_{2}\) is the velocity where maximum clumping is reached.
* The Hillier law requires two input parameters and is expressed as \[f_{\rm V}(v)=f_{\rm V}^{\infty}+(1-f_{\rm V}^{\infty})\cdot\exp\left(-\frac{v(r)}{v_{\rm cl1}}\right),\tag{4}\] where \(f_{\rm V}\leq 1\) is the volume filling factor (equal to the inverse of \(f_{\rm cl}\) when the interclump medium is assumed to be void, as frequently done). The two parameters defining this relation are \(f_{\rm V}^{\infty}\), the filling factor when the wind velocity reaches the terminal velocity (corresponding to \(1/f_{\rm cl}^{\rm max}\) in our tests), and \(v_{\rm cl1}\), which marks the point where clumping begins to become important and controls how fast the function reaches its minimum. In this law, clumping begins to increase directly from the bottom of the photosphere on, but becomes significant only for \(v\gtrsim v_{\rm cl1}\).
* the Najarro law is formulated as \[f_{\rm V}(v)=f_{\rm V}^{\infty}+(1-f_{\rm V}^{\infty})\cdot\exp\left(-\frac{v(r)}{v_{\rm cl1}}\right)+(1-f_{\rm V}^{\infty})\cdot\exp\left(-\frac{v_{\infty}-v(r)}{v_{\rm cl2}}\right),\tag{5}\] where \(f_{\rm V}^{\infty}\) and \(v_{\rm cl1}\) are the same quantities as in Hillier's law, whereas \(v_{\rm cl2}\) prescribes how fast the filling factor increases again after reaching its minimum (i.e., how fast \(f_{\rm cl}\) decreases after reaching its maximum). The above clumping law has been modified compared to the original formulation by Najarro et al. (2011), enforcing an unclumped outermost wind region with \(f_{\rm V}(v\approx v_{\infty})\to 1\).
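For reference, here is a minimal numerical sketch of the three laws (parameter defaults loosely following Table 7; the conversion \(f_{\rm cl}=1/f_{\rm V}\) assumes a void interclump medium, and the function names are ours):

```python
import numpy as np

def f_cl_linear(v, v_inf, f_max=10.0, x1=0.10, x2=0.50):
    """Linear law, Eq. (3), with v1 = x1*v_inf and v2 = x2*v_inf."""
    v1, v2 = x1 * v_inf, x2 * v_inf
    return np.clip(1.0 + (f_max - 1.0) * (v - v1) / (v2 - v1), 1.0, f_max)

def f_cl_hillier(v, f_v_inf=0.095, v_cl1=200.0):
    """Hillier law, Eq. (4), converted from filling factor to f_cl."""
    return 1.0 / (f_v_inf + (1.0 - f_v_inf) * np.exp(-v / v_cl1))

def f_cl_najarro(v, v_inf, f_v_inf=0.095, v_cl1=200.0, v_cl2=100.0):
    """Najarro law, Eq. (5): like Hillier in the lower wind, but the
    clumping decreases again toward v_inf (f_V -> 1 in the outermost wind)."""
    f_v = (f_v_inf
           + (1.0 - f_v_inf) * np.exp(-v / v_cl1)
           + (1.0 - f_v_inf) * np.exp(-(v_inf - v) / v_cl2))
    return 1.0 / f_v

v = np.linspace(1.0, 1200.0, 7)      # cf. Fig. 11: v_inf = 1200 km/s
print(f_cl_linear(v, 1200.0))
print(f_cl_hillier(v))
print(f_cl_najarro(v, 1200.0))
```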
Table 7 displays the various parameters adopted for our forthcoming tests. Overall, in the current section, we consider four different linear laws16, two variants of the Hillier law, and one of the Najarro law. For \(v_{1}\) and \(v_{2}\) (linear law) we adopt a compromise based on the range of values provided by Najarro et al. (2011), and fix these quantities as specific fractions of the terminal velocity. In this way, our \(v_{1}\) and \(v_{2}\) values (absolute velocities) are consistent with the ranges obtained by Najarro et al. In summary, all \(v_{1}\) values have been fixed to 10% of \(v_{\infty}\) (see below), whilst \(v_{2}\) varies between 25 and 94% of \(v_{\infty}\).
Footnote 16: Table 7 contains also two additional linear laws that will be considered in Sects. 6 and 7.
When inspecting the current literature, the \(f_{\rm cl}^{\rm max}\) parameter covers a large range, from close to unity (unclumped) to values as high as 100. Here, we will test the values \(f_{\rm cl}^{\rm max}=[10,\,20]\), following Table 2 in Najarro et al. (2011). Obviously, such an approach has only an exploratory character, since it is highly unlikely that all or most stars follow such a restricted combination of the various parameters. Once we understand better how the profile fits and the derived stellar parameters react to clumping, we will be in a good position to consider at least \(f_{\rm cl}^{\rm max}\) as a free parameter in our fitting approaches covering the IR band. Such studies have already started in analyses of the combined optical and UV regime, cf. Hawcroft et al. (2021), Brands et al. (2022).
In our specific models based on the Hillier and Najarro laws, we adopt values that result in a similar maximum as the linear law with \(f_{\rm cl}^{\rm max}=10\), and a similar increase toward this maximum (see Fig. 11). We check two Hillier laws, with the Hillier\({}_{100}\) increasing faster toward maximum clumping than Hillier\({}_{200}\) (similar to Linear\({}_{10-025}\) vs. Linear\({}_{10-050}\)). We finally note that the quantitative behavior of the Hillier and Najarro laws, when expressed in terms of a radial coordinate, strongly depends on the adopted velocity law (\(v_{\infty}\) and \(\beta\)).
### FASTWIND coarse grid
Before analyzing the impact of clumping by means of a comparison between synthetic and actual spectra, we will test this impact for a small set of template models. To this end, we construct a coarse grid of models representing dwarfs, giants, and
Figure 11: Comparison between three different clumping laws investigated in the current work (see Tab. 7). Blue: Linear\({}_{10-050}\); orange: Hillier\({}_{200}\); cyan: Najarro\({}_{200}\). The example refers to a velocity law with \(v_{\infty}\)= 1200 km s\({}^{-1}\)and \(\beta\) = 0.8.
supergiants at different temperatures (hot, mid, and cool), resulting in nine models covering the O-star parameter range. In Fig. 12 we display these models in the \(\log T_{\rm eff}-\log g\) plane, to illustrate the corresponding evolutionary stages. Table 8 lists these coarse-grid models. All models have the same (solar) helium abundance and \(\beta\) velocity-field exponent. In the following sections, we will discuss all coarse-grid models and the corresponding synthetic spectra resulting from the application of the various clumping laws, and investigate and compare their specific impact.
### Clumping versus no clumping
First, we explore some general clumping effects by means of our coarse grid. Clumping modifies both the radiative transfer and the atomic occupation numbers (because of the altered density and radiation field), thus affecting the ionization equilibrium of all elements and consequently the stellar/wind parameters derived from model fits. Even though in our approach (at least) the subsonic stratification remains smooth, the photospheric lines might also be affected by clumping, to varying extents. This change is caused by the modified occupation numbers resulting from a modified inward-directed radiation field, and particularly
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \hline Clumping law / label & \(f_{\rm cl}^{\rm max}\) & \(v_{1}/v_{\infty}\) & \(v_{2}/v_{\infty}\) & discussed/used \\ \hline \hline Linear\({}_{10-025}\) & 10 & 0.1 & 0.25 & Sects. 5-7 \\ \hline Linear\({}_{10-050}\) & 10 & 0.1 & 0.50 & Sects. 5-7 \\ \hline Linear\({}_{20-040}\) & 20 & 0.1 & 0.40 & Sect. 5 \\ \hline Linear\({}_{20-094}\) & 20 & 0.1 & 0.94 & Sect. 5 \\ \hline Linear\({}_{20-025}\) & 20 & 0.1 & 0.25 & Sects. 6-7 \\ \hline Linear\({}_{20-050}\) & 20 & 0.1 & 0.50 & Sects. 6-7 \\ \hline \hline Clumping law & \(f_{\rm cl}^{\rm max}\) & \(f_{\rm V}^{\infty}\) & \(v_{\rm cl1}\) [km s\({}^{-1}\)] & \(v_{\rm cl2}\) [km s\({}^{-1}\)] \\ \hline Hillier\({}_{100}\) & 10.5 & 0.095 & 100. & – \\ \hline Hillier\({}_{200}\) & 10.5 & 0.095 & 200. & – \\ \hline Najarro\({}_{200}\) & 10.3 & 0.095 & 200. & 100. \\ \hline \hline \end{tabular}
\end{table}
Table 7: The clumping laws used in our analyses. See text.
Figure 12: Coarse-grid models in the \(\log T_{\rm eff}-\log g\) diagram, chosen to be representative for hot dwarfs to “cool” supergiants in the O-star regime. Overplotted in green are evolutionary tracks for Galactic nonrotating stars from Ekström et al. (2012), and the blue line defines the corresponding ZAMS. The numbers give the initial stellar masses in units of \(M_{\odot}\).
Figure 13: Comparison of synthetic H\({}_{\alpha}\) profiles, for models with \(T_{\rm eff}=36\,000\) K, \(\log g=\)3.40, and differing wind-clumping properties. Black: unclumped wind with \(\log Q=\)–11.90. Orange: clumped model with the same mass-loss rate/wind strength parameter, using the Linear\({}_{10-025}\) law. Red: clumped model with the same clumping properties/clumping law, but a mass-loss rate reduced by \(\sqrt{f_{\rm cl}^{\rm max}}\). Green and blue: same as the orange and red models, respectively, but using the Linear\({}_{10-050}\) law (the orange and green profiles are nearly coincident). All profiles have been broadened by \(V\sin i=V_{\rm mac}=50\) km s\({}^{-1}\), adopting a resolving power of 12 000.
because of a modified filling of the absorption cores due to a different wind structure.
As already indicated, the most prominent effect is an increase of the emission in lines such as H\({}_{\alpha}\). To obtain a similar emission in the clumped and unclumped case (to provide us with a similar fit quality when performing a hypothetical fit), we need to divide the unclumped mass-loss rate by \(\sqrt{f_{\rm cl}}\) (see Eq. 2); clearly, this is only an approximation, because of the radial dependence of \(f_{\rm cl}\). This means that the wind strength parameter \(Q\) for an unclumped wind will be (roughly) equivalent to a value \(Q^{\prime}=Q/\sqrt{f_{\rm cl}}\) for the clumped case, where in our approach we approximate \(f_{\rm cl}\) by its maximum value, \(f_{\rm cl}^{\rm max}\).
Figure 13 illustrates the potentially strong impact of clumping on the H\({}_{\alpha}\) line emission, by means of our grid model with \(T_{\rm eff}\) = 36 000 K, \(\log g\) = 3.40, and an (unclumped) \(\log Q=-11.90\) (corresponding to the "mid-temperature supergiant" model). This unclumped model (profile in black) is compared with four clumped ones. For two of those, we have used the Linear\({}_{10-025}\) (in red) and the Linear\({}_{10-050}\) law (in blue; for designations see Tab. 7) together with a reduced mass-loss rate, \(\log Q^{\prime}=-12.4\) (because of \(f_{\rm cl}^{\rm max}=10\)), to obtain a roughly equivalent emission. As can be seen, all three H\({}_{\alpha}\) lines are indeed fairly similar. The blue one (with \(v_{2}=0.5\,v_{\infty}\)) displays a somewhat lower emission close to the core, because a large part of the lower/intermediate wind has a lower effective mass-loss rate than in the model underlying the red profile, where \(f_{\rm cl}^{\rm max}\) is reached already at \(0.25\,v_{\infty}\). The other two profiles (in green and orange) have been calculated from clumped models with clumping properties identical to the above, but now with the same mass-loss rate as in the unclumped case. The large difference is obvious, with an H\({}_{\alpha}\) emission roughly corresponding to that of a smooth model with wind strength parameter \(\log(Q\sqrt{f_{\rm cl}^{\rm max}})=-11.4\). Here, both clumping laws deliver almost identical profiles, since due to the larger densities the line formation zone shifts to the outer wind, where both clumping laws are identical (\(f_{\rm cl}\approx f_{\rm cl}^{\rm max}\)).
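For clarity, the equivalent wind strength quoted above follows directly from Eq. (2):
\[\log\left(Q\,\sqrt{f_{\rm cl}^{\rm max}}\right)=\log Q+\tfrac{1}{2}\log f_{\rm cl}^{\rm max}=-11.90+0.50=-11.40.\]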
Figure 14 shows, for the same mid-supergiant parameters, the differences between the unclumped and clumped (scaled \(\dot{M}\)!) models, in selected optical and NIR spectral lines. Though, as discussed above, the H\({}_{\alpha}\) emission remains almost identical, other lines react differently. Br\({}_{10}\) (and also Br\({}_{11}\), not shown) displays a weaker (and broader) absorption core for the clumped models, and Br\({}_{\gamma}\) is affected even more strongly: whereas the unclumped model displays a slightly blue-shifted absorption, the clumped ones show a narrow central emission (red profile), or only weak absorption plus emission in the core region (blue profile).
Unlike these NIR H-lines, H\({}_{\beta}\) (clumped) presents more absorption in the core, which is also true for the He i lines in both wavelength regimes. Since in the considered parameter range the dominant helium ion is He ii (for the main part of the wind), He ii lines behave similarly to H lines when they are dominated by recombination processes: in the NIR, He ii \(\lambda\)1.69 (not shown) and \(\lambda\)2.18\(\mu\)m show increased emission in the core (though on different scales), whilst He ii \(\lambda\)4686 remains mostly unchanged for the Linear\({}_{10-025}\) law, in analogy to H\({}_{\alpha}\). For Linear\({}_{10-050}\) the emission is clearly weaker, because of the lower effective mass-loss rate. In cooler winds, when He ii is no longer dominant, He ii \(\lambda\)4686 will behave differently from H\({}_{\alpha}\) (see Kudritzki et al. 2006).
For most lines, the line formation regions will be altered as a consequence of the different density structure in the clumped models. In particular, the increased absorption of many lines can be explained by their formation in the inner layers, before clumping plays a decisive role. In those cases, the dominant effect will be the decreased mass-loss rate in the clumped, \(\dot{M}\)-scaled models, resulting in a deeper absorption (less refilling than in the smooth models with larger \(\dot{M}\)).
Likewise, the line emission at the cores of Br\({}_{\gamma}\) and He ii \(\lambda\)2.18\(\mu\)m is an (indirect) consequence of the modified formation depth. Concentrating on Br\({}_{\gamma}\), we see at first that the wind emission in the line wings is almost identical for all three wind structures17, implying that such emission forms in the intermediate/outer wind where our scaling via \(\dot{M}\sqrt{f_{\rm cl}^{\rm max}}=\) const is applicable.
Footnote 17: except for the He i component blueward from line center, which is stronger in the clumped, low-\(\dot{M}\) model, see above.
The differences at line center, on the other hand, relate to different NLTE conditions in the upper photosphere/lower wind. For the unclumped model (with larger \(\dot{M}\)), the apparent absorption results from a comparatively low source function, when the lower level, \(n=4\), becomes overpopulated compared to the upper one, \(n=7\). For the clumped models, with lower \(\dot{M}\) in the still smooth transonic region, we find a similar effect to that observed and modeled for Br\({}_{\alpha}\) from weak-winded O-stars (Najarro et al. 2011). Also here, the lower level becomes underpopulated compared to the upper one in the transonic region, increasing the source function considerably and resulting in a narrow emission peak close to line center. From test calculations with an analogous unclumped model with the same low mass-loss rate as the clumped models analyzed here, we find a similar emission peak (now inside a broad photospheric absorption; no wind emission in this case). To summarize, the central emission observed in various lines is often not directly related to clumping, but arises from specific NLTE effects in the upper photosphere when the line is formed in the transonic region, where its strength is highly dependent on the mass-loss rate.
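This population argument can be made explicit with the standard NLTE line source function for a transition between lower level \(l\) and upper level \(u\) (a textbook two-level relation, not specific to the models used here):
\[
S_{L}=\frac{2h\nu^{3}}{c^{2}}\left(\frac{n_{l}\,g_{u}}{n_{u}\,g_{l}}-1\right)^{-1},
\]
so that an underpopulation of the lower level (here \(n=4\)) relative to the upper one (\(n=7\)) drives the bracketed ratio toward unity and \(S_{L}\) steeply upward, producing the narrow central emission, whereas an overpopulated lower level depresses \(S_{L}\) and yields absorption.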
Finally, we note that the models presented in Fig. 13 and Fig. 14 show the overall strongest effects within all models of our coarse grid. In general, the supergiant models (hot, mid, and cool) display the most pronounced effects, whilst for giants we find smaller changes, becoming negligible for dwarf models.
### Which clumping law to use?
The calculation of a full model grid is a numerically expensive task. Thus, before analyzing the real spectra, we performed a series of tests using the coarse grid to evaluate the differences among the clumping laws provided in Table 7. Our aim is to minimize the computational effort when considering the full grid. Fig. 15 visualizes the changes in the H\({}_{\alpha}\) and Br\({}_{\gamma}\) profiles from the most sensitive supergiant models (see Tab. 8) when applying the different clumping laws.
#### 5.3.1 Hillier vs. Najarro
We first compare the Hillier\({}_{200}\) with the Najarro\({}_{200}\) clumping laws (orange and cyan in Fig. 15; see Table 7, Fig. 11 and Eqs. 4, 5). The main difference between both laws concerns the outer wind layers, after the maximum clumping factor in the Najarro\({}_{200}\) law has been reached. Thereafter, the clumping factor decreases toward unity (no clumping) in case of Najarro\({}_{200}\), whilst it continues to increase asymptotically for Hillier's law, reaching its maximum at the outer wind boundary.
In Fig. 15 we can see the impact of these two laws on H\({}_{\alpha}\) and Br\({}_{\gamma}\). Indeed, the line profiles are very similar, and corresponding giant and dwarf models display even lower, almost invisible differences. This is not only true for the above two lines, but also for all H and He lines considered in the current study (not shown here for brevity). We conclude that there are no significant differences when using either the Hillier or the Najarro law for the analysis of optical and NIR H/He spectra of typical O-stars.
The simple reason for these almost identical line shapes is that the lines have already formed when both laws begin to deviate18. Both NIR and optical lines are formed below \(2R_{*}\), and the influence of the clumping law beyond this point is irrelevant, in contrast to wavelengths in the UV, far-IR, or radio regimes, where corresponding diagnostics might form at much larger radii for sufficiently strong mass loss. For the purpose of our present work, however, we can restrict ourselves to one of these clumping laws, which, owing to its simplicity, is the one suggested by Hillier.
Footnote 18: The (small) differences between both laws in the inner wind (Fig. 11) do not play any role.
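The exponential laws themselves (Eqs. 4 and 5) are not reproduced in this excerpt. For orientation, a sketch of a commonly used Hillier-type parameterization (following Hillier & Miller 1999) is given below; the assumed functional form, the velocity scale \(v_{\rm cl}\) (plausibly the "200" in Hillier\({}_{200}\), in km s\({}^{-1}\)), and all names are our assumptions rather than the exact expressions of this work:

```python
import numpy as np

def v_beta(r, v_inf=2000.0, beta=0.8):
    """Standard beta-type velocity law; r > 1 in units of R*, v in km/s (assumed)."""
    return v_inf * (1.0 - 1.0 / r) ** beta

def f_cl_hillier(r, f_cl_max=10.0, v_cl=200.0, v_inf=2000.0, beta=0.8):
    """Hillier-type clumping factor (assumed form, see lead-in).

    The volume filling factor decays from 1 toward 1/f_cl_max on the
    velocity scale v_cl, so f_cl rises monotonically and only reaches
    its maximum asymptotically in the outermost wind."""
    f_vol = 1.0 / f_cl_max + (1.0 - 1.0 / f_cl_max) * np.exp(
        -v_beta(r, v_inf, beta) / v_cl
    )
    return 1.0 / f_vol
```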
#### 5.3.2 Hillier vs. Linear
We now compare the profiles obtained from the Hillier\({}_{200}\) and the Linear\({}_{10-050}\) laws (orange vs. blue in Fig. 15; see Table 7). The differences between both laws (see Fig. 11) are larger than those considered in the previous subsection, though in the inner layers, where most of the optical and NIR lines are formed, they are quite similar. It is thus not surprising that the largest differences between the resulting line profiles, shown in Fig. 15, are moderate. Again, the largest differences are found for the supergiant models, particularly at "cool" temperatures, whereas the giant and dwarf models display no significant differences at all.
Figure 14: Clumping effects for selected optical and NIR lines, for a subset of the models from Fig. 13 (same broadening parameters). Here, we compare the smooth model (in black) with the clumped models with decreased (scaled) mass-loss rate, in red for the Linear\({}_{10-025}\) law, and in blue for the Linear\({}_{10-050}\) one.
Figure 15: H\({}_{\alpha}\) and Br\({}_{\gamma}\) profiles for clumped supergiant models using different laws (see legend). Mass-loss rates have been scaled according to \(f_{\rm cl}^{\rm max}\), and profiles have been broadened as in Fig. 13. Wavelengths are given in Å for H\({}_{\alpha}\) and in microns for Br\({}_{\gamma}\).
The already small discrepancies between the (supergiant) line profiles might become even smaller when the clumping laws are altered. In the same figure, we also display the results for the Hillier\({}_{100}\) and Linear\({}_{10-025}\) laws (dashed orange vs. red), i.e., when using lower values for the corresponding velocity parameters (in both cases, a factor of two lower than before). Now, the profile differences have almost vanished.
Summarizing, the prime differences between clumped and unclumped models mostly relate to the region of line formation and the overall clumping distribution, though not to the details of the specific clumping law (as long as there are enough parameters to describe the essential behavior).
Consequently, we conclude that for a first study, it is sufficient to consider only one family of clumping laws, and we decided to use the simple linear one.
### The Linear law: Varying the parameters
In the following, we explore the changes introduced when modifying the parameters of such linear laws. We concentrate on the maximum clumping factor, \(f_{\rm cl}^{\rm max}\), and the point where this maximum is reached, \(v_{2}\). We fix the point of clumping onset, \(v_{1}/v_{\infty}\) = 0.1, since this value has only a weak impact on the results as long as it is sufficiently small (0.1... 0.2), but larger than the speed of sound to keep the photosphere unclumped. This latter condition might need to be relaxed in forthcoming studies, given the possibility that also the photosphere might be affected by inhomogeneities (e.g., Puls et al. 2006b; Cantiello et al. 2009).
Figure 16 shows the four linear laws. For \(f_{\rm cl}^{\rm max}\), we consider two typical values, \(f_{\rm cl}^{\rm max}\)= 10 and 20. For \(f_{\rm cl}^{\rm max}\)=10, we choose two values for the point where this maximum is reached, namely \(v_{2}/v_{\infty}=0.25\) (Linear\({}_{10-025}\)) and 0.5 (Linear\({}_{10-050}\)), to simulate a rather steep and a moderate increase. To investigate the impact of clumping also in the outer wind (in addition to our considerations from Sect. 5.3.1) and in a systematic way, we proceed as follows. The \(v_{2}\) values for the \(f_{\rm cl}^{\rm max}\)=20-laws (first two of the corresponding entries in Table 7) are chosen such that the specific clumping factors are identical to their \(f_{\rm cl}^{\rm max}\)=10-counterparts in the inner wind, until \(f_{\rm cl}\)= 10 is reached, and then continue to increase until their maximum value, \(f_{\rm cl}^{\rm max}\)=20. This results in \(v_{2}/v_{\infty}\) = 0.40 (Linear\({}_{20-040}\)) and 0.94 (Linear\({}_{20-094}\)), respectively.
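A minimal sketch of such a linear law follows (our reading of the text: \(f_{\rm cl}\) interpolated linearly in the normalized velocity \(v(r)/v_{\infty}\) between the onset \(v_{1}\) and the saturation point \(v_{2}\); the interpolation variable is our assumption):

```python
import numpy as np

def f_cl_linear(x, f_cl_max=10.0, x1=0.10, x2=0.25):
    """Linear clumping law (Linear_10-025 by default).

    x = v(r)/v_inf; unclumped below the onset x1, linear rise to
    f_cl_max at x2, and constant at f_cl_max further out."""
    return np.clip(1.0 + (f_cl_max - 1.0) * (x - x1) / (x2 - x1), 1.0, f_cl_max)

# The four laws of Fig. 16:
laws = {
    "Linear_10-025": dict(f_cl_max=10, x2=0.25),
    "Linear_10-050": dict(f_cl_max=10, x2=0.50),
    "Linear_20-040": dict(f_cl_max=20, x2=0.40),  # inner wind ~ Linear_10-025
    "Linear_20-094": dict(f_cl_max=20, x2=0.94),  # inner wind ~ Linear_10-050
}
```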
Again, in Fig. 17 we only display the line profiles obtained for the supergiant models (with mass-loss rates scaled by the corresponding factor, \((f_{\rm cl}^{\rm max})^{-1/2}\)). First, we compare the H\({}_{\alpha}\) and Br\({}_{\gamma}\) profiles for the Linear\({}_{10-025}\) and Linear\({}_{20-040}\) laws (red vs. green), i.e., when the maximum clumping factor is reached in the inner wind layers, together with the profiles from the corresponding unclumped models (in black).
We see that the H\({}_{\alpha}\) profiles are similar for the unclumped and Linear\({}_{10-025}\) models, whereas the profiles for the Linear\({}_{20-040}\) law are somewhat different for the hot and mid-temperature supergiants, with less emission at lower velocities in the latter cases. This indicates that H\({}_{\alpha}\) is formed in a region where clumping fully compensates for the lower mass-loss rate in Linear\({}_{10-025}\) (i.e., beyond \(v(r)/v_{\infty}=0.25\)), but where this is not yet the case for the Linear\({}_{20-040}\) law. We conclude that the differences between the two clumped models are due to the formation of H\({}_{\alpha}\) between \(v(r)/v_{\infty}=0.25\)...\(0.40\).
In contrast to this behavior, the Br\({}_{\gamma}\) profiles of both clumped models show a strong central emission, very similar to each other, and differing from the (partly blue-shifted) absorption of the unclumped wind. Again, however, all emission wings are identical. In agreement with our argumentation from Sect. 5.2, we conclude that the wind emission in Br\({}_{\gamma}\) is mostly formed in layers where \(f_{\rm cl}\) has already reached its maximum value (i.e., beyond \(v(r)/v_{\infty}=0.4\)). The more central absorption or emission is controlled by the behavior of level \(n=4\) vs. level \(n=7\) in the transonic regime, with absorption for larger and emission for lower mass-loss rates. The redward Stark-absorption wing also becomes visible for the lowest mass-loss rate (Linear\({}_{20-040}\)).
A second comparison refers to Linear\({}_{10-050}\) (blue) vs. Linear\({}_{20-094}\) (magenta). Here the clumping degree increases more slowly with radius than above, and H\({}_{\alpha}\) is mostly formed before the maximum clumping factor is reached. As a consequence, the decrease in mass-loss rate produces a lower wind emission in both clumped models. The effect is stronger for Linear\({}_{20-094}\), because of the larger decrease in \(\dot{M}\). Now, also for Br\({}_{\gamma}\) the line wings deviate from each other, with decreasing impact of wind emission, and particularly Linear\({}_{20-094}\)
Figure 16: Four different linear clumping laws considered in our study, with the clumping factor as a function of stellar radius (in units of the photospheric radius, \(R_{*}\), with \(\beta\)= 0.8). Red solid line: Linear\({}_{10-025}\); dashed blue line: Linear\({}_{10-050}\); dashed green line: Linear\({}_{20-040}\); dashed magenta line: Linear\({}_{20-094}\). See text.
Figure 17: H\({}_{\alpha}\) and Br\({}_{\gamma}\) profiles for the different clumping laws as indicated, including smooth winds. Mass-loss rates of clumped models have been scaled according to \(f_{\rm cl}^{\rm max}\), and profiles have been broadened as in Fig. 13.
displays a line profile dominated by photospheric absorption. Consistent with our previous argumentation, the extent of the central emission remains fairly unaffected by the differences in clumping (though it depends on the actual mass-loss rate).
Comparing now all five models in parallel, we conclude that
1. the wind emission increases when the maximum clumping factor is reached in the inner wind layers. In such models, the lines are formed in regions where clumping already fully compensates for the decrease in mass-loss rate.
2. the maximum value \(f_{\rm cl}^{\rm max}\) is of less relevance whenever the clumping factor increases over an extended region. What actually matters is the value of the clumping factor in the line-forming region, together with the global mass-loss rate.
For the rest of our current study, and given its exploratory character, we will restrict our analysis to the linear clumping description. We will use the same Linear\({}_{10-025}\) and Linear\({}_{10-050}\) laws considered above. The two laws with \(f_{\rm cl}^{\rm max}=20\) as discussed in this section, however, are "only" linear extensions of these laws toward larger radii, studied to investigate potential effects from a highly clumped outermost wind. Since we argued that the decisive quantity is the value of the clumping factor in the line-forming region (often dominated by the lower and intermediate wind), in the next two sections we will use two alternative \(f_{\rm cl}^{\rm max}=20\)-laws (see below). In this way, we are able to simulate a larger diversity of potential line shapes and physical conditions, although this is still a severe simplification. For example, the most recent optical + UV analysis by Hawcroft et al. (2023) indicates a (maximum) clumping factor that increases with \(T_{\rm eff}\), and in future work a more extended parameter range (with respect to \(f_{\rm cl}^{\rm max}\) and \(v_{2}\)) needs to be examined also in the NIR.
## 6 FASTWIND clumping grid
For a (re)analysis of our optical and NIR observations using clumped models, we have calculated a full model grid and restricted ourselves to four clumping laws in total: the Linear\({}_{025}\) laws, with \([v_{1}/v_{\infty},v_{2}/v_{\infty}]=[0.1,\,0.25]\), and the Linear\({}_{050}\) laws with \([v_{1}/v_{\infty},v_{2}/v_{\infty}]=[0.1,\,0.50]\) (see Tab. 7), applying \(f_{\rm cl}^{\rm max}=10\) and \(f_{\rm cl}^{\rm max}=20\) in both cases. Table 9 shows the parameter ranges for the grids.
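To give a feeling for the size of such a grid, the raw Cartesian product of the ranges in Table 9 with the four clumping laws can be enumerated as follows (a sketch only; in practice, unphysical parameter combinations would be pruned, which Table 9 does not specify):

```python
from itertools import product

teff = range(22000, 55001, 1000)                       # 34 values
logg = [round(2.6 + 0.1 * i, 1) for i in range(18)]    # 2.6 ... 4.3
vmic = [5, 10, 15, 20]
yhe  = [0.06, 0.10, 0.15, 0.20, 0.25, 0.30]
logq = [-15.0, -14.0, -13.5, -13.0, -12.7,
        -12.5, -12.3, -12.1, -11.9, -11.7]
beta = [0.8, 1.0, 1.3]
laws = ["Linear_10-025", "Linear_10-050", "Linear_20-025", "Linear_20-050"]

n_models = sum(1 for _ in product(teff, logg, vmic, yhe, logq, beta, laws))
print(n_models)  # 1762560 raw combinations before any pruning
```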
### Analysis with the Linear\({}_{10-025}\) clumping law
The results of the analyses with the iacob_gbat tool for the Linear\({}_{10-025}\) law can be found in Tables 10 (for the optical spectrum) and 11 (for the near infrared). The corresponding fits are displayed in Fig. 18. Moreover, in Fig. 19 we compare, for all supergiants of our sample, the spectral fits for selected optical and NIR lines for all clumping laws discussed in the following (including the homogeneous wind).
From both figures, we can see that the fits have a similar global quality as those for the unclumped models, but there are specific differences worth mentioning. We stress already here that the parameters of the globally best-fitting clumped and unclumped models are different; thus, the changes will not only be due to clumping, but also due to the parameter changes produced by it.
For the hot supergiants we observe two major changes in the optical. The first one is a distinct improvement in the fit of H\({}_{\alpha}\) (see Fig. 19, red vs. black profiles). A similar improvement is not seen for H\({}_{\beta}\) (see Figs. 5 and 18), which fits slightly better in the red wing, but clearly worse in the core, due to less core-filling in the inner layers19. Upper Balmer lines remain unaffected.
Footnote 19: At least in this specific case, this might suggest a lower value for \(v_{1}\) than adopted throughout this work.
The second one is a deeper absorption in He ii \(\lambda 4541\) that improves the fit. However, the good fit for He ii \(\lambda 4686\) without clumping slightly deteriorates, again because of less emission in the forming layers. The cool supergiants do not present the same global improvement in H\({}_{\alpha}\), but there is a partial improvement. Moreover, the He lines, particularly He ii \(\lambda 4686\), also improve slightly, including a correction in the apparent shift in the line core between the observations and the unclumped profile. This differential behavior of He ii \(\lambda 4686\) in (dense) hot and cool winds is expected because of the change in the dominant ionization stage of helium, as explained earlier, and strengthens our warning about the use of a single clumping law for all stars. We conclude that the Linear\({}_{10-025}\) law improves H\({}_{\alpha}\) for the hot supergiants and improves the agreement between H\({}_{\alpha}\) and He ii \(\lambda 4686\) for the cool supergiants (but without reaching a good fit).
In the NIR, the fits to the spectra of the hot supergiant HD 15 570 and the cool one HD 210 809 improve considerably for Br\({}_{\gamma}\). The rest of the line fits also improve slightly in these stars, except for He ii \(\lambda\)2.18\(\mu\)m, which deteriorates significantly in HD 15 570. Unlike for the optical spectra, we now find changes also in the line fits for giants and dwarfs. Finally, there is a remarkably bad fit to the He i \(\lambda\)1.70\(\mu\)m line in the cool supergiants HD 30 614 and HD 210 809, both in the models with and without clumping. Thus, in the NIR the major improvement of using the Linear\({}_{10-025}\) law regards the Br\({}_{\gamma}\) line of some supergiants.
### Analysis with the Linear\({}_{10-050}\) clumping law
The results from the analysis of our stellar sample with the Linear\({}_{10-050}\) law (which reaches the maximum clumping factor, \(f_{\rm cl}^{\rm max}\), further out than Linear\({}_{10-025}\) from the previous section) can be found in Tables A.1 (for the optical spectrum) and A.2 (for the infrared). The corresponding best fits are displayed in the appendix. For a comparison of supergiant fits, we refer again to Fig. 19.
As for the Linear\({}_{10-025}\) law, there is an improvement in the fit of H\({}_{\alpha}\) from the hot supergiants, which in fact show an excellent agreement in all optical lines. The cool supergiants HD 209 975 and HD 30 614 do not change significantly. But now we see a much better fit for H\({}_{\alpha}\) for the cool supergiant HD 210 809, indicating that the clumping distribution is more extended in this star than represented by Linear\({}_{10-025}\). For the rest of the sample (giants and dwarfs), we obtain similarly good fits with the Linear\({}_{10-050}\) law as with a homogeneous wind model.
| Parameter | Range of values |
| --- | --- |
| \(T_{\rm eff}\) [K] | 22 000–55 000 (stepsize 1000 K) |
| \(\log g\) [\(g\) in cgs] | 2.6–4.3 (stepsize 0.1 dex) |
| \(v_{\rm mic}\) [km s\({}^{-1}\)] | 5, 10, 15, 20 |
| \(Y_{\rm He}\) | 0.06, 0.10, 0.15, 0.20, 0.25, 0.30 |
| \(\log Q\) | -15.0, -14.0, -13.5, -13.0, -12.7, -12.5, -12.3, -12.1, -11.9, -11.7 |
| \(\beta\) | 0.8, 1.0, 1.3 |

Table 9: Range of parameters used to produce the grids of synthetic profiles for clumped winds with different clumping stratifications. As for the smooth wind grid, the metallicity composition is solar. \(\log Q\) values refer to unclumped winds. Units as in Tab. 2.
In the NIR, the fits to Br\({}_{\gamma}\) of HD 15 570 and HD 210 809 improve again significantly compared to the unclumped models. For all stars, the fits to Br\({}_{\gamma}\) and other H and He NIR lines (except for He ii \(\lambda\)2.18\(\mu\)m) improve slightly. An exception is He i \(\lambda\)1.70\(\mu\)m in the hot dwarf HD 46 223, where the insufficient quality is a consequence of the hotter temperature resulting from the global fit parameters when using Linear\({}_{10-050}\). This line also remains very badly fitted in the cool supergiants HD 30 614 and HD 210 809. Summarizing, the global line-profile fits in the NIR do not seem to be strongly affected by the different clumping distributions when using \(f_{\rm cl}^{\rm max}=10\).
### Analysis with the \(f_{\rm cl}^{\rm max}\) = 20 clumping laws
In this subsection, we analyze whether a higher (maximum) clumping factor enables an improvement in the fit quality for our sample. We compare here the fits of Linear\({}_{20-025}\) and Linear\({}_{20-050}\) with their corresponding counterparts, Linear\({}_{10-025}\) and Linear\({}_{10-050}\), as described above. The fits with these laws can be seen in the appendix and again in Fig. 19 for the supergiants, and the derived parameters in Tables A.3–A.6.
The changes when using the Linear\({}_{20-025}\) law as compared to the Linear\({}_{10-025}\) one are mostly minor. The most affected line is of course H\({}_{\alpha}\), with significant changes seen in the supergiants, reflecting their higher sensitivity to clumping: the fit improves for HD 210 809 and worsens for HD 14 947. HD 30 614 shows significant changes, improving the fit in the emission peak and the red wing but worsening it in the blue wing. HD 15 570 does not show changes in H\({}_{\alpha}\), but the blue wings of the other Balmer lines are slightly less well reproduced with the Linear\({}_{20-025}\) law. Only the supergiant HD 209 975 remains almost unchanged. The optical wind lines of HD 14 947, HD 30 614 and HD 210 809 are the most sensitive to changes in the clumping law.
In the NIR there are few changes. The Br\({}_{\gamma}\) line of dwarfs and giants is marginally affected in many cases. Other lines do not change, except He i \(\lambda\)1.70\(\mu\)m in HD 203 064 and in HD 217 086, with an improvement in both cases.
For the supergiants, the Br\({}_{\gamma}\) line of HD 15 570 deteriorates when using Linear\({}_{20-025}\). For HD 14 947, the fit to Br\({}_{10}\) and He i 1.70\(\mu\)m improves, but Br\({}_{11}\) (not displayed) and the He ii 2.18\(\mu\)m become worse. The cool supergiants are not significantly affected.
When comparing the profile fits in the optical using the laws with larger \(v_{2}\), Linear\({}_{20-050}\) vs. Linear\({}_{10-050}\), we also find only small changes. H\({}_{\alpha}\) is the most affected line, usually with more core absorption (or less emission). Concentrating on the supergiants, there is a clearly worse fit in HD 14 947 and HD 210 809 when using the Linear\({}_{20-050}\) law. HD 15 570 and HD 30 614 show an improvement in the red and a worsening in the blue wing, with the former object also showing an improvement in the emission peak. Other lines are not significantly affected.
In the NIR, Br\({}_{\gamma}\) shows small changes for nearly all stars. The weak changes for the supergiants HD 15 570 and HD 14 947 are now similar to those in the dwarfs HD 46 223 and HD 46 150. The other Brackett lines display a mixed behavior, as do the He i lines. In particular, He ii \(\lambda\)1.69\(\mu\)m (not shown), which is usually not affected, changes in HD 210 809, where it shows a better fit with the Linear\({}_{20-050}\) law. Overall, this time a larger number of stars present changes, but these are small compared to differences with homogeneous-wind profiles.
| Star | \(T_{\rm eff}\) (kK) | \(\log g\) (dex) | \(\log Q\) | \(Y_{\rm He}\) | \(v_{\rm mic}\) (km s\({}^{-1}\)) | \(\beta\) |
| --- | --- | --- | --- | --- | --- | --- |
| HD46223 | 43.4 ± 0.9 | 3.83 ± 0.07 | -13.1 ± 0.1 | 0.10 ± 0.03 | 10.2 ± 5.2 | > 1.0 |
| HD15629 | 42.3 ± 1.8 | 3.78 ± 0.10 | -13.1 ± 0.1 | 0.12 ± 0.03 | 12.4 ± 7.4 | > 1.0 |
| HD46150 | 40.0 ± 0.8 | 3.80 ± 0.08 | -13.4 ± 0.2 | 0.10 ± 0.03 | < 11.8 | > 0.8 |
| HD217086 | 37.0 ± 1.0 | 3.60 ± 0.10 | -13.5 ± 1.1 | 0.11 ± 0.03 | 12.4 ± 7.4 | 1.0 ± 0.2 |
| HD149757 | 32.5 ± 0.9 | 3.82 ± 0.17 | -14.0 ± 1.0 | 0.11 ± 0.03 | 12.0 ± 7.0 | > 0.8 |
| HD190864 | 37.2 ± 0.8 | 3.60 ± 0.10 | -13.1 ± 0.1 | 0.12 ± 0.03 | 10.4 ± 5.4 | > 0.8 |
| HD203064 | 35.0 ± 0.5 | 3.50 ± 0.06 | -13.1 ± 0.1 | 0.10 ± 0.03 | > 13.7 | 1.0 ± 0.2 |
| HD15570 | 39.8 ± 0.6 | 3.48 ± 0.07 | -12.4 ± 0.1 | 0.10 ± 0.03 | < 19.9 | 1.1 ± 0.1 |
| HD14947 | 38.0 ± 0.2 | 3.50 ± 0.03 | -12.5 ± 0.1 | 0.14 ± 0.03 | < 11.3 | > 1.2 |
| HD30614 | 29.1 ± 0.2 | < 2.83 | -12.6 ± 0.1 | > 0.20 | > 18.4 | 1.1 ± 0.1 |
| HD210809 | 31.0 ± 0.8 | 3.05 ± 0.12 | -12.7 ± 0.1 | > 0.13 | > 14.9 | 1.1 ± 0.2 |
| HD209975 | 31.5 ± 0.6 | 3.26 ± 0.09 | -13.1 ± 0.2 | 0.10 ± 0.03 | < 12.1 | 1.0 ± 0.2 |

Table 10: Stellar parameters obtained from the optical analysis using the Linear\({}_{10-025}\) clumping law. Upper and lower limits refer to the corresponding parameter ranges of our model grids only (see Table 9).

| Star | \(T_{\rm eff}\) (kK) | \(\log g\) (dex) | \(\log Q\) | \(Y_{\rm He}\) | \(v_{\rm mic}\) (km s\({}^{-1}\)) | \(\beta\) |
| --- | --- | --- | --- | --- | --- | --- |
| HD46223 | 42.7 ± 1.7 | 3.83 ± 0.10 | -14.1 ± 1.4 | < 0.10 | > 5.0 | < 1.3 |
| HD15629 | 40.8 ± 1.2 | 3.85 ± 0.10 | -13.0 ± 1.3 | 0.10 ± 0.03 | < 19.9 | > 0.8 |
| HD46150 | 39.5 ± 0.8 | 3.85 ± 0.11 | -13.1 ± 0.2 | < 0.08 | 12.1 ± 7.1 | > 0.9 |
| HD217086 | 36.8 ± 1.1 | 3.88 ± 0.11 | -14.2 ± 1.3 | 0.13 ± 0.07 | > 5.0 | > 0.8 |
| HD149757 | 32.5 ± 1.6 | 3.52 ± 0.22 | < -13.3 | 0.13 ± 0.07 | > 9.4 | < 1.3 |
| HD190864 | 37.5 ± 1.0 | 3.85 ± 0.10 | -12.9 ± 0.1 | > 0.16 | | |

Table 11: Stellar parameters obtained from the NIR analysis using the Linear\({}_{10-025}\) clumping law. Upper and lower limits refer to the corresponding parameter ranges of our model grids only (see Table 9).
Taking all our findings together, we conclude that, globally, clumping sometimes has a positive impact on the fits to H\({}_{\alpha}\), Br\({}_{\gamma}\), and He ii \(\lambda\)4686 in supergiants. The impact may depend on the particular clumping law chosen, although the differences between the clumping laws explored are small (or even absent for most lines), and they do not offer a clear indication of which one better represents the distribution of inhomogeneities in the stellar wind. While in most cases the \(f_{\rm cl}^{\rm max}=10\) linear laws show a better fit, we also find many counter-examples. This indicates that more work is needed to determine the actual clumping distribution in these stars.
## 7 Discussion of results: Impact of clumping laws on optical and NIR analyses
We are now able to compare the derived parameters, to see whether the introduction of the different clumping laws modifies our determinations or improves the agreement between optical and infrared stellar parameters. We will not consider microturbulence and \(\beta\) exponent, as they remain basically unrestricted in our analyses.
We begin by comparing the results obtained from the optical analysis for the effective temperature. Fig. 20 (upper left) shows the comparison for all five clumping laws (homogeneous wind and four linear laws, as discussed above). We see that the values for all stars are fully consistent for almost all laws within the uncertainties. The only exception is for the \(v_{2}/v_{\infty}=0.5\) laws in HD 15 570, which give a slightly lower \(T_{\rm eff}\). Thus, the temperature determination in the optical is not significantly affected by the presence of clumping or by differences in the clumping distribution (as far as it concerns the laws used in this work).
The comparison of the gravities obtained from the optical analyses is presented in Fig. 20 (upper right). The difference between dwarfs and giants on the one side, and supergiants on the other, is obvious. For giants and dwarfs we obtain similar values of \(\log g\), independent of the clumping laws used (including the absence of clumping). The uncertainties are also similar (though significantly larger for the fast rotators, as could be expected). However, and except for HD 209 975, the situation is different for the supergiants. Here, the unclumped values are always larger than the clumped ones, and depend on the specific clumping law; in the cases of HD 15 570 and HD 14 947, quite significantly. This is a consequence of the lower mass-loss rate implied by clumping, which renders the red wings of the Balmer lines and the cores of the He i lines deeper. A lower gravity (sometimes accompanied by a lower \(T_{\rm eff}\)) compensates for this effect.
We compare the results for the wind parameter \(\log Q\) in Fig. 20 (lower left). As we have used the same wind terminal velocity and stellar radius, this quantity is equivalent to the mass-loss rate. We see that the unclumped models give higher mass-loss rates, and that the correction increases with the maximum clumping factor, as expected. The mean differences in \(\log Q\) are
Figure 18: As Fig. 5 using the clumping law Linear\({}_{10-025}\).
somewhat below the "nominal" values of 0.5 (\(f_{\rm cl}^{\rm max}=10\)) and 0.65 dex (\(f_{\rm cl}^{\rm max}=20\)), with actual differences of 0.39\(\pm\)0.05 and 0.33\(\pm\)0.05 for the \(f_{\rm cl}^{\rm max}=10\) laws, and 0.54\(\pm\)0.05 and 0.48\(\pm\)0.08 for the \(f_{\rm cl}^{\rm max}=20\) laws, indicating that the diagnostic lines form before the maximum clumping factor is reached.
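For reference, the "nominal" shifts quoted above follow directly from the adopted mass-loss scaling, \(\dot{M}\propto(f_{\rm cl}^{\rm max})^{-1/2}\):
\[
\Delta\log Q_{\rm nom}=\tfrac{1}{2}\log_{10}f_{\rm cl}^{\rm max}\approx 0.50\ \ (f_{\rm cl}^{\rm max}=10)\qquad{\rm and}\qquad\approx 0.65\ \ (f_{\rm cl}^{\rm max}=20).
\]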
The helium abundances are compared in Fig. 20 (lower right). All determinations (unclumped and clumped models) result in equal values within typical uncertainties, again except for the supergiants, where the dispersion is larger and lower limits become frequent. Although most values are still consistent within their uncertainties, in two cases they are not. The largest discrepancy is found for the cool and bright supergiant HD 30 614, which gives a higher He abundance when clumping is included. This higher He abundance is the result of a better fit to the He lines after the changes produced in H\({}_{\alpha}\). The second strong discrepancy is seen in HD 210 809, where the He abundance obtained with the Linear\({}_{20-050}\) law is much larger than any other value.
Regarding the optical diagnostics, we conclude that only specific stellar parameters might be affected by different assumptions on the clumping conditions: gravities determined for supergiants, helium abundances in peculiar cases like HD 30 614, and wind strengths (beyond their explicit dependence on \(f_{\rm cl}^{\rm max}\)) for supergiants and rapidly rotating dwarfs.
The temperature values obtained from the analysis in the NIR are, in general, again consistent when introducing clumping (Fig. 21, upper left). The exception is the behavior of the hottest star, HD 46 223. Here, the models with \(v_{2}=0.5\,v_{\infty}\) give a temperature much higher than those with \(v_{2}=0.25\,v_{\infty}\) and the homogeneous wind models. The high temperature produces a bad fit to the He i lines, compensated by a slightly better fit to all other lines. This is then due to the lack of sufficient constraints for such a case. Giving more weight to He i 1.70\(\mu\)m (as the only diagnostics strongly constraining the ionization balance) would reduce the discrepancy, as we have verified.
The gravity values show a large dispersion, even for dwarfs, though they are all consistent within their 1-\(\sigma\) uncertainties (which are sometimes quite large, see Fig. 21, upper right). This is a consequence of the poorer fits to the Brackett lines (see Sect. 3.2.2), together with a more limited number of diagnostics.
This combination also produces a complicated behavior for the \(\log Q\) wind parameter. The general behavior is similar to the optical case, namely the \(\log Q\) values obtained with clumping are lower than the ones derived from a homogeneous wind model, though with a larger number of upper limits and larger error bars. This is particularly obvious for the rapidly rotating dwarfs. In a few peculiar cases we find clumped values that are larger than the unclumped ones. This happens for the Linear\({}_{10-050}\) law in HD 15 629, HD 14 947 and HD 210 809 (in the latter star, also for Linear\({}_{10-025}\) and Linear\({}_{20-050}\)). Though in all cases the values are consistent within the uncertainties (and even consistent with the expected behavior), the apparent problems result from the loss of
Figure 19: Comparison of spectral fits to selected optical and NIR lines from the supergiants of our sample. Observations: gray; synthetic profiles from best-fitting models without (black) and with clumping using various clumping laws (for color-coding, see legend). The stellar and wind parameters of the individual best-fitting models are provided in Tables 5, 10, A.1, A.3, A.5 for the optical lines, and in Tables 6, 11, A.2, A.4, A.6 for the NIR lines.
information produced by poorer fits. In fact, an actual determination of \(\log Q\) for all clumping laws is possible only for supergiants (except for HD 209 975, for which only upper limits could be derived). With all these uncertainties, the He abundance remains nearly unconstrained.
The conclusion is that the impact of clumping on the derived parameters is similar in the NIR and in the optical. The NIR shows a larger scatter in the global trends and more upper/lower limits. A second conclusion is that, compared to the optical, the H and K band lines in the NIR do not offer us a clear advantage in characterizing the clumping. However, this conclusion depends (until further evidence) on our assumptions about the shape of the clumping law and its parameters; the impact on the different lines will depend on the behavior of the clumping law in the line formation region.
We are now in a position to finally address the question of whether the introduction of clumping improves the agreement between the optical and the infrared parameter determinations, compared to the assumption of a homogeneous wind. An improvement in this comparison would also provide additional hints on the most appropriate clumping law to be adopted.
Fig. 22, upper left, shows the comparison of \(T_{\rm eff}\) determinations in the optical and the infrared for the different clumping conditions explored in our experiment. Most stars have optical and NIR \(T_{\rm eff}\) determinations consistent within the uncertainties for all explored laws. However, there are also certain outliers. The most important are the supergiants HD 14 947 and HD 210 809, where the large discrepancies can be traced back to their shallow Br\({}_{10}\) and Br\({}_{11}\) lines (shallow compared to those from HD 15 570 and HD 209 975, which occupy similar parameter ranges, respectively; see Fig. 23). As shown by Repolust et al. (2005, their Fig. 1) for models with low wind densities, the cores of Br\({}_{10/11}\) strongly react to changes in gravity (an alteration of gravity mostly affects the depth of the line cores, in contrast to the behavior of the Balmer lines), where the depth decreases with increasing \(\log g\). For our objects with substantial mass-loss rates, shallow Br\({}_{10/11}\) lines can only be reproduced when, in addition to a high gravity, also the effective temperatures and mass-loss rates lie in a certain range. In particular, the mass-loss rates must not be too large, since otherwise Br\({}_{10/11}\) would become severely asymmetric, which is not observed. Taking all these constraints together, a fit of the shallow Br\({}_{10/11}\) lines pushes the gravity and the temperature toward values higher than derived from the optical, with the higher temperature also required to compensate for the shift of the helium ionization equilibrium and the strong reaction of the He i lines (see again Fig. 1 in Repolust et al. 2005). The somewhat lower mass-loss rate (required to fit Br\({}_{10/11}\)) also prevents Br\({}_{\gamma}\) from entering into
Figure 20: Comparison of effective temperatures (upper left), gravity (upper right), \(\log Q\) (lower left) and \(Y_{\rm He}\) (lower right) obtained from the optical spectra with the different clumping laws considered in this work. The abscissa gives the identification number of the star. For each star, the results from the different clumping laws (including the smooth wind model) are plotted (see legend). Corresponding entries (except for the smooth wind results) have been slightly displaced on the abscissa. Stars are ordered as in Tab. 1: #1: HD 46 223; #2: HD 15 629; #3: HD 46 150; #4: HD 217 086; #5: HD 149 757; #6: HD 190 864; #7: HD 203 064; #8: HD 15 570; #9: HD 14 947; #10: HD 30 614; #11: HD 210 809; #12: HD 209 975. Rapid rotators are stars #4, #5, and #7.
emission in the hot supergiant HD 14 947, whereas the cooler supergiant HD 210 809 still partly fills its Br\({}_{\gamma}\). All these problems are, for example, not present in HD 15 570, because here the Br\({}_{10/11}\) lines have a "reasonable" depth, allowing these lines to be fit at parameters that are mostly compatible with the optical results (but see below). Whether these problems are related to the model calculation or to the reduction of the NIR spectra remains an open question. In any case, apart from a few individual cases that do not allow a generalization, no clumping law (including the homogeneous wind) shows a better agreement between the effective temperatures derived from the optical and the NIR than the other laws.
The gravity differences reflect the larger scatter obtained for this parameter in the NIR. Most stars are again close to the zero-difference line, again with the exceptions of the supergiants HD 14 947 and HD 210 809 already discussed above.
The difficulty in simultaneously fitting the He i and He ii lines in HD 210 809 could point to a higher He abundance than considered in our grid (for this star and also for HD 14 947, we obtain mostly lower limits for the He abundance; however, this is not the case in the optical, cf. Figs. 20 and 21).
The optical/NIR differences in log \(Q\) show again a clear pattern: for dwarfs, the large uncertainties dominate, whereas for supergiants, results are consistent (although HD 209 975 is here an exception, with a very low log \(Q\) value derived in the NIR). The He abundances also scatter around the zero-line, with large uncertainties and numerous limits reflecting mainly the behavior in the NIR. Nevertheless, there are no obvious outliers, except for the unclumped values for HD 14 947 and HD 210 809, both suffering from the mass-loss dependence of Br\({}_{10/11}\) (see above). Interestingly, however, the log \(Q\) values from the optical and the NIR agree when clumping is considered.
Finally, when combining the optical and near NIR lines in a joint analysis, the results are dominated by the fits to the optical lines. This is a consequence of the larger number of optical lines and the better fit quality in this wavelength range.
The most important conclusion from our comparison is that, whatever the differences between the optical and the NIR, the inclusion of the different clumping laws as explored in our work does not contribute to a globally better agreement between the parameters derived from either wavelength range. For example, the average value of the differences for \(T_{\rm eff}\) is \(-0.1\pm 2.4\) kK for the unclumped values, and ranges from \(-0.7\pm 2.5\) kK to \(+1.6\pm 2.9\) kK for the various clumping laws, with no star showing a clearly better agreement when introducing clumping. Such a better agreement would require either NIR observations of even higher quality than currently available, with more diagnostic spectral lines, or different types of clumping laws.
Alternatively, it is also possible that clumping behaves differently in different stars, not only because of spectral type (cf. Hawcroft et al. 2023, as already discussed in Sect. 5) and luminosity class, but also because of additional differences like pulsations and wind variability (not to mention occasional mass ejections or local magnetic fields). This is particularly relevant for the two most extreme outliers, HD 14 947 and HD 210 809 (the latter well known for its notorious wind variability, see Markova et al. 2005), where the discrepancy of the optical and NIR results is rooted in the weak Br\({}_{10}\) and Br\({}_{11}\) lines (see above).
Figure 21: Same as Fig. 20, but for the stellar parameters derived from the NIR.
Also for the other objects analyzed here, we are using single-epoch observations, with the optical and NIR spectra taken at different times, so that line-profile variability may play a role in the differences.
Besides the above possibility that the clumping conditions in both stars deviate strongly from our current assumptions (particularly regarding the lower wind), we cannot exclude the possibility that the (complex) NIR data reduction (see appendix in Hanson et al. 1996 and Hanson et al. 2005) suffers from problems, and that the actual line profiles might be stronger than adopted here (see also Repolust et al. 2005). Another possibility regards the question of (in)accurate hydrogen collision cross sections. Using the most up-to-date, ab initio values from Przybilla & Butler (2004) instead of the default values following Giovanardi et al. (1987) implemented in Fastwind only exacerbates the problem, though, since the corresponding Brackett lines then become even stronger (see Fig. 15 in Repolust et al. 2005).
## 8 Conclusions
We have carried out a determination of stellar parameters and a study of the clumping effects in the optical and the NIR, extending the automatic methods developed in our group (see Sect. 3.1 and 3.2). Our objectives were (a) to check whether we can obtain stellar parameters from the infrared with the same or comparable accuracy to those in the optical; (b) to check whether the parameters obtained were consistent; (c) to study the effects of clumping on the determination of stellar parameters; and (d) to check whether clumping improves the agreement between the
Figure 23: Comparison of the observed Balmer and Brackett lines in HD 15 570 (black) and HD 14 947 (red). Whereas the Balmer lines are very similar, the Brackett lines Br\({}_{10}\) and Br\({}_{11}\) are much shallower in HD 14 947. See text.
Figure 22: Same as Fig. 20, but for the differences between optical and NIR determinations. Temperature differences are given in kK, and differences in gravity and log \(Q\) in dex.
infrared and the optical parameters. To these ends, we have extended the automatic tools to include the NIR spectra.
When analyzing the observed spectra in the optical and NIR with unclumped models, we reached the following conclusions:
* In many cases, test calculations revealed a problematic behavior of the Br lines. It was not possible to fit all of the observed lines simultaneously. We decided not to use the highest available line, Br\({}_{12}\), since this line deviates the most. However, Br\({}_{10/11}\) also frequently presented problems in achieving a consistent fit. We conclude that the Br lines need to be studied in more detail in the future.
* Globally, the quality of the fits to the optical spectrum is excellent. The only problems appear for supergiants, mostly related to H\({}_{\alpha}\) and sometimes to He ii 4686, with the fits improving with decreasing luminosity class. In the infrared, again the best fits are for dwarfs, and problems are concentrated in Br\({}_{\gamma}\) (and sometimes the other Br lines), which in some supergiants appear in emission, while models still predict absorption. Helium lines in the NIR present a variety of fitting problems, which might also be related to the lower number of available lines.
* Both the optical and the NIR analyses without clumping show a good agreement with previous similar studies in the literature (Holgado et al. 2018; Repolust et al. 2005).
When comparing the results in the optical and the NIR derived from unclumped models, we find that:
* the rotational velocities derived from the NIR He i \(\lambda\)1.70\(\mu\)m line agree well in most cases with those derived with a higher accuracy from the optical metal lines (with the known limitations due to the larger intrinsic (Stark-) broadening).
* There is a good agreement between the parameters derived in the optical and the NIR, with some deviating individual cases (particularly HD 14 947 and HD 210 809). The uncertainties in the NIR are larger, mostly due to poorer fits and, to a lesser extent, to the low number of diagnostic lines. Helium abundances from the NIR frequently show upper and lower limits, indicating a lack of sensitivity to this parameter.
* We could thus derive stellar parameters from the infrared with an accuracy almost comparable to that in the optical. The uncertainties are larger for the reasons given in the item above.
We then explored the effects of clumping using different clumping laws. We considered a Najarro-type law, two Hillier-type laws, and up to six different linear laws. We compared the behavior of the different laws and their impact on the line profiles. The main conclusions are:
* Using a coarse model grid, we show that clumping only had significant effects on the synthetic spectra of supergiants once we accounted for the corresponding mass-loss scaling relations as a function of the (maximum) clumping factor, \(f_{\rm cl}^{\rm max}\)(or minimum volume filling factor). For giant stars, effects are very modest and they are negligible for dwarfs.
* We find only small differences in the synthetic wind lines based on the various clumping laws, which indicates that these lines are formed in layers where the differences between these laws are not critical. Differences can also be present in the absorption cores of lines that are mainly formed in the photosphere, because of a different refilling by wind emission when the mass-loss rates have been appropriately scaled as a function of \(f_{\rm cl}^{\rm max}\).
* Together with \(f_{\rm cl}^{\rm max}\), the second relevant parameter is the extent of the region where the clumping factor increases until that maximum is reached. Both quantities define the distribution of the clumping factor in the line formation region.
* Primary differences between clumped and unclumped models are related to the modified density structure in the line-forming region. The different laws explored in this work do not trigger significant differences in the corresponding spectra (after scaling the mass-loss rates), since they all share the same general behavior in that region: clumping is adopted to start close to (but above) the photosphere, and increases more or less rapidly to a maximum. The wind emission increases when \(f_{\rm cl}^{\rm max}\) is reached in the inner and intermediate wind layers, and the behavior of the clumping laws in the outer wind does not affect the line formation region relevant for this work.
* Because of the rather weak differences raised by the three kinds of clumping laws investigated here, an analysis with the linear clumping law, chosen for its conceptual simplicity, is sufficient for the exploratory character of this work.
* The central emission seen in some lines (either on top of an emission or absorption profile) is primarily related to a NLTE effect in the transonic region, affecting the occupation numbers of the upper and/or lower atomic levels. It is thus (almost) independent of the specific clumping stratification, though it depends on the actual mass-loss rate.
Subsequent to the above study of principal effects, we compared the fits obtained from four model grids with different linear clumping laws, discriminated by different combinations in \(v_{2}/v_{\infty}\)(= 0.25, 0.50) and \(f_{\rm cl}^{\rm max}\)(= 10, 20). We find that:
* Clumping usually has positive effects on the fits of H\({}_{\alpha}\), He ii 4686, and Br\({}_{\gamma}\) in supergiants (particularly in hot supergiants), sometimes improving the consistency between the former two optical lines (for cool supergiants). However, there is a trend to worse fits in He ii 2.18 \(\mu\)m.
* The laws with \(v_{2}/v_{\infty}\)= 0.50 imply a larger number of changes when comparing the fits for the two \(f_{\rm cl}^{\rm max}\)values. This is a consequence of a more pronounced variation of the clumping factor along the line-forming regions.
* The actual impact on the line profiles depends on the specific clumping law, although differences between the laws are small in many cases. We note, however, that the best fit to individual lines in a given star may be reached with different clumping laws, pointing to a potentially more complex distribution than the one considered here.
We finally compared the stellar parameters obtained with the different clumping laws, to see whether the parameters change significantly and whether the agreement between optical and NIR parameters is better for a particular law. Our main conclusions are:
* In the optical, only \(\log g\) and \(\log Q\) in supergiants are affected by the use of different clumping laws (except for particular cases, such as \(Y_{\rm He}\) in HD 30 614 or the wind strength in rapidly rotating dwarfs).
* In the NIR, the Br lines are often responsible for problems in accurately determining \(\log g\) and consequently \(\log Q\).
* As in the unclumped case, we obtain similar stellar parameters in the optical and the NIR, although with a larger scatter and more upper and lower limits in the latter. HD 14 947 and HD 210 809 (both supergiants) are outliers in this respect, mainly due to problems with Br lines.
* In our analysis, the H and K bands did not offer a clear advantage over the optical wavelengths to characterize clumping.
* Regarding the consistency between optical and NIR parameters, none of the specific clumping laws displayed a better global agreement nor do clumped models agree better than unclumped ones. Results for \(\log Q\) are mostly consistent (the larger \(f_{\rm cl}^{\rm max}\), the lower the derived wind-strength), particularly for strong winds (supergiants).
Taking everything together, we reach the somewhat disappointing conclusion that the inclusion of the NIR (as done here) still does not allow actual mass-loss rates to be derived. There is still the dichotomy between \(\dot{M}\) and \(f_{\rm cl}^{\rm max}\), which might only be broken by including lines that react in a different way than typical recombination lines such as H\({}_{\alpha}\). However, including UV P Cygni lines (when available) is difficult, because of the impact of X-ray emission, optically thick clumping, and saturation, though first analyses in such a respect have already been undertaken (Hawcroft et al., 2021; Brands et al., 2022; Hawcroft et al., 2023). One might question whether an analysis of the predicted central emission in, for example, Br\({}_{\gamma}\) might help, since this should depend on the actual \(\dot{M}\) alone, in the same spirit as Br\({}_{\alpha}\) for weak-winded stars (Najarro et al., 2011). Unfortunately, the predicted emission peak is quite narrow and small, much smaller than in the case of Br\({}_{\alpha}\), and most likely not useful for \(\dot{M}\) determinations. Finally, at least for late-type O supergiants and early-type B supergiants, constraints on the clumping properties and actual mass-loss rates might be feasible, because of the different behavior of H\({}_{\alpha}\) and He ii 4686 (Kudritzki et al., 2006; Holgado et al., 2018).
Our study indicates that future work requires some improvement in the treatment of the Br lines. We need to analyze a larger sample of stars considering a wavelength range as large as possible to find patterns among them that can be used to characterize the clumping laws. The positive view is that our models give consistent results between the optical and infrared wavelength regions, that the use of different clumping laws does not result in significant differences in the derived stellar parameters (although the use of a common clumping law may introduce some extra uncertainty for individual cases), and that the infrared contains enough information for a spectroscopic analysis with an accuracy that is quite similar to the optical.
###### Acknowledgements.
This research has been supported by the Generalitat Valenciana under grant PROMETEO/2019/041 and the Spanish Ministerio de Ciencia e Innovación (MCIN) with funding from the European Union NextGenerationEU and Generalitat Valenciana in the call Programa de Planes Complementarios de I+D+i (PRTR 2022) (Project HIAMS, reference ASEAE/2022/017), and also MCIN through the Spanish State Research Agency through grants PID2021-122978N-C22/2 and the Severo Ochoa Programme 2020-2023 (CEX2019-000920-S) (MICINN/AEI/FEDER, UE).
|
2309.09344 | Efficient Belief Road Map for Planning Under Uncertainty | Robotic systems, particularly in demanding environments like narrow corridors
or disaster zones, often grapple with imperfect state estimation. Addressing
this challenge requires a trajectory plan that not only navigates these
restrictive spaces but also manages the inherent uncertainty of the system. We
present a novel approach for graph-based belief space planning via the use of
an efficient covariance control algorithm. By adaptively steering state
statistics via output state feedback, we efficiently craft a belief roadmap
characterized by nodes with controlled uncertainty and edges representing
collision-free mean trajectories. The roadmap's structured design then paves
the way for precise path searches that balance control costs and uncertainty
considerations. Our numerical experiments affirm the efficacy and advantage of
our method in different motion planning tasks. Our open-source implementation
can be found at https://github.com/hzyu17/VIMP/tree/BRM. | Zhenyang Chen, Hongzhe Yu, Yongxin Chen | 2023-09-17T18:22:46Z | http://arxiv.org/abs/2309.09344v1 | # Efficient Belief Road Map for Planning Under Uncertainty
###### Abstract
Robotic systems, particularly in demanding environments like narrow corridors or disaster zones, often grapple with imperfect state estimation. Addressing this challenge requires a trajectory plan that not only navigates these restrictive spaces but also manages the inherent uncertainty of the system. We present a novel approach for graph-based belief space planning via the use of an efficient covariance control algorithm. By adaptively steering state statistics via output state feedback, we efficiently craft a belief roadmap characterized by nodes with controlled uncertainty and edges representing collision-free mean trajectories. The roadmap's structured design then paves the way for precise path searches that balance control costs and uncertainty considerations. Our numerical experiments affirm the efficacy and advantage of our method in different motion planning tasks. Our open-source implementation can be found at [https://github.com/hzyu17/VIMP/tree/BRM](https://github.com/hzyu17/VIMP/tree/BRM).
## I Introduction
In the challenging realm of robotic motion planning, uncertainty presents a critical hurdle for effective operation in dynamic and complex real-world environments. Historically, motion planning under uncertainty evolved from deterministic motion planning foundations [1], following one of two primary directions: the optimization-based approach and the sampling-based strategy.
The trajectory optimization paradigm, extensively studied in works like [2] and [3], transforms planning challenges into optimal control problems. This transformation necessitates the resolution of the Hamilton-Jacobi-Bellman equation through dynamic programming techniques. However, while this method promises precision, it faces significant scalability issues [4][5], often at the cost of local solutions or even infeasibility. On the other end of the spectrum, sampling-based planning establishes motion planning as a search problem. By utilizing algorithms such as the probabilistic road maps (PRM) [6] and rapidly exploring random trees (RRT) [7][8], this approach leverages graph structures filled with random feasible states to pinpoint optimal paths. What sets this approach apart is its promise of probabilistically complete solutions, assuring an increasing likelihood of finding a feasible solution with more samples.
While deterministic motion planning offers a robust framework, introducing uncertainties complicates the scenario significantly. This led to the development of belief space planning [9][10][11][12]. Essentially an expansion of traditional planning to incorporate uncertainties, this approach has seen a growing emphasis on belief road maps (BRM) [13][14][15][16]. Unlike the conventional nodes of deterministic states in PRM, BRM employs state distributions, bringing forth unique challenges, especially regarding computational efficiency.
At the intersection of these challenges lies covariance steering, a discipline geared towards guiding distributions. Notably, in a series of studies [17][18], Chen et al. illustrated that linear system distribution steering can be achieved through closed-form solutions. More recent works [19][20] expanded on this by integrating safety constraints and addressing nonlinear control-affine dynamics.
Building upon these advances, our research introduces a nuanced covariance steering approach for graph-based motion planning. We tackle the BRM's existing challenges by proficiently crafting probabilistic graph edges. Incorporating state estimation [21] further aligns our methodology with the broader belief space planning framework, drawing parallels to chance-constrained strategies like CC-RRT* [22]. Empirical evidence, as we will present, accentuates the advantages of our approach over existing methods, showcasing both its effectiveness and efficiency in managing uncertainties.
## II Background
This section introduces belief space planning and covariance steering as key components of our proposed method.
### _Belief space planning_
Belief space planning addresses the challenge of making decisions with uncertain robot states, where the belief state \(b\) is a composite representation of the robot's state and its associated uncertainty. With new input \(u\) and observation \(z\), the state transition function \(\tau\) updates the belief state \(b^{\prime}=\tau(b,u,z)\). Instead of always choosing the shortest path, belief space planners leverage the belief information and search for a more conservative motion plan when the state estimation is uncertain, as shown in Fig. 1. A significant concern when planning in belief spaces is the computational challenge due to the high dimensionality of belief states. This problem can be addressed by sampling-based algorithms like PRM.
Fig. 1: A belief space graph depicting sampled state beliefs and the connected edges against obstacles. When prioritizing entropy cost over control energy, the planner opts for Path 2 (shown in purple), which allows for increased uncertainty, enhancing robot safety. Conversely, if control energy minimization is paramount, the planner selects the more direct Path 1 (shown in orange), demonstrating reduced uncertainty tolerance.
The belief-space variant of the PRM is called the Belief Roadmap (BRM) [13]. The primary idea of BRM is to sample both configurations and their distributions in the belief state space, test them for feasibility, and then attempt to connect nearby configurations to form a roadmap. The BRM can be mathematically represented as a graph \(G=(V,E)\) where \(V=\{b_{i}\}\) is the set of nodes representing feasible belief states and \(E\) is the set of edges indicating belief paths between adjacent nodes. To construct BRM, for each pair of belief nodes \((b_{i},b_{j})\), a local planner attempts to find a feasible path considering both the spatial constraints and the belief evolution. The belief evolution accounts for uncertainty propagation, influenced by robot dynamics and environmental factors. Once the belief roadmap is constructed with the cost associated with traversal and belief uncertainty, an optimal path can be found by graph search algorithms like \(A^{*}\) and Dijkstra.
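To make the search step concrete, here is a minimal Python sketch of Dijkstra's algorithm over a belief roadmap; the toy node set, the `edges` adjacency map, and the pre-computed edge costs (which, in the planner, would already combine control and uncertainty terms) are illustrative assumptions, not the paper's implementation.

```python
import heapq

def dijkstra(nodes, edges, start, goal):
    """Shortest-path search over a belief roadmap.

    edges[u] is a list of (v, cost) pairs; each cost is assumed to already
    combine control energy and uncertainty terms for the belief edge u -> v.
    """
    dist = {u: float("inf") for u in nodes}
    prev = {u: None for u in nodes}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue  # stale queue entry
        for v, cost in edges.get(u, []):
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(pq, (d + cost, v))
    path, u = [], goal  # reconstruct the node sequence of the best path
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1], dist[goal]

# Toy roadmap with two belief paths from node 0 to node 3.
nodes = [0, 1, 2, 3]
edges = {0: [(1, 1.0), (2, 2.5)], 1: [(3, 3.0)], 2: [(3, 0.5)]}
print(dijkstra(nodes, edges, 0, 3))  # -> ([0, 2, 3], 3.0)
```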
### _Covariance steering for control-affine systems_
The covariance steering problem for nonlinear systems remains a challenge. Recent progress established in [20] demonstrates an efficient algorithm tailored for control-affine systems. We present the main results in this section. The nonlinear system under consideration is
\[dX_{t}=f(t,X_{t})dt+B(t)(u_{t}dt+\sqrt{\epsilon}dW_{t}) \tag{1}\]
where \(X_{t}\in\mathbb{R}^{n}\) is the state vector, \(u_{t}\in\mathbb{R}^{p}\) is the input vector and \(f(t,X_{t})\) is the drift function. The input matrix \(B(t)\in\mathbb{R}^{n\times p}\) is assumed to be full rank. \(W_{t}\in\mathbb{R}^{p}\) represents a standard Wiener process [23], and \(\epsilon>0\) parameterizes the intensity of the disturbance. The covariance steering problem minimizes the control energy while seeking a state feedback policy to steer state statistics of the system from an initial value to a terminal one.
\[\min_{u}\ \mathbb{E}\left\{\int_{0}^{T}\Big{[}\frac{1}{2}\|u_{t}\|^{2}+V(X_{t})\Big{]}dt\right\} \tag{2a}\]
\[dX_{t}=f(t,X_{t})dt+B(t)(u_{t}dt+\sqrt{\epsilon}dW_{t}) \tag{2b}\]
\[X_{0}\sim\rho_{0},\quad X_{T}\sim\rho_{T}, \tag{2c}\]
where \(\rho_{0}\) (\(\rho_{T}\)) is a probability distribution with mean \(m_{0}\) (\(m_{T}\)) and covariance \(\Sigma_{0}\) (\(\Sigma_{T}\)).
By leveraging the Girsanov theorem, problem (2) can be transferred into a composite optimization problem which can be solved by the proximal gradient algorithm. From the results established in [20], each proximal gradient iteration with step size \(\eta\) amounts to solving the following linear covariance steering problem
\[\min_{u}\ \mathbb{E}\left\{\int_{0}^{T}\Big{[}\frac{1}{2}\|u_{t}\|^{2}+\frac{1}{2}X_{t}^{T}Q_{k}(t)X_{t}+X_{t}^{T}r_{k}(t)\Big{]}dt\right\} \tag{3a}\]
\[dX_{t}=\frac{1}{1+\eta}[A_{k}(t)+\eta\hat{A}_{k}(t)]X_{t}dt+\frac{1}{1+\eta}[a_{k}(t)+\eta\hat{a}_{k}(t)]dt+B(t)(u_{t}dt+\sqrt{\epsilon}dW_{t}) \tag{3b}\]
\[X_{0}\sim\rho_{0},\quad X_{T}\sim\rho_{T}, \tag{3c}\]
where \(A_{k}(t),a_{k}(t)\) are the results from the last iteration. Also, \(\bar{x}_{k}(t)\) is the mean trajectory at iteration \(k\), \(\hat{A}_{k}(t),\hat{a}_{k}(t)\) are the linearization matrices along \(\bar{x}_{k}(t)\), and \(Q_{k}(t),r_{k}(t)\) are the weighting matrices [20].
This result bridges the gap between the nonlinear covariance steering problem and the linear covariance steering problem. The linear covariance steering problem in (3) enjoys a closed-form feedback solution of the form [17][24]
\[u_{t}^{\star}=-B(t)^{T}\Pi(t)(X_{t}-x_{t}^{\star})+v_{t}^{\star}.\]
where \(\Pi(t)\) satisfies a set of coupled Riccati equations. This closed-form solution for the proximal gradient update allows us to solve the covariance steering problem for control-affine systems with a sublinear rate [20].
## III Problem formulation
In this work, we consider the motion planning problem under uncertainty. Uncertainty of the robot results from three sources: robot motion, robot state estimation, and the environment. In our work, we assume the environment is deterministic and consider only the uncertainty of the robot itself. Robots are nonlinear control-affine systems whose dynamics and sensor models are
\[dX_{t} =f(t,X_{t})dt+B(t)(u_{t}dt+\sqrt{\epsilon}dW_{t}) \tag{4a}\] \[z(t) =h(X_{t})+v(t),\quad v(t)\sim\mathcal{N}(0,R(t)) \tag{4b}\]
Here, the notations follow Section II above, and \(z(t)\) is the observation output with observation function \(h\) and Gaussian noise \(v(t)\). The dynamics and sensor model can be viewed as the belief transition function of the robot. For the uncertainty that stems from the robot motion and the dynamic model, we denote \(\Sigma\) as the covariance of the actual robot states \(x\), which follow (4). \(\Sigma\) describes the influence of the noise \(W_{t}\) on the ideal robot states, which follow the uncorrupted dynamic model.
For the uncertainty that stems from the state estimation, we denote \(P(t)\) as the state error-covariance of the estimation error \(\tilde{x}\). It is worth noting that the covariance of the terminal state is required to be larger than the state error-covariance, \(\Sigma_{T}>P(T)\), when using the state output as feedback [21]. Denote the estimated robot states as \(\hat{x}=x-\tilde{x}\) and their covariance as \(\hat{\Sigma}=\Sigma-P\). We hope to steer the state covariance \(\Sigma\) by controlling the estimated state covariance \(\hat{\Sigma}\). Our control problem can be stated as follows: given waypoints \(x_{0},x_{T}\) and their estimated state covariances \(\hat{\Sigma}_{0},\hat{\Sigma}_{T}\), find a control sequence \(u_{t}\) that 1) steers the mean of the robot states from \(x_{0}\) to \(x_{T}\), 2) steers the covariance of the robot states from \(\Sigma_{0}\) to the terminal covariance \(\Sigma_{T}\) via the output feedback \(\hat{x}\), 3) generates a collision-free mean trajectory, and 4) minimizes the objective function of expected control energy and a state cost
\[\min_{u}\mathbb{E}\left\{\int_{0}^{T}[\frac{1}{2}\|u_{t}\|^{2}+V(X_{t})]dt\right\} \tag{5}\]
## IV Belief space collision-avoiding covariance steering
We hope to build a BRM and solve problem (5) by edge construction and graph search. Constructing edges in belief space is challenging in terms of computation [13] since it involves steering state statistics under safety constraints using partially observable state information. We leverage the proximal gradient algorithm for problem (2) with a collision-avoiding state cost [19]
\[V(X)=\|\mathrm{hinge}(S(X))\|^{2} \tag{6}\]
to achieve the node connection in a BRM. In (6), \(\mathrm{hinge}(\cdot)\) represents the hinge loss function, and \(S(\cdot)\) is a differentiable signed distance function to the obstacles. We showed in [19] that the proposed proximal gradient algorithm in [20] is effective and efficient in producing collision-free belief space trajectories.
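For illustration, a minimal Python sketch of the collision cost (6) for circular obstacles in the plane; the circle geometry and the safety margin `eps` are our own illustrative assumptions (the paper only requires a differentiable signed distance function \(S(\cdot)\)).

```python
import numpy as np

def signed_distance(x, centers, radii):
    """Signed distance from a 2-D point to the nearest circular obstacle
    (negative inside an obstacle); circles stand in for S(.) here."""
    return min(np.linalg.norm(x - c) - r for c, r in zip(centers, radii))

def collision_cost(x, centers, radii, eps=0.2):
    """V(X) = ||hinge(S(X))||^2 with a safety margin eps: the cost is zero
    once the signed distance exceeds eps and grows quadratically inside."""
    s = signed_distance(np.asarray(x, dtype=float), centers, radii)
    hinge = max(eps - s, 0.0)
    return hinge ** 2

centers = [np.array([1.0, 0.0])]
radii = [0.5]
print(collision_cost([2.0, 0.0], centers, radii))  # far away -> 0.0
print(collision_cost([1.2, 0.0], centers, radii))  # near the boundary -> 0.25
```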
### _Collision avoiding covariance steering_
By employing the hinge loss function \(\mathrm{hinge}(\cdot)\), we can succinctly define our cost function as (6) to penalize risky behaviors and circumvent obstacle collisions. For each iteration of the proximal gradient covariance steering algorithm associated with (3), rather than detailing the intricate mathematics of deriving the weighting matrices \(Q_{k}(t)\) and \(r_{k}(t)\), it suffices to say that they are formulated based on the gradient and Hessian of the cost function. They integrate the effects of system dynamics, control inputs, and uncertainties.
Upon solving (3) within the paradigm of linear covariance steering, the optimal control policy is formulated as:
\[u_{t}^{\star}=K_{k}(t)X_{t}+d_{k}(t)\]
This control policy, when injected into the closed-loop process, offers the subsequent update dynamics
\[dX_{t}=\frac{1}{1+\eta}[A_{k}(t)+\eta\hat{A}_{k}(t)]X_{t}dt+\frac{1}{1+\eta}[a_{k}(t)+\eta\hat{a}_{k}(t)]dt+B(t)(u_{t}^{\star}dt+\sqrt{\epsilon}dW_{t}),\]
From this, we deduce the iterative update rules:
\[A_{k+1}(t) = \frac{1}{1+\eta}[A_{k}(t)+\eta\hat{A}_{k}(t)]+B(t)K_{k}(t), \tag{7a}\] \[a_{k+1}(t) = \frac{1}{1+\eta}[a_{k}(t)+\eta\hat{a}_{k}(t)]+B(t)d_{k}(t). \tag{7b}\]
To synchronize the evolution of \(\bar{x}_{k}(t)\) and \(\Sigma_{k}(t)\) at each iteration \(k\), one can employ the aforementioned update rule (7), ensuring an efficient iterative process.
In the following discourse, we showcase the state connection algorithm (as presented in Algorithm 1). Given the constructs \(A_{k}(t)\) and \(a_{k}(t)\) at the \(k^{th}\) iteration, the algorithm commences by propagating the mean trajectory \(\bar{x}_{k}(t)\) and subsequently estimating the state covariance along the path, as represented in Figure 2(a). Leveraging the updated nominal trajectory, the algorithm exploits the control-observation separation principle to compute both the Kalman gain and the state error-covariance \(P_{k}(t)\).
### _Steering state statistics using partially-observed output_
To initialize the state prediction for each sampled state, we set \(\hat{x}_{k}(t_{0})=\mathbb{E}[x_{k}(t_{0})]\) and \(P_{k}(t_{0})\) is sampled from a proper space. At each iteration, the continuous-time EKF propagates state error covariance \(P_{k}(t)\) based on the linearized system dynamics model \(A_{k}(t),a_{k}(t)\) and updates the near-optimal Kalman gain. These steps are coupled in continuous time and governed by the following Riccati equations
\[\dot{P}(t)=F(t)P(t)+P(t)F(t)^{T}+B(t)QB^{T}(t)-P(t)H(t)^{T}R(t)^{-1}H(t)P(t) \tag{8}\]
where noise covariance \(Q=\epsilon\mathbf{I}_{n}\) and \(F(t)\) and \(H(t)\) represent the Jacobian matrices of the system dynamics function and measurement function, respectively, as
\[F(t)=\frac{\partial f}{\partial x}\bigg{|}_{\hat{x}(t),u(t)}\quad H(t)=\frac{ \partial h}{\partial x}\bigg{|}_{\hat{x}(t),u(t)}\]
The target uncertainty of the robot state is known from the sampling stage. With the uncertainty from sensing calculated, we are able to compute the terminal error covariance of the Kalman filter state
\[\hat{\Sigma}_{k}(t)=\Sigma(t)-P_{k}(t)\]
and use it as output state feedback to control the covariance of the path in the next iteration.
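A minimal numerical sketch of this step, assuming constant Jacobians and illustrative noise levels: the error covariance \(P(t)\) is propagated by forward-Euler integration of the Riccati equation (8), and the feedback target \(\hat{\Sigma}_{T}=\Sigma_{T}-P(T)\) is then formed (it must remain positive definite).

```python
import numpy as np

def propagate_error_covariance(P0, F, H, B, Q, R, T, steps):
    """Forward-Euler integration of the continuous-time Riccati equation (8):
    Pdot = F P + P F^T + B Q B^T - P H^T R^{-1} H P.
    F and H are held constant here; along a nominal trajectory they would be
    re-linearized at every step."""
    dt = T / steps
    P = P0.copy()
    Rinv = np.linalg.inv(R)
    for _ in range(steps):
        Pdot = F @ P + P @ F.T + B @ Q @ B.T - P @ H.T @ Rinv @ H @ P
        P = P + dt * Pdot
    return P

# 2-D single integrator with full-state position measurements (illustrative).
F = np.zeros((2, 2))       # Jacobian of the drift
H = np.eye(2)              # observation Jacobian
B = np.eye(2)
Q = 0.1 * np.eye(2)        # process noise intensity (epsilon * I)
R = 0.05 * np.eye(2)       # measurement noise covariance
P_T = propagate_error_covariance(0.5 * np.eye(2), F, H, B, Q, R, T=1.0, steps=1000)
Sigma_T = 1.0 * np.eye(2)  # target state covariance at the node
Sigma_hat_T = Sigma_T - P_T  # feedback target; must stay positive definite
print(np.diag(P_T), np.linalg.eigvalsh(Sigma_hat_T))
```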
\(\hat{A}_{k}(t),\hat{a}_{k}(t)\) represent the Gaussian Markov process approximation of the trajectory at the current iteration, which can be calculated by linearizing the system with respect to the nominal trajectory. \(\hat{A}_{k}(t),\hat{a}_{k}(t)\) are used in the construction of the cost matrices \(Q_{k}(t)\) and \(r_{k}(t)\). Solving the linear covariance steering problem in (3), the optimal control policy \(K_{k}(t),d_{k}(t)\) is calculated and \(A_{k+1}(t),a_{k+1}(t)\) are updated following (7), as shown in Figure 2(b).
### _Entropy regularized edge cost_
For every trajectory between states, the cost is calculated as the sum of the control energy, the collision cost, and the entropy cost. The entropy cost is defined as
\[E(\Sigma_{k}^{i}(t))=-\int_{0}^{T}\log(|\Sigma_{k}^{i}(t)|)dt.\]
A smaller entropy cost indicates that the trajectory allows a higher tolerance for robot uncertainty and requires less sensing and control effort to control the uncertainty. Leveraging the duality between stochastic control and variational inference,
the objective for the linearized system in each step of our edge construction formulation (2) is equivalent to an entropy-regularized motion planning objective [25][19]
\[\max\ \mathbb{E}_{q}[-\log J]+H(q),\]
where \(J\) denotes a composite cost involving a prior process-induced cost and the collision cost
\[J=\mathrm{J}_{\mathrm{prior}}(X)+V(X),\]
and \(q\) is the joint Gaussian distribution induced by the stochastic process (4) after linearization. In other words, optimizing problem (2) is equivalent to optimizing an entropy-regularized motion planning objective for the path distribution. We found that a trajectory distribution with a smaller entropy cost is safer, in a probabilistic sense, than one with a higher cost. In the same spirit, we define the total cost for the \(i^{\mathrm{th}}\) trajectory \(z_{k}^{i}\) as the weighted sum of the control energy along the mean trajectory, the collision cost, and the entropy cost
\[c_{ij}=\int_{0}^{T}\Big{[}\frac{1}{2}\|u^{\star}(t)\|^{2}+\|\mathrm{hinge}(S(X(t)))\|^{2}\Big{]}dt+\alpha E(\Sigma_{k}^{i}(t)). \tag{9}\]
By setting \(\alpha\) differently, the planner can return different optimal paths with lower control effort or lower risks.
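A discrete-time sketch of evaluating the edge cost (9); the trajectories, weights, and step size are illustrative, and the integrals are approximated by Riemann sums.

```python
import numpy as np

def edge_cost(u_traj, Sigma_traj, collision_costs, dt, alpha):
    """Discrete approximation of (9): control energy plus the hinge collision
    cost along the mean trajectory, plus the entropy term
    E(Sigma) = -int log|Sigma(t)| dt weighted by alpha."""
    control = 0.5 * sum(np.dot(u, u) for u in u_traj) * dt
    collision = sum(collision_costs) * dt
    entropy = -sum(np.log(np.linalg.det(S)) for S in Sigma_traj) * dt
    return control + collision + alpha * entropy

# Toy trajectory of 3 steps: a larger alpha rewards wider covariances.
u_traj = [np.array([1.0, 0.0])] * 3
Sigma_traj = [0.2 * np.eye(2)] * 3
print(edge_cost(u_traj, Sigma_traj, collision_costs=[0.0, 0.01, 0.0],
                dt=0.1, alpha=0.5))
```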
### _Uncertainty-aware State Sampler_
We utilize BRM to divide the original problem into several easier state connection subproblems. To leverage the PGCS state connection Algorithm 1, it is important to provide a meaningful covariance to represent the uncertainty of each sampled state. Define the distance \(d_{obs}\) between an obstacle region \(\mathcal{X}_{obs}\) and a sampled state \(x_{s}\) as the minimum distance from \(x_{s}\) to any point \(p_{obs}\in\mathcal{X}_{obs}\); the corresponding point in \(\mathcal{X}_{obs}\) is the closest point \(p_{obs}^{c}\) to \(x_{s}\). For an \(n\)-dimensional spatial state space, we hope to find \(n\) such points \(p_{obs}^{c}\) and form a covariance ellipsoid centered at \(x_{s}\). The covariance for the spatial states can be calculated from the parameters of this ellipsoid and a given confidence level \(P_{conf}\), such that the actual state \(x\) distribution satisfies
\[P((x-x_{s})<d_{obs})>P_{conf}. \tag{10}\]
We assume a constant velocity and covariance at each sampled state. The direction of the velocity is aligned with the direction from the current node to the adjacent node.
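A simplified sketch of this sampler for a 2-D spatial state, assuming SciPy is available: instead of the full ellipsoid construction, it uses an isotropic covariance scaled by a chi-square quantile so that the confidence condition (10) holds; the point obstacles and the confidence level are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def sample_state_covariance(x_s, obstacle_points, p_conf=0.95):
    """Isotropic covariance for a sampled 2-D state: scaled so that
    P(||x - x_s|| < d_obs) >= p_conf, where d_obs is the distance to the
    nearest obstacle point (a simplification of the ellipsoid construction)."""
    d_obs = min(np.linalg.norm(x_s - p) for p in obstacle_points)
    # For x ~ N(x_s, sigma^2 I) in R^2, ||x - x_s||^2 / sigma^2 ~ chi2(2).
    k = chi2.ppf(p_conf, df=2)
    sigma2 = d_obs ** 2 / k
    return sigma2 * np.eye(2)

x_s = np.array([0.0, 0.0])
obstacles = [np.array([2.0, 0.0]), np.array([0.0, 3.0])]
print(sample_state_covariance(x_s, obstacles))
```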
### _Main algorithm_
The implementation of the PGCS-BRM algorithm is summarized in Algorithm 2. To calculate the hinge loss of obstacles, a signed distance field is used, which, together with the start and target states, is initialized by the user. Then the uncertainty-aware sampler samples a certain number of states in the state space, with their covariance matrices determined by the environment. The main loop starts at line 11: we loop through each feasible sampled state and find its nearest neighbors. The number of neighbors found is determined by a preset neighbor distance and the total number of sampled states. Next, we connect the current state with all its feasible neighbors using the state connection Algorithm 1. For each state pair, the nonlinear covariance steering connection algorithm is run twice to generate two trajectories in the two different directions. To ensure the connection algorithm returns a feasible solution, we need the estimated robot state error-covariance \(\hat{\Sigma}>0\).
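A skeleton of this construction loop in Python; the `connect` callback is a placeholder for Algorithm 1 (a straight-line cost is substituted purely for illustration), and the sampling setup is an assumption.

```python
import numpy as np

def build_brm(states, k_neighbors, connect):
    """Skeleton of the PGCS-BRM construction loop: for every sampled node,
    attempt a connection to each of its nearest neighbors in both directions.
    connect(i, j) stands in for Algorithm 1 and should return the edge cost,
    or None when no feasible steering is found."""
    edges = {i: [] for i in range(len(states))}
    seen_pairs = set()
    for i, x in enumerate(states):
        dists = [np.linalg.norm(x - y) for y in states]
        neighbors = np.argsort(dists)[1:k_neighbors + 1]  # skip the node itself
        for j in map(int, neighbors):
            pair = (min(i, j), max(i, j))
            if pair in seen_pairs:
                continue
            seen_pairs.add(pair)
            for a, b in ((i, j), (j, i)):  # run the connection both ways
                cost = connect(a, b)
                if cost is not None:
                    edges[a].append((b, cost))
    return edges

rng = np.random.default_rng(0)
states = rng.uniform(0.0, 10.0, size=(20, 2))
# Placeholder connection: straight-line cost; the real planner would run the
# proximal-gradient covariance steering between the two belief nodes instead.
line_cost = lambda a, b: float(np.linalg.norm(states[a] - states[b]))
edges = build_brm(states, k_neighbors=3, connect=line_cost)
print(sum(len(v) for v in edges.values()), "directed edges")
```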
Fig. 2: Procedure of edge construction in PGCS-BRM.
## V Experiment
We conducted several numerical experiments to validate the proposed method. All experiments were conducted on a machine with an i7-12700KF CPU and 16 GiB of memory.
### _Effect on Changing \(\alpha\)_
To demonstrate how \(\alpha\) impacts the returned path, and to show the ability of PGCS-BRM to handle nonlinear systems, we conduct 2-D planning experiments in an environment with a risky area. We consider the same nonlinear dynamical system used in [26]
\[dx_{1} =x_{2}dt, \tag{11a}\] \[dx_{2} =(u-c_{d}\|x_{2}\|x_{2})dt+\sqrt{\epsilon}dW_{t}. \tag{11b}\]
1,000 states are sampled and more than 10,000 trajectories are generated for state connection. In the search phase, \(A^{*}\), a best-first search algorithm, is deployed to find a path to the given goal state with minimum total cost. In Figure 3(a) and 3(b), we show that by changing the weighting factor \(\alpha\), PGCS-BRM is able to build belief graphs and find a path with less control cost or less entropy cost.
### _Evaluation of Running Time_
We compare the proposed method with the CS-BRM method in [16] using a linear double integrator dynamics
\[dX_{t}=(\begin{bmatrix}0&I\\ 0&0\end{bmatrix}X_{t}+\begin{bmatrix}0\\ I\end{bmatrix}U)dt+\sqrt{\epsilon}dW_{t}. \tag{12}\]
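For concreteness, a minimal Euler-Maruyama sketch of simulating (12); note that, as written in (12), the noise enters the full state directly. The feedback law, horizon, and noise intensity are illustrative assumptions.

```python
import numpy as np

def simulate_double_integrator(x0, u_fn, eps, T, steps, rng):
    """Euler-Maruyama simulation of the double integrator (12):
    dX = (A X + B u) dt + sqrt(eps) dW, with A = [[0, I], [0, 0]], B = [0; I];
    the state is (position, velocity) in the plane."""
    dt = T / steps
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [np.zeros((2, 2)), np.zeros((2, 2))]])
    B = np.vstack([np.zeros((2, 2)), np.eye(2)])
    x = np.array(x0, dtype=float)
    for k in range(steps):
        u = u_fn(k * dt, x)
        dw = rng.normal(0.0, np.sqrt(dt), size=4)
        x = x + (A @ x + B @ u) * dt + np.sqrt(eps) * dw
    return x

rng = np.random.default_rng(1)
# Illustrative feedback: acceleration proportional to minus the position.
xT = simulate_double_integrator([1.0, 1.0, 0.0, 0.0],
                                u_fn=lambda t, x: -x[:2], eps=0.01,
                                T=2.0, steps=400, rng=rng)
print(xT)
```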
We use a map of 5 rectangular obstacles to compare these two methods. In each experiment with a different number of nodes, the same sampling setup is deployed and we use the same start and goal states for the graph building. For PGCS-BRM, each edge building is set to execute 50 iterations of the proximal gradient with step size \(\eta=0.001\) and discretized into 50 timesteps. We record the times for constructing the belief space graph after node sampling and repeat each experiment three times. Both algorithms are able to build a belief roadmap; however, due to the high computation cost of performing Monte-Carlo collision checking and solving optimization problems, CS-BRM requires higher computation time. On the other hand, PGCS-BRM is able to penalize collision in the cost function and directly solve the nonlinear covariance steering problem with a sublinear rate.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline Nodes & \multicolumn{2}{c|}{25} & \multicolumn{2}{c|}{50} & \multicolumn{2}{c|}{75} & \multicolumn{2}{c|}{100} \\ \hline Edges & 106 & 104 & 302 & 310 & 476 & 588 & 940 & 974 \\ \hline Time/s & 45.41 & 23.77 & 75.74 & 91.82 & 111.25 & 282.64 & 341.83 & 227.62 \\ \hline \end{tabular}
\end{table} TABLE I: Time consumption at different graph scales for a robot in a 3D environment. We conduct two different experiments for the same number of nodes to show that when the number of nodes is relatively small, the variance in graph construction time is large.
Fig. 3: Belief Roadmap planning for a 2D environment. Red dashed ellipsoids represent the estimated state covariances \(P(t)\) propagated using (8), and light blue ellipsoids are the state covariances \(\Sigma(t)\). Notice that \(P_{T}\) is expected to be less than \(\Sigma_{T}\) at the end of every edge.
PGCS-BRM is around 100 times faster and only requires 3.38 s to build a roadmap with 30 nodes, compared to CS-BRM, which needs 782 s on average.
### _3D Experiment_
We conduct experiments in an obstacle-cluttered 3-D environment for a 3-D point robot model (12) to demonstrate the generalizability of the proposed method in higher-dimensional spaces. The visualization result is shown in Figure 6. We record the average time for building a PGCS-BRM with 25, 50, 75, and 100 nodes and different numbers of edges. On average, PGCS-BRM takes 0.23-0.48 s to build an edge in 3D space under 100 max iterations. It is worth noting that the actual run times vary because different states are sampled in each experiment.
## VI Conclusion and future work
This work presents an efficient belief space roadmap (PGCS-BRM) for planning under uncertainty. The proposed method models the belief as state distributions and leverages nonlinear covariance steering with safety constraints for edge construction. We also include an entropy cost in the edge costs to account for robustness under uncertainty. Experiments show that the proposed method effectively constructs BRMs in different dimensions and outperforms state-of-the-art sampling-based belief space planning methods. Though PGCS-BRM shows promising results in building a belief roadmap with controlled covariance, the generated trajectory is still rough and not smooth. This is mainly the result of the lack of reasonable velocity sampling. Unlike spatial states, there are no explicit constraints on velocity in the sampling stage, and poorly selected velocity might result in a non-smooth trajectory. Developing a better velocity sampling algorithm and smoothing algorithm can greatly enhance the performance of the current algorithms. Another future direction worth exploring is to deploy such an algorithm in time-varying environments. The ability to control the uncertainty in planning and quickly replan the route is essential in such scenarios.
Fig. 4: Different paths are chosen by different weights on entropy.
Fig. 5: Running time comparison of graph construction between PGCS-BRM and CS-BRM [16]. The proposed method is more than **100 times more efficient** in graph building.
Fig. 6: Belief space graph and an optimal path in 3D space. The graph consists of 100 sampled states. The red funnel shows the optimal path found by the PGCS-BRM. |
2302.14701 | The Contest Game for Crowdsourcing Reviews | We consider a contest game modelling a contest where reviews for $m$
proposals are crowdsourced from $n$ strategic players. Player $i$ has a
skill $s_{i\ell}$ for reviewing proposal $\ell$; for her review, she
strategically chooses a quality $q \in \{ 1, 2, \ldots, Q \}$ and pays an
effort ${\sf f}_{q} \geq 0$, strictly increasing with $q$. For her effort, she
is given a strictly positive payment determined by a payment function, which is
either player-invariant, like, e.g., the popular proportional allocation
function, or player-specific; for a given proposal, payments are proportional
to the corresponding efforts and the total payment provided by the contest
organizer is 1. The cost incurred to player $i$ for each of her reviews is the
difference of a skill-effort function $\Lambda(s_{i},{\sf f}_{q})$ minus her
payment. Skills may vary for arbitrary players and arbitrary proposals. A
proposal-indifferent player $i$ has identical skills: $s_{i\ell} = s_{i}$ for
all $\ell$; anonymous players means $s_{i} = 1$ for all players $i$. In a pure
Nash equilibrium, no player could unilaterally reduce her cost by switching to
a different quality. We present algorithmic results for computing pure Nash
equilibria. | Marios Mavronicolas, Paul G. Spirakis | 2023-02-28T16:15:48Z | http://arxiv.org/abs/2302.14701v2 | # The Contest Game for Crowdsourcing Reviews
###### Abstract
We consider a _contest game_ modelling a contest where reviews for \(m\)_proposals_ are crowdsourced from \(n\)_players_. Player \(i\) has a _skill_\(s_{i\ell}\) for reviewing proposal \(\ell\); for her review, she strategically chooses a _quality_\(q\in\{1,2,\ldots,Q\}\) and pays an _effort_\(\mathfrak{f}_{q}\geq 0\), strictly increasing with \(q\). For her effort, she is given a _payment_ determined by a _payment function_, which is either _player-invariant_, like, e.g., the popular _proportional allocation_, or _player-specific_. The _cost_ incurred to player \(i\) for each of her reviews is the difference of a _skill-effort_ function \(\Lambda(s_{i},\mathfrak{f}_{q})\) minus her payment. Skills may vary for _arbitrary players and arbitrary proposals_. A _proposal-indifferent player \(i\)_ has identical skills: \(s_{i\ell}=s_{i}\) for all \(\ell\); _anonymous players_ means \(s_{i}=1\) for all players \(i\). In a _pure Nash equilibrium,_ no player could unilaterally reduce her cost by switching to a different quality. We present three main results:
* We present a novel potential function to show that the contest game has always a pure Nash equilibrium for the model of arbitrary players and arbitrary proposals with a player-invariant payment function. A particular case of this result answers an intriguing open question from [4]. In contrast, inexistence is possible for a player-specific payment function; the corresponding decision problem is \(\mathcal{NP}\)-complete.
* We exploit an _increasing-differences_ property of the skill-effort function to devise, for constant \(Q\), a polynomial-time \(\Theta(n^{Q})\) algorithm for arbitrary players and arbitrary proposals, under a player-invariant payment function, to compute a pure Nash equilibrium; it is a special case of a \(\Theta\left(\max\{Q^{2},n\}\binom{n}{Q-1}\right)\) algorithm for arbitrary \(Q\) that we present. This settles the parameterized complexity of the problem with respect to the parameter \(Q\). The computed equilibrium is _contiguous_: players with better skills are contiguously assigned to lower qualities; contiguity is the crux to bypass the exponential barrier incurred when enumerating all profiles.
* A \(\Theta(\max\{Q,n\})\) algorithm for proposal-indifferent and anonymous players, under proportional allocation, for the special case where \(\Lambda(s_{i},\mathfrak{f}_{q})=s_{i}\,\mathfrak{f}_{q}\) and for a concrete scenario of _mandatory participation_ of players in the contest. Starting with the two highest qualities, we greedily proceed to the lowest, focusing each time on a pair of qualities: maintaining players previously assigned to higher qualities, we split the players assigned to the higher of the two between the two qualities currently considered so as to enforce equilibrium.
These results are complemented with extensions in various directions; for example, we devise simple \(\Theta(1)\) algorithms under proportional allocation, taking \(\Lambda(s_{i},\mathsf{f}_{q})=s_{i}\,\mathsf{f}_{q}\) and making stronger assumptions on skills and efforts, for both arbitrary and proposal-indifferent and anonymous players.
Keywords: Contests, Crowdsourcing Reviews, Payment Function, Skill-Effort Function, Pure Nash Equilibrium, Potential Function, Contiguous Equilibrium
Marios Mavronicolas: Supported by research funds at the University of Cyprus.
Paul G. Spirakis: Supported by the EPSRC grant EP/P02002X/1.
## 1 Introduction
_Contests_[36] are modelled as games where strategic contestants, or _players_, invest efforts in competitions to win valuable prizes, such as monetary awards, scientific credit or social reputation. Such competitions are ubiquitous in contexts such as promotion tournaments in organizations, allocation of campaign resources, content curation and selection in online platforms, financial support of scientific research by governmental institutions and question-and-answer forums. This work joins an active research thread on the existence, computation and efficiency of (pure) Nash equilibria in games for crowdsourcing, content curation, information aggregation and other relative tasks [1, 3, 4, 10, 11, 12, 13, 15, 22, 37].
In a _crowdsourcing contest_ (see, e.g., [5, 9, 30]), solutions to a certain task are solicited. When the task is the evaluation of proposals requesting funding, a set of expert advisors, or _reviewers,_ file peer-reviews of the proposals. We shall consider a contest game for crowdsourcing reviews, embracing and widely extending a corresponding game from [4, Section 2] that was motivated by issues in the design of blockchains and cryptocurrencies. In the contest game, funding agencies wish to collect peer-reviews of esteemed _quality_. _Costs_ are incurred to reviewers; they reflect various overheads, such as time, participation cost or reputational loss, and increase with the reviewers' _efforts_ and _skills_. Naturally, efforts increase with the qualities of the reviews. Efforts map collectively into _payments_ rewarded to the reviewers to counterbalance their efforts. We proceed to formalize these considerations.
### The Contest Game for Crowdsourcing Reviews
We assume familiarity with the basics of finite games, as articulated, e.g., in [21]. There are \(n\)_players_\(1,2,\ldots,n\), with \(n\geq 2\), and \(m\)_proposals_\(1,2,\ldots,m\), with \(m\geq 1\). Players simultaneously write a review for each proposal. In the general model of _arbitrary players and arbitrary proposals,_ each player \(i\in[n]\) comes with a _skill_\(s_{i\ell}\geq 1\) to review a proposal \(\ell\in[m]\); the set \(\{s_{i\ell}\}_{\ell\in[m]}\) is the unique intrinsic characteristic of player \(i\). In the special case of _proposal-indifferent players,_ for each player \(i\in[n]\), \(s_{i\ell}=s_{i}\) for all proposals \(\ell\in[m]\); note that the model of proposal-indifferent players coincides with that of arbitrary players and arbitrary proposals when \(m=1\). In the subcase of _proposal-indifferent and anonymous players,_\(s_{i}=1\) for all players \(i\in[n]\).
The _quality_ of a review is strategically chosen by the writing player from the set \(\{1,2,\ldots,Q\}\), with \(Q\geq 2\). Each player \(i\in[n]\) chooses a _strategy vector_\(\mathbf{q}_{i}=\langle q_{i1},\ldots,q_{im}\rangle\); each strategy \(q_{i\ell}\in[Q]\) represents the quality of the review player \(i\) writes for proposal \(\ell\). Denote as \(\mathsf{f}_{q}\) the _effort_ paid by a player writing a review of quality \(q\) for any proposal, where \(\mathsf{f}\) is an increasing function of \(q\); take that \(\mathsf{f}_{1}<\mathsf{f}_{2}<\ldots<\mathsf{f}_{Q}\). Clearly, larger values of \(Q\) allow for a more accurate distinction among efforts. For each proposal \(\ell\), denote as \(\mathbf{q}^{\ell}=\langle q_{1\ell},\ldots,q_{n\ell}\rangle\), the _quality vector_ for proposal \(\ell\). For a given quality vector \(\mathbf{q}^{\ell}\), the _load_ on quality \(q\), denoted as \(\mathsf{N}_{\mathbf{q}^{\ell}}(q)\), is the number of players choosing quality \(q\) in the quality vector \(\mathbf{q}^{\ell}\); \(\mathsf{Players}_{\mathbf{q}^{\ell}}(q)\) denotes the set of players choosing quality \(q\) for their review for proposal \(\ell\). Clearly, \(\sum_{q\in[Q]}\mathsf{N}_{\mathbf{q}^{\ell}}(q)=n\). Denote as \(\mathbf{Q}=\langle\mathbf{q}_{1},\ldots,\mathbf{q}_{n}\rangle\) the _strategy matrix_.
The _contest designer_ has _total available budget_\(\mathsf{B}\geq m\), distributed evenly among the \(m\) proposals; so the _available per-proposal budget_\(\beta=\frac{\mathsf{B}}{m}\geq 1\). Given a quality vector \(\mathbf{q}^{\ell}\) for proposal \(\ell\), and a player \(i\in[n]\), the _payment_ awarded to player \(i\in[n]\) for her review for proposal \(\ell\) is the value \(\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\) given by the _payment function_\(\mathsf{P}_{i\ell}\), obeying the _normalization condition_: for all quality vectors \(\mathbf{q}^{\ell}\), \(\sum_{k\in[n]}\mathsf{P}_{k\ell}(\mathbf{q}^{\ell})\leq 1\). The definition of the payment function together with the assumption \(\frac{\mathsf{B}}{m}\geq 1\) fulfils the _bounded-budget constraint_: for each proposal \(\ell\in[m]\), \(\sum_{k\in[n]}\mathsf{P}_{k\ell}(\mathbf{q}^{\ell})\leq\beta\). Modelling \(\mathsf{P}\) as a function of \(\mathbf{q}^{\ell}\), for a
given proposal \(\ell\in[m]\), is implicitly assuming that the payment does not depend on any distinguishing characteristic of the players, like skill; so for any players \(i\) and \(k\) with \(s_{i\ell}=s_{k\ell}\), \(\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})=\mathsf{P}_{k\ell}(\mathbf{q}^{\ell})\). Formally, a payment function \(\mathsf{P}_{i\ell}\) is _player-invariant_ if for each proposal \(\ell\in[m]\), for every quality vector \(\mathbf{q}^{\ell}\), for any players \(i,k\in[n]\) with \(q_{i\ell}=q_{k\ell}\), \(\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})=\mathsf{P}_{k\ell}(\mathbf{q}^{\ell})\). A generalization of a player-invariant payment function results by allowing the payment to player \(i\in[n]\) for his review for proposal \(\ell\in[m]\) to be a function \(\mathsf{P}_{i\ell}(i,\mathbf{q}^{\ell})\) of \(i\) and \(\mathbf{q}^{\ell}\); it is called a _player-specific_ payment function. In the specific model we consider, dependence of \(\mathsf{P}_{i\ell}\) on \(i\) could only be dependence on \(s_{i\ell}\): the unique distinguishing feature of player \(i\).
Some prominent examples of player-invariant payment functions are:
* The _proportional allocation_\(\mathsf{PA}_{i\ell}(\mathbf{q}^{\ell})=\frac{\mathsf{f}_{q_{i\ell}}}{\mathsf{ F}+\sum_{k\in[n]}\mathsf{f}_{q_{k\ell}}}\), where \(\mathsf{F}\geq 0\); thus, \(\sum_{k\in[n]}\mathsf{PA}_{k\ell}(\mathbf{q}^{\ell})=\frac{\sum_{k\in[n]} \mathsf{f}_{q_{k\ell}}}{\mathsf{F}+\sum_{k\in[n]}\mathsf{f}_{q_{k\ell}}}\leq 1\). \(\mathsf{F}\) models an "overhead" effort by the contest designer, adding to the total effort of reviewers and thereby reducing their payment.
* The _equal sharing_\(\mathsf{ES}_{i\ell}(\mathbf{q}^{\ell})=\mathsf{C}_{\mathsf{ES}}\cdot\frac{\mathsf{f}_{q_{i\ell}}}{\left|\{k\in[n]\mid\mathsf{f}_{q_{k\ell}}=\mathsf{f}_{q_{i\ell}}\}\right|}\). Since \(\sum_{k\in[n]}\mathsf{ES}_{k\ell}(\mathbf{q}^{\ell})=\mathsf{C}_{\mathsf{ES}}\cdot\frac{\sum_{k\in[n]}\mathsf{f}_{q_{k\ell}}}{\left|\{k\in[n]\mid q_{k\ell}=q_{i\ell}\}\right|}\), we take \(\mathsf{C}_{\mathsf{ES}}=\left(\max_{\mathbf{q}^{\ell}}\frac{\sum_{k\in[n]}\mathsf{f}_{q_{k\ell}}}{\left|\{k\in[n]\mid q_{k\ell}=q_{i\ell}\}\right|}\right)^{-1}\).
* The \(K\)-Top _allocation_\(K\text{-}\mathsf{Top}_{i\ell}(\mathbf{q}^{\ell})=\mathsf{C}_{K\text{-}\mathsf{Top}}\cdot\left\{\begin{array}{ll}0\,,&\text{if }\mathsf{f}_{i\ell}\leq\mathsf{f}_{Q-K}\\ \frac{\mathsf{f}_{i\ell}}{\left|\{k\in[n]\mid\mathsf{f}_{k\ell}=\mathsf{f}_{i\ell}\}\right|}\,,&\text{if }\mathsf{f}_{i\ell}>\mathsf{f}_{Q-K}\end{array}\right..\)
Since \(\sum_{k\in[n]}K\text{-}\mathsf{Top}_{k\ell}(\mathbf{q}^{\ell})=\frac{\sum_{k\in[n],\,\mathsf{f}_{k\ell}>\mathsf{f}_{Q-K}}\mathsf{f}_{k\ell}}{\left|\{\widehat{k}\in[n]\mid\mathsf{f}_{\widehat{k}\ell}=\mathsf{f}_{i\ell}\}\right|}\), we take \(\mathsf{C}_{K\text{-}\mathsf{Top}}=\left(\max_{\mathbf{q}^{\ell}}\frac{\sum_{k\in[n],\,\mathsf{f}_{k\ell}>\mathsf{f}_{Q-K}}\mathsf{f}_{k\ell}}{\left|\{\widehat{k}\in[n]\mid\mathsf{f}_{\widehat{k}\ell}=\mathsf{f}_{i\ell}\}\right|}\right)^{-1}\).
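For concreteness, here is a minimal Python sketch of the first two payment functions for a single proposal; the effort values and the fixed constant `c_es` are illustrative assumptions (the paper instead fixes \(\mathsf{C}_{\mathsf{ES}}\) by the maximization above).

```python
def proportional_allocation(q_vec, f, F=0.0):
    """PA payment for each player: f_{q_i} / (F + sum_k f_{q_k})."""
    total = F + sum(f[q] for q in q_vec)
    return [f[q] / total for q in q_vec]

def equal_sharing(q_vec, f, c_es):
    """ES payment: c_es * f_{q_i} / #{k : q_k = q_i}."""
    return [c_es * f[q] / sum(1 for p in q_vec if p == q) for q in q_vec]

# Efforts for qualities 1..3 (dictionary keys are qualities).
f = {1: 1.0, 2: 2.0, 3: 4.0}
q_vec = [1, 3, 3]   # player 1 chooses quality 1, players 2-3 choose quality 3
print(proportional_allocation(q_vec, f))   # sums to at most 1
print(equal_sharing(q_vec, f, c_es=0.2))
```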
Listed in [36, Section 6.1.3] are more examples of player-invariant payment functions, including _proportional-to-marginal contribution_ (motivated by the marginal contribution condition in _(monotone) valid utility games_[34]) and _Shapley-Shubick_[31, 32]. Proportional allocation with \(\mathsf{F}=0\) (resp, Equal sharing) is considered in the related works [3, 4] (resp., [12]).
The _maximum effort constraint_ restricts the review of multiple proposals by requiring that, in a strategy matrix \(\mathbf{Q}\), for every player \(i\in[n]\), \(\sum_{\ell\in[m]}\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{q_{i\ell}})\leq\mathsf{T}\), for some \(\mathsf{T}>0\), where the _skill-effort function_\(\mathsf{\Lambda}:\mathbb{R}_{\geq 1}\times\mathbb{R}_{>0}\to\mathbb{R}_{\geq 0}\), with \(\mathsf{\Lambda}(\cdot,0)=0\), is monotonically increasing in both skill and effort; it models their combined effect. Clearly, the maximum effort constraint is satisfied for _every_ strategy matrix \(\mathbf{Q}\) when \(\max_{i\in[n],\ell\in[m]}\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{Q})\leq\frac{\mathsf{T}}{m}\); henceforth, we shall assume this inequality so as to factor out this constraint in the analysis. On the other hand, no strategy matrix satisfies the maximum effort constraint when \(\min_{i\in[n],\ell\in[m]}\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{1})>\frac{\mathsf{T}}{m}\). Given a strategy matrix \(\mathbf{Q}\), the _total payment_\(\mathsf{TP}_{i}(\mathbf{Q})\) awarded to player \(i\in[n]\) is \(\mathsf{TP}_{i}(\mathbf{Q})=\sum_{\ell\in[m]}\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\). Since for each proposal \(\ell\in[m]\), \(\sum_{k\in[n]}\mathsf{P}_{k\ell}(\mathbf{q}^{\ell})\leq 1\), it follows that \(\sum_{k\in[n]}\mathsf{TP}_{k}(\mathbf{Q})\leq m\).
For a strategy matrix \(\mathbf{Q}\), the _cost function_, or _cost_, of player \(i\in[n]\) is defined as \(\mathsf{C}_{i}(\mathbf{Q})=\sum_{\ell\in[m]}\left(\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{q_{i\ell}})-\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\right)=\sum_{\ell\in[m]}\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{q_{i\ell}})-\mathsf{TP}_{i}(\mathbf{Q})\). Each player seeks to minimize her cost. In a _pure Nash equilibrium_\(\mathbf{Q}\), for every player \(i\in[n]\), for every proposal \(\ell\in[m]\) and for every deviation of player \(i\) to strategy \(q\in[Q]\), \(q\neq q_{i\ell}\), \(\mathsf{C}_{i}(\mathbf{Q})\leq\mathsf{C}_{i}(q,\mathbf{Q}_{-i\ell})\), where the strategy matrix \((q,\mathbf{Q}_{-i\ell})\) results from \(\mathbf{Q}\) by replacing \(q_{i\ell}\) with \(q\); so no reviewer could reduce her cost by unilaterally switching to a different quality for her review for a proposal. Denote as \(\mathsf{PNE}\) in Contest Game the problem of computing a pure Nash equilibrium for the contest game with arbitrary participation. Clearly, the assumption \(\min_{i\in[n],\ell\in[m]}\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{1})>\frac{\mathsf{T}}{m}\) excludes the existence of a pure Nash equilibrium. So we assume that \(\min_{i\in[n],\ell\in[m]}\mathsf{\Lambda}(s_{i\ell},\mathsf{f}_{1})\leq\frac{\mathsf{T}}{m}\) in order to render \(\mathsf{PNE}\) in Contest Game non-trivial.
### Results
We study the existence and the computation of pure Nash equilibria for the contest game. _Do pure Nash equilibria always exist for arbitrary players and arbitrary proposals with a player-invariant or player-specific payment function and for arbitrary \(n\), \(m\) and \(Q\)?_ For a particular special case of the contest game, this has been advocated as a significant open problem in [4, Section 6]. _What is the time complexity of computing one or deciding its existence? Is this complexity affected by properties of the skill-effort or payment function, or by numerical properties of skills and efforts, and how?_ We shall present three major results.
Our first major result is that every contest game, with arbitrary players and arbitrary proposals and with a player-invariant payment function, has a pure Nash equilibrium for any values of \(n\), \(m\) and \(Q\) and any skill-effort function \(\mathsf{\Lambda}\) (Theorem 1). We construct a _potential function_[25], which may be of independent interest, and resort to the fact that every _potential game_ has a pure Nash equilibrium [25]. By Theorem 1, the contest game with proportional allocation, equal sharing, \(K\)-Top allocation and the other examples of player-invariant payment functions has a pure Nash equilibrium. In contrast, when the payment function is player-specific, there are simple contest games with no pure Nash equilibria (Proposition 2); the \(\mathcal{NP}\)-completeness of deciding the existence of a pure Nash equilibrium follows by a simple reduction from the problem of deciding the existence of a pure Nash equilibrium in a strategic game [29, Theorem 2.4.1] (Theorem 3). For the remaining two results, we take \(m=1\).
For our second major result, we assume that the skill-effort function has the _increasing-differences_ property; this is a required property on utilities for the class of _supermodular games_[33] (or _games with strategic complementarities_), an important class of games, widely studied in Economic Theory.1 We present \(\Theta\left(\max\{Q^{2},n\}\binom{n}{Q-1}\right)\) and \(\Theta\left(n\cdot Q^{2}\cdot\binom{n}{Q-1}\right)\) algorithms, for player-invariant and player-specific payment functions respectively, to either compute a pure Nash equilibrium or decide its existence (Theorem 4). Exhaustive enumeration of _all_ profiles incurs an _exponential_\(\Theta(Q^{n})\) cost. To bypass the intractability, we focus on _contiguous_ profiles, where any players \(i\) and \(k\), with \(s_{i}\geq s_{k}\), are assigned to qualities \(q\) and \(q^{\prime}\), respectively, with \(q\leq q^{\prime}\); they offer a significant advantage: the cost for their exhaustive enumeration drops to \(\Theta\left(\binom{n}{Q-1}\right)\), which is \(\Theta(n^{Q-1})\), _polynomial_ in \(n\) when \(Q\) is constant. We prove the _Contiguification Lemma_: any pure Nash equilibrium for the contest game can be transformed into a contiguous one (Proposition 5). So, it suffices to search for a contiguous pure Nash equilibrium. We present an algorithm that searches over contiguous profiles; it is polynomial-time \(\Theta(n^{Q})\) for _constant_\(Q\), when the payment function is either player-invariant or player-specific. The algorithm works with the same complexities in corresponding cases when the skill-effort function is separable (Corollary 8). For proposal-indifferent and anonymous players, we get a simplification of the algorithm with a slightly improved time complexity (Corollary 9).
Footnote 1: The increasing-differences property strengthens the _single-crossing_ condition. Supermodular games always have a pure Nash equilibrium. For related work, see, e.g., [14, 24] or [27, Section 2.1].
Our third major result is an algorithm for the model of proposal-indifferent and anonymous players, with proportional allocation and \(\mathsf{\Lambda}(s_{i},\mathsf{f}_{q})=s_{i}\,\mathsf{f}_{q}\). We make two technical assumptions on efforts: (C1) \(\mathsf{f}_{1}>\frac{1}{n}\), and (C2) for any quality \(q\leq Q-1\), \(\mathsf{f}_{q}>\frac{\mathsf{f}_{Q}}{(n-1)^{2}}\). Since \(\mathsf{f}_{q}\) increases with \(q\), (C1) implies that \(\mathsf{f}_{q}>\frac{1}{n}\) for all qualities \(q\in[Q]\); thus, all efforts should be large enough. We present an algorithm which, perhaps surprisingly, computes
a pure Nash equilibrium in \(\Theta(\max\{Q,n\})\) time (Theorem 10). We proceed by adding one quality at a time, starting with \(Q\) and going down. In each iteration, we consider one pair of qualities at a time: the currently added \(q\) with the immediately lower \(q-1\), where \(Q\geq q\geq 2\). We compute a pure Nash equilibrium _as if_ only the qualities \(q\) and \(q-1\) were available in the contest; we neither consider other qualities (higher or lower) nor reassign players assigned previously to qualities higher than \(q\). Instead, we split, into \(q\) and \(q-1\), the players assigned to \(q\) immediately before; we prove "no-interference" with previously assigned players. With a challenging analysis, we prove inductively that the newly computed assignment is a pure Nash equilibrium _as if_ only qualities from \(Q\) through \(q-1\) were available. So at termination, a pure Nash equilibrium is computed. Since, in each iteration, players assigned to qualities higher or equal to \(q\) are not reassigned, the number of players not permanently assigned (those assigned to \(q-1\)) may either "shrink" or remain constant till we run out of qualities.
To complement the three main results, we present a very simple, \(\Theta(1)\) algorithm that works under proportional allocation, taking \(\mathsf{\Lambda}(s_{i},\mathsf{f}_{q})=s_{i}\,\mathsf{f}_{q}\) and making stronger assumptions on skills and efforts. Specifically, under the assumption \(\min_{i\in[n]}s_{i}\geq\frac{\mathsf{f}_{2}}{\mathsf{f}_{2}-\mathsf{f}_{1}}\) on skills and efforts, it works for arbitrary players; under the assumption \(\mathsf{f}_{2}-\mathsf{f}_{1}\geq 1\), it works for anonymous players. The algorithm simply assigns all players to the lowest quality \(1\); so it runs in optimal time \(\Theta(1)\). We give simple proofs that the computed assignment is a pure Nash equilibrium (Theorems 16 and 17).
### Related Work and Comparison
Games employing proportional allocation, equal sharing and \(K\)-Top allocation have been studied, for example, in [15, 26, 38], in [12, 22] and in [10, 20, 37], respectively. Accounts on proportional allocation and equal sharing in simultaneous contests appear in [36, Section 5.4 & Section 5.5], respectively. Player-invariant payment functions enhance _Anonymous Independent Reward Schemes (AIRS)_[6], where payments, termed as _rewards,_ are only allowed to depend on the quality of the individual review, or _content_ in the context of user-generated content platforms. Player-specific payment functions are motivated by _player-specific payoff functions_ in congestion games [23]. Closest to the contest game are the games in [3, 4, 12].
The game in [4] models non-mandatory participation by setting \(\mathsf{f}_{1}=0\); thus, by definition of the skill-effort function and under any of the three prominent examples of player-invariant payment functions, a player choosing quality \(1\) has cost \(0\). So the analysis in [4] for two and three qualities is simpler as it deals with only one and two efforts. In general, algorithms explicitly using either assumption (that \(\mathsf{f}_{1}>0\) or \(\mathsf{f}_{1}=0\)) cannot transfer from one game to the other. Theorem 1 generalizes [4, Theorem 3], proved for the special case of non-mandatory participation (with \(\mathsf{f}_{1}=0\)), proportional allocation and \(Q=3\). For [4, Theorem 1], a constructive proof is outlined that there is a pure Nash equilibrium in the special case with three qualities \(0\), \(1\) and \(\alpha>1\) and respective efforts. In contrast, Theorems 1 and 4 do _not_ fix \(\mathsf{f}_{1}\) and consider an arbitrary player-invariant payment function and an arbitrary \(Q\). It is argued in [4, proof of Theorem 1] that their constructive round-based procedure converges to a pure Nash equilibrium after \(n\) rounds, but no analysis is provided for the number of steps taken in each round. To the best of our understanding, this is \(\Theta(n)\), for a total of \(n^{2}\) steps. In contrast, the algorithm in Theorem 4 is \(\Theta(n^{2})\) when \(Q=2\) bypassing all restrictions in [4, proof of Theorem 1].
The contest game is related to _project games_[3], where each _weighted_ player \(i\) selects a single _project_\(\sigma_{i}\in S_{i}\) among those available to him, but several players may select the same
project. Weights \(w_{i,\sigma_{i}}\) are project-specific; they are called _universal_ when they are fixed for the same project and _identical_ when the fixed weights are the same over all projects. The utility of player \(i\) is a fraction \(r_{\sigma_{i}}\) of the proportional allocation of weights on the project \(\sigma_{i}\). Projects can be considered to correspond to qualities in the contest game, which, in contrast, has, in general, neither weights nor fractions but has the extra term \(\mathsf{\Lambda}(s_{i},\mathsf{f}_{q})\) in the cost.
In the game of Elkind _et al._[12], there are \(m\)_activities_ and player \(i\in[n]\) chooses an _output vector_\(\mathbf{b}_{i}=\langle b_{i1},\ldots,b_{im}\rangle\), with \(b_{i\ell}\in\mathbb{R}_{\geq 0}\), \(\ell\in[m]\); the case \(b_{i\ell}=0\) corresponds to non-mandatory participation. In contrast, there are no activities in the contest game; nevertheless, one may view proposals and strategy vectors in it (as well as in the contest game of Birmpas _et al._[4]) as activities and output vectors, respectively. There are \(C\geq 1\)_contests_ awarding prizes to the players based on their output vectors; allocation is equal sharing in [12], by which players receiving a prize share are "filtered" using a function \(f_{c}\) associated with contest \(c\). The special case of the game in [12] with \(C=m\) can be cast as a contest game with contests corresponding to proposals; however, it "filters" players receiving a prize share due to the definition of the functions \(f_{c}\).
The contest game may be viewed as a tuple of inter-related games, each corresponding to a proposal. Each such game is a variation of _(singleton) congestion games_[19, 35], where the links are the \(Q\) qualities and the delay of a player on the link it chooses is a function of the loads on _all_ links, as opposed to the load on the chosen link, as in classical congestion games [28]; the load on a link is the number of players choosing the corresponding quality.
## 2 (In)Existence of a Pure Nash Equilibrium
We show:
**Theorem 1**.: _There is a pure Nash equilibrium for the model of arbitrary players and arbitrary proposals with a player-invariant payment function._
A cost minimization strategic game is an _(exact) potential game_[25] if there is a _potential function_\(\Phi\), mapping profiles to numbers, such that for each player \(i\in[n]\) with cost function \(\mathsf{C}_{i}\), for any pair \(q_{i}\) and \(q^{\prime}_{i}\) of his strategies, and for any partial profile \(\mathbf{q}_{-i}\), \(\mathsf{C}_{i}(q^{\prime}_{i},\mathbf{q}_{-i})-\mathsf{C}_{i}(q_{i},\mathbf{q }_{-i})=\Phi(q_{i},\mathbf{q}_{-i})-\Phi(q^{\prime}_{i},\mathbf{q}_{-i})\). It is known that every potential game has at least one pure Nash equilibrium and best-response dynamics converge to it [25]. Recall the \(k\)-th _Harmonic Number_\(\mathsf{H}_{k}=1+\frac{1}{2}+\ldots+\frac{1}{k}\), where \(k\in\mathbb{N}_{>0}\). By convention, take \(\mathsf{H}_{0}=0\).
Define the function \(\Phi:\{\mathbf{Q}\}\to\mathbb{R}\) as \(\Phi(\mathbf{Q})=\sum_{\ell\in[m]}\Phi_{\ell}(\mathbf{q}^{\ell})\), where the function \(\Phi_{\ell}:\{\mathbf{q}^{\ell}\}\to\mathbb{R}\), with \(\ell\in[m]\), is given by
\[\Phi_{\ell}(\mathbf{q}^{\ell}) = \sum_{q\in[Q]}\mathsf{\Gamma}_{\ell}(\mathsf{N}_{\mathbf{q}^{ \ell}}(q))\cdot\mathsf{H}_{\mathsf{N}_{\mathbf{q}^{\ell}}(q)}-\sum_{k\in[n]} \Lambda(s_{k\ell},\mathsf{f}_{q_{k\ell}})\,;\]
the function \(\mathsf{\Gamma}_{\ell}:\mathbb{N}\cup\{0\}\to\mathbb{R}\) will be defined later. We prove that \(\Phi\) is a potential.
Fix a strategy matrix \(\mathbf{Q}\). Consider a player \(i\in[n]\) switching from strategy \(q_{i\ell}\), for some proposal \(\ell\in[m]\), to strategy \(\widehat{q}_{i\ell}\), while other players do not change strategies. So the quality vector \(\mathbf{q}^{\ell}=\langle q_{1\ell},\ldots,q_{(i-1)\ell},q_{i\ell},q_{(i+1)\ell},\ldots,q_{n\ell}\rangle\) is transformed into \(\widehat{\mathbf{q}}^{\ell}:=\langle q_{1\ell},\ldots,q_{(i-1)\ell},\widehat{q}_{i\ell},q_{(i+1)\ell},\ldots,q_{n\ell}\rangle\). Clearly, \(\mathsf{N}_{\widehat{\mathbf{q}}^{\ell}}(q_{i\ell})=\mathsf{N}_{\mathbf{q}^{\ell}}(q_{i\ell})-1\), \(\mathsf{N}_{\widehat{\mathbf{q}}^{\ell}}(\widehat{q}_{i\ell})=\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q}_{i\ell})+1\) and \(\mathsf{N}_{\widehat{\mathbf{q}}^{\ell}}(q^{\prime\prime})=\mathsf{N}_{\mathbf{q}^{\ell}}(q^{\prime\prime})\) for each quality \(q^{\prime\prime}\neq q_{i\ell},\widehat{q}_{i\ell}\). To shorten notation, denote \(q_{i\ell}\) and \(\widehat{q}_{i\ell}\) as \(q\) and \(\widehat{q}\), respectively. Set \(\widehat{\mathbf{Q}}:=(\mathbf{Q}_{-i},\widehat{\mathbf{q}}_{i})\), where \(\widehat{\mathbf{q}}_{i}=\langle q_{i1},q_{i2},\ldots,\widehat{q}_{i\ell},\ldots,q_{im}\rangle\). So,
\[\mathsf{C}_{i}(\widehat{\mathbf{Q}})-\mathsf{C}_{i}(\mathbf{Q}) = \left[\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\right]_{[\mathsf{N}_{\mathbf{q}^{\ell}}(q),\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})]}-\left[\mathsf{P}_{i\ell}(\widehat{\mathbf{q}}^{\ell})\right]_{[\mathsf{N}_{\mathbf{q}^{\ell}}(q)-1,\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})+1]}+\Lambda(s_{i\ell},\mathsf{f}_{\widehat{q}})-\Lambda(s_{i\ell},\mathsf{f}_{q})\,,\]
where \(\big{[}\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\big{]}_{[\mathsf{N}_{\mathbf{q}^{ \ell}}(q),\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})]}\) and \(\big{[}\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\big{]}_{[\mathsf{N}_{\mathbf{q}^{ \ell}}(q)-1,\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})+1]}\) denote the payments awarded to \(i\) (regarding proposal \(\ell\)) when the two loads on qualities \(q\) and \(\widehat{q}\) are \(\big{(}\mathsf{N}_{\mathbf{q}^{\ell}}(q),\mathsf{N}_{\mathbf{q}^{\ell}}( \widehat{q})\big{)}\) and \(\big{(}\mathsf{N}_{\mathbf{q}^{\ell}}(q)-1,\mathsf{N}_{\mathbf{q}^{\ell}}( \widehat{q})+1\big{)}\), respectively, while loads on other qualities remain unchanged. So \(\big{[}\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\big{]}_{[\mathsf{N}_{\mathbf{q}^ {\ell}}(q),\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})]}=\mathsf{P}_{i\ell}( \mathbf{q}^{\ell})\) and \(\big{[}\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\big{]}_{[\mathsf{N}_{\mathbf{q}^ {\ell}}(q)-1,\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})+1]}=\mathsf{P}_{i \ell}(\widehat{\mathbf{q}}^{\ell})\). Clearly,
\[\Phi(\mathbf{Q})-\Phi(\widehat{\mathbf{Q}}) = \Phi_{\ell}(\mathbf{q}^{\ell})-\Phi_{\ell}(\widehat{\mathbf{q}}^{\ell})\] \[= \mathsf{\Gamma}_{\ell}(\mathsf{N}_{\mathbf{q}^{\ell}}(q))\cdot\mathsf{H}_{\mathsf{N}_{\mathbf{q}^{\ell}}(q)}-\mathsf{\Gamma}_{\ell}(\mathsf{N}_{\mathbf{q}^{\ell}}(q)-1)\cdot\mathsf{H}_{\mathsf{N}_{\mathbf{q}^{\ell}}(q)-1}+\mathsf{\Gamma}_{\ell}(\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q}))\cdot\mathsf{H}_{\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})}-\mathsf{\Gamma}_{\ell}(\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})+1)\cdot\mathsf{H}_{\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})+1}+\Lambda(s_{i\ell},\mathsf{f}_{\widehat{q}})-\Lambda(s_{i\ell},\mathsf{f}_{q})\,.\] By a suitable choice of \(\mathsf{\Gamma}_{\ell}\), the increments of \(\mathsf{\Gamma}_{\ell}(\cdot)\cdot\mathsf{H}_{(\cdot)}\) match the payment terms \(\big{[}\mathsf{P}_{i\ell}(\mathbf{q}^{\ell})\big{]}_{[\mathsf{N}_{\mathbf{q}^{\ell}}(q),\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})]}-\big{[}\mathsf{P}_{i\ell}(\widehat{\mathbf{q}}^{\ell})\big{]}_{[\mathsf{N}_{\mathbf{q}^{\ell}}(q)-1,\mathsf{N}_{\mathbf{q}^{\ell}}(\widehat{q})+1]}\), so that \(\Phi(\mathbf{Q})-\Phi(\widehat{\mathbf{Q}})=\mathsf{C}_{i}(\widehat{\mathbf{Q}})-\mathsf{C}_{i}(\mathbf{Q})\); hence \(\Phi\) is a potential and a pure Nash equilibrium exists [25].
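Since the game admits a potential, best-response dynamics converge to a pure Nash equilibrium [25]. A minimal Python sketch for a single proposal under proportional allocation, assuming the admissible skill-effort function \(\Lambda(s,\mathsf{f})=s\,\mathsf{f}\); the skills and efforts are illustrative.

```python
def cost(i, q_vec, s, f, F=0.0):
    """C_i = Lambda(s_i, f_{q_i}) - PA payment, with Lambda(s, e) = s * e
    (one admissible skill-effort function; m = 1 proposal)."""
    total = F + sum(f[q] for q in q_vec)
    return s[i] * f[q_vec[i]] - f[q_vec[i]] / total

def best_response_dynamics(s, f, Q, max_rounds=1000):
    """Iterate best responses; the potential argument of Theorem 1
    guarantees termination at a pure Nash equilibrium for player-invariant
    payment functions."""
    n = len(s)
    q_vec = [1] * n
    for _ in range(max_rounds):
        improved = False
        for i in range(n):
            best_q = min(range(1, Q + 1),
                         key=lambda q: cost(i, q_vec[:i] + [q] + q_vec[i+1:], s, f))
            new_cost = cost(i, q_vec[:i] + [best_q] + q_vec[i+1:], s, f)
            if new_cost < cost(i, q_vec, s, f) - 1e-12:
                q_vec[i] = best_q
                improved = True
        if not improved:
            return q_vec
    raise RuntimeError("did not converge")

s = [1.0, 1.0, 2.0]            # skills
f = {1: 0.1, 2: 0.3, 3: 0.6}   # efforts, increasing in quality
print(best_response_dynamics(s, f, Q=3))
```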
**Proposition 2**.: _For the model of arbitrary players and arbitrary proposals with a player-specific payment function, there is not necessarily a pure Nash equilibrium._
Proof.: Consider the contest game with players \(1\) and \(2\), two qualities \(1\) and \(2\) and a single proposal; similarly to _Matching Pennies,_ player \(1\) has a big payment when alone on a quality, else a very small one, and player \(2\) has a big payment when not alone, else a very small one. Formally, define \(\mathsf{f}_{1}=1\), \(\mathsf{f}_{2}=2\), \(s_{1}=s_{2}=1\), \(\mathsf{P}_{1}(1,1)=\mathsf{P}_{1}(2,2)=10^{3}\), \(\mathsf{P}_{1}(1,2)=\mathsf{P}_{1}(2,1)=10\), \(\mathsf{P}_{2}(1,2)=\mathsf{P}_{2}(2,1)=10^{3}\) and \(\mathsf{P}_{2}(1,1)=\mathsf{P}_{2}(2,2)=10\). It is straightforward to verify that none of the quality vectors \(\langle 1,1\rangle\), \(\langle 2,2\rangle\), \(\langle 1,2\rangle\) and \(\langle 2,1\rangle\) is a pure Nash equilibrium.
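The verification can be done mechanically; a short brute-force check over the four quality vectors, assuming \(\Lambda(s,\mathsf{f})=s\,\mathsf{f}\) (with \(s_{1}=s_{2}=1\), any monotone choice gives the same conclusion, since the payments dominate the efforts):

```python
# Exhaustive check that the 2-player, 2-quality game of Proposition 2 has
# no pure Nash equilibrium; Lambda(s, e) = s * e with s_1 = s_2 = 1.
f = {1: 1.0, 2: 2.0}
P1 = {(1, 1): 1e3, (2, 2): 1e3, (1, 2): 10.0, (2, 1): 10.0}
P2 = {(1, 2): 1e3, (2, 1): 1e3, (1, 1): 10.0, (2, 2): 10.0}

def costs(q1, q2):
    return f[q1] - P1[(q1, q2)], f[q2] - P2[(q1, q2)]

for q1 in (1, 2):
    for q2 in (1, 2):
        c1, c2 = costs(q1, q2)
        stable1 = all(c1 <= costs(d, q2)[0] for d in (1, 2))
        stable2 = all(c2 <= costs(q1, d)[1] for d in (1, 2))
        print((q1, q2), "PNE" if stable1 and stable2 else "not a PNE")
```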
Denote as \(\exists\textsc{PNE}\) in Contest Game the problem of deciding if there exists a pure Nash equilibrium for the contest game with arbitrary participation. We show:
**Theorem 3**.: \(\exists\textsc{PNE}\) in Contest Game _is \(\mathcal{NP}\)-complete for the model of arbitrary players and arbitrary proposals with a player-specific payment function._
Proof.: \(\exists\textsc{PNE}\) in Contest Game\(\in\mathcal{NP}\) since a non-deterministic Turing machine can guess a quality vector and verify the conditions for a pure Nash equilibrium. To prove \(\mathcal{NP}\)-hardness, we reduce from the problem of deciding the existence of a pure Nash equilibrium in a finite strategic game [29, Theorem 2.4.1]. So consider such a game with \(n\) players, \(m\) strategies and payoff functions \(\{\mathsf{F}_{i}\}_{i\in[n]}\) represented by a polynomial-time algorithm computing, for each pair of a pure profile \(\mathbf{s}\) and a player \(i\in[n]\), the payoff \(\mathsf{F}_{i}(\mathbf{s})\) of player \(i\) in \(\mathbf{s}\). Construct a contest game for crowdsourcing reviews with a single proposal, \(n\) players and \(Q=m\), so that the strategy vectors coincide with pure profiles of the strategic game, and \(s_{i}=1\) for all players \(i\in[n]\); define the payment function as \(\mathsf{P}_{i}(i,\mathbf{q})=\mathsf{F}_{i}(\mathbf{s})+\Lambda(s_{i},\mathsf{f}_{q_{i}})\) for a player \(i\) and a strategy vector \(\mathbf{q}\); thus, \(\mathsf{C}_{i}(\mathbf{q})=-\mathsf{F}_{i}(\mathbf{s})\), so the pure Nash equilibria of the two games coincide.
## 3 Skill-Effort Functions with Increasing Differences
A function \(\Lambda:\mathbb{R}^{2}\to\mathbb{R}\) of two variables \(y\) and \(z\)_has increasing differences_ if for all \(y,y^{\prime}\) with \(y\geq y^{\prime}\), the difference \(\Lambda(y,z)-\Lambda(y^{\prime},z)\) is monotonically increasing in \(z\); that is, for all \(z\geq z^{\prime}\), \(\Lambda(y,z)-\Lambda(y^{\prime},z)\geq\Lambda(y,z^{\prime})-\Lambda(y^{\prime},z^{\prime})\). We show:
**Theorem 4**.: _Consider the model of arbitrary players and arbitrary proposals with \(m=1\) and a skill-effort function that has increasing differences. Then, there is a \(\Theta\left(\max\{Q^{2},n\}\cdot\binom{n}{Q-1}\right)\) (resp., \(\Theta\left(n\cdot Q^{2}\cdot\binom{n}{Q-1}\right)\)) algorithm that solves PNE in Contest Game (resp. \(\exists\textsc{PNE}\) in Contest Game) for a player-invariant (resp., player-specific) payment function; for constant \(Q\), it is a \(\Theta(n^{Q})\) polynomial algorithm._
Proof.: Order the players so that \(s_{1}\geq s_{2}\geq\ldots\geq s_{n}\). Recall that \(\mathsf{f}_{1}<\mathsf{f}_{2}<\ldots<\mathsf{f}_{Q}\). Represent a strategy vector \(\mathbf{q}\) using a _load vector_ \(\mathbf{x}=\langle\mathsf{N}_{\mathbf{x}}(1),\mathsf{N}_{\mathbf{x}}(2),\ldots,\mathsf{N}_{\mathbf{x}}(Q)\rangle\), specifying which \(\mathsf{N}_{\mathbf{x}}(q)\) players choose each quality \(q\in[Q]\). Say that \(\mathbf{x}\) is _contiguous_ if players \(1\) to \(\mathsf{N}_{\mathbf{x}}(1)\) choose quality \(1\), players \(\mathsf{N}_{\mathbf{x}}(1)+1\) to \(\mathsf{N}_{\mathbf{x}}(1)+\mathsf{N}_{\mathbf{x}}(2)\) choose quality \(2\), and so on till players \(\sum_{q\in[Q-1]}\mathsf{N}_{\mathbf{x}}(q)+1\) to \(n\) choose quality \(q_{\mathsf{last}}\leq Q\) such that for each quality \(\widehat{q}>q_{\mathsf{last}}\), \(\mathsf{N}_{\mathbf{x}}(\widehat{q})=0\); so for any players \(i\) and \(k\) with \(i<k\), choosing distinct qualities \(q\) and \(q^{\prime}\), respectively, we have \(q<q^{\prime}\). Clearly, a contiguous load vector determines by itself which \(\mathsf{N}_{\mathbf{x}}(q)\) players choose each quality \(q\in[Q]\). Given a contiguous load vector \(\mathbf{x}\), denote, for each quality \(q\in[Q]\) such that \(\mathsf{Players}_{\mathbf{x}}(q)\neq\emptyset\), the minimum and the maximum player index \(i\in\mathsf{Players}_{\mathbf{x}}(q)\) as \(\mathsf{first}_{\mathbf{x}}(q)\) and \(\mathsf{last}_{\mathbf{x}}(q)\), respectively. Clearly, \(\mathsf{first}_{\mathbf{x}}(q)=\sum_{\widehat{q}<q}\mathsf{N}_{\mathbf{x}}(\widehat{q})+1\) and \(\mathsf{last}_{\mathbf{x}}(q)=\sum_{\widehat{q}\leq q}\mathsf{N}_{\mathbf{x}}(\widehat{q})\); so \(\mathsf{first}_{\mathbf{x}}(1)=1\) for \(\mathsf{N}_{\mathbf{x}}(1)>0\) and \(\mathsf{last}_{\mathbf{x}}(Q)=n\) for \(\mathsf{N}_{\mathbf{x}}(Q)>0\).
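For concreteness, here is a small Python helper (an illustrative sketch, not part of the proof) that recovers \(\mathsf{first}_{\mathbf{x}}(q)\) and \(\mathsf{last}_{\mathbf{x}}(q)\) from a contiguous load vector:

```python
def contiguous_first_last(loads):
    """Given a contiguous load vector loads = [N(1), ..., N(Q)], with players
    ordered by non-increasing skill, return dicts first and last mapping each
    quality q with N(q) > 0 to the 1-based indices of the first and last
    players choosing q: first(q) = sum_{q'<q} N(q') + 1, last(q) = sum_{q'<=q} N(q')."""
    first, last, assigned = {}, {}, 0
    for q, n_q in enumerate(loads, start=1):
        if n_q > 0:
            first[q], last[q] = assigned + 1, assigned + n_q
        assigned += n_q
    return first, last

# Example: n = 5, Q = 3; players 1-2 choose quality 1, players 3-5 quality 2.
assert contiguous_first_last([2, 3, 0]) == ({1: 1, 2: 3}, {1: 2, 2: 5})
```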
Say that an _inversion_ occurs in a load vector \(\mathbf{x}\) if there are players \(i\) and \(k\) with \(i<k\) choosing qualities \(q\) and \(q^{\prime}\), respectively, with \(q>q^{\prime}\); thus, \(s_{i}\geq s_{k}\) while \(\mathsf{f}_{q}>\mathsf{f}_{q^{\prime}}\). Call \(i\) an _inversion witness_; call \(i\) and \(k\) an _inversion pair_. Since \(\Lambda\) has increasing differences, \(\Lambda(s_{i},q)-\Lambda(s_{i},q^{\prime})\geq\Lambda(s_{k},q)-\Lambda(s_{k},q ^{\prime})\). We prove:
**Lemma 5** (Contigification Lemma).: _Any pair of a pure Nash equilibrium \(\mathbf{x}=\langle\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(Q)\rangle\) and sets \(\mathsf{Players}_{\mathbf{x}}(q)\), for each quality \(q\in[Q]\), can be transformed into a contiguous pure Nash equilibrium._
Proof.: It suffices to prove the claim for a player-specific payment function. If no inversion occurs in \(\mathbf{x}\), then \(\mathbf{x}\) is contiguous and we are done. Else take the earliest inversion witness \(i\), together with the earliest player \(k\) such that \(i\) and \(k\) make an inversion. We have that \(\mathsf{C}_{i}(\mathbf{x})=\Lambda(s_{i},\mathsf{f}_{q})-\mathsf{P}_{i}( \mathbf{x})\) and \(\mathsf{C}_{k}(\mathbf{x})=\Lambda(s_{k},\mathsf{f}_{q^{\prime}})-\mathsf{P}_ {k}(\mathbf{x})\). Since \(\mathbf{x}\) is a pure Nash equilibrium, player \(k\) does not want to switch to quality \(q\), which happens if and only if \(\Lambda(s_{k},\mathsf{f}_{q^{\prime}})-\mathsf{P}_{k}(\mathbf{x})\leq\Lambda( s_{k},\mathsf{f}_{q})-\mathsf{P}_{k}(x_{1},\ldots,x_{q}+1,\ldots,x_{q^{\prime}}-1, \ldots,x_{Q})\) or
\[\Lambda(s_{k},\mathsf{f}_{q})-\Lambda(s_{k},\mathsf{f}_{q^{\prime}}) \geq -\mathsf{P}_{k}(\mathbf{x})+\mathsf{P}_{k}(\mathsf{N}_{\mathbf{x }}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})-1,\ldots,\mathsf{N}_{ \mathbf{x}}(q)+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))(\mathsf{C}.1)\,;\]
player \(i\) does not want to move to quality \(q^{\prime}\), which happens if and only if \(\Lambda(s_{i},\mathsf{f}_{q})-\mathsf{P}_{i}(\mathbf{x})\leq\Lambda(s_{i}, \mathsf{f}_{q^{\prime}})-\mathsf{P}_{i}(x_{1},\ldots,x_{q}-1,\ldots,x_{q^{ \prime}}+1,\ldots,x_{Q})\) or
\[\Lambda(s_{i},\mathsf{f}_{q})-\Lambda(s_{i},\mathsf{f}_{q^{\prime}}) \leq \mathsf{P}_{i}(\mathbf{x})-\mathsf{P}_{i}(\mathsf{N}_{\mathbf{x }}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{ \mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))(\mathsf{C}.2)\,.\]
Swap the qualities chosen by players \(i\) and \(k\); so they now choose \(q^{\prime}\) and \(q\), respectively. Choices of other players are preserved. Denote as \(\mathbf{x}^{\prime}\) the resulting load vector; clearly, for each \(\widehat{q}\in[Q]\), \(\mathsf{N}_{\mathbf{x}^{\prime}}(\widehat{q})=\mathsf{N}_{\mathbf{x}}( \widehat{q})\). We prove:
**Claim 6**.: _The earliest inversion witness in \(\mathbf{x}^{\prime}\) is either \(i\) or some player \(\widehat{i}>i\)._
Proof.: Assume, by way of contradiction, that the earliest inversion witness in \(\mathbf{x}^{\prime}\) is a player \(j<i\). Since the earliest inversion witness in \(\mathbf{x}\) is \(i\), \(j\) is not an inversion witness in \(\mathbf{x}\). Let \(\widehat{q}\) be the quality chosen by \(j\) in \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). Since players other than \(i\) and \(k\) do not change qualities in \(\mathbf{x}^{\prime}\), \(j\) makes an inversion pair with either \(i\) or \(k\) in \(\mathbf{x}^{\prime}\). There are two cases.
* \(j\) makes an inversion pair with \(i\) in \(\mathbf{x}^{\prime}\): Since \(i\) chooses quality \(q^{\prime}\) in \(\mathbf{x}^{\prime}\), it follows that \(\widehat{q}>q^{\prime}\). Since \(k>j\) and \(k\) chooses quality \(q^{\prime}\) in \(\mathbf{x}\), this implies that \(j\) and \(k\) make an inversion pair in \(\mathbf{x}\).
* \(j\) makes an inversion pair with \(k\) in \(\mathbf{x}^{\prime}\): Since \(k\) chooses quality \(q\) in \(\mathbf{x}^{\prime}\), it follows that \(\widehat{q}>q\). Since \(i>j\) and \(i\) chooses quality \(q\) in \(\mathbf{x}\), this implies that \(j\) and \(i\) make an inversion pair in \(\mathbf{x}\).
In either case, since \(i>j\), \(i\) is not the earliest witness of inversion in \(\mathbf{x}\). A contradiction.
We continue to prove:
**Claim 7**.: \(\mathbf{x}^{\prime}\) _is a pure Nash equilibrium._
Proof.: For \(\mathbf{x}^{\prime}\) to be a pure equilibrium, player \(i\) should not want to switch to quality \(q\), which happens if and only if \(\Lambda(s_{i},\mathsf{f}_{q^{\prime}})-\mathsf{P}_{i}(\mathbf{x}^{\prime}) \leq\Lambda(s_{i},\mathsf{f}_{q})-\mathsf{P}_{i}(\mathsf{N}_{\mathbf{x}^{ \prime}}(1),\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q^{\prime})-1,\ldots, \mathsf{N}_{\mathbf{x}^{\prime}}(q)+1,\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}( Q))\), or
\[\Lambda(s_{i},\mathsf{f}_{q})-\Lambda(s_{i},\mathsf{f}_{q^{\prime}}) \geq \mathsf{P}_{i}(\mathbf{x}^{\prime})-\mathsf{P}_{i}(\mathsf{N}_{ \mathbf{x}^{\prime}}(1),\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q^{\prime})-1, \ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q)+1,\ldots,\mathsf{N}_{\mathbf{x}^{ \prime}}(Q))\,,\]
which holds since \(\Lambda(s_{i},q)-\Lambda(s_{i},q^{\prime})\geq\Lambda(s_{k},q)-\Lambda(s_{k},q^{\prime})\) and due to (C.1). Player \(k\) should not want to switch to quality \(q^{\prime}\), which holds if and only if \(\Lambda(s_{k},\mathsf{f}_{q})-\mathsf{P}_{k}(\mathbf{x}^{\prime})\leq\Lambda(s_{k},\mathsf{f}_{q^{\prime}})-\mathsf{P}_{k}(\mathsf{N}_{\mathbf{x}^{\prime}}(1),\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(Q))\) or \[\Lambda(s_{k},\mathsf{f}_{q})-\Lambda(s_{k},\mathsf{f}_{q^{\prime}})\;\leq\;\mathsf{P}_{k}(\mathbf{x}^{\prime})-\mathsf{P}_{k}(\mathsf{N}_{\mathbf{x}^{\prime}}(1),\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}^{\prime}}(Q))\,,\] which holds since \(\Lambda(s_{i},q)-\Lambda(s_{i},q^{\prime})\geq\Lambda(s_{k},q)-\Lambda(s_{k},q^{\prime})\) and due to (C.2). All other players do not want to switch in \(\mathbf{x}^{\prime}\) since _(i)_ they did not want to switch in \(\mathbf{x}\), _(ii)_ they choose the same qualities as in \(\mathbf{x}\) and _(iii)_ \(\mathsf{N}_{\mathbf{x}^{\prime}}(\widehat{q})=\mathsf{N}_{\mathbf{x}}(\widehat{q})\) for all qualities \(\widehat{q}\in[Q]\). Hence, \(\mathbf{x}^{\prime}\) is a pure Nash equilibrium.
Now the earliest witness of inversion, if any, in \(\mathbf{x}^{\prime}\) is greater than \(i\), the earliest witness of inversion in \(\mathbf{x}\). It follows inductively that a contiguous pure Nash equilibrium exists.
By Theorem 1 and Lemma 5, it suffices to search over contiguous load vectors. Fix a load vector \(\mathbf{x}\) and a quality \(q\in[Q]\) such that \(\mathsf{Players}_{\mathbf{x}}(q)\neq\emptyset\). No player choosing quality \(q\) wants to switch to a quality \(q^{\prime}\neq q\) if and only if for all players \(i\in\mathsf{Players}_{\mathbf{x}}(q)\),
\[\Lambda(s_{i},\mathsf{f}_{q})-\mathsf{P}_{i}(\mathbf{x}) \;\leq\; \Lambda(s_{i},\mathsf{f}_{q^{\prime}})-\mathsf{P}_{i}(\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\,,\text{ or}\]
\[\Lambda(s_{i},\mathsf{f}_{q^{\prime}})-\Lambda(s_{i},\mathsf{f}_{q}) \;\geq\; -\mathsf{P}_{i}(\mathbf{x})+\mathsf{P}_{i}(\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\,. \tag{C.4}\]

For a player-invariant payment function \(\mathsf{P}\), both \(\mathsf{P}_{i}(\mathbf{x})\) and \(\mathsf{P}_{i}(\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\) are constant over all players \(i\) choosing quality \(q\) in \(\mathbf{x}\) and switching to quality \(q^{\prime}\) in \((\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\); so fix any such player \(\widehat{i}\). Then, (C.4) holds for all players \(i\in\mathsf{Players}_{\mathbf{x}}(q)\) if and only if
\[\min_{i\in\mathsf{Players}_{\mathbf{x}}(q)}\left(\Lambda(s_{i},\mathsf{f}_{q^{\prime}})-\Lambda(s_{i},\mathsf{f}_{q})\right) \;\geq\; -\mathsf{P}_{\widehat{i}}(\mathbf{x})+\mathsf{P}_{\widehat{i}}(\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\,. \tag{C.5}\]
Hence, no player choosing quality \(q\in[Q]\) wants to switch to a different quality if and only if (C.5) holds for each quality \(q^{\prime}\neq q\), where \(\widehat{i}\in\mathsf{Players}_{\mathbf{x}}(q)\) is arbitrarily chosen.
For a player-specific payment function, \(\mathsf{P}_{i}(\mathbf{x})\) and \(\mathsf{P}_{i}(\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\) are not constant anymore over all players \(i\) choosing quality \(q\) in \(\mathbf{x}\) and switching to quality \(q^{\prime}\) in \((\mathsf{N}_{\mathbf{x}}(1),\ldots,\mathsf{N}_{\mathbf{x}}(q)-1,\ldots,\mathsf{N}_{\mathbf{x}}(q^{\prime})+1,\ldots,\mathsf{N}_{\mathbf{x}}(Q))\). Hence, no player choosing quality \(q\in[Q]\) wants to switch to a quality \(q^{\prime}\neq q\) if and only if (C.4) holds for all players \(i\in\mathsf{Players}_{\mathbf{x}}(q)\).
To compute a pure Nash equilibrium, for a player-invariant payment function (resp., for a player-specific payment function), we enumerate all contiguous load vectors \(\mathbf{x}=\langle\mathsf{N}_{\mathbf{x}}(1),\mathsf{N}_{\mathbf{x}}(2),\ldots,\mathsf{N}_{\mathbf{x}}(Q)\rangle\), searching for one that satisfies (C.5) (resp., (C.4)), for each quality \(q\in[Q]\) and for an arbitrarily chosen player \(\widehat{i}\in\mathsf{Players}_{\mathbf{x}}(q)\) (resp., for all players \(i\in\mathsf{Players}_{\mathbf{x}}(q)\)); clearly, there are \(\binom{n}{Q-1}\) contiguous load vectors.
* For a player-invariant payment function, checking (C.5) for a quality \(q\in[Q]\) entails the computation of the minimum of a function on a set of size \(\mathsf{N}_{\mathbf{x}}(q)\); computation of the minima for all qualities \(q\in[Q]\) takes time \(\sum_{q\in[Q]}\Theta(\mathsf{N}_{\mathbf{x}}(q))=\Theta\left(\sum_{q\in[Q]} \mathsf{N}_{\mathbf{x}}(q)\right)=\Theta(n)\). Thus, for a player-invariant payment function, the total time is \(\binom{n}{Q-1}\cdot\left(\Theta(n)+\Theta(Q^{2})\right)=\Theta\left(\max\{n,Q^{2 }\}\cdot\binom{n}{Q-1}\right)\).
* For a player-specific payment function, checking (C.4) for a quality \(q\in[Q]\) entails no minimum computation but must be repeated \(n\) times for all players \(i\in[n]\); checking that the inequality holds for a particular \(q^{\prime}\neq q\) takes time \(\Theta(1)\), so checking that it holds for all qualities \(q^{\prime}\neq q\) takes time \(\Theta(Q)\), and checking that it holds for all \(q\in[Q]\) takes time \(\Theta(Q^{2})\). Thus, for a player-specific payment function, the total time is \(\Theta\left(n\cdot Q^{2}\cdot\binom{n}{Q-1}\right)\).
For constant \(Q\), for either a player-invariant or a player-specific payment function, this is a polynomial \(\Theta\left(n^{Q}\right)\) algorithm.
For a player-invariant payment function, by Theorem 1 and Lemma 5, a contiguous load vector satisfying (C.5) for each quality \(q\in[Q]\) exists and will be found by the algorithm enumerating all contiguous load vectors. For a player-specific payment function, by Lemma 5, such a vector exists if and only if it will be found by the algorithm enumerating all contiguous load vectors. Hence, the algorithm solves PNE in Contest Game (resp., \(\exists\)PNE in Contest Game).
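To make the enumeration concrete, here is a minimal Python sketch of the player-specific case (the callables `Lam` and `P` are assumptions standing in for the skill-effort and payment functions; for simplicity, it enumerates all weak compositions of \(n\) into \(Q\) parts, a slightly larger family than counted above, and checks (C.4) by direct cost comparison):

```python
from itertools import combinations

def compositions(n, Q):
    """Weak compositions of n into Q non-negative parts (stars and bars)."""
    for bars in combinations(range(n + Q - 1), Q - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(n + Q - 2 - prev)
        yield parts

def find_contiguous_pne(skills, f, Lam, P):
    """skills: non-increasing list of skills; f: dict q -> effort, increasing
    in q = 1..Q; Lam(s, e): skill-effort function; P(i, loads): payment to
    player i (1-based) under load vector loads. Returns a contiguous load
    vector that is a pure Nash equilibrium, or None."""
    n, Q = len(skills), len(f)
    for loads in compositions(n, Q):
        # Contiguous assignment: the first loads[0] players take quality 1, etc.
        quality_of, player = {}, 1
        for q in range(1, Q + 1):
            for _ in range(loads[q - 1]):
                quality_of[player] = q
                player += 1
        def cost(i, q, lds):
            return Lam(skills[i - 1], f[q]) - P(i, tuple(lds))
        stable = True
        for i in range(1, n + 1):
            q = quality_of[i]
            for qp in range(1, Q + 1):
                if qp == q:
                    continue
                switched = list(loads)
                switched[q - 1] -= 1
                switched[qp - 1] += 1
                if cost(i, qp, switched) < cost(i, q, loads):
                    stable = False  # player i profits by switching to qp
                    break
            if not stable:
                break
        if stable:
            return loads
    return None
```

For a player-invariant payment function the same search applies, with the per-player check reducible to the minimum in (C.5); by Theorem 1, the search is then guaranteed to succeed.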
Say that \(\Lambda\) is _separable_ if \(\Lambda(y,z)=\Lambda_{1}(y)\,\Lambda_{2}(z)\) for all \(y\in\mathbb{R}_{\geq 1}\) and \(z\in\mathbb{R}_{>0}\), where \(\Lambda_{1}:\mathbb{R}_{\geq 1}\rightarrow\mathbb{R}_{>0}\) and \(\Lambda_{2}:\mathbb{R}_{>0}\rightarrow\mathbb{R}_{>0}\) are monotonically increasing in skill and effort, respectively. Consider skills \(s_{i}\) and \(s_{k}\) and qualities \(q\) and \(q^{\prime}\) such that \(s_{i}\geq s_{k}\) and \(q>q^{\prime}\). Then,
\[\begin{split}&(\Lambda(s_{i},\mathsf{f}_{q})-\Lambda(s_{k},\mathsf{f}_{q}))-(\Lambda(s_{i},\mathsf{f}_{q^{\prime}})-\Lambda(s_{k},\mathsf{f}_{q^{\prime}}))\\ =\;&\Lambda_{1}(s_{i})\,\Lambda_{2}(\mathsf{f}_{q})-\Lambda_{1}(s_{k})\,\Lambda_{2}(\mathsf{f}_{q})-(\Lambda_{1}(s_{i})\,\Lambda_{2}(\mathsf{f}_{q^{\prime}})-\Lambda_{1}(s_{k})\,\Lambda_{2}(\mathsf{f}_{q^{\prime}}))\\ =\;&\left(\Lambda_{1}(s_{i})-\Lambda_{1}(s_{k})\right)(\Lambda_{2}(\mathsf{f}_{q})-\Lambda_{2}(\mathsf{f}_{q^{\prime}}))\\ \geq\;&0\,,\end{split}\]
since \(\Lambda_{1}\) is monotonically increasing in skill and \(\Lambda_{2}\) is monotonically increasing in effort. So any separable skill-effort function \(\Lambda\) has increasing differences. Hence, Theorem 4 immediately implies:
**Corollary 8**.: _Consider the model of arbitrary players and arbitrary proposals with \(m=1\) and a separable skill-effort function. Then, there is a \(\Theta\left(\max\{Q^{2},n\}\cdot\binom{n}{Q-1}\right)\) (resp., \(\Theta\left(n\cdot Q^{2}\cdot\binom{n}{Q-1}\right)\)) algorithm that solves PNE in Contest Game (resp., \(\exists\textsc{PNE}\) in Contest Game) for a player-invariant (resp., player-specific) payment function; for constant \(Q\), it is a \(\Theta(n^{Q})\) polynomial algorithm._
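Numerically, the increasing-differences property of a separable skill-effort function is easy to confirm on finite grids; below is a minimal self-contained Python check (the particular \(\Lambda_{1},\Lambda_{2}\) and the grids are illustrative assumptions):

```python
import math

L1 = lambda y: 1.0 + y          # monotonically increasing in skill, > 0 on [1, inf)
L2 = lambda z: math.sqrt(z)     # monotonically increasing in effort, > 0 on (0, inf)
L = lambda y, z: L1(y) * L2(z)  # a separable skill-effort function

ys, zs = [1.0, 1.5, 2.0], [0.25, 1.0, 4.0]
# Increasing differences: L(y,z) - L(y',z) is monotone in z whenever y >= y'.
assert all(
    L(y, z) - L(yp, z) >= L(y, zp) - L(yp, zp)
    for y in ys for yp in ys if y >= yp
    for z in zs for zp in zs if z >= zp
)
```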
Clearly, when the players are proposal-indifferent and anonymous, player-specific payment functions collapse on player-invariant payment functions and no minimum computation in (C.5) is needed. This leads to a slight simplification of the algorithm with a slightly improved running time for a constant \(Q\):
**Corollary 9**.: _Consider the model of proposal-indifferent and anonymous players with \(m=1\) and a skill-effort function that has increasing differences. Then, there is a \(\Theta\left(Q^{2}\cdot\binom{n}{Q-1}\right)\) algorithm that solves PNE in Contest Game for a player-invariant (equivalently, a player-specific) payment function; for constant \(Q\), it is a \(\Theta(n^{Q-1})\) polynomial algorithm._
## 4 Proportional Allocation, Anonymous Players and Mandatory Participation
We show:
**Theorem 10**.: _Consider the model of anonymous players with \(m=1\) and proportional allocation. Assume that **(C1)** \(\mathsf{f}_{1}>\frac{1}{n}\), and **(C2)** for each quality \(q\in[Q]\), \(\mathsf{f}_{q}>\frac{\mathsf{f}_{Q}}{(n-1)^{2}}\). Then, there is a \(\Theta(\max\{Q,n\})\) algorithm that solves PNE in Contest Game._
Note that since \(\mathsf{f}\) is strictly increasing in \(q\), assumption (C1) implies that numerators and denominators in the proportional allocation function are strictly positive.
Proof.: We present a greedy algorithm, which is inductive on the number of qualities to which the players are assigned. Roughly speaking, we start by assigning the \(n\) players to the two highest qualities \(Q\) and \(Q-1\) so that the resulting assignment is a pure Nash equilibrium _as if_ only the qualities \(Q\) and \(Q-1\) were available while the remaining qualities are ignored. We then proceed by adding lower qualities, one at a time and towards the lowest quality \(1\), so that, for the addition of quality \(q\), players assigned in the immediately earlier addition of the higher quality \(q+1\) to quality \(q\) are now assigned to qualities \(q\) and \(q-1\); the players assigned in other earlier additions to qualities higher than \(q\) are retained. We will prove that the number of players that had been assigned to quality \(q\) in the immediately previous iteration \(q+1\) is non-zero (Proposition 11). It turns out that, after the addition of each quality \(q\), the resulting assignment of the \(n\) players is a pure Nash equilibrium _as if_ only the qualities added so far _and_ the newly added quality \(q-1\) were available (Proposition 12); it will be called a _pure Nash equilibrium restricted to qualities \(Q,Q-1,\ldots,q,q-1\)_. So, at termination, a pure Nash equilibrium will have been reached.
We continue with the formal details of the algorithm and its proof. Henceforth, drop, for simplicity, the index \(\mathbf{q}^{1}\) from \(\mathsf{N}_{\mathbf{q}^{1}}(q)\). For any pair of distinct qualities \(q,q^{\prime}\), with \(Q\geq q^{\prime}>q\geq 1\), denote as \(X(q,q^{\prime}):=\sum_{q^{\prime}\geq k\geq q}\mathsf{N}(k)\) and \(\Psi(q,q^{\prime}):=\sum_{q^{\prime}\geq k\geq q}\mathsf{N}(k)\,\mathsf{f}_{k}\) the total number of players assigned to and the total load on qualities \(q,\ldots,q^{\prime}\).
For \(q:=Q\) down to \(2\), do:
* Retain the \(X(q+1,Q)\) players already assigned to qualities \(Q,Q-1,\ldots,q+1\).
* If \(\mathsf{N}(q)=0\) right after iteration \(q+1\), then **exit**. Else do:
* Find the smallest \(x\), where \(0\leq x\leq n-\sum_{q+1\leq k\leq Q}\mathsf{N}(k)\), so that assigning \(x\) players to \(q\) and \(n-X(q+1,Q)-x\) players to \(q-1\) is a pure Nash equilibrium restricted to the qualities \(q\) and \(q-1\).
* The assignment is such a pure Nash equilibrium if and only if \[\begin{split}&\mathsf{f}_{q}-\frac{\mathsf{f}_{q}}{\Psi(q+1,Q)+x\mathsf{f}_{q}+(n-X(q+1,Q)-x)\mathsf{f}_{q-1}}\\ \leq&\mathsf{f}_{q-1}-\frac{\mathsf{f}_{q-1}}{\Psi(q+1,Q)+(x-1)\mathsf{f}_{q}+(n-X(q+1,Q)-x+1)\mathsf{f}_{q-1}}\end{split}\] (1) for \(x>0\), and \[\begin{split}&\mathsf{f}_{q-1}-\frac{\mathsf{f}_{q-1}}{\Psi(q+1,Q)+x\mathsf{f}_{q}+(n-X(q+1,Q)-x)\mathsf{f}_{q-1}}\\ \leq&\mathsf{f}_{q}-\frac{\mathsf{f}_{q}}{\Psi(q+1,Q)+(x+1)\mathsf{f}_{q}+(n-X(q+1,Q)-x-1)\mathsf{f}_{q-1}}\end{split}\] (2) for \(n-X(q+1,Q)-x>0\), respectively.
* Since proportional allocation is player-invariant, Theorem 1 implies that such an \(x\) exists; it can be found in time \(\Theta(n-X(q+1,Q))\) by exhaustive search.
* Assign \(\mathsf{N}(q):=x\) players to \(q\) and \(n-\sum_{q+1\leq k\leq Q}\mathsf{N}(k)-x\) players to \(q-1\).
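A minimal Python sketch of this loop follows; costs use proportional allocation directly, so the check below realizes conditions (1) and (2) by cost comparison, and the **exit** branch is omitted, anticipating Proposition 11 below.

```python
def greedy_pne(n, f):
    """f: dict q -> effort, strictly increasing in q = 1, ..., Q; n players.
    Returns the loads N(q) built as in the proof of Theorem 10; the cost of a
    player on quality q under total load T is f(q) - f(q) / T (proportional
    allocation with anonymous players)."""
    Q = len(f)
    N = {q: 0 for q in range(1, Q + 1)}
    psi, retained = 0.0, 0  # total load on, and players at, qualities Q..q+1
    for q in range(Q, 1, -1):
        rem = n - retained
        for x in range(rem + 1):  # smallest x first, as in the algorithm
            total = psi + x * f[q] + (rem - x) * f[q - 1]
            ok = True
            if x > 0:  # a player on q must not prefer q - 1, cf. (1)
                ok &= f[q] - f[q] / total <= \
                      f[q - 1] - f[q - 1] / (total - f[q] + f[q - 1])
            if ok and rem - x > 0:  # a player on q - 1 must not prefer q, cf. (2)
                ok &= f[q - 1] - f[q - 1] / total <= \
                      f[q] - f[q] / (total + f[q] - f[q - 1])
            if ok:
                N[q], N[q - 1] = x, rem - x
                break
        psi += N[q] * f[q]
        retained += N[q]
    return N
```

Under (C1) and (C2), for example \(n=4\) with efforts \(0.6,0.8,1.1\), the inner loop always finds some \(x\), since proportional allocation is player-invariant and Theorem 1 applies.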
Consider an iteration \(q\), \(Q\geq q\geq 3\), where **exit** is executed. As a result, the qualities \(q-2,\ldots,1\) would remain "unseen" by the algorithm at such an "early" termination; hence, a player assigned to quality \(q^{\prime}\in\{Q,Q-1,\ldots,q+1\}\) in an earlier iteration might prefer to move to some "unseen" quality, and the final assignment computed in iteration \(q\) would not be a pure Nash equilibrium. (Note that iteration \(2\) may leave no "unseen" qualities.) We prove that "early" termination is impossible:
**Proposition 11**.: **exit** _is never executed._
Proof.: Assume, by way of contradiction, that **exit** is executed for the first time in iteration \(q\), where \(Q\geq q\geq 3\), as a result of having set \(\mathsf{N}(q)=0\) in iteration \(q+1\). By the algorithm, the assignment computed in iteration \(q+1\) is a pure Nash equilibrium restricted to the qualities \(q+1\) and \(q\); so, a player assigned to quality \(q+1\) does not want to switch to quality \(q\), or
\[\begin{split}\mathsf{f}_{q+1}\left(1-\frac{1}{\Psi(q+2,Q)+\mathsf{N}(q+1)\,\mathsf{f}_{q+1}}\right) &\leq\; \mathsf{f}_{q}\left(1-\frac{1}{\Psi(q+2,Q)+\left(\mathsf{N}(q+1)-1\right)\mathsf{f}_{q+1}+\mathsf{f}_{q}}\right)\\ &=\; \mathsf{f}_{q}\left(1-\frac{1}{\Psi(q+2,Q)+\mathsf{N}(q+1)\,\mathsf{f}_{q+1}+\mathsf{f}_{q}-\mathsf{f}_{q+1}}\right)\,.\end{split}\]
Since \(\mathsf{f}_{q+1}>\mathsf{f}_{q}\),
\[\mathsf{f}_{q+1}\,\left(1-\frac{1}{\Psi(q+2,Q)+\mathsf{N}(q+1)\mathsf{f}_{q+ 1}}\right) > \mathsf{f}_{q}\,\left(1-\frac{1}{\Psi(q+2,Q)+\mathsf{N}(q+1) \mathsf{f}_{q+1}+\mathsf{f}_{q}-\mathsf{f}_{q+1}}\right)\,.\]
A contradiction.
Proposition 11 implies that "early" termination is not possible; thus, \(\mathsf{N}(q-1)>0\) in every iteration \(q\geq 3\). We continue with the _Correctness Lemma_; the correctness of the algorithm will follow from the case \(q=2\) of the _Correctness Lemma_.
**Proposition 12** (Correctness Lemma).: _Right after iteration \(q\), where \(Q\geq q\geq 2\), there has been computed a pure Nash equilibrium restricted to the qualities \(Q,Q-1,\ldots,q,q-1\)._
Proof.: By backward induction on \(q\).
_Basis case_: When \(q=Q\), the algorithm computes a pure Nash equilibrium restricted to the qualities \(Q\) and \(Q-1\), as the Correctness Lemma requires.
_Induction hypothesis:_ Assume inductively that right after iteration \(q\) of the algorithm, where \(Q\geq q>2\), there has been computed a pure Nash equilibrium restricted to the qualities \(Q,Q-1,\ldots,q,q-1\); so for all pairs of distinct qualities \(q^{\prime}\) and \(q^{\prime\prime}\), with \(q-1\leq q^{\prime},q^{\prime\prime}\leq Q\), a player assigned to \(q^{\prime}\) does not want to switch to \(q^{\prime\prime}\) and vice versa. So,
\[\begin{split}&\mathsf{f}_{q^{\prime}}-\frac{\mathsf{f}_{q^{\prime}}}{\psi_{q^{\prime},q^{\prime\prime}}(q-1)+\mathsf{N}(q^{\prime})\mathsf{f}_{q^{\prime}}+(n-\chi_{q^{\prime},q^{\prime\prime}}(q-1)-\mathsf{N}(q^{\prime}))\mathsf{f}_{q^{\prime\prime}}}\\ \leq\;&\mathsf{f}_{q^{\prime\prime}}-\frac{\mathsf{f}_{q^{\prime\prime}}}{\psi_{q^{\prime},q^{\prime\prime}}(q-1)+(\mathsf{N}(q^{\prime})-1)\mathsf{f}_{q^{\prime}}+(n-\chi_{q^{\prime},q^{\prime\prime}}(q-1)-\mathsf{N}(q^{\prime})+1)\mathsf{f}_{q^{\prime\prime}}}\,,\end{split}\tag{3}\]
for \(\mathsf{N}(q^{\prime})>0\), and
\[\begin{split}&\mathsf{f}_{q^{\prime\prime}}-\frac{\mathsf{f}_{q^{\prime\prime}}}{\psi_{q^{\prime},q^{\prime\prime}}(q-1)+\mathsf{N}(q^{\prime})\mathsf{f}_{q^{\prime}}+(n-\chi_{q^{\prime},q^{\prime\prime}}(q-1)-\mathsf{N}(q^{\prime}))\mathsf{f}_{q^{\prime\prime}}}\\ \leq\;&\mathsf{f}_{q^{\prime}}-\frac{\mathsf{f}_{q^{\prime}}}{\psi_{q^{\prime},q^{\prime\prime}}(q-1)+(\mathsf{N}(q^{\prime})+1)\mathsf{f}_{q^{\prime}}+(n-\chi_{q^{\prime},q^{\prime\prime}}(q-1)-\mathsf{N}(q^{\prime})-1)\mathsf{f}_{q^{\prime\prime}}}\,,\end{split}\tag{4}\]
for \(n-\chi_{q^{\prime},q^{\prime\prime}}(q-1)-\mathsf{N}(q^{\prime})>0\), where
\[\chi_{q^{\prime},q^{\prime\prime}}(q-1) := \sum_{Q\geq k\geq q-1,k\neq q^{\prime},q^{\prime\prime}}\mathsf{ N}(k)\]
and
\[\psi_{q^{\prime},q^{\prime\prime}}(q-1) = \sum_{Q\geq k\geq q-1,k\neq q^{\prime},q^{\prime\prime}}\mathsf{ N}(k)\ \mathsf{f}_{k}\]
are the total number of players assigned to qualities other than \(q^{\prime}\) and \(q^{\prime\prime}\) right after iteration \(q\) of the algorithm and the corresponding total load, respectively.
_Inductive Step:_ We shall prove that right after iteration \(q-1\), where \(Q\geq q-1\geq 2\), there has been computed a pure Nash equilibrium as if only the qualities \(Q,Q-1,\ldots,q-1,q-2\) were available. By the _Induction Hypothesis,_ right after iteration \(q\), for any pair of qualities \(q^{\prime}\) and \(q^{\prime\prime}\), where \(Q\geq q^{\prime},q^{\prime\prime}\geq q-1\), there has been computed a pure Nash equilibrium as if only the qualities \(Q,Q-1,\ldots,q,q-1\) were available. By the algorithm, the loads on qualities \(Q,Q-1,\ldots,q\) after iteration \(q\) are preserved in iteration \(q-1\). So we only have to prove the following property for any quality \(\widehat{q}\), where \(Q\geq\widehat{q}\geq q\):
**Lemma 13**.: _Assume that \(\mathsf{N}(\widehat{q})\geq 1\). Then, a player that had been assigned to \(\widehat{q}\) before iteration \(q\) does not want to switch to either \(q-1\) or \(q-2\) right after iteration \(q-1\)._
For the proofs of Lemmas 13 and 14, we shall abuse notation to denote as \(\mathsf{C}_{\widehat{q}}(\mathsf{N}(\widehat{q}),\mathsf{N}(q-1),\mathsf{N}(q-2))\), \(\mathsf{C}_{q-1}(\mathsf{N}(\widehat{q}),\mathsf{N}(q-1),\mathsf{N}(q-2))\) and \(\mathsf{C}_{q-2}(\mathsf{N}(\widehat{q}),\mathsf{N}(q-1),\mathsf{N}(q-2))\) the costs incurred to a player assigned to qualities \(\widehat{q}\), \(q-1\) and \(q-2\), respectively; so, we omit reference to the loads on qualities other than \(\widehat{q}\), \(q-1\) and \(q-2\). (Since players are anonymous, all players assigned to a particular quality incur the same cost.)
Proof.: We have to prove that
\[\textbf{(A)}:\ \ \mathsf{C}_{\widehat{q}}(\mathsf{N}(\widehat{q}), \mathsf{N}(q-1),\mathsf{N}(q-2)) \leq \mathsf{C}_{q-1}(\mathsf{N}(\widehat{q})-1,\mathsf{N}(q-1)+1, \mathsf{N}(q-2))\]
and
\[\textbf{(B)}:\ \ \mathsf{C}_{\widehat{q}}(\mathsf{N}(\widehat{q}), \mathsf{N}(q-1),\mathsf{N}(q-2)) \leq \mathsf{C}_{q-2}(\mathsf{N}(\widehat{q})-1,\mathsf{N}(q-1), \mathsf{N}(q-2)+1)\]
where \(\mathsf{N}(\widehat{q})>0\). We start with **(A)**, which is expressed as
\[\begin{split}&\mathsf{f}_{\widehat{q}}-\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\,\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\,\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\,\mathsf{f}_{q-2}}\\ \leq\;&\mathsf{f}_{q-1}-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\,\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)+1)\,\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\,\mathsf{f}_{q-2}}\,,\end{split}\]
where \(\psi_{\widehat{q},q-1}(q-1)\geq 0\), \(\mathsf{N}(\widehat{q})>0\) and \(0\leq\mathsf{N}(q-1)\leq n-\chi(\widehat{q},q-1)-\mathsf{N}(\widehat{q})\). By setting \(\widehat{q}\)
and \(q-1\) for \(q^{\prime}\) and \(q^{\prime\prime}\), respectively, in (3), it suffices to prove that
\[\frac{\mathsf{f}_{\widehat{q}}}{\mathsf{A}}-\frac{\mathsf{f}_{q-1}}{\mathsf{B}}\;\leq\;\frac{\mathsf{f}_{\widehat{q}}}{\mathsf{A}+\Delta}-\frac{\mathsf{f}_{q-1}}{\mathsf{B}+\Delta}\,,\]
where
\[\mathsf{A}\;:=\;\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\,\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\,\mathsf{f}_{q-1}\,,\]
\[\mathsf{B}\;:=\;\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\,\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\,\mathsf{f}_{q-1}\;=\;\mathsf{A}-\mathsf{f}_{\widehat{q}}+\mathsf{f}_{q-1}\]
and
\[\Delta\;:=\;(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\,(\mathsf{f}_{q-2}-\mathsf{f}_{q-1})\;\leq\;0\,;\]
note that the two denominators in **(A)** are exactly \(\mathsf{A}+\Delta\) and \(\mathsf{B}+\Delta\), respectively, so the right-hand side above is the right-hand side of **(A)** rewritten, while the left-hand side is the right-hand side of (3). Setting \(\lambda_{1}:=\mathsf{f}_{\widehat{q}}\mathsf{B}-\mathsf{f}_{q-1}\mathsf{A}\), \(\lambda_{2}:=\mathsf{A}\mathsf{B}\), \(\mu_{1}:=(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\Delta\) and \(\mu_{2}:=\Delta\,(\mathsf{A}+\mathsf{B}+\Delta)\), this amounts to proving that \(\frac{\lambda_{1}}{\lambda_{2}}\leq\frac{\lambda_{1}+\mu_{1}}{\lambda_{2}+\mu_{2}}\). Now,
\[\begin{split}\lambda_{1} &= \mathsf{f}_{\widehat{q}}\mathsf{B}-\mathsf{f}_{q-1}\mathsf{A}\\ &= \mathsf{f}_{\widehat{q}}\left[\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}\right]\\ &\qquad-\mathsf{f}_{q-1}\left[\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}\right]\\ &= \psi_{\widehat{q},q-1}(q-1)\left(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1}\right)\\ &\qquad+(\mathsf{N}(\widehat{q})-1)\,\mathsf{f}_{\widehat{q}}^{2}-(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\,\mathsf{f}_{q-1}^{2}+(n-\chi_{\widehat{q},q-1}(q-1)-2\mathsf{N}(\widehat{q})+1)\,\mathsf{f}_{\widehat{q}}\mathsf{f}_{q-1}\\ &= (\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\left[\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})+(n-\chi_{\widehat{q},q-1}(q-1)-1)\mathsf{f}_{q-1}\right]\\ &> 0\,,\end{split}\]
since:
* \(\psi_{\widehat{q},q-1}(q-1)\geq 0\).
* \(\mathsf{N}(\widehat{q})\geq 1\).
* Either \(\mathsf{N}(\widehat{q})=1\), in which case, since \(n\geq 2\), there is at least one player assigned to \(q-1\) after iteration \(q\) (otherwise, there would be no iteration \(q-1\)), so that \(\chi_{\widehat{q},q-1}(q-1)\leq n-2\), which implies that \(n-\chi_{\widehat{q},q-1}(q-1)-1\geq 1\); or \(\mathsf{N}(\widehat{q})\geq 2\), so that \(\chi_{\widehat{q},q-1}(q-1)\leq n-2\), which implies again that \(n-\chi_{\widehat{q},q-1}(q-1)-1\geq 1\).
* Finally,
\[\begin{split}\lambda_{1}+\mu_{1} &= \mathsf{f}_{\widehat{q}}\mathsf{B}-\mathsf{f}_{q-1}\mathsf{A}+(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\Delta\\ &= \mathsf{f}_{\widehat{q}}(\mathsf{A}-\mathsf{f}_{\widehat{q}}+\mathsf{f}_{q-1})-\mathsf{f}_{q-1}\mathsf{A}+(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\Delta\\ &= (\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\,(\mathsf{A}-\mathsf{f}_{\widehat{q}}+\Delta)\\ &= (\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\cdot\big[\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}-\mathsf{f}_{\widehat{q}}\\ &\qquad\qquad+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))(\mathsf{f}_{q-2}-\mathsf{f}_{q-1})\big]\\ &= (\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\cdot\big[\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-2})+(n-\chi_{\widehat{q},q-1}(q-1))\mathsf{f}_{q-2}-\mathsf{f}_{\widehat{q}}+\underbrace{\mathsf{N}(q-1)(\mathsf{f}_{q-1}-\mathsf{f}_{q-2})}_{\geq 0}\big]\\ &\geq (\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\left[\psi_{\widehat{q},q-1}(q-1)+(n-\chi_{\widehat{q},q-1}(q-1)-1)\mathsf{f}_{q-2}\right]\\ &> 0\,,\end{split}\]
since \(\psi_{\widehat{q},q-1}(q-1)\geq 0\) and \(n-\chi_{\widehat{q},q-1}(q-1)-1\geq 1\), as proved in the previous item; the last inequality but one also uses \(\mathsf{N}(\widehat{q})(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-2})-\mathsf{f}_{\widehat{q}}\geq-\mathsf{f}_{q-2}\), which holds since \(\mathsf{N}(\widehat{q})\geq 1\).
It follows that \(\frac{\lambda_{1}}{\lambda_{2}}\leq\frac{\lambda_{1}+\mu_{1}}{\lambda_{2}+\mu_ {2}}\) if and only if \(\lambda_{1}\mu_{2}\leq\lambda_{2}\mu_{1}\). Now
\[\lambda_{1}\mu_{2} \;=\; (\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\left[\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})+(n-\chi_{\widehat{q},q-1}(q-1)-1)\mathsf{f}_{q-1}\right]\cdot\Delta\,(\mathsf{A}+\mathsf{B}+\Delta)\,,\]
where
\[\begin{split}\mathsf{A}+\mathsf{B}+\Delta &= 2\,\psi_{\widehat{q},q-1}(q-1)+(2\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(2(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))+1)\mathsf{f}_{q-1}\\ &\qquad+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))(\mathsf{f}_{q-2}-\mathsf{f}_{q-1})\\ &= 2\,\psi_{\widehat{q},q-1}(q-1)+(2\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1+\mathsf{N}(q-1))\mathsf{f}_{q-1}\\ &\qquad+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}\,,\end{split}\]
and
\[\begin{split}\lambda_{2}\mu_{1} &= \mathsf{A}\mathsf{B}\cdot(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\Delta\\ &= \left[\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}\right]\cdot\\ &\qquad\left[\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}\right]\cdot(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})\Delta\,.\end{split}\]
Since \(\Delta\leq 0\) and \(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1}>0\), it follows that \(\lambda_{1}\mu_{2}\leq\lambda_{2}\mu_{1}\) if and only if
\[\Gamma\cdot(\mathsf{A}+\mathsf{B}+\Delta)\;\geq\;\mathsf{A}\cdot\mathsf{B}\,,\]
where \(\Gamma:=\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}\); indeed, the left factor above equals \(\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)(\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1})+(n-\chi_{\widehat{q},q-1}(q-1)-1)\mathsf{f}_{q-1}=\Gamma\), while \(\mathsf{A}=\Gamma+\mathsf{f}_{\widehat{q}}\) and \(\mathsf{B}=\Gamma+\mathsf{f}_{q-1}\). Since \(\mathsf{A}+\mathsf{B}+\Delta=2\Gamma+\mathsf{f}_{\widehat{q}}+\mathsf{f}_{q-1}+\Delta\) and \(\mathsf{A}\mathsf{B}=\Gamma^{2}+(\mathsf{f}_{\widehat{q}}+\mathsf{f}_{q-1})\Gamma+\mathsf{f}_{\widehat{q}}\mathsf{f}_{q-1}\), this holds if and only if
\[(\Gamma+\Delta)\cdot\Gamma\;\geq\;\mathsf{f}_{\widehat{q}}\,\mathsf{f}_{q-1}\,,\]
where
\[\Gamma+\Delta\;=\;\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}\,.\]
To prove the last inequality, we consider two cases according to the value of \(\mathsf{N}(\widehat{q})\):
* \(\mathsf{N}(\widehat{q})\geq 2\): Clearly, \(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)\geq 0\). Since \(\psi_{\widehat{q},q-1}(q-1)\geq 0\) and \(\mathsf{N}(q-1)\geq 0\), it follows that \(\Gamma+\Delta\geq\mathsf{f}_{\widehat{q}}\) and \(\Gamma\geq\mathsf{f}_{\widehat{q}}\). Hence, \((\Gamma+\Delta)\cdot\Gamma\geq\mathsf{f}_{\widehat{q}}^{2}>\mathsf{f}_{\widehat{q}}\mathsf{f}_{q-1}\), as needed.
* \(\mathsf{N}(\widehat{q})=1\): Then,
\[\begin{split}(\Gamma+\Delta)\cdot\Gamma &= \left[\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-1-\mathsf{N}(q-1))\mathsf{f}_{q-2}\right]\cdot\left[\psi_{\widehat{q},q-1}(q-1)+(n-\chi_{\widehat{q},q-1}(q-1)-1)\mathsf{f}_{q-1}\right]\\ &= \Big[\psi_{\widehat{q},q-1}(q-1)-\chi_{\widehat{q},q-1}(q-1)\mathsf{f}_{q-2}+\underbrace{\mathsf{N}(q-1)(\mathsf{f}_{q-1}-\mathsf{f}_{q-2})}_{\geq 0}+(n-1)\mathsf{f}_{q-2}\Big]\cdot\Big[\psi_{\widehat{q},q-1}(q-1)-\chi_{\widehat{q},q-1}(q-1)\mathsf{f}_{q-1}+(n-1)\mathsf{f}_{q-1}\Big]\,.\end{split}\]
Since \(q\) is the lowest quality that is higher than \(q-1\), each player counted in \(\chi_{\widehat{q},q-1}(q-1)\) exerts effort at least \(\mathsf{f}_{q}\); it follows that
\[\psi_{\widehat{q},q-1}(q-1)\;\geq\;\chi_{\widehat{q},q-1}(q-1)\,\mathsf{f}_{q}\;>\;\chi_{\widehat{q},q-1}(q-1)\,\mathsf{f}_{q-2}\,;\]
similarly,
\[\psi_{\widehat{q},q-1}(q-1)\;>\;\chi_{\widehat{q},q-1}(q-1)\,\mathsf{f}_{q-1}\,.\]
It follows that
\[(\Gamma+\Delta)\cdot\Gamma\;>\;(n-1)^{2}\,\mathsf{f}_{q-1}\mathsf{f}_{q-2}\;>\;\mathsf{f}_{\widehat{q}}\,\mathsf{f}_{q-1}\,,\]
where the last inequality holds by assumption (C2),
as needed.
We continue with **(B)**, which is expressed as
\[\begin{split}\mathsf{f}_{\widehat{q}}-\frac{\mathsf{f}_{ \widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\text{\sf N}(\widehat{q})\mathsf{ f}_{\widehat{q}}+\text{\sf N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}^{ \sim}(q-1)-\text{\sf N}(\widehat{q})-\text{\sf N}(q-1)\mathsf{f}_{q-2}}\\ \leq&\mathsf{f}_{q-2}-\frac{\mathsf{f}_{q-2}}{\psi_{ \widehat{q},q-1}(q-1)+(\text{\sf N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+ \text{\sf N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}^{\sim}(q-1)-\text {\sf N}(\widehat{q})-\text{\sf N}(q-1)+1)\mathsf{f}_{q-2}}\,,\end{split} \tag{5}\]
where \(\psi_{\widehat{q},q-1}(q-1)\geq 0\), \(\mathsf{N}(\widehat{q})>0\) and \(0\leq\mathsf{N}(q-1)\leq n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})\). Setting _(i)_ \(q-1\) and \(q-2\) for \(q\) and \(q-1\), respectively, in (1) and _(ii)_ \(\widehat{q}\) and \(q-1\) for \(q^{\prime}\) and \(q^{\prime\prime}\), respectively, in (3), we get that
\[\begin{split}\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-2}=\;&\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1}+\mathsf{f}_{q-1}-\mathsf{f}_{q-2}\\ \leq\;&\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}}\\ &-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}}\\ &+\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &-\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\,.\end{split}\tag{6}\]
By (6), it suffices, for proving (5), to prove that
\[\begin{split}&\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}}\\ &-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}}\\ &+\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &-\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\\ \leq\;&\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &-\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\,,\end{split}\]
or
\[\begin{split}&\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}}\\ &-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}}\\ \leq\;&\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\\ &-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &+\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &-\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\,.\end{split}\]
Note that
\[\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}\]
\[<\;\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}\,,\]
implying
\[\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\]
\[>\;\frac{\mathsf{f}_{q-2}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1)+1)\mathsf{f}_{q-2}}\,.\]
Hence, it suffices to prove that
\[\begin{split}&\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}}\\ &-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}}\\ \leq\;&-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &+\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\,.\end{split}\]
Note that
\[\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)+1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}\]
\[<\;\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}\,,\]
implying
\[-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)+1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\]
\[<\;-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\,.\]
Hence, it suffices to prove that
\[\begin{split}&\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q}))\mathsf{f}_{q-1}}\\ &-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})+1)\mathsf{f}_{q-1}}\\ \leq\;&-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})-1)\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)+1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ &+\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\,,\end{split}\]
and this has been proved in **(A)**.
We finally prove:
**Lemma 14**.: _Assume that \(\mathsf{N}(q-1)\geq 1\) (resp., \(\mathsf{N}(q-2)\geq 1\)). Then, a player assigned to \(q-1\) (resp., \(q-2\)) in iteration \(q-1\) does not want to switch to \(\widehat{q}\)._
Proof.: We have to prove that
\[\textbf{(C):}\quad\mathsf{C}_{q-1}(\mathsf{N}(\widehat{q}),\mathsf{N}(q-1),\mathsf{N}(q-2))\;\leq\;\mathsf{C}_{\widehat{q}}(\mathsf{N}(\widehat{q})+1,\mathsf{N}(q-1)-1,\mathsf{N}(q-2))\]
with \(\mathsf{N}(q-1)>0\) and
\[\mbox{\bf(D):}\quad\ \ \mathsf{C}_{q-2}(\mathsf{N}(\widehat{q}), \mathsf{N}(q-1),\mathsf{N}(q-2)) \leq \mathsf{C}_{\widehat{q}}(\mathsf{N}(\widehat{q})+1,\mathsf{N}(q- 1),\mathsf{N}(q-2)-1)\]
with \(\mathsf{N}(q-2)>0\). **(C)** is expressed as
\[\begin{split}&\mathsf{f}_{q-1}-\frac{\mathsf{f}_{q-1}}{\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\\ \leq\;&\mathsf{f}_{\widehat{q}}-\frac{\mathsf{f}_{\widehat{q}}}{\psi_{\widehat{q},q-1}(q-1)+(\mathsf{N}(\widehat{q})+1)\mathsf{f}_{\widehat{q}}+(\mathsf{N}(q-1)-1)\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\mathsf{f}_{q-2}}\,.\end{split}\tag{7}\]
By Lemma 15, we only have to prove **(C)**. (7) is equivalent to
\[\mathsf{f}_{q-1}\,\frac{\mathsf{D}-1}{\mathsf{D}}\;\leq\;\mathsf{f}_{\widehat{q}}\,\frac{\mathsf{D}+\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1}-1}{\mathsf{D}+\mathsf{f}_{\widehat{q}}-\mathsf{f}_{q-1}}\,,\]
where \(\mathsf{D}:=\psi_{\widehat{q},q-1}(q-1)+\mathsf{N}(\widehat{q})\,\mathsf{f}_{\widehat{q}}+\mathsf{N}(q-1)\,\mathsf{f}_{q-1}+(n-\chi_{\widehat{q},q-1}(q-1)-\mathsf{N}(\widehat{q})-\mathsf{N}(q-1))\,\mathsf{f}_{q-2}\) is the total load right after iteration \(q-1\). By assumption (C1), \(\mathsf{D}\geq n\,\mathsf{f}_{1}>1\), so both sides are non-negative; since \(\mathsf{f}_{\widehat{q}}>\mathsf{f}_{q-1}\) and the function \(z\mapsto\frac{z-1}{z}\) is strictly increasing for \(z>0\), the right-hand side strictly exceeds the left-hand side, and **(C)** follows. It remains to bound the running time of the algorithm; there are two cases:
* Right after each iteration \(q\), where \(Q\geq q\geq 2\), \(\mathsf{N}(q)=0\) and \(\mathsf{N}(q-1)=n\); so no player is retained in an iteration. This happens if and only if the smallest integer \(x\) found in each iteration is \(0\). Since the algorithm searches for \(x\) starting from \(0\), each iteration takes time \(\Theta(1)\) and the total time is \(\Theta(Q)\).
* There is at least one iteration \(q\), where \(Q\geq q\geq 2\), with \(\mathsf{N}(q)>0\) right after it: We claim that each iteration \(q\), where \(Q\geq q\geq 2\), takes time \(\Theta(\mathsf{N}(q))\): Recall that, by the algorithm, iteration \(q\) searches, starting with \(x:=0\), for an \(x\), with \(0\leq x\leq n-X(q+1,Q)\), yielding a pure Nash equilibrium restricted to qualities \(q\) and \(q-1\); it terminates when it finds such an \(x\) for a first time and sets \(\mathsf{N}(q):=x\). So iteration \(q\) takes time \(\Theta(\mathsf{N}(q))\). The total time is \(\sum_{Q\geq q\geq 2}\Theta(\mathsf{N}(q))=\Theta\left(\sum_{Q\geq q\geq 2}\mathsf{N}(q)\right)\). By the algorithm, \(\mathsf{N}(q)\) players are retained right after iteration \(q\) and do not participate in future iterations; since the total number of players is \(n\), \(\sum_{Q\geq q\geq 2}\mathsf{N}(q)\leq n\). (Note that in the last iteration \(2\), it is possible that some players are assigned to quality \(1\); this happens exactly when \(\mathsf{N}(2)<n-X(3,Q)\) and \(\mathsf{N}(1)>0\).) So in this case, the total time is \(\Theta(n)\).
Hence, the time complexity is \(\Theta(\max\{Q,n\})\).
## 5 A \(\Theta(1)\) Algorithm
We show:
**Theorem 16**.: _Consider the model of arbitrary players and arbitrary proposals with \(m=1\), skill-effort functions \(\Lambda(s_{i},\mathsf{f}_{q})=s_{i}\mathsf{f}_{q}\), for all players \(i\in[n]\) and qualities \(q\in[Q]\), and proportional allocation. Assume that \(\min_{i\in[n]}s_{i}\geq\frac{\mathsf{f}_{2}}{\mathsf{f}_{2}-\mathsf{f}_{1}}\). Then, there is a \(\Theta(1)\) algorithm that solves PNE in Contest Game._
Proof.: The algorithm assigns all players to quality \(1\). By definition of cost, the cost of each player \(i\in[n]\) is less than \(s_{i}\mathsf{f}_{1}\). If player \(i\) deviates to \(2\), its cost will be greater than \(\mathsf{f}_{2}(s_{i}-1)\). The assumption implies that \(\mathsf{f}_{2}(s_{i}-1)\geq\mathsf{f}_{1}s_{i}\) for all players \(i\in[n]\). So player \(i\) does not want to switch to quality \(2\). Since efforts are increasing, for all qualities \(q\) with \(2<q\leq Q\), the cost of player \(i\) when she deviates to \(q\) will be greater than \(\mathsf{f}_{q}(s_{i}-1)>\mathsf{f}_{2}(s_{i}-1)\geq\mathsf{f}_{1}s_{i}\), using the assumption. So player \(i\) does not want to switch to any quality \(q>2\) either. Hence, assigning all players to quality \(1\) is a pure Nash equilibrium.
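The equilibrium check underlying this argument can be made explicit; below is a minimal Python sketch (payments are the proportional-allocation shares, computed directly; the skills and efforts in the example are assumptions satisfying \(\min_{i}s_{i}\geq\mathsf{f}_{2}/(\mathsf{f}_{2}-\mathsf{f}_{1})\)):

```python
def all_on_lowest_is_pne(skills, f):
    """Check that assigning every player to quality 1 is a pure Nash
    equilibrium, with Lambda(s, e) = s * e and proportional allocation."""
    n, Q = len(skills), len(f)
    base_load = n * f[1]
    for s in skills:
        cost_at_1 = s * f[1] - f[1] / base_load
        for q in range(2, Q + 1):
            deviated_load = (n - 1) * f[1] + f[q]
            if s * f[q] - f[q] / deviated_load < cost_at_1:
                return False  # some player profits from switching to q
    return True

# f_2 / (f_2 - f_1) = 2 here, so skills >= 2 satisfy the assumption.
assert all_on_lowest_is_pne([2, 3, 5], {1: 1.0, 2: 2.0, 3: 3.0})
```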
Since \(\frac{\mathsf{f}_{2}}{\mathsf{f}_{2}-\mathsf{f}_{1}}>1\), the assumption \(\min_{i\in[n]}s_{i}\geq\frac{\mathsf{f}_{2}}{\mathsf{f}_{2}-\mathsf{f}_{1}}\) that skills are lower-bounded in Theorem 16_cannot_ hold for the model of proposal-indifferent and anonymous players where \(s_{i}=1\) for all players \(i\in[n]\). However, an assumption on efforts suffices for the same algorithm to work for this model. We show:
**Theorem 17**.: _Consider the model of proposal-indifferent and anonymous players with \(m=1\), skill-effort functions \(\Lambda(s_{i},\mathsf{f}_{q})=\mathsf{f}_{q}\) for all players \(i\in[n]\) and qualities \(q\in[Q]\), and proportional allocation. Assume that \(\mathsf{f}_{2}-\mathsf{f}_{1}\geq 1\). Then, there is a \(\Theta(1)\) algorithm that solves PNE In Contest Game._
Proof.: The algorithm assigns all players to quality \(1\). By definition, the cost of each player \(i\in[n]\) is less than \(\mathsf{f}_{1}\). If player \(i\) switches to \(2\), her cost will be greater than \(\mathsf{f}_{2}-1\). The assumption implies that player \(i\) does not want to switch to \(2\). Since efforts are increasing, for all qualities \(q\) with \(2<q\leq Q\), the cost of player \(i\) when she switches to \(q\) will be greater than \(\mathsf{f}_{q}-1>\mathsf{f}_{2}-1\geq\mathsf{f}_{1}\), by assumption. So player \(i\) does not want to switch to any quality \(q>2\) either. Hence, assigning all players to quality \(1\) is a pure Nash equilibrium.
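The sufficient no-deviation inequalities used in the two proofs above can be checked mechanically. A minimal sketch (the list-based encoding of efforts and skills is an assumption of this illustration, not part of the model):

```python
def all_ones_is_pne_arbitrary(skills, f):
    """Theorem 16 setting: cost at quality 1 is < s_i * f[1]; deviating to
    q >= 2 costs > f[q] * (s_i - 1).  Since efforts increase, checking
    q = 2 suffices: f[2] * (s_i - 1) >= f[1] * s_i for every player."""
    return all(f[2] * (s - 1) >= f[1] * s for s in skills)

def all_ones_is_pne_anonymous(f):
    """Theorem 17 setting (all skills equal 1): the assumption
    f[2] - f[1] >= 1 rules out every deviation."""
    return f[2] - f[1] >= 1

# f[0] is a placeholder; qualities are 1..Q with increasing efforts.
f = [None, 1.0, 3.0, 5.0]
print(all_ones_is_pne_arbitrary([2.0, 3.5], f))  # True: s_i >= f2/(f2-f1) = 1.5
print(all_ones_is_pne_anonymous(f))              # True: f2 - f1 = 2 >= 1
```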
## 6 Open Problems and Directions for Further Research
This work poses far more challenging problems and research directions about the contest game for crowdsourcing reviews than it answers. To close, we list a few.
1. Determine the complexity of computing a pure Nash equilibrium for the model of arbitrary players and arbitrary proposals, with a player-invariant payment function. We remark that no PLS-hardness results are known for either singleton congestion games [35] or for project games [3], which are also singleton. This appears to speak against \(\mathcal{PLS}\)-hardness of the contest game with a player-invariant payment function.
2. Study the _uniqueness_ of pure Nash equilibria; non-uniqueness would trigger interesting decision problems.
3. Improve the complexity of the algorithm for effort-quality functions with increasing differences. For constant \(Q\), this means reducing the exponent \(Q\) of \(n\). Stronger assumptions on the skills-efforts function than the increasing-differences property could be useful.
4. Extend the \(\Theta(\max\{Q,n\})\) algorithm under mandatory participation from anonymous players to arbitrary players, with the same or additional structural assumptions on efforts (such as _additive complements_[20, Definition 3.9]) and possibly with assumptions on skills.
5. Investigate conditions (on \(\mathsf{p}\) and \(\Xi\)) for the extension of the \(\Theta(\max\{Q,n\})\) algorithm to the (player-invariant) _ratio award_ function \(\mathsf{RA}_{i\ell}(\mathbf{q}^{\ell})=\frac{\mathsf{p}(q_{i\ell})}{\Xi(\mathbf{q}^{\ell})}\), where \(\mathsf{p}\) and \(\Xi\) are (strictly) increasing, which generalizes proportional allocation.
6. Study the computation of pure Nash equilibria for other classes of player-invariant payment functions, such as those mentioned in Section 1.1.
7. Study the computation of _mixed_ Nash equilibria. Work in progress confirms the existence of contest games with \(Q=3\) and \(n=3\) that have only one mixed Nash equilibrium, which is irrational. We conjecture that the problem is \(\mathcal{PPAD}\)-complete for \(n=2\).
8. Investigate conditions on the payment function and the skill-effort function for the contest game for crowdsourcing reviews to be a _valid utility game_[34]. Given the _general_ existence result for pure Nash equilibria under a player-invariant payment function in Theorem 1, this may open up the road to upper-bound the Price of Anarchy for arbitrary \(Q\). (The particular upper bounds for proportional allocation in [4] either do not go beyond the case \(Q=3\) with \(\mathsf{f}_{1}=0\) for which they proved existence of pure Nash equilibria [4, Proposition 2], or go beyond this case without proving existence first [4, Theorem 4].)
9. Determine the complexity of computing _best-responses_. We conjecture \(\mathcal{NP}\)-hardness; techniques similar to those used in [12, Section 3] could be useful.
10. Formulate incomplete information contest games and study their Bayes-Nash equilibria. Ideas from Bayesian congestion games [16] will very likely be helpful. Study existence and complexity properties of pure Bayes-Nash equilibria.
11. Formulate and study _malicious Bayesian contest games,_ extending work on malicious Bayesian congestion games [17]. Study (in)existence and complexity properties of their Bayes-Nash equilibria.
12. Incorporate and study issues of interaction, cooperation and competition both among reviewers and among _proposers_ (cf. [8]).
2309.06063 | Differential calculus for free algebra and constrained homology | In [Discrete differential calculus on simplicial complexes and constrained
homology, Chin. Ann. Math. Ser. B 44(4), 615-640, 2023], the constrained
(co)homology for simplicial complexes and independence hypergraphs is
constructed via differential calculus on discrete sets. In this paper, we study
the differential calculus for free algebra and subsequently study the
homomorphisms of constrained (co)homology induced by inclusions of simplicial
complexes and independence hypergraphs. We apply the differential calculus to
hypergraphs. We realize simplicial complexes and independence hypergraphs as
certain invariant traces of hypergraphs. As an application, we give the
constrained persistent (co)homology for filtrations of simplicial complexes and
filtrations of independence hypergraphs. | Shiquan Ren | 2023-09-12T09:02:26Z | http://arxiv.org/abs/2309.06063v3 | ###### Abstract
The notion of independence hypergraphs is introduced to investigate the relations between random hypergraphs and random simplicial complexes [29]. With the help of the differential calculus on discrete sets, the constrained homology of simplicial complexes as well as the constrained cohomology of independence hypergraphs are constructed [26]. In this paper, by proving the functorialities of the constrained homology of simplicial complexes as well as the constrained cohomology of independence hypergraphs, we study the constrained persistent homology for filtrations of simplicial complexes as well as the constrained persistent cohomology for filtrations of independence hypergraphs. We study the Mayer-Vietoris sequences for the constrained (co)homology as well as their functorialities. As a result, we prove the Mayer-Vietoris sequences for the persistent constrained homology for filtrations of simplicial complexes and the persistent constrained cohomology for filtrations of independence hypergraphs.
**Mayer-Vietoris Sequences for the Constrained Persistent Homology of Simplicial Complexes**
Shiquan Ren
**2010 Mathematics Subject Classification.** Primary 55U10, 55U15, Secondary 53A45, 08A50
**Keywords and Phrases.** simplicial complexes, (co-)homology, Mayer-Vietoris sequences, persistent homology
## 1 Introduction
Let \(V\) be a discrete set whose elements are called _vertices_. Suppose there is a total order \(\prec\) on \(V\). Let \(\Delta[V]\) be the collection of all the non-empty finite subsets of \(V\). A _hypergraph_ with its vertices from \(V\) is an arbitrary subset \(\mathcal{H}\subseteq\Delta[V]\). A _simplicial complex_\(\mathcal{K}\) with its vertices from \(V\) is a hypergraph such that for any \(\sigma\in\mathcal{K}\) and any non-empty subset \(\tau\subseteq\sigma\), it holds \(\tau\in\mathcal{K}\). An _independence hypergraph_\(\mathcal{L}\) with its vertices from \(V\) is a hypergraph such that for any \(\sigma\in\mathcal{L}\) and any finite superset \(\tau\supseteq\sigma\), where \(\tau\in\Delta[V]\), it holds \(\tau\in\mathcal{L}\).
Let \(V\) and \(V^{\prime}\) be two discrete sets. Let \(\mathcal{H}\) be a hypergraph on \(V\) and let \(\mathcal{H}^{\prime}\) be a hypergraph on \(V^{\prime}\). A _morphism_ of hypergraphs is a map \(\varphi:V\longrightarrow V^{\prime}\) such that for any hyperedge \(\sigma\) of \(\mathcal{H}\), the image \(\varphi(\sigma)\), which is the subset of \(V^{\prime}\) consisting of the images \(\varphi(v)\in V^{\prime}\) of the vertices \(v\in\sigma\), is a hyperedge of \(\mathcal{H}^{\prime}\). Let \(\varphi:\mathcal{H}\longrightarrow\mathcal{H}^{\prime}\) be a morphism of hypergraphs. Note that if \(\mathcal{H}\) is a simplicial complex, then the image \(\varphi(\mathcal{H})\) must be a simplicial complex with its vertices from \(V^{\prime}\). Moreover, if \(\mathcal{H}\) is an independence hypergraph and \(\varphi\) is induced by a bijective map \(\varphi:V\longrightarrow V^{\prime}\), then the image \(\varphi(\mathcal{H})\) must be an independence hypergraph with its vertices from \(V^{\prime}\).
Given a simplicial complex \(\mathcal{K}\) with its vertices from \(V\) and a simplicial complex \(\mathcal{K}^{\prime}\) with its vertices from \(V^{\prime}\), a morphism of hypergraphs \(\varphi:\mathcal{K}\longrightarrow\mathcal{K}^{\prime}\) is a _simplicial map_. Similarly, given an independence hypergraph \(\mathcal{L}\) with its vertices from \(V\) and an independence hypergraph \(\mathcal{L}^{\prime}\) with its vertices from \(V^{\prime}\), we call a morphism of hypergraphs \(\varphi:\mathcal{L}\longrightarrow\mathcal{L}^{\prime}\) a _morphism_ of independence hypergraphs.
This paper is motivated as follows.
**Motivation 1. Simplicial complexes and independence hypergraphs from random graphs**. Random hypergraphs and random simplicial complexes are higher-dimensional generalizations of random graphs. They have been intensively studied in recent decades (for example, [8, 9, 20, 22, 23, 24, 28]).
Let \(\mathcal{H}\) be a hypergraph with its vertices from \(V\). The associated simplicial complex \(\Delta\mathcal{H}\) of \(\mathcal{H}\) and the lower-associated simplicial complex \(\delta\mathcal{H}\) of \(\mathcal{H}\) are the smallest simplicial complex containing \(\mathcal{H}\) and the largest simplicial complex contained in \(\mathcal{H}\) respectively. The associated independence hypergraph \(\bar{\Delta}\mathcal{H}\) of \(\mathcal{H}\) and the lower-associated independence hypergraph \(\bar{\delta}\mathcal{H}\) of \(\mathcal{H}\) are the smallest independence hypergraph containing \(\mathcal{H}\) and the largest independence hypergraph contained in \(\mathcal{H}\) respectively. In addition, if \(V\) is a finite set, then the _complement hypergraph_ of \(\mathcal{H}\) is a hypergraph given by \(\gamma\mathcal{H}=\{\sigma\in\Delta[V]\mid\sigma\notin\mathcal{H}\}\) with its vertices from \(V\).
For any simplicial complex \(\mathcal{K}\) with its vertices from \(V\), an _external face_ of \(\mathcal{K}\) is a nonempty subset \(\sigma\) of \(V\) such that \(\sigma\notin\mathcal{K}\) and \(\tau\in\mathcal{K}\) for any nonempty proper subset \(\tau\) of \(\sigma\). Let \(E(\mathcal{K})\) be the set of all the external faces of \(\mathcal{K}\). For any independence hypergraph \(\mathcal{L}\) with its vertices from \(V\), a _co-external face_ of \(\mathcal{L}\) is a subset \(\sigma\) of \(V\) such that \(\sigma\notin\mathcal{L}\) and \(\tau\in\mathcal{L}\) for any proper superset \(\tau\) of \(\sigma\) such that \(\tau\) is a finite subset of \(V\). Let \(\bar{E}(\mathcal{L})\) be the set of all the co-external faces of \(\mathcal{L}\).
Let \(p:\Delta[V]\longrightarrow[0,1]\) be an arbitrary function. Consider the random hypergraph whose probability is given by
\[\bar{\mathrm{P}}_{p}(\mathcal{H})=\prod_{\sigma\in\mathcal{H}}p(\sigma)\prod_ {\sigma\notin\mathcal{H}}\big{(}1-p(\sigma)\big{)}, \tag{1.1}\]
the random simplicial complex whose probability is given by
\[\mathrm{P}_{p}(\mathcal{K})=\prod_{\sigma\in\mathcal{K}}p(\sigma)\prod_{ \sigma\in E(\mathcal{K})}\big{(}1-p(\sigma)\big{)} \tag{1.2}\]
and the random independence hypergraph whose probability is given by
\[\mathrm{Q}_{p}(\mathcal{L})=\prod_{\sigma\in\mathcal{L}}p(\sigma)\prod_{\sigma\in\bar{E}(\mathcal{L})}\big{(}1-p(\sigma)\big{)}. \tag{1.3}\]
Here in (1.1) - (1.3), \(\mathcal{H}\), \(\mathcal{K}\) and \(\mathcal{L}\) respectively are any hypergraph, any simplicial complex and any independence hypergraph with their vertices from \(V\). The products in (1.1) - (1.3) of countably many numbers between \(0\) and \(1\) are defined to be the limit of the products of finitely many numbers.
Suppose \(V\) is a finite set. Let \(\mathcal{H}\sim\bar{\mathrm{P}}_{p}\) be a randomly generated hypergraph on \(V\). We have the following assertions (cf. [13, Section 3], [28, Theorem 1.5 (2)], [29, Theorem 1.1]): (1). the lower-associated simplicial complex of \(\mathcal{H}\) is a randomly generated simplicial complex \(\delta\mathcal{H}\sim\mathrm{P}_{p}\), (2). the lower-associated independence hypergraph of \(\mathcal{H}\) is a randomly generated independence hypergraph \(\bar{\delta}\mathcal{H}\sim\mathrm{Q}_{p}\), (3). the complement of \(\mathcal{H}\) is a randomly generated hypergraph \(\gamma\mathcal{H}\sim\bar{\mathrm{P}}_{1-p}\), (4). the complement of the associated simplicial complex of \(\mathcal{H}\) is a randomly generated independence hypergraph \(\gamma\Delta\mathcal{H}\sim\mathrm{Q}_{1-p}\), (5). the complement of the associated independence hypergraph of \(\mathcal{H}\) is a randomly generated simplicial complex \(\gamma\bar{\Delta}\mathcal{H}\sim\mathrm{P}_{1-p}\).
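A small simulation makes assertions (1)-(5) concrete. The sketch below is illustrative only; the uniform probability and the three-element vertex set are assumptions of the example.

```python
import itertools
import random

def delta_V(V):
    """All non-empty subsets of V, i.e. Delta[V]."""
    return [frozenset(c) for r in range(1, len(V) + 1)
            for c in itertools.combinations(V, r)]

def sample_hypergraph(V, p):
    """H ~ bar-P_p: keep each hyperedge independently with probability p."""
    return {s for s in delta_V(V) if random.random() < p}

def lower_simplicial(H):
    """delta H: hyperedges whose non-empty proper subsets all lie in H."""
    return {s for s in H
            if all(frozenset(c) in H
                   for r in range(1, len(s))
                   for c in itertools.combinations(s, r))}

V = ("a", "b", "c")
H = sample_hypergraph(V, 0.5)
print(sorted(map(sorted, lower_simplicial(H))))   # delta H ~ P_p
print(sorted(map(sorted, set(delta_V(V)) - H)))   # gamma H ~ bar-P_{1-p}
```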
**Motivation 2. Differential calculus on discrete sets and the constrained homology**. The differential calculus on discrete sets was studied by A. Dimakis and F. Muller-Hoissen [10, 11]. Let \(\mathbb{N}\) be the set of natural numbers \(0,1,2,\ldots\). By considering the exterior algebra \(\mathrm{Ext}_{*}(V)\) generated by the operators \(\frac{\partial}{\partial v}\) for \(v\in V\) (cf. Eq. (2.1)) and taking \(\alpha\in\mathrm{Ext}_{2t+1}(V)\) for \(t\in\mathbb{N}\), we constructed the constrained homology \(H_{*}(\mathcal{K},\alpha,m)\), \(m\in\mathbb{Z}\), for a simplicial complex \(\mathcal{K}\) in [26] as a generalization of the weighted homology \(H_{*}(\mathcal{K},w)\) of \(\mathcal{K}\) (cf. [25, 30, 31, 32]). In [26], by considering the exterior algebra \(\mathrm{Ext}^{*}(V)\) generated by the operators \(dv\) for \(v\in V\) (cf. Eq. (2.3)) and taking \(\omega\in\mathrm{Ext}^{2t+1}(V)\) for \(t\in\mathbb{N}\), we constructed the constrained cohomology \(H^{*}(\mathcal{L},\omega,m)\), \(m\in\mathbb{Z}\), of an independence hypergraph \(\mathcal{L}\) as a dual analog of the constrained homology of simplicial
complexes. Take \(\beta\in\operatorname{Ext}_{2s}(V)\) and \(\mu\in\operatorname{Ext}^{2s}(V)\) for \(s\in\mathbb{N}\). It is proved in [26] that \(\beta\) induces a homomorphism from \(H_{*}(\mathcal{K},\alpha,m)\) to \(H_{*}(\mathcal{K},\alpha,m-2s)\) and that \(\mu\) induces a homomorphism from \(H^{*}(\mathcal{L},\omega,m)\) to \(H^{*}(\mathcal{L},\omega,m+2s)\).
**Motivation 3. Persistent homology of networks and topological data analysis**. Persistent homology is a widely-used method in topological data analysis (for example, [7, 4, 33, 12]). Given a simplicial complex \(\mathcal{K}\), we take a filtration \(\{\mathcal{K}_{t}\}_{t\in\mathbb{R}}\) such that \(\mathcal{K}_{t}\subseteq\mathcal{K}_{s}\) for any \(-\infty<t\leq s<+\infty\) and \(\cup_{t\in\mathbb{R}}\mathcal{K}_{t}=\mathcal{K}\). Applying the homology functor, we will obtain a persistence module \(\{H_{*}(\mathcal{K}_{t})\}_{t\in\mathbb{R}}\) which is called the _persistent homology_ of the filtration \(\{\mathcal{K}_{t}\}_{t\in\mathbb{R}}\). The persistent homology of simplicial complexes captures the multi-scale or time-evolving topological features of complex networks (for example, [1, 2, 6, 21]).
In this paper, we study the persistence of the constrained homology for simplicial complexes as well as the persistence of the constrained cohomology for independence hypergraphs. We prove the Mayer-Vietoris sequences for the constrained persistent homology of simplicial complexes and the constrained persistent cohomology of independence hypergraphs. Precisely, we prove the following results.
Firstly, we prove the Mayer-Vietoris sequences for the constrained homology of simplicial complexes and the constrained cohomology of independence hypergraphs. Let \(t\in\mathbb{N}\). Let \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) be simplicial complexes and \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be independence hypergraphs with their vertices from \(V\). For any \(\alpha\in\operatorname{Ext}_{2t+1}(V)\), we have a long exact sequence of the constrained homology groups
\[\cdots\longrightarrow H_{n}(\mathcal{K}_{1}\cap\mathcal{K}_{2}, \alpha,m)\longrightarrow H_{n}(\mathcal{K}_{1},\alpha,m)\oplus H_{n}( \mathcal{K}_{2},\alpha,m)\longrightarrow\] \[H_{n}(\mathcal{K}_{1}\cup\mathcal{K}_{2},\alpha,m)\longrightarrow H _{n-1}(\mathcal{K}_{1}\cap\mathcal{K}_{2},\alpha,m)\longrightarrow\cdots \tag{1.4}\]
which is functorial with respect to simplicial maps. For any \(\omega\in\operatorname{Ext}^{2t+1}(V)\), we have a long exact sequence of the constrained cohomology groups
\[\cdots\longrightarrow H^{n}(\mathcal{L}_{1}\cap\mathcal{L}_{2},\omega,m)\longrightarrow H^{n}(\mathcal{L}_{1},\omega,m)\oplus H^{n}( \mathcal{L}_{2},\omega,m)\longrightarrow\] \[H^{n}(\mathcal{L}_{1}\cup\mathcal{L}_{2},\omega,m)\longrightarrow H ^{n+1}(\mathcal{L}_{1}\cap\mathcal{L}_{2},\omega,m)\longrightarrow\cdots \tag{1.5}\]
which is functorial with respect to morphisms of independence hypergraphs induced by bijective self-maps on the vertex set \(V\). We call (1.4) the _Mayer-Vietoris sequence_ for the constrained homology of simplicial complexes on \(V\) and denote it as \(\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m)\). We call (1.5) the _Mayer-Vietoris sequence_ for the constrained cohomology of independence hypergraphs on \(V\) and denote it as \(\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m)\).
Secondly, we prove the functorialities of the constrained homology for simplicial complexes, the constrained cohomology for independence hypergraphs and the Mayer-Vietoris sequences. We give the definitions of morphisms of Mayer-Vietoris sequences in Section 4. We study the persistence of constrained homology \(H_{*}(\mathcal{K}_{x},\alpha,m\mid x\in\mathbb{R})\) for a filtration \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\) of simplicial complexes and the persistence of constrained cohomology \(H^{*}(\mathcal{L}_{x},\omega,m\mid x\in\mathbb{R})\) for a filtration \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\) of independence hypergraphs. Given two filtrations \(\{\mathcal{K}_{1,x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{K}_{2,x}\}_{x\in\mathbb{R}}\) of simplicial complexes, we denote the persistent Mayer-Vietoris sequence of the constrained homology as \(\mathbf{PMV}_{*}(\mathcal{K}_{1,x},\mathcal{K}_{2,x},\alpha,m\mid x\in\mathbb{ R})\). Similarly, given two filtrations \(\{\mathcal{L}_{1,x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{L}_{2,x}\}_{x\in\mathbb{R}}\) of independence hypergraphs, we denote the persistent Mayer-Vietoris sequence of the constrained cohomology as \(\mathbf{PMV}^{*}(\mathcal{L}_{1,x},\mathcal{L}_{2,x},\omega,m\mid x\in\mathbb{ R})\). We give the definition of morphisms of persistent Mayer-Vietoris sequences in Section 5.
Finally, we prove the next theorem for the persistent Mayer-Vietoris sequences of the constrained homology of simplicial complexes and the constrained cohomology of independence hypergraphs.
**Theorem 1.1**.:
1. _Let_ \(\{\mathcal{K}_{1,x}\}_{x\in\mathbb{R}}\) _and_ \(\{\mathcal{K}_{2,x}\}_{x\in\mathbb{R}}\) _be two filtrations of simplicial complexes with their vertices from_ \(V\)_. Then we have a diagram of persistent Mayer-Vietoris sequences_ \[\dots\xrightarrow{\operatorname{Ext}_{2}(V)}\operatorname{\mathbf{PMV}}_{*}(\mathcal{K}_{1,x},\mathcal{K}_{2,x},\alpha,m+2\mid x\in\mathbb{R})\xrightarrow{\operatorname{Ext}_{2}(V)}\operatorname{\mathbf{PMV}}_{*}(\mathcal{K}_{1,x},\mathcal{K}_{2,x},\alpha,m\mid x\in\mathbb{R})\xrightarrow{\operatorname{Ext}_{2}(V)}\dots\] _such that for any positive integer_ \(k\)_, any_ \(s_{1},s_{2},\dots,s_{k}\in\mathbb{N}\) _and any_ \(\beta_{i}\in\operatorname{Ext}_{2s_{i}}(V)\) _for_ \(i=1,2,\dots,k\)_, the morphism_ \(\beta_{1}\wedge\beta_{2}\wedge\dots\wedge\beta_{k}\in\operatorname{Ext}_{2(s_{1}+s_{2}+\dots+s_{k})}(V)\) _of persistent Mayer-Vietoris sequences can be identified with the composition_ \(\beta_{1}\circ\beta_{2}\circ\dots\circ\beta_{k}\) _of the morphisms_ \(\beta_{1},\beta_{2},\dots,\beta_{k}\)_;_
2. _Let_ \(\{\mathcal{L}_{1,x}\}_{x\in\mathbb{R}}\) _and_ \(\{\mathcal{L}_{2,x}\}_{x\in\mathbb{R}}\) _be two filtrations of independence hypergraphs with their vertices from_ \(V\)_. Then we have a diagram of persistent Mayer-Vietoris sequences_ \[\dots\xrightarrow{\operatorname{Ext}^{2}(V)}\operatorname{\mathbf{PMV}}^{*}(\mathcal{L}_{1,x},\mathcal{L}_{2,x},\omega,m-2\mid x\in\mathbb{R})\xrightarrow{\operatorname{Ext}^{2}(V)}\operatorname{\mathbf{PMV}}^{*}(\mathcal{L}_{1,x},\mathcal{L}_{2,x},\omega,m\mid x\in\mathbb{R})\xrightarrow{\operatorname{Ext}^{2}(V)}\dots\] _such that for any positive integer_ \(k\)_, any_ \(s_{1},s_{2},\dots,s_{k}\in\mathbb{N}\) _and any_ \(\mu_{i}\in\operatorname{Ext}^{2s_{i}}(V)\) _for_ \(i=1,2,\dots,k\)_, the morphism_ \(\mu_{1}\wedge\mu_{2}\wedge\dots\wedge\mu_{k}\in\operatorname{Ext}^{2(s_{1}+s_{2}+\dots+s_{k})}(V)\) _of persistent Mayer-Vietoris sequences can be identified with the composition_ \(\mu_{1}\circ\mu_{2}\circ\dots\circ\mu_{k}\) _of the morphisms_ \(\mu_{1},\mu_{2},\dots,\mu_{k}\)_._
In Theorem 1.1, we use
\[\operatorname{Ext}_{2s}(V):\quad\operatorname{\mathbf{PMV}}_{*}(\mathcal{K}_{1,x},\mathcal{K}_{2,x},\alpha,m\mid x\in\mathbb{R})\longrightarrow\operatorname {\mathbf{PMV}}_{*}(\mathcal{K}_{1,x},\mathcal{K}_{2,x},\alpha,m-2s\mid x\in \mathbb{R})\]
to denote the family of morphisms of persistent Mayer-Vietoris sequences
\[\beta_{*}:\quad\operatorname{\mathbf{PMV}}_{*}(\mathcal{K}_{1,x},\mathcal{K}_{ 2,x},\alpha,m\mid x\in\mathbb{R})\longrightarrow\operatorname{\mathbf{PMV}}_{* }(\mathcal{K}_{1,x},\mathcal{K}_{2,x},\alpha,m-2s\mid x\in\mathbb{R})\]
for all \(\beta\in\operatorname{Ext}_{2s}(V)\) and use
\[\operatorname{Ext}^{2s}(V):\quad\operatorname{\mathbf{PMV}}^{*}(\mathcal{L}_{ 1,x},\mathcal{L}_{2,x},\omega,m\mid x\in\mathbb{R})\longrightarrow\operatorname {\mathbf{PMV}}^{*}(\mathcal{L}_{1,x},\mathcal{L}_{2,x},\omega,m+2s\mid x\in \mathbb{R})\]
to denote the family of morphisms of persistent Mayer-Vietoris sequences
\[\mu_{*}:\quad\operatorname{\mathbf{PMV}}^{*}(\mathcal{L}_{1,x},\mathcal{L}_{ 2,x},\omega,m\mid x\in\mathbb{R})\longrightarrow\operatorname{\mathbf{PMV}}^{*}( \mathcal{L}_{1,x},\mathcal{L}_{2,x},\omega,m+2s\mid x\in\mathbb{R})\]
for all \(\mu\in\operatorname{Ext}^{2s}(V)\). The next corollary follows from Theorem 1.1 immediately.
**Corollary 1.2**.: _Suppose \(V\) is a finite set._

1. _Let_ \(\{\mathcal{K}_{1,x}\}_{x\in\mathbb{R}}\) _and_ \(\{\mathcal{K}_{2,x}\}_{x\in\mathbb{R}}\) _be two filtrations of simplicial complexes with their vertices from_ \(V\)_. Let_ \(\mathcal{L}_{1,x}=\Delta[V]\setminus\mathcal{K}_{1,-x}\) _and_ \(\mathcal{L}_{2,x}=\Delta[V]\setminus\mathcal{K}_{2,-x}\) _for each_ \(x\in\mathbb{R}\)_. Then_ \(\{\mathcal{L}_{1,x}\}_{x\in\mathbb{R}}\) _and_ \(\{\mathcal{L}_{2,x}\}_{x\in\mathbb{R}}\) _are filtrations of independence hypergraphs whose constrained persistent cohomology satisfies Theorem 1.1 (2);_
2. _Let_ \(\{\mathcal{L}_{1,x}\}_{x\in\mathbb{R}}\) _and_ \(\{\mathcal{L}_{2,x}\}_{x\in\mathbb{R}}\) _be two filtrations of independence hypergraphs with their vertices from_ \(V\)_. Let_ \(\mathcal{K}_{1,x}=\Delta[V]\setminus\mathcal{L}_{1,-x}\) _and_ \(\mathcal{K}_{2,x}=\Delta[V]\setminus\mathcal{L}_{2,-x}\) _for each_ \(x\in\mathbb{R}\)_. Then_ \(\{\mathcal{K}_{1,x}\}_{x\in\mathbb{R}}\) _and_ \(\{\mathcal{K}_{2,x}\}_{x\in\mathbb{R}}\) _are filtrations of simplicial complexes whose constrained persistent homology satisfies Theorem 1.1 (1)._
## 2 Differential calculus on discrete sets
Let \(\mathbb{C}\) be the complex number field. Let \(\operatorname{Ext}[V]\) be the exterior algebra generated by \(V\) over \(\mathbb{C}\). Then \(\operatorname{Ext}[V]=\bigoplus_{p=0}^{\infty}\operatorname{Ext}^{p}[V]\) where \(\operatorname{Ext}^{p}[V]\) is the complex vector space consisting of all the formal (finitely many) linear combinations of the elements of the form \(v_{0}\wedge v_{1}\wedge\cdots\wedge v_{p}\) with \(v_{0},v_{1},\ldots,v_{p}\in V\) distinct.
Let \(n\in\mathbb{N}\). An _elementary \(n\)-path_ on \(V\) is an ordered sequence \(v_{0}v_{1}\ldots v_{n}\) of (not necessarily distinct) \(n+1\) vertices in \(V\) (cf. [15, Definition 2.1], [14, 16, 17, 19, 18]). A formal linear combination of elementary \(n\)-paths on \(V\) with coefficients in \(\mathbb{C}\) is called an \(n\)_-path_ on \(V\). Denote by \(\Lambda_{n}(V)\) the vector space of all the \(n\)-paths on \(V\) (cf. [15, Subsection 2.1], [14, 16, 17, 19, 18]). We have a graded vector space \(\Lambda_{*}(V)=\bigoplus_{n=0}^{\infty}\Lambda_{n}(V)\). We have a canonical Hermitian product on \(\Lambda_{*}(V)\) given by
\[\langle u_{0}u_{1}\ldots u_{n},v_{0}v_{1}\ldots v_{m}\rangle= \begin{cases}\prod_{i=0}^{n}\delta(u_{i},v_{i}),&n=m\\ 0,&n\neq m\end{cases}\]
which extends sesquilinearly by \(\langle\xi_{1}+\xi_{2},\eta\rangle=\langle\xi_{1},\eta\rangle+\langle\xi_{2},\eta\rangle\), \(\langle\xi,\eta_{1}+\eta_{2}\rangle=\langle\xi,\eta_{1}\rangle+\langle\xi,\eta_{2}\rangle\) and \(\langle c\xi,\eta\rangle=\langle\xi,\bar{c}\eta\rangle=c\langle\xi,\eta\rangle\) for any \(\xi,\xi_{1},\xi_{2},\eta,\eta_{1},\eta_{2}\in\Lambda_{*}(V)\) and any \(c\in\mathbb{C}\). Here for any vertices \(u,v\in V\), we use the notation \(\delta(u,v)=1\) if \(u=v\) and \(\delta(u,v)=0\) if \(u\neq v\).
For any \(v\in V\), the _partial derivative_ on \(\Lambda_{*}(V)\) with respect to \(v\) is defined to be a sequence of linear maps (cf. [26, Subsection 3.2])
\[\frac{\partial}{\partial v}:\quad\Lambda_{n}(V)\longrightarrow \Lambda_{n-1}(V),\quad n\in\mathbb{N} \tag{2.1}\]
given by
\[\frac{\partial}{\partial v}(v_{0}v_{1}\ldots v_{n})=\sum_{i=0}^{n}(-1)^{i} \delta(v,v_{i})v_{0}\ldots\widehat{v_{i}}\ldots v_{n} \tag{2.2}\]
and the _partial differentiation_\(dv\) on \(\Lambda_{*}(V)\) with respect to \(v\) is defined to be a sequence of linear maps (cf. [26, Subsection 3.3])
\[dv:\quad\Lambda_{n}(V)\longrightarrow\Lambda_{n+1}(V),\quad n\in\mathbb{N} \tag{2.3}\]
given by
\[dv(u_{0}u_{1}\ldots u_{n-1})=\sum_{i=0}^{n}(-1)^{i}u_{0}u_{1}\ldots u_{i-1}\,v\,u_{i}\ldots u_{n-1}. \tag{2.4}\]
It can be derived from [26, Subsection 3.3] that \(\langle\frac{\partial}{\partial v}(\eta),\xi\rangle=\langle\eta,dv(\xi)\rangle\) for any \(\xi,\eta\in\Lambda_{*}(V)\), i.e. \(dv\) is the adjoint linear map of \(\frac{\partial}{\partial v}\).
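The operators (2.2) and (2.4), together with the adjointness just noted, are straightforward to experiment with. A minimal sketch, representing an elementary path as a tuple of vertices and a linear combination as a dictionary (a representation chosen for this illustration):

```python
def partial(v, chain):
    """The partial derivative d/dv of Eq. (2.2): delete each occurrence
    of v at position i with sign (-1)**i."""
    out = {}
    for path, c in chain.items():
        for i, u in enumerate(path):
            if u == v:
                face = path[:i] + path[i + 1:]
                out[face] = out.get(face, 0) + (-1) ** i * c
    return out

def d(v, chain):
    """The partial differentiation dv of Eq. (2.4): insert v at every
    position i with sign (-1)**i."""
    out = {}
    for path, c in chain.items():
        for i in range(len(path) + 1):
            new = path[:i] + (v,) + path[i:]
            out[new] = out.get(new, 0) + (-1) ** i * c
    return out

def inner(x, y):
    """The canonical Hermitian product: elementary paths are orthonormal."""
    return sum(c * complex(y[p]).conjugate() for p, c in x.items() if p in y)

eta = {("a", "b"): 1}   # the elementary 1-path ab
xi = {("b",): 1}        # the elementary 0-path b
assert inner(partial("a", eta), xi) == inner(eta, d("a", xi))  # adjointness
```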
Let \(\operatorname{Ext}_{*}(V)\) be the exterior algebra generated by \(\frac{\partial}{\partial v}\), \(v\in V\), over \(\mathbb{C}\) and let \(\operatorname{Ext}^{*}(V)\) be the exterior algebra generated by \(dv\), \(v\in V\), over \(\mathbb{C}\). Then both \(\operatorname{Ext}_{*}(V)\) and \(\operatorname{Ext}^{*}(V)\) are isomorphic to \(\operatorname{Ext}[V]\). It is proved in [26, Lemma 3.1] that \(\frac{\partial}{\partial v}\circ\frac{\partial}{\partial u}=-\frac{\partial}{\partial u}\circ\frac{\partial}{\partial v}\) and in [26, Lemma 3.3] that \(dv\circ du=-du\circ dv\) for any \(u,v\in V\). Thus both \(\operatorname{Ext}_{*}(V)\) and \(\operatorname{Ext}^{*}(V)\) act on \(\Lambda_{*}(V)\) such that the compositions of maps are represented by exterior products. Let \(k\in\mathbb{N}\). Let \(\alpha\in\operatorname{Ext}_{k}(V)\) and \(\omega\in\operatorname{Ext}^{k}(V)\). We say that \(\alpha\) and \(\omega\) are _adjoint_ to each other if \(\langle\alpha(\eta),\xi\rangle=\langle\eta,\omega(\xi)\rangle\) for any \(n\in\mathbb{N}\), any \(\xi\in\Lambda_{n}(V)\) and any \(\eta\in\Lambda_{n+k}(V)\). Similar to [26, Subsection 3.3], it is direct that \(\alpha\) and \(\omega\) are adjoint if and only if
\[\alpha=\sum_{v_{1},v_{2},\ldots,v_{k}\in V}z_{v_{1},v_{2},\ldots,v_{k}}\frac{ \partial}{\partial v_{1}}\wedge\frac{\partial}{\partial v_{2}}\wedge\cdots \wedge\frac{\partial}{\partial v_{k}},\quad z_{v_{1},v_{2},\ldots,v_{k}}\in \mathbb{C}.\]
and
\[\omega=\operatorname{sgn}(k)\sum_{v_{1},v_{2},\ldots,v_{k}\in V}\bar{z}_{v_{1}, v_{2},\ldots,v_{k}}dv_{1}\wedge dv_{2}\wedge\cdots\wedge dv_{k},\quad\bar{z}_{v_{1}, v_{2},\ldots,v_{k}}\in\mathbb{C}\]
where \(\mathrm{sgn}(k)=1\) if \(k\equiv 0,1\) modulo \(4\) and \(\mathrm{sgn}(k)=-1\) if \(k\equiv 2,3\) modulo \(4\).
An elementary \(n\)-path \(v_{0}v_{1}\ldots v_{n}\) on \(V\) is called _cyclic_ if there exist integers \(0\leq i<j\leq n\) such that \(v_{j}=v_{i}\), and is called _acyclic_ if it is not cyclic. Let \(\mathcal{C}_{n}(V)\) be the complex vector space spanned by all the cyclic elementary \(n\)-paths on \(V\) and let \(\mathcal{D}_{n}(V)\) be the complex vector space spanned by all the acyclic elementary \(n\)-paths on \(V\). Let \(\mathcal{C}_{*}(V)=\bigoplus_{n=0}^{\infty}\mathcal{C}_{n}(V)\) and let \(\mathcal{D}_{*}(V)=\bigoplus_{n=0}^{\infty}\mathcal{D}_{n}(V)\). Then we have an orthogonal direct sum of graded vector spaces
\[\Lambda_{*}(V)=\mathcal{C}_{*}(V)\oplus\mathcal{D}_{*}(V). \tag{2.5}\]
An acyclic elementary \(n\)-path \(v_{0}v_{1}\ldots v_{n}\) on \(V\) is called _non-simplicial_ if there exist integers \(0\leq i<j\leq n\) such that \(v_{j}\prec v_{i}\) (cf. [26, Definition 4.1]). Let \(\mathcal{E}_{n}(V)\) be the complex vector space spanned by all the non-simplicial acyclic elementary \(n\)-paths on \(V\). Let \(\tilde{\Lambda}_{n}(V)\) be the subspace of \(\Lambda_{n}(V)\) whose canonical basis consists of all the elementary \(n\)-paths \(v_{0}v_{1}\ldots v_{n}\) such that \(v_{0},v_{1},\ldots,v_{n}\) are distinct and \(v_{0}\prec v_{1}\prec\cdots\prec v_{n}\). Let \(\mathcal{E}_{*}(V)=\bigoplus_{n=0}^{\infty}\mathcal{E}_{n}(V)\) and let \(\tilde{\Lambda}_{*}(V)=\bigoplus_{n=0}^{\infty}\tilde{\Lambda}_{n}(V)\). Then we have an orthogonal direct sum of graded vector spaces
\[\mathcal{D}_{*}(V)=\mathcal{E}_{*}(V)\oplus\tilde{\Lambda}_{*}(V). \tag{2.6}\]
By (2.5) and (2.6), we have an orthogonal direct sum of graded vector spaces
\[\Lambda_{*}(V)=\mathcal{C}_{*}(V)\oplus\mathcal{E}_{*}(V)\oplus\tilde{ \Lambda}_{*}(V). \tag{2.7}\]
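The three summands of (2.7) can be told apart mechanically. A minimal sketch, continuing the tuple representation above, with the total order \(\prec\) encoded as a dictionary of ranks (an assumption of the illustration):

```python
def component(path, rank):
    """Locate an elementary path in the decomposition (2.7)."""
    if len(set(path)) < len(path):
        return "cyclic"           # lies in C_*(V)
    if any(rank[a] > rank[b] for a, b in zip(path, path[1:])):
        return "non-simplicial"   # lies in E_*(V)
    return "simplicial"           # lies in tilde-Lambda_*(V)

rank = {"a": 0, "b": 1, "c": 2}
print(component(("a", "b", "a"), rank))  # cyclic
print(component(("b", "a"), rank))       # non-simplicial
print(component(("a", "c"), rank))       # simplicial
```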
The restriction of (2.1) gives a graded linear map
\[\frac{\partial}{\partial v}:\quad\tilde{\Lambda}_{n}(V)\longrightarrow\tilde {\Lambda}_{n-1}(V),\quad n\in\mathbb{N} \tag{2.8}\]
By composing with the canonical projection \(\pi:\Lambda_{*}(V)\longrightarrow\tilde{\Lambda}_{*}(V)\) sending \(\mathcal{C}_{*}(V)\oplus\mathcal{E}_{*}(V)\) to zero and sending \(\tilde{\Lambda}_{*}(V)\) identically to itself, (2.3) gives a graded linear map
\[dv:\quad\tilde{\Lambda}_{n}(V)\stackrel{{ dv}}{{\longrightarrow}} \Lambda_{n+1}(V)\stackrel{{\pi}}{{\longrightarrow}}\tilde{ \Lambda}_{n+1}(V),\quad n\in\mathbb{N}. \tag{2.9}\]
An element \(\sigma\in\Delta[V]\) consisting of \(n+1\) vertices in \(V\) is an _\(n\)-hyperedge_ on \(V\), denoted as \(\sigma^{(n)}\). Let \(S_{n+1}\) be the symmetric group on \((n+1)\)-letters. Then \(S_{n+1}\) acts on the set of all the elementary \(n\)-paths on \(V\). With the help of this group action, \(\sigma^{(n)}\) can be expressed as an orbit
\[\sigma^{(n)}=S_{n+1}(v_{0}v_{1}\ldots v_{n})=\{v_{s(0)}v_{s(1)}\ldots v_{s(n) }\mid s\in S_{n+1}\}\]
for some distinct \(v_{0},v_{1},\ldots,v_{n}\in V\). Without loss of generality, we choose the representative \(v_{0}v_{1}\ldots v_{n}\) of \(\sigma^{(n)}\) such that \(v_{0}\prec v_{1}\prec\cdots\prec v_{n}\) and write \(\sigma^{(n)}=\{v_{0},v_{1},\ldots,v_{n}\}\). Let
\[\mathbb{C}_{n}(\Delta[V])=\mathrm{Span}_{\mathbb{C}}\{\sigma^{(n)}\in\Delta[V]\}\]
be the vector space consisting of all the linear combinations of the \(n\)-hyperedges on \(V\). Consider the direct sum
\[\mathbb{C}_{*}(\Delta[V])=\bigoplus_{n=0}^{\infty}\mathbb{C}_{n}(\Delta[V]).\]
Then \(\mathbb{C}_{n}(\Delta[V])\) can be identified with \(\tilde{\Lambda}_{n}(V)\) by choosing the elementary \(n\)-path \(v_{0}v_{1}\ldots v_{n}\) on \(V\) satisfying \(v_{0}\prec v_{1}\prec\cdots\prec v_{n}\) as a representative of \(\sigma^{(n)}\). For each \(\alpha\in\mathrm{Ext}_{k}(V)\), \(k\in\mathbb{N}\), (2.8) induces a graded linear map
\[\alpha:\quad\mathbb{C}_{n}(\Delta[V])\longrightarrow\mathbb{C}_{n-k}(\Delta[V ]),\quad n\in\mathbb{N}\]
which coincides with (2.8) if \(k=1\) and \(\alpha=\frac{\partial}{\partial v}\). For each \(\omega\in\mathrm{Ext}^{k}(V)\), \(k\in\mathbb{N}\), (2.9) induces a well-defined graded linear map
\[\omega:\quad\mathbb{C}_{n}(\Delta[V])\longrightarrow\mathbb{C}_{n+k}(\Delta[V ]),\quad n\in\mathbb{N}\]
which coincides with (2.9) if \(k=1\) and \(\omega=dv\).
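For a concrete illustration (the vertex set, the order and \(\alpha\) here are chosen ad hoc), take \(V=\{a,b\}\) with \(a\prec b\) and \(\alpha=\frac{\partial}{\partial a}+\frac{\partial}{\partial b}\in\operatorname{Ext}_{1}(V)\). The \(1\)-hyperedge \(\sigma^{(1)}=\{a,b\}\) is represented by the elementary path \(ab\), and (2.2) gives

\[\alpha(\sigma^{(1)})=\frac{\partial}{\partial a}(ab)+\frac{\partial}{\partial b}(ab)=b-a,\]

recovering the usual simplicial boundary of an edge in this case.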
_Remark 1_: In [26], we used the notation \(\mathcal{O}_{*}(V)\) for the graded vector space spanned by non-simplicial elementary paths on \(V\). Note that \(\mathcal{O}_{*}(V)=\mathcal{C}_{*}(V)\oplus\mathcal{E}_{*}(V)\).
## 3 The constrained homology of simplicial complexes and the constrained cohomology of independence hypergraphs
Let \(t\in\mathbb{N}\). Let \(\alpha\in\operatorname{Ext}_{2t+1}(V)\) and \(\omega\in\operatorname{Ext}^{2t+1}(V)\). Then for any integer \(0\leq q\leq 2t\), we have a chain complex
\[\cdots\xlongrightarrow{\alpha}\Lambda_{n(2t+1)+q}(V)\xlongrightarrow{\alpha} \Lambda_{(n-1)(2t+1)+q}(V)\xlongrightarrow{\alpha}\]
\[\cdots\xlongrightarrow{\alpha}\Lambda_{(2t+1)+q}(V)\xlongrightarrow{\alpha} \Lambda_{q}(V)\xlongrightarrow{\alpha}0,\]
denoted by \(\Lambda_{*}(V,\alpha,q)\), and a co-chain complex
\[\cdots\xleftarrow{\omega}\Lambda_{n(2t+1)+q}(V)\xleftarrow{\omega}\Lambda_{(n-1)(2t+1)+q}(V)\xleftarrow{\omega}\]
\[\cdots\xleftarrow{\omega}\Lambda_{(2t+1)+q}(V)\xleftarrow{\omega}\Lambda_{q}(V)\xleftarrow{\omega}0,\]
denoted by \(\Lambda^{*}(V,\omega,q)\). The chain complex \(\Lambda_{*}(V,\alpha,q)\) has a sub-chain complex
\[\cdots\xlongrightarrow{\alpha}\mathcal{D}_{n(2t+1)+q}(V)\xlongrightarrow{ \alpha}\mathcal{D}_{(n-1)(2t+1)+q}(V)\xlongrightarrow{\alpha}\]
\[\cdots\xlongrightarrow{\alpha}\mathcal{D}_{(2t+1)+q}(V)\xlongrightarrow{ \alpha}\mathcal{D}_{q}(V)\xlongrightarrow{\alpha}0,\]
denoted by \(\mathcal{D}_{*}(V,\alpha,q)\), and the chain complex \(\mathcal{D}_{*}(V,\alpha,q)\) has a sub-chain complex
\[\cdots\xlongrightarrow{\alpha}\mathbb{C}_{n(2t+1)+q}(\Delta[V])\xlongrightarrow {\alpha}\mathbb{C}_{(n-1)(2t+1)+q}(\Delta[V])\xlongrightarrow{\alpha}\]
\[\cdots\xlongrightarrow{\alpha}\mathbb{C}_{(2t+1)+q}(\Delta[V])\xlongrightarrow {\alpha}\mathbb{C}_{q}(\Delta[V])\xlongrightarrow{\alpha}0,\]
denoted by \(\mathbb{C}_{*}(\Delta[V],\alpha,q)\). On the other hand, the co-chain complex \(\Lambda^{*}(V,\omega,q)\) has a sub-co-chain complex
\[\cdots\xleftarrow{\omega}\mathcal{C}_{n(2t+1)+q}(V)\xleftarrow{\omega}\mathcal{C}_{(n-1)(2t+1)+q}(V)\xleftarrow{\omega}\]
\[\cdots\xleftarrow{\omega}\mathcal{C}_{(2t+1)+q}(V)\xleftarrow{\omega}\mathcal{C}_{q}(V)\xleftarrow{\omega}0,\]
denoted by \(\mathcal{C}^{*}(V,\omega,q)\). Quotienting \(\Lambda^{*}(V,\omega,q)\) by \(\mathcal{C}^{*}(V,\omega,q)\), we have a quotient co-chain complex
\[\cdots\xleftarrow{\omega}\mathcal{D}_{n(2t+1)+q}(V)\xleftarrow{\omega}\mathcal{D}_{(n-1)(2t+1)+q}(V)\xleftarrow{\omega}\]
\[\cdots\xleftarrow{\omega}\mathcal{D}_{(2t+1)+q}(V)\xleftarrow{\omega}\mathcal{D}_{q}(V)\xleftarrow{\omega}0,\]
denoted by \(\mathcal{D}^{*}(V,\omega,q)\). The co-chain complex \(\mathcal{D}^{*}(V,\omega,q)\) has a sub-co-chain complex
\[\cdots\xleftarrow{\omega}\mathcal{E}_{n(2t+1)+q}(V)\xleftarrow{\omega}\mathcal{E}_{(n-1)(2t+1)+q}(V)\xleftarrow{\omega}\]
\[\cdots\xleftarrow{\omega}\mathcal{E}_{(2t+1)+q}(V)\xleftarrow{\omega}\mathcal{E}_{q}(V)\xleftarrow{\omega}0,\]
denoted by \(\mathcal{E}^{*}(V,\omega,q)\). Quotienting \(\mathcal{D}^{*}(V,\omega,q)\) by \(\mathcal{E}^{*}(V,\omega,q)\), we have a quotient co-chain complex
\[\cdots\xleftarrow{\omega}\mathbb{C}_{n(2t+1)+q}(\Delta[V])\xleftarrow{\omega} \mathbb{C}_{(n-1)(2t+1)+q}(\Delta[V])\xleftarrow{\omega}\]
\[\cdots\xleftarrow{\omega}\mathbb{C}_{(2t+1)+q}(\Delta[V])\xleftarrow{\omega} \mathbb{C}_{q}(\Delta[V])\xleftarrow{\omega}0,\]
denoted by \(\mathbb{C}^{*}(\Delta[V],\omega,q)\).
Let \(\mathbb{Z}\) be the ring of integers. For any \(m\in\mathbb{Z}\), there is a unique \(\lambda\in\mathbb{Z}\) and a unique integer \(0\leq q\leq 2t\) such that \(m=\lambda(2t+1)+q\). Denote the chain complex
\[\cdots\xlongrightarrow{\alpha}\Lambda_{(n+\lambda)(2t+1)+q}(V)\xlongrightarrow{\alpha}\Lambda_{(n-1+\lambda)(2t+1)+q}(V)\xlongrightarrow{\alpha}\]
\[\cdots\xlongrightarrow{\alpha}\Lambda_{(1+\lambda)(2t+1)+q}(V)\xlongrightarrow{\alpha}\Lambda_{\lambda(2t+1)+q}(V)\xlongrightarrow{\alpha}0\]
as \(\Lambda_{*}(V,\alpha,m)\) and the co-chain complex
\[\cdots\xleftarrow{\omega}\Lambda_{(n+\lambda)(2t+1)+q}(V)\xleftarrow{\omega} \Lambda_{(n-1+\lambda)(2t+1)+q}(V)\xleftarrow{\omega}\]
\[\cdots\xleftarrow{\omega}\Lambda_{(1+\lambda)(2t+1)+q}(V)\xleftarrow{\omega}\Lambda_{\lambda(2t+1)+q}(V)\xleftarrow{\omega}0\]
as \(\Lambda^{*}(V,\omega,m)\). Here we use the notation \(\Lambda_{k}(V)=0\) for \(k<0\). Similar notations for \(\mathbb{C}_{*}(\Delta[V],\alpha,m)\) and \(\mathbb{C}^{*}(\Delta[V],\omega,m)\). Let \(s\in\mathbb{N}\). Let \(\beta\in\mathrm{Ext}_{2s}(V)\) and \(\mu\in\mathrm{Ext}^{2s}(V)\). Then \(\beta\) gives a chain map
\[\beta:\quad\Lambda_{*}(V,\alpha,m)\longrightarrow\Lambda_{*}(V,\alpha,m-2s) \tag{3.1}\]
and \(\mu\) gives a co-chain map
\[\mu:\quad\Lambda^{*}(V,\omega,m)\longrightarrow\Lambda^{*}(V,\omega,m+2s). \tag{3.2}\]
The chain map (3.1) induces a chain map
\[\beta:\quad\mathbb{C}_{*}(\Delta[V],\alpha,m)\longrightarrow\mathbb{C}_{*}( \Delta[V],\alpha,m-2s).\]
Similarly, the co-chain map (3.2) induces a co-chain map
\[\mu:\quad\mathbb{C}^{*}(\Delta[V],\omega,m)\longrightarrow\mathbb{C}^{*}( \Delta[V],\omega,m+2s).\]
**Lemma 3.1**.: _Let \(s_{1},s_{2}\in\mathbb{N}\)._
1. _Let_ \(\beta_{1}\in\mathrm{Ext}_{2s_{1}}(V)\) _and_ \(\beta_{2}\in\mathrm{Ext}_{2s_{2}}(V)\)_. Then the triangle formed by_ \(\beta_{1}\)_,_ \(\beta_{2}\) _and_ \(\beta_{1}\wedge\beta_{2}\) _commutes:_ \[\beta_{1}\wedge\beta_{2}=\beta_{2}\circ\beta_{1}:\quad\Lambda_{*}(V,\alpha,m)\longrightarrow\Lambda_{*}(V,\alpha,m-2s_{1}-2s_{2}).\]
2. _Let_ \(\mu_{1}\in\mathrm{Ext}^{2s_{1}}(V)\) _and_ \(\mu_{2}\in\mathrm{Ext}^{2s_{2}}(V)\)_. Then the triangle formed by_ \(\mu_{1}\)_,_ \(\mu_{2}\) _and_ \(\mu_{1}\wedge\mu_{2}\) _commutes:_ \[\mu_{1}\wedge\mu_{2}=\mu_{2}\circ\mu_{1}:\quad\Lambda^{*}(V,\omega,m)\longrightarrow\Lambda^{*}(V,\omega,m+2s_{1}+2s_{2}).\]
Proof.: Since \(\beta_{1}\wedge\beta_{2}=\beta_{2}\wedge\beta_{1}=\beta_{2}\circ\beta_{1}\) is the composition of \(\beta_{2}\) and \(\beta_{1}\) (or equivalently, the composition of \(\beta_{1}\) and \(\beta_{2}\)), we have (1). Similarly, since \(\mu_{1}\wedge\mu_{2}=\mu_{2}\wedge\mu_{1}=\mu_{2}\circ\mu_{1}\) is the composition of \(\mu_{2}\) and \(\mu_{1}\), we have (2).
Let \(\mathcal{K}\) be a simplicial complex with its vertices from \(V\). Let \(\mathbb{C}_{n}(\mathcal{K})\) be the complex vector space spanned by the \(n\)-simplices in \(\mathcal{K}\). Let \(t,s\in\mathbb{N}\). Let \(m\in\mathbb{Z}\). Suppose \(m=\lambda(2t+1)+q\) where \(\lambda\in\mathbb{Z}\) and \(0\leq q\leq 2t\). It can be derived from [26, Theorem 4.1] that for any \(\alpha\in\operatorname{Ext}_{2t+1}(V)\), the graded vector space \(\{\mathbb{C}_{(n+\lambda)(2t+1)+q}(\mathcal{K})\}_{n\in\mathbb{N}}\) equipped with the chain map \(\alpha\) gives a sub-chain complex of \(\mathbb{C}_{*}(\Delta[V],\alpha,m)\), which will be denoted as \(\mathbb{C}_{*}(\mathcal{K},\alpha,m)\). Moreover, for any \(\beta\in\operatorname{Ext}_{2s}(V)\), there is an induced chain map
\[\beta:\quad\mathbb{C}_{*}(\mathcal{K},\alpha,m)\longrightarrow\mathbb{C}_{*} (\mathcal{K},\alpha,m-2s).\]
The \(n\)-th _constrained homology group_\(H_{n}(\mathcal{K},\alpha,m)\) of \(\mathcal{K}\) with respect to \(\alpha\) and \(m\) is defined to be the \(n\)-th homology group (cf. [26, Definition 4.3])
\[H_{n}(\mathcal{K},\alpha,m)=\frac{\operatorname{Ker}\Bigl{(}\alpha:\mathbb{C}_{(n+\lambda)(2t+1)+q}(\mathcal{K})\longrightarrow\mathbb{C}_{(n-1+\lambda)(2t+1)+q}(\mathcal{K})\Bigr{)}}{\operatorname{Im}\Bigl{(}\alpha:\mathbb{C}_{(n+1+\lambda)(2t+1)+q}(\mathcal{K})\longrightarrow\mathbb{C}_{(n+\lambda)(2t+1)+q}(\mathcal{K})\Bigr{)}}\]
of the chain complex \(\mathbb{C}_{*}(\mathcal{K},\alpha,m)\). It can be derived from [26, Theorem 4.2] that for any \(\alpha\in\operatorname{Ext}_{2t+1}(V)\) and any \(\beta\in\operatorname{Ext}_{2s}(V)\), there is an induced homomorphism
\[\beta_{*}:\quad H_{n}(\mathcal{K},\alpha,m)\longrightarrow H_{n}(\mathcal{K},\alpha,m-2s),\qquad n\in\mathbb{N}\]
of the constrained homology groups.
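When \(\mathcal{K}\) is finite, each \(\mathbb{C}_{k}(\mathcal{K})\) is finite dimensional and the dimension of \(H_{n}(\mathcal{K},\alpha,m)\) reduces to two matrix ranks. A minimal numerical sketch, assuming the caller supplies the matrices of \(\alpha\) in the hyperedge bases (this interface is an assumption of the illustration):

```python
import numpy as np

def constrained_betti(alpha_out, alpha_in):
    """dim H_n(K, alpha, m) = dim ker(alpha_out) - rank(alpha_in), where

    alpha_out : matrix of alpha leaving degree (n+lambda)(2t+1)+q, and
    alpha_in  : matrix of alpha arriving there from one degree above,
    so that alpha_out @ alpha_in = 0 (complex entries are fine)."""
    dim = alpha_out.shape[1]
    return (dim - np.linalg.matrix_rank(alpha_out)
            - np.linalg.matrix_rank(alpha_in))
```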
Let \(\mathcal{L}\) be an independence hypergraph with its vertices from \(V\). Let \(\mathbb{C}_{n}(\mathcal{L})\) be the complex vector space spanned by the \(n\)-hyperedges in \(\mathcal{L}\). It can be derived from [26, Theorem 4.3] that for any \(\omega\in\operatorname{Ext}^{2t+1}(V)\), the graded vector space \(\{\mathbb{C}_{(n+\lambda)(2t+1)+q}(\mathcal{L})\}_{n\in\mathbb{N}}\) equipped with the co-boundary map \(\omega\) gives a sub-co-chain complex of \(\mathbb{C}^{*}(\Delta[V],\omega,m)\), which will be denoted as \(\mathbb{C}^{*}(\mathcal{L},\omega,m)\). Moreover, for any \(\mu\in\operatorname{Ext}^{2s}(V)\), there is an induced co-chain map
\[\mu:\quad\mathbb{C}^{*}(\mathcal{L},\omega,m)\longrightarrow\mathbb{C}^{*}( \mathcal{L},\omega,m+2s).\]
The \(n\)-th _constrained cohomology group_\(H^{n}(\mathcal{L},\omega,m)\) of \(\mathcal{L}\) with respect to \(\omega\) and \(m\) is defined to be the cohomology group (cf. [26, Definition 4.4])
\[H^{n}(\mathcal{L},\omega,m)=\frac{\operatorname{Ker}\Bigl{(}\omega:\mathbb{C}_{(n+\lambda)(2t+1)+q}(\mathcal{L})\longrightarrow\mathbb{C}_{(n+1+\lambda)(2t+1)+q}(\mathcal{L})\Bigr{)}}{\operatorname{Im}\Bigl{(}\omega:\mathbb{C}_{(n-1+\lambda)(2t+1)+q}(\mathcal{L})\longrightarrow\mathbb{C}_{(n+\lambda)(2t+1)+q}(\mathcal{L})\Bigr{)}}\]
of the co-chain complex \(\mathbb{C}^{*}(\mathcal{L},\omega,m)\). It can be derived from [26, Theorem 4.4] that for any \(\omega\in\operatorname{Ext}^{2t+1}(V)\) and any \(\mu\in\operatorname{Ext}^{2s}(V)\), there is an induced homomorphism
\[\mu_{*}:\quad H^{n}(\mathcal{L},\omega,m)\longrightarrow H^{n}(\mathcal{L}, \omega,m+2s),\qquad n\in\mathbb{N}\]
of the constrained cohomology groups.
**Lemma 3.2**.: _Let \(s_{1},s_{2}\in\mathbb{N}\)._
1. _Let_ \(\beta_{1}\in\operatorname{Ext}_{2s_{1}}(V)\) _and_ \(\beta_{2}\in\operatorname{Ext}_{2s_{2}}(V)\)_. Then_ \((\beta_{1}\wedge\beta_{2})_{*}=(\beta_{2})_{*}\circ(\beta_{1})_{*}\) _as homomorphisms_ \(H_{n}(\mathcal{K},\alpha,m)\longrightarrow H_{n}(\mathcal{K},\alpha,m-2s_{1}-2s_{2})\)_, for all_ \(n\in\mathbb{N}\)_;_
2. _Let_ \(\mu_{1}\in\operatorname{Ext}^{2s_{1}}(V)\) _and_ \(\mu_{2}\in\operatorname{Ext}^{2s_{2}}(V)\)_. Then_ \((\mu_{1}\wedge\mu_{2})_{*}=(\mu_{2})_{*}\circ(\mu_{1})_{*}\) _as homomorphisms_ \(H^{n}(\mathcal{L},\omega,m)\longrightarrow H^{n}(\mathcal{L},\omega,m+2s_{1}+2s_{2})\)_, for all_ \(n\in\mathbb{N}\)_._

The Mayer-Vietoris sequences (1.4) and (1.5) are functorial with respect to simplicial maps and with respect to morphisms of independence hypergraphs induced by bijective self-maps of \(V\) (cf. Proposition 3.3 and Lemma 3.4).
Proof.: (1). Let \(\mathcal{K}_{1},\mathcal{K}_{2},\mathcal{K}^{\prime}_{1}\) and \(\mathcal{K}^{\prime}_{2}\) be simplicial complexes with vertices from \(V\). Suppose there are simplicial maps \(\varphi_{1}:\mathcal{K}_{1}\longrightarrow\mathcal{K}^{\prime}_{1}\) and \(\varphi_{2}:\mathcal{K}_{2}\longrightarrow\mathcal{K}^{\prime}_{2}\) induced by the same bijective map \(\varphi:V\longrightarrow V\) on the vertices. Then \(\varphi\) induces a simplicial map from \(\mathcal{K}_{1}\cap\mathcal{K}_{2}\) to \(\mathcal{K}^{\prime}_{1}\cap\mathcal{K}^{\prime}_{2}\) as well as a simplicial map from \(\mathcal{K}_{1}\cup\mathcal{K}_{2}\) to \(\mathcal{K}^{\prime}_{1}\cup\mathcal{K}^{\prime}_{2}\). Since \(\varphi\) is a self-bijection on \(V\), we have
\[\operatorname{Ext}_{*}(\varphi)(\alpha)\in\operatorname{Ext}_{2t+1}(V).\]
Consequently, we have a commutative diagram
where each row is a short exact sequence of co-chain complexes and the vertical maps \(\varphi_{\#}\) are co-chain maps induced by \(\varphi\). Applying the cohomology functor, the last commutative diagram
induces a commutative diagram
where each row is a long exact sequence and the vertical maps \(\varphi_{*}\) are homomorphisms induced by \(\varphi_{\#}\). We obtain (2).
## 4 The functorialities
Let \(V\) and \(V^{\prime}\) be two discrete sets. Suppose \(V\) as well as \(V^{\prime}\) has a total order. Let \(f:V\longrightarrow V^{\prime}\) be a map. We have an induced graded linear map \(\Lambda_{*}(f):\Lambda_{*}(V)\longrightarrow\Lambda_{*}(V^{\prime})\) given by \(\Lambda_{n}(f)(v_{0}\dots v_{n})=f(v_{0})\dots f(v_{n})\) for each \(n\in\mathbb{N}\) and extending linearly over \(\mathbb{C}\). The restriction of \(\Lambda_{*}(f)\) to \(\mathcal{C}_{*}(V)\) induces a graded linear map \(\mathcal{C}_{*}(f):\mathcal{C}_{*}(V)\longrightarrow\mathcal{C}_{*}(V^{\prime})\).
Suppose in addition that \(f\) is injective from \(V\) to \(V^{\prime}\). Then the restriction of \(\Lambda_{*}(f)\) to \(\mathcal{D}_{*}(V)\) induces a graded linear map \(\mathcal{D}_{*}(f):\mathcal{D}_{*}(V)\longrightarrow\mathcal{D}_{*}(V^{\prime})\). Quotienting by the orderings of the elementary paths in \(\mathcal{D}_{*}(V)\), we obtain a quotient graded linear map \(\mathbb{C}_{*}(f):\mathbb{C}_{*}(\Delta[V])\longrightarrow\mathbb{C}_{*}(\Delta[V^{\prime}])\). Note that \(\langle\Lambda_{*}(f)(\eta),\Lambda_{*}(f)(\xi)\rangle=\langle\eta,\xi\rangle\) for any \(\xi,\eta\in\Lambda_{*}(V)\), thus for any \(\xi,\eta\in\mathbb{C}_{*}(\Delta[V])\).
We have induced homomorphisms of exterior algebras \(\operatorname{Ext}_{*}(f):\operatorname{Ext}_{*}(V)\longrightarrow \operatorname{Ext}_{*}(V^{\prime})\) sending \(\frac{\partial}{\partial v}\) to \(\frac{\partial}{\partial f(v)}\) for each \(v\in V\) and \(\operatorname{Ext}^{*}(f):\operatorname{Ext}^{*}(V)\longrightarrow \operatorname{Ext}^{*}(V^{\prime})\) sending \(dv\) to \(df(v)\) for each \(v\in V\). For any \(k\in\mathbb{N}\), if \(\alpha\in\operatorname{Ext}_{k}(V)\) and \(\omega\in\operatorname{Ext}^{k}(V)\) are adjoint, then \(\operatorname{Ext}_{*}(f)(\alpha)\) and \(\operatorname{Ext}^{*}(f)(\omega)\) are adjoint as well.
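Continuing the dictionary representation used earlier in this section's sketches, the induced map \(\Lambda_{*}(f)\) is a one-liner, and injectivity of \(f\) is precisely what prevents distinct paths from merging (hence preserves the Hermitian product). A minimal sketch, with the vertex map \(f\) encoded as a dictionary (an assumption of the illustration):

```python
def Lambda(f, chain):
    """Lambda_*(f): apply the vertex map f to every entry of every path."""
    out = {}
    for path, c in chain.items():
        image = tuple(f[v] for v in path)
        out[image] = out.get(image, 0) + c  # terms merge only if f is not injective
    return out

f = {"a": "x", "b": "y"}                         # injective on {a, b}
print(Lambda(f, {("a", "b"): 1, ("b", "a"): -1}))  # {('x','y'): 1, ('y','x'): -1}
```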
For any \(t\in\mathbb{N}\), any integer \(0\leq q\leq 2t\), any \(\alpha\in\operatorname{Ext}_{2t+1}(V)\) and any \(\omega\in\operatorname{Ext}^{2t+1}(V)\), we have an induced chain map
\[f_{\#}:\quad\Lambda_{*}(V,\alpha,q)\longrightarrow\Lambda_{*}(V^{\prime}, \operatorname{Ext}_{*}(f)(\alpha),q) \tag{4.1}\]
and an induced co-chain map
\[f^{\#}:\quad\Lambda^{*}(V,\omega,q)\longrightarrow\Lambda^{*}(V^{\prime}, \operatorname{Ext}^{*}(f)(\omega),q). \tag{4.2}\]
Moreover, for any \(m\in\mathbb{Z}\), any \(s\in\mathbb{N}\), any \(\beta\in\operatorname{Ext}_{2s}(V)\) and any \(\mu\in\operatorname{Ext}^{2s}(V)\), we have a commutative diagram whose arrows are chain maps
(4.3)
and a commutative diagram whose arrows are co-chain maps
(4.4)
We notice that the injectivity of \(f\) is essential for the well-definedness of \(\operatorname{Ext}_{*}(f)\), \(\operatorname{Ext}^{*}(f)\), \(f_{\#}\) and \(f^{\#}\).
By restricting (4.1) to the sub-chain complex \(\mathcal{D}_{*}(V)\), \(f_{\#}\) induces a chain map
\[f_{\#}:\quad\mathcal{D}_{*}(V,\alpha,q)\longrightarrow\mathcal{D}_{*}(V^{\prime },\mathrm{Ext}_{*}(f)(\alpha),q). \tag{4.5}\]
By restricting (4.5) to the sub-chain complex \(\mathbb{C}_{*}(\Delta[V])\), \(f_{\#}\) induces a chain map
\[f_{\#}:\quad\mathbb{C}_{*}(\Delta[V],\alpha,q)\longrightarrow\mathbb{C}_{*}( \Delta[V^{\prime}],\mathrm{Ext}_{*}(f)(\alpha),q). \tag{4.6}\]
Similarly, by restricting (4.2) to the sub-co-chain complex \(\mathcal{C}^{*}(V,\omega,q)\), \(f^{\#}\) induces a co-chain map
\[f^{\#}:\quad\mathcal{C}^{*}(V,\omega,q)\longrightarrow\mathcal{C}^{*}(V^{\prime},\mathrm{Ext}^{*}(f)(\omega),q). \tag{4.7}\]
Quotienting (4.2) by (4.7), we have an induced co-chain map
\[f^{\#}:\quad\mathcal{D}^{*}(V,\omega,q)\longrightarrow\mathcal{D}^{*}(V^{ \prime},\mathrm{Ext}^{*}(f)(\omega),q). \tag{4.8}\]
By restricting (4.8) to the sub-co-chain complex \(\mathcal{E}^{*}(V,\omega,q)\), we obtain a co-chain map
\[f^{\#}:\quad\mathcal{E}^{*}(V,\omega,q)\longrightarrow\mathcal{E}^{*}(V^{ \prime},\mathrm{Ext}^{*}(f)(\omega),q). \tag{4.9}\]
Quotienting (4.8) by (4.9), we have an induced co-chain map
\[f^{\#}:\quad\mathbb{C}^{*}(\Delta[V],\omega,q)\longrightarrow\mathbb{C}^{*}( \Delta[V^{\prime}],\mathrm{Ext}^{*}(f)(\omega),q). \tag{4.10}\]
For any \(m\in\mathbb{Z}\), any \(s\in\mathbb{N}\), any \(\beta\in\mathrm{Ext}_{2s}(V)\) and any \(\mu\in\mathrm{Ext}^{2s}(V)\), by restricting each chain map in (4.3) to the corresponding sub-chain complexes in (4.6), we have a commutative diagram whose arrows are chain maps
(4.11)
Similarly, by taking the quotient co-chain map given in (4.10) for each co-chain map in (4.4), we have a commutative diagram whose arrows are co-chain maps
(4.12)
Let \(\mathcal{K}\) and \(\mathcal{K}^{\prime}\) be simplicial complexes with their vertices from \(V\) and \(V^{\prime}\) respectively. Suppose there is a simplicial map \(f:\mathcal{K}\longrightarrow\mathcal{K}^{\prime}\) which is given by an injective map \(f:V\longrightarrow V^{\prime}\). Then the diagram (4.11) induces a commutative diagram whose arrows are chain maps
(4.13)
Each chain complex in (4.13) is a sub-chain complex of the corresponding chain complex in (4.11) and each chain map in (4.13) is the restriction of the corresponding chain map in (4.11) to the sub-chain complex. Applying the homology functor, we have a commutative diagram of constrained homology groups
Let \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) be independence hypergraphs with their vertices in \(V\) and \(V^{\prime}\) respectively. Suppose there is a morphism of independence hypergraphs \(f:\mathcal{L}\longrightarrow\mathcal{L}^{\prime}\) which is given by a bijective map \(f:V\longrightarrow V^{\prime}\), i.e. \(V=V^{\prime}\) and \(f\) is a bijective self-map on \(V\). Then the diagram (4.12) induces a commutative diagram whose arrows are co-chain maps
(4.15)
Each co-chain complex in (4.15) is a sub-co-chain complex of the corresponding co-chain complex in (4.12) and each co-chain map in (4.15) is the restriction of the corresponding co-chain map in (4.12) to the sub-co-chain complex. We have an induced commutative diagram of constrained cohomology groups
(4.16)
A morphism between two Mayer-Vietoris sequences of the constrained homology of simplicial complexes is a sequence of maps (expressed as vertical arrows) such that the diagram
where each row is a Mayer-Vietoris sequence of constrained homology groups, commutes. We denote such a morphism as \(\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m)\longrightarrow\mathbf{MV}_{*}(\mathcal{K}^{\prime}_{1},\mathcal{K}^{\prime}_{2},\alpha^{\prime},m^{\prime})\). Morphisms between Mayer-Vietoris sequences of the constrained cohomology of independence hypergraphs are defined similarly and denoted as \(\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m)\longrightarrow\mathbf{MV}^{*}(\mathcal{L}^{\prime}_{1},\mathcal{L}^{\prime}_{2},\omega^{\prime},m^{\prime})\).
**Proposition 4.1**.: _(1). Let \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) be two simplicial complexes with their vertices from \(V\). Then we have a diagram of Mayer-Vietoris sequences whose arrows are families of morphisms of Mayer-Vietoris sequences_ \[\dots\xrightarrow{\operatorname{Ext}_{2}(V)}\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m+2)\xrightarrow{\operatorname{Ext}_{2}(V)}\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m)\xrightarrow{\operatorname{Ext}_{2}(V)}\dots\]
_such that for any positive integer \(k\), any \(s_{1},s_{2},\ldots,s_{k}\in\mathbb{N}\) and any \(\beta_{i}\in\operatorname{Ext}_{2s_{i}}(V)\) for \(i=1,2,\ldots,k\), the morphism \(\beta_{1}\wedge\beta_{2}\wedge\cdots\wedge\beta_{k}\in\operatorname{Ext}_{2(s_{ 1}+s_{2}+\cdots+s_{k})}(V)\) of Mayer-Vietoris sequences can be identified with the composition \(\beta_{1}\circ\beta_{2}\circ\cdots\circ\beta_{k}\) of the morphisms \(\beta_{1},\beta_{2},\ldots,\beta_{k}\). Moreover, the diagram is functorial with respect to simplicial maps;_
2. _Let_ \(\mathcal{L}_{1}\) _and_ \(\mathcal{L}_{2}\) _be two independence hypergraphs with their vertices from_ \(V\)_. Then we have a diagram of Mayer-Vietoris sequences whose arrows are families of morphisms of Mayer-Vietoris sequences_ \[\dots\xrightarrow{\operatorname{Ext}^{2}(V)}\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m-2)\xrightarrow{\operatorname{Ext}^{2}(V)}\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m)\xrightarrow{\operatorname{Ext}^{2}(V)}\dots\] _such that for any positive integer_ \(k\)_, any_ \(s_{1},s_{2},\ldots,s_{k}\in\mathbb{N}\) _and any_ \(\mu_{i}\in\operatorname{Ext}^{2s_{i}}(V)\) _for_ \(i=1,2,\ldots,k\)_, the morphism_ \(\mu_{1}\wedge\mu_{2}\wedge\cdots\wedge\mu_{k}\in\operatorname{Ext}^{2(s_{1}+s_{2}+\cdots+s_{k})}(V)\) _of Mayer-Vietoris sequences can be identified with the composition_ \(\mu_{1}\circ\mu_{2}\circ\cdots\circ\mu_{k}\) _of the morphisms_ \(\mu_{1},\mu_{2},\ldots,\mu_{k}\)_. Moreover, the diagram is functorial with respect to morphisms of independence hypergraphs induced by bijective maps between the vertex sets._
Proof.: (1). For each \(i=1,2,\ldots,k\), we have a commutative diagram
such that each row is a Mayer-Vietoris sequence of the constrained homology of simplicial complexes. Thus \(\beta_{i}\) induces a morphism between two Mayer-Vietoris sequences of the constrained homology of simplicial complexes
\[(\beta_{i})_{*}:\quad\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m )\longrightarrow\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m-2s_ {i}).\]
By Proposition 3.3 (1), the morphism \((\beta_{1}\wedge\beta_{2}\wedge\cdots\wedge\beta_{k})_{*}\) induced by \(\beta_{1}\wedge\beta_{2}\wedge\cdots\wedge\beta_{k}\in\operatorname{Ext}_{2(s_ {1}+s_{2}+\cdots+s_{k})}(V)\) of Mayer-Vietoris sequences can be identified with the composition \((\beta_{1})_{*}\circ(\beta_{2})_{*}\circ\cdots\circ(\beta_{k})_{*}\). We obtain the diagram in (1). By (4.14) and Lemma 3.4 (1), the diagram in (1) is functorial with respect to simplicial maps.
(2). For each \(i=1,2,\ldots,k\), we have a commutative diagram
such that each row is a Mayer-Vietoris sequence of the constrained cohomology of independence hypergraphs. Thus \(\mu_{i}\) induces a morphism \((\mu_{i})_{*}\) between two Mayer-Vietoris sequences of the constrained cohomology of independence hypergraphs
\[(\mu_{i})_{*}:\quad\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m) \longrightarrow\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m+2s_{i}).\]
By Proposition 3.3 (2), the morphism \((\mu_{1}\wedge\mu_{2}\wedge\cdots\wedge\mu_{k})_{*}\) induced by \(\mu_{1}\wedge\mu_{2}\wedge\cdots\wedge\mu_{k}\in\operatorname{Ext}^{2(s_{1}+s_ {2}+\cdots+s_{k})}(V)\) of Mayer-Vietoris sequences can be identified with the composition \((\mu_{1})_{*}\circ(\mu_{2})_{*}\circ\cdots\circ(\mu_{k})_{*}\). We obtain the diagram in (2). By (4.16) and Lemma 3.4 (2), the diagram in (2) is functorial with respect to morphisms of independence hypergraphs induced by bijective maps between the vertex sets.
In Proposition 4.1, we use
\[\operatorname{Ext}_{2s}(V):\quad\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_ {2},\alpha,m)\longrightarrow\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2}, \alpha,m-2s)\]
to denote the family of morphisms of Mayer-Vietoris sequences
\[\beta_{*}:\quad\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m)\longrightarrow\mathbf{MV}_{*}(\mathcal{K}_{1},\mathcal{K}_{2},\alpha,m-2s)\]
for all \(\beta\in\operatorname{Ext}_{2s}(V)\) and use
\[\operatorname{Ext}^{2s}(V):\quad\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m)\longrightarrow\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m+2s)\]
to denote the family of morphisms of Mayer-Vietoris sequences
\[\mu_{*}:\quad\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m) \longrightarrow\mathbf{MV}^{*}(\mathcal{L}_{1},\mathcal{L}_{2},\omega,m+2s)\]
for all \(\mu\in\operatorname{Ext}^{2s}(V)\).
## 5 The constrained persistent (co)homology
Let \(V\) be a discrete set. Let \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\) be a filtration of simplicial complexes such that
1. for any \(x\in\mathbb{R}\), \(\mathcal{K}_{x}\) is a simplicial complex with its vertices from \(V\);
2. for any \(-\infty<x\leq y<+\infty\), there is a canonical inclusion \(\iota_{x}^{y}:\mathcal{K}_{x}\longrightarrow\mathcal{K}_{y}\) satisfying \(\iota_{x}^{x}=\operatorname{id}\) for any \(x\in\mathbb{R}\) and \(\iota_{x}^{z}=\iota_{y}^{z}\circ\iota_{x}^{y}\) for any \(-\infty<x\leq y\leq z<+\infty\).
Let \(t,s,n\in\mathbb{N}\). Let \(\alpha\in\operatorname{Ext}_{2t+1}(V)\) and \(\beta\in\operatorname{Ext}_{2s}(V)\). Let \(m\in\mathbb{Z}\). We have a family of constrained homology groups
\[\{H_{n}(\mathcal{K}_{x},\alpha,m)\}_{x\in\mathbb{R}} \tag{5.1}\]
and a family of homomorphisms of homology groups
\[\{(\iota^{y}_{x})_{*}:H_{n}(\mathcal{K}_{x},\alpha,m)\longrightarrow H_{n}( \mathcal{K}_{y},\operatorname{Ext}_{*}(\iota^{y}_{x})(\alpha),m)\}_{-\infty<x \leq y<+\infty}. \tag{5.2}\]
For any \(-\infty<x\leq y<+\infty\), we have a commutative diagram
(5.3)
Since \(\iota^{y}_{x}\) is assumed to be the canonical inclusion of simplicial complexes with their vertices from \(V\), it is a simplicial map induced by the identity map on \(V\). Thus
\[\operatorname{Ext}_{*}(\iota^{y}_{x})(\alpha)=\alpha.\]
We call (5.1) together with (5.2) the \(n\)-th _constrained persistent homology_ of \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\) with respect to \(\alpha\) and \(m\) and denote it as \(\mathbf{H}_{n}(\mathcal{K}_{x},\alpha,m\mid x\in\mathbb{R})\). By the commutative diagram (5.3), we have an induced homomorphism of persistent complex vector spaces (cf. [5, Section 1.3, Module categories])
\[\beta_{*}:\quad\mathbf{H}_{n}(\mathcal{K}_{x},\alpha,m\mid x\in\mathbb{R}) \longrightarrow\mathbf{H}_{n}(\mathcal{K}_{x},\alpha,m-2s\mid x\in\mathbb{R}).\]
Similarly, let \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\) be a filtration of independence hypergraphs such that
1. for any \(x\in\mathbb{R}\), \(\mathcal{L}_{x}\) is an independence hypergraph with its vertices from \(V\);
2. for any \(-\infty<x\leq y<+\infty\), there is a canonical inclusion \(\theta^{y}_{x}:\mathcal{L}_{x}\longrightarrow\mathcal{L}_{y}\).
Let \(\omega\in\operatorname{Ext}^{2t+1}(V)\) and \(\mu\in\operatorname{Ext}^{2s}(V)\). We have a family of constrained cohomology groups
\[\{H^{n}(\mathcal{L}_{x},\omega,m)\}_{x\in\mathbb{R}} \tag{5.4}\]
and a family of homomorphisms of cohomology groups
\[\{(\theta^{y}_{x})_{*}:H^{n}(\mathcal{L}_{x},\omega,m)\longrightarrow H^{n}(\mathcal{L}_{y},\operatorname{Ext}^{*}(\theta^{y}_{x})(\omega),m)\}_{-\infty<x\leq y<+\infty}. \tag{5.5}\]
For any \(-\infty<x\leq y<+\infty\), we have a commutative diagram
(5.6)
Since \(\theta^{y}_{x}\) is assumed to be the canonical inclusion of independence hypergraphs with their vertices from \(V\), it is a morphism of independence hypergraphs induced by the identity map on \(V\). Thus
\[\operatorname{Ext}^{*}(\theta^{y}_{x})(\omega)=\omega.\]
We call (5.4) together with (5.5) the \(n\)-th _constrained persistent cohomology_ of \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\) with respect to \(\omega\) and \(m\) and denote it as \(\mathbf{H}^{n}(\mathcal{L}_{x},\omega,m\mid x\in\mathbb{R})\). By the commutative diagram (5.6), we have an induced homomorphism of persistent complex vector spaces
\[\mu_{*}:\quad\mathbf{H}^{n}(\mathcal{L}_{x},\omega,m\mid x\in\mathbb{R}) \longrightarrow\mathbf{H}^{n}(\mathcal{L}_{x},\omega,m+2s\mid x\in\mathbb{R}).\]
**Proposition 5.1**.:
1. _We have a diagram of persistent complex vector spaces_ _such that for any positive integer_ \(k\)_, any_ \(s_{1},s_{2},\ldots,s_{k}\in\mathbb{N}\) _and any_ \(\beta_{i}\in\operatorname{Ext}_{2s_{i}}(V)\) _for_ \(i=1,2,\ldots,k\)_, the homomorphism of persistent complex vector spaces_ \(\beta_{1}\wedge\beta_{2}\wedge\cdots\wedge\beta_{k}\in\operatorname{Ext}_{2(s_ {1}+s_{2}+\cdots+s_{k})}(V)\) _can be identified with the composition_ \(\beta_{1}\circ\beta_{2}\circ\cdots\circ\beta_{k}\) _of the homomorphisms of persistent complex vector spaces_ \(\beta_{1},\beta_{2},\ldots,\beta_{k}\)_;_
2. _We have a diagram of persistent complex vector spaces_ _such that for any positive integer_ \(k\)_, any_ \(s_{1},s_{2},\ldots,s_{k}\in\mathbb{N}\) _and any_ \(\mu_{i}\in\operatorname{Ext}^{2s_{i}}(V)\) _for_ \(i=1,2,\ldots,k\)_, the homomorphism of persistent complex vector spaces_ \(\mu_{1}\wedge\mu_{2}\wedge\cdots\wedge\mu_{k}\in\operatorname{Ext}^{2(s_{1}+s_ {2}+\cdots+s_{k})}(V)\) _can be identified with the composition_ \(\mu_{1}\circ\mu_{2}\circ\cdots\circ\mu_{k}\) _of the homomorphisms of persistent complex vector spaces_ \(\mu_{1},\mu_{2},\ldots,\mu_{k}\)_._
Proof.: (1). For each \(i=1,2,\ldots,k\), we have that
\[(\beta_{i})_{*}:\mathbf{H}_{n}(\mathcal{K}_{x},\alpha,m\mid x\in\mathbb{R}) \longrightarrow\mathbf{H}_{n}(\mathcal{K}_{x},\alpha,m-2s_{i}\mid x\in\mathbb{ R})\]
is an induced homomorphism of persistent complex vector spaces. By the definition of \(\operatorname{Ext}_{*}(V)\) and Proposition 3.3 (1), the induced homomorphism \((\beta_{1}\wedge\beta_{2}\wedge\cdots\wedge\beta_{k})_{*}\) is the composition \((\beta_{1})_{*}\circ(\beta_{2})_{*}\circ\cdots\circ(\beta_{k})_{*}\). We obtain (1).
(2). For each \(i=1,2,\ldots,k\), we have that
\[(\mu_{i})_{*}:\mathbf{H}^{n}(\mathcal{L}_{x},\omega,m\mid x\in\mathbb{R})\longrightarrow\mathbf{H}^{n}(\mathcal{L}_{x},\omega,m+2s_{i}\mid x\in\mathbb{R})\]
is an induced homomorphism of persistent complex vector spaces. By the definition of \(\operatorname{Ext}^{*}(V)\) and Proposition 3.3 (2), the induced homomorphism \((\mu_{1}\wedge\mu_{2}\wedge\cdots\wedge\mu_{k})_{*}\) is the composition \((\mu_{1})_{*}\circ(\mu_{2})_{*}\circ\cdots\circ(\mu_{k})_{*}\). We obtain (2).
In Proposition 5.1, we use
\[\operatorname{Ext}_{2s}(V):\quad\mathbf{H}_{*}(\mathcal{K}_{x},\alpha,m\mid x \in\mathbb{R})\longrightarrow\mathbf{H}_{*}(\mathcal{K}_{x},\alpha,m-2s\mid x \in\mathbb{R})\]
to denote the family of morphisms of persistent complex vector spaces
\[\beta_{*}:\quad\mathbf{H}_{*}(\mathcal{K}_{x},\alpha,m\mid x\in\mathbb{R}) \longrightarrow\mathbf{H}_{*}(\mathcal{K}_{x},\alpha,m-2s\mid x\in\mathbb{R})\]
for all \(\beta\in\operatorname{Ext}_{2s}(V)\) and use
\[\operatorname{Ext}^{2s}(V):\quad\mathbf{H}^{*}(\mathcal{L}_{x},\omega,m\mid x \in\mathbb{R})\longrightarrow\mathbf{H}^{*}(\mathcal{L}_{x},\omega,m+2s\mid x \in\mathbb{R})\]
to denote the family of morphisms of persistent complex vector spaces
\[\mu_{*}:\quad\mathbf{H}^{*}(\mathcal{L}_{x},\omega,m\mid x\in\mathbb{R}) \longrightarrow\mathbf{H}^{*}(\mathcal{L}_{x},\omega,m+2s\mid x\in\mathbb{ R})\]
for all \(\mu\in\operatorname{Ext}^{2s}(V)\).
Let \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{K}^{\prime}_{x}\}_{x\in\mathbb{R}}\) be two filtrations of simplicial complexes with their vertices from \(V\). Then both \(\{\mathcal{K}_{x}\cap\mathcal{K}^{\prime}_{x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{K}_{x}\cup\mathcal{K}^{\prime}_{x}\}_{x\in\mathbb{R}}\) are filtrations of simplicial complexes with vertices from \(V\). For any \(-\infty<x\leq y<+\infty\), the canonical inclusions \(\iota^{y}_{x}:\mathcal{K}_{x}\longrightarrow\mathcal{K}_{y}\) and \(\iota^{\prime y}_{x}:\mathcal{K}^{\prime}_{x}\longrightarrow\mathcal{K}^{\prime}_{y}\) induce a morphism of the Mayer-Vietoris sequences of the constrained homology
\[(\iota^{y}_{x},\iota^{\prime y}_{x})_{*}:\quad\mathbf{MV}_{*}(\mathcal{K}_{x}, \mathcal{K}^{\prime}_{x},\alpha,m)\longrightarrow\mathbf{MV}_{*}(\mathcal{K}_ {y},\mathcal{K}^{\prime}_{y},\alpha,m). \tag{5.7}\]
We call the family \(\{\mathbf{MV}_{*}(\mathcal{K}_{x},\mathcal{K}^{\prime}_{x},\alpha,m)\}_{x\in \mathbb{R}}\) together with the family of morphisms given by (5.7) the _persistent Mayer-Vietoris sequence_ of the constrained persistent homology for the filtrations \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{K}^{\prime}_{x}\}_{x\in\mathbb{R}}\) of simplicial complexes and denote it as
\[\mathbf{PMV}_{*}(\mathcal{K}_{x},\mathcal{K}^{\prime}_{x},\alpha,m\mid x\in \mathbb{R}).\]
Similarly, let \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{L}^{\prime}_{x}\}_{x\in\mathbb{R}}\) be two filtrations of independence hypergraphs with their vertices from \(V\). Then both \(\{\mathcal{L}_{x}\cap\mathcal{L}^{\prime}_{x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{L}_{x}\cup\mathcal{L}^{\prime}_{x}\}_{x\in\mathbb{R}}\) are filtrations of independence hypergraphs with vertices from \(V\). For any \(-\infty<x\leq y<+\infty\), the canonical inclusions \(\theta^{y}_{x}:\mathcal{L}_{x}\longrightarrow\mathcal{L}_{y}\) and \(\theta^{\prime y}_{x}:\mathcal{L}^{\prime}_{x}\longrightarrow\mathcal{L}^{\prime}_{y}\) induce a morphism of the Mayer-Vietoris sequences of the constrained cohomology
\[(\theta^{y}_{x},\theta^{\prime y}_{x})_{*}:\quad\mathbf{MV}^{*}(\mathcal{L}_{ x},\mathcal{L}^{\prime}_{x},\omega,m)\longrightarrow\mathbf{MV}^{*}(\mathcal{L}_{y},\mathcal{L}^{\prime}_{y},\omega,m). \tag{5.8}\]
We call the family \(\{\mathbf{MV}^{*}(\mathcal{L}_{x},\mathcal{L}^{\prime}_{x},\omega,m)\}_{x\in\mathbb{R}}\) together with the family of morphisms given by (5.8) the _persistent Mayer-Vietoris sequence_ of the constrained persistent cohomology for the filtrations \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\) and \(\{\mathcal{L}^{\prime}_{x}\}_{x\in\mathbb{R}}\) of independence hypergraphs and denote it as
\[\mathbf{PMV}^{*}(\mathcal{L}_{x},\mathcal{L}^{\prime}_{x},\omega,m\mid x\in \mathbb{R}).\]
Proof of Theorem 1.1.: Take the maps on the set of vertices as the identity map \(\mathrm{id}_{V}:V\longrightarrow V\). Consider the filtrations of simplicial complexes and independence hypergraphs with respect to \(\mathrm{id}_{V}\). The diagram in Proposition 4.1 (1) is functorial with respect to simplicial maps. By taking the persistence of the constrained homology, (1) follows from Proposition 4.1 (1). The diagram in Proposition 4.1 (2) is functorial with respect to morphisms of independence hypergraphs induced by bijective maps between the vertex sets. By taking the persistence of the constrained cohomology, (2) follows from Proposition 4.1 (2).
Proof of Corollary 1.2.: Suppose \(V\) is a finite set. Then a filtration \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\) of simplicial complexes gives a filtration \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\), where \(\mathcal{L}_{x}=\Delta[V]\setminus\mathcal{K}_{-x}\), of independence hypergraphs. Conversely, a filtration \(\{\mathcal{L}_{x}\}_{x\in\mathbb{R}}\) of independence hypergraphs also gives a filtration \(\{\mathcal{K}_{x}\}_{x\in\mathbb{R}}\), where \(\mathcal{K}_{x}=\Delta[V]\setminus\mathcal{L}_{-x}\), of simplicial complexes. The corollary follows.
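The complement duality invoked in this proof is easy to make concrete. The following Python sketch (a toy illustration, with a vertex set and filtration chosen here purely for demonstration) checks that the complements of a filtration of simplicial complexes inside \(\Delta[V]\) are nested in the reverse order and are upward closed, i.e., form a filtration of independence hypergraphs.

```python
from itertools import combinations

V = (0, 1, 2, 3)
# Delta[V]: all nonempty subsets of V.
delta = {frozenset(s) for r in range(1, len(V) + 1)
         for s in combinations(V, r)}

def closure(faces):
    """Downward closure: every nonempty subset of a face is a face."""
    out = set()
    for f in faces:
        for r in range(1, len(f) + 1):
            out.update(frozenset(s) for s in combinations(sorted(f), r))
    return out

# A toy filtration of simplicial complexes at times x = 0, 1, 2.
K = {0: closure([{0}, {1}, {2}, {3}]),
     1: closure([{0, 1}, {1, 2}, {3}]),
     2: closure([{0, 1, 2}, {2, 3}])}
assert K[0] <= K[1] <= K[2]              # canonical inclusions K_x -> K_y

# L_x := Delta[V] \ K_{-x} reverses the nesting, so {L_x} is again a
# filtration, now of independence hypergraphs (upward-closed families).
L = {-x: delta - K[x] for x in K}
assert L[-2] <= L[-1] <= L[0]
for Lx in L.values():
    assert all((f | {v}) in Lx for f in Lx for v in V)   # upward closed
```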
|
2309.09350 | Efficient Quantum Algorithm for All Quantum Wavelet Transforms | Wavelet transforms are widely used in various fields of science and
engineering as a mathematical tool with features that reveal information
ignored by the Fourier transform. Unlike the Fourier transform, which is
unique, a wavelet transform is specified by a sequence of numbers associated
with the type of wavelet used and an order parameter specifying the length of
the sequence. While the quantum Fourier transform, a quantum analog of the
classical Fourier transform, has been pivotal in quantum computing, prior works
on quantum wavelet transforms~(QWTs) were limited to the second and fourth
order of a particular wavelet, the Daubechies wavelet. Here we develop a simple
yet efficient quantum algorithm for executing any wavelet transform on a
quantum computer. Our approach is to decompose the kernel matrix of a wavelet
transform as a linear combination of unitaries (LCU) that are compilable by
easy-to-implement modular quantum arithmetic operations and use the LCU
technique to construct a probabilistic procedure to implement a QWT with a
\textit{known} success probability. We then use properties of wavelets to make
this approach deterministic by a few executions of the amplitude amplification
strategy. We extend our approach to a multilevel wavelet transform and a
generalized version, the packet wavelet transform, establishing computational
complexities in terms of three parameters: the wavelet order $M$, the dimension
$N$ of the transformation matrix, and the transformation level $d$. We show the
cost is logarithmic in $N$, linear in $d$ and superlinear in $M$. Moreover, we
show the cost is independent of $M$ for practical applications. Our proposed
quantum wavelet transforms could be used in quantum computing algorithms in a
similar manner to their well-established counterpart, the quantum Fourier
transform. | Mohsen Bagherimehrab, Alan Aspuru-Guzik | 2023-09-17T19:02:08Z | http://arxiv.org/abs/2309.09350v2 | # Efficient Quantum Algorithm for All Quantum Wavelet Transforms
###### Abstract
Wavelet transforms are widely used in various fields of science and engineering as a mathematical tool with features that reveal information ignored by the Fourier transform. Unlike the Fourier transform, which is unique, a wavelet transform is specified by a sequence of numbers associated with the type of wavelet used and an order parameter specifying the length of the sequence. While the quantum Fourier transform, a quantum analog of the classical Fourier transform, has been pivotal in quantum computing, prior works on quantum wavelet transforms (QWTs) were limited to the second and fourth order of a particular wavelet, the Daubechies wavelet. Here we develop a simple yet efficient quantum algorithm for executing any wavelet transform on a quantum computer. Our approach is to decompose the kernel matrix of a wavelet transform as a linear combination of unitaries (LCU) that are compilable by easy-to-implement modular quantum arithmetic operations and use the LCU technique to construct a probabilistic procedure to implement a QWT with a _known_ success probability. We then use properties of wavelets to make this approach deterministic by a single execution of the amplitude amplification strategy. We extend our approach to a multilevel wavelet transform and a generalized version, the packet wavelet transform, establishing computational complexities in terms of three parameters: the wavelet order \(M\), the dimension \(N\) of the transformation matrix, and the transformation level \(d\). We show the cost is logarithmic in \(N\), linear in \(d\) and quasilinear in \(M\). Our proposed quantum wavelet transforms could be used in quantum computing algorithms in a similar manner to their well-established counterpart, the quantum Fourier transform.
## I Introduction
As a solid alternative to the Fourier transform, wavelet transforms are a relatively new mathematical tool with diverse utility that has generated much interest in various fields of science and engineering over the past four decades. Wavelet-like functions have existed for over a century; a prominent example is what is now known as the Haar wavelet. The more recent interest, however, is due to the attractive features of wavelets [1; 2; 3; 4]. Such functions are differentiable, up to a particular order, and are local in both the real and dual spaces. They provide an exact representation for polynomials up to a certain order, and a simple yet optimal preconditioner for a large class of differential operators. Crucially, wavelets provide structured and sparse representations for vectors, functions, or operators, enabling data compression and constructing faster algorithms. These appealing features of wavelets and their associated transforms make them advantageous for numerous applications in classical computing over their established counterpart, the Fourier transform.
With the wavelet transforms' diverse utility and extensive use in classical computing, a natural expectation is that a quantum analog of such transforms will find applications in quantum computing, especially for developing faster quantum algorithms and quantum data compression. Wavelets have already been used in quantum physics and computation [5; 6; 7; 8; 9; 10; 11; 12]. However, prior works on developing a quantum analog for wavelet transforms are limited to a few representative cases [13; 14; 15; 16]. In contrast, the quantum Fourier transform, a quantum analog of the classical Fourier transform, has been extensively used in quantum computing as a critical subroutine for many quantum algorithms.
Unlike the Fourier transform, a wavelet transform is not unique and is specified by the type of wavelet used and an order parameter. In particular, a wavelet transform is defined by a sequence of numbers, known as the filter coefficients, associated with the type of wavelet used and an even number known as the order of the wavelet that specifies the length of the sequence. Given the sequence, a unitary matrix known as the kernel matrix of the wavelet transform is constructed, the application of which on a vector yields the single-level wavelet transform of the vector. Such a transform partitions the vector into two components: a low-frequency or average component and high-frequency or difference component. To expose the multi-scale structure of the vector, or a function for that matter, the wavelet transform is recursively applied to the low-frequency component, yielding the multi-level wavelet transform of the vector. The wavelet packet transform is a generalization of the multi-level wavelet transform,
in which the wavelet transform is recursively applied to both the low- and high-frequency components. We refer to a quantum analog of the (single-) multi-level and packet wavelet transforms as the (single-) multi-level and packet QWTs, respectively.
This paper proposes and analyzes a conceptually simple and computationally efficient quantum algorithm for executing single-level, multi-level, and packet QWTs associated with any wavelet and any order on a quantum computer. Our approach is based on decomposing a unitary associated with a wavelet transform in terms of a linear combination of a finite number of simple-to-implement unitaries and using the linear combination of unitaries (LCU) technique [17] to implement the original unitary. Specifically, we decompose the kernel matrix of the wavelet transform, associated with a wavelet of order \(M\), as a linear combination of \(M\) simple-to-implement unitaries and, by the LCU technique, construct a probabilistic procedure for implementing the single-level QWT. The success probability of this approach is \(1/2\) by properties of the wavelet filters. We use this known success probability to make the implementation deterministic using a single ancilla qubit and a single amplitude amplification.
Having an implementation for the single-level QWT and recursive formulae describing the multi-level and packet wavelet transforms based on single-level transforms, we construct quantum algorithms for multi-level and packet QWTs. We establish the computational complexity of these transformations in terms of three parameters: the wavelet order \(M\), the dimension of the wavelet-transform matrix \(N\), and the level of the wavelet transform \(d\). Without loss of generality, we assume that our main parameter of interest \(N\) is a power of two, as \(N=2^{n}\), and report the computational costs with respect to \(n\), the number of qubits that the wavelet transforms act on.
We summarize our main results on computational costs of the described transformations in the following three theorems. We establish these theorems in subsequent sections after providing a detailed description of our algorithms.
**Theorem 1** (Single-level QWT with logarithmic gate cost).: _A single-level QWT on \(n\) qubits, associated with a wavelet of order \(M\), can be implemented using \(\lceil\log_{2}M\rceil+1\) ancilla qubits and \(\mathcal{O}(n)+\mathcal{O}(M\log_{2}M)\) toffoli and elementary one- and two-qubit gates._
**Theorem 2** (Multi-level QWT with multiplicative gate cost).: _A d-level QWT on \(n\) qubits can be achieved using \(\lceil\log_{2}M\rceil+2\) ancilla qubits and \(\mathcal{O}(dn)+\mathcal{O}(dM\log_{2}M)\) toffoli and elementary one- and two-qubit gates._
**Theorem 3** (Packet QWT).: _A d-level packet QWT on \(n\) qubits can be achieved using \(\lceil\log_{2}M\rceil+1\) ancilla qubits and \(\mathcal{O}(dn-d^{2}/2)+\mathcal{O}(dM\log_{2}M)\) toffoli and elementary one- and two-qubit gates._
We remark that the number of levels for the multi-level or packet QWTs is upper bounded by \(n\), i.e., \(d\leq n\). Hence, as a corollary of Theorems 2 and 3, the gate cost for these transformations is at most quadratic in \(n=\log_{2}N\). We discuss allowable range for the order parameter \(M\) versus the values used in practical applications in the discussion section.
The rest of this paper proceeds as follows. We begin by describing the notation we use throughout the paper. Then we detail our approach for implementing a single-level QWT by simple modular arithmetic operations in §II. We describe the multi-level and packet QWT in §III, followed by detailed complexity analysis for our algorithms in §IV. Finally, we discuss our results and conclude in §V.
**Notation**: We refer to \(A\in\mathbb{C}^{2^{n}\times 2^{n}}\) as an \(n\)-qubit matrix and denote the \(n\)-qubit identity by \(\mathbb{1}_{n}\). Throughout the paper, we use the symbol \(M\) for the wavelet order and \(m=\lceil\log_{2}M\rceil\). The wavelet order is an even positive number as \(M=2\mathcal{K}\) with \(\mathcal{K}\) a positive integer called the wavelet index; the symbol \(\mathcal{K}\) is used for \(M/2\). We use zero indexing for iterable mathematical objects such as vectors and matrices. Qubits of an \(n\)-qubit register are ordered from right to left, i.e., the rightmost (leftmost) qubit in \(|q_{n-1},\dots,q_{1},q_{0}\rangle\) representing the state of an \(n\)-qubit register that encodes the binary representation of an integer \(q\) is the first (last) qubit. The first and last qubits are also referred to as the least-significant bit (LSB) and the most-significant bit (MSB). Qubits in a quantum circuit are ordered from bottom to top: the bottom qubit is the LSB and the top qubit is the MSB.
## II Single-level QWT
This section describes our algorithm for executing a single-level wavelet transform on a quantum computer. Such a transformation is specified by a kernel matrix. We describe this matrix in §II.1 and decompose it as a linear combination of a finite number of unitaries. The decomposition enables a prepare-select-unprepare-style procedure for probabilistic implementation of the desired transformation that we cover in §II.2. In §II.3, we describe how purposefully reducing the success probability yields a perfect amplitude amplification. Finally, in §II.4 and §II.5, we provide a compilation for the select and prepare operations based on simple-to-implement modular arithmetic operations.
### The wavelet kernel matrix as a linear combination of unitaries
We begin this subsection by briefly describing the kernel matrix associated with a wavelet transform. We refer to [4, Chap. 2.1] for a review of wavelet formalism and how this matrix is constructed. The kernel matrix \(W\) of a wavelet transform is specified
by the wavelet filter coefficients: a sequence of numbers \((h_{0},h_{1},\ldots,h_{M-1})\) that depend on the type of wavelet and satisfy
\[\sum_{\ell=0}^{M-1}h_{\ell}=\sqrt{2},\quad\sum_{\ell=0}^{M-1}h_{\ell}^{2}=1, \tag{1}\]
where the even number \(M\) is the wavelet order. Specifically, the \(2^{n}\times 2^{n}\) kernel matrix \(W\) is comprised of \(2^{n-1}\times 2^{n}\) matrices \(H\) and \(G\) as
\[W=\begin{bmatrix}H\\ G\end{bmatrix},\quad H_{ij}=h_{j-2i\,(\bmod\,2^{n})},\quad G_{ij}=(-1)^{j}\,h_{M-1+2i-j\,(\bmod\,2^{n})}, \tag{2}\]
where we set \(h_{k}:=0\) for \(M\leq k<2^{n}\); an example of the kernel matrix \(W\) for a fourth-order wavelet \((M=4)\) is as follows
\[W=\begin{bmatrix}h_{0}&h_{1}&h_{2}&h_{3}&0&0&\cdots&0&0&0&0\\ 0&0&h_{0}&h_{1}&h_{2}&h_{3}&\cdots&0&0&0&0\\ 0&0&0&0&h_{0}&h_{1}&\cdots&0&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots& \vdots\\ 0&0&0&0&0&\cdots&h_{0}&h_{1}&h_{2}&h_{3}\\ h_{2}&h_{3}&0&0&0&\cdots&0&0&h_{0}&h_{1}\\ h_{3}&-h_{2}&h_{1}&-h_{0}&0&0&\cdots&0&0&0&0\\ 0&0&h_{3}&-h_{2}&h_{1}&-h_{0}&\cdots&0&0&0&0\\ 0&0&0&0&h_{3}&-h_{2}&\cdots&0&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots& \vdots\\ 0&0&0&0&0&\cdots&h_{3}&-h_{2}&h_{1}&-h_{0}\\ h_{1}&-h_{0}&0&0&0&\cdots&0&0&h_{3}&-h_{2}\end{bmatrix},\ U=\begin{bmatrix}h_{0}&h_{1}&h_{2}&h_{3}&0&0& \cdots&0&0&0&0\\ 0&0&h_{0}&h_{1}&h_{2}&h_{3}&\cdots&0&0&0&0\\ 0&0&0&0&h_{0}&h_{1}&\cdots&0&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots& \vdots\\ 0&0&0&0&0&0&\cdots&h_{0}&h_{1}&h_{2}&h_{3}\\ h_{2}&h_{3}&0&0&0&0&\cdots&0&0&h_{0}&h_{1}\\ h_{1}&-h_{0}&0&0&\cdots&0&0&0&0&h_{3}&-h_{2}\\ h_{3}&-h_{2}&h_{1}&-h_{0}&\cdots&0&0&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots& \vdots\\ 0&0&0&0&\cdots&h_{1}&-h_{0}&0&0&0\\ 0&0&0&0&\cdots&h_{3}&-h_{2}&h_{1}&-h_{0}\end{bmatrix} \tag{3}\]
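As a concrete instance of Eq. (2), the following minimal numpy sketch (an illustration; the filter values are the standard fourth-order Daubechies coefficients) assembles \(W\) for \(n=3\) and checks that it is orthogonal.

```python
import numpy as np

n = 3
N, M = 2 ** n, 4
sq3 = np.sqrt(3.0)
# Daubechies D4 filter: sum h = sqrt(2) and sum h^2 = 1, as in Eq. (1).
h = np.array([1 + sq3, 3 + sq3, 3 - sq3, 1 - sq3]) / (4 * np.sqrt(2))

def filt(k):
    """h_k with h_k := 0 outside 0..M-1 (index taken mod 2^n)."""
    k %= N
    return h[k] if k < M else 0.0

W = np.zeros((N, N))
for i in range(N // 2):
    for j in range(N):
        W[i, j] = filt(j - 2 * i)                               # H block of Eq. (2)
        W[N // 2 + i, j] = (-1) ** j * filt(M - 1 + 2 * i - j)  # G block of Eq. (2)

assert np.allclose(W @ W.T, np.eye(N))  # the kernel matrix is orthogonal
```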
The unitary matrix \(U\) here is a modification of the unitary \(W\) that we use for decomposing \(W\) as a linear combination of unitaries. To this end, let us first define the circular downshift and upshift permutation operations as
\[S_{n}^{\downarrow}:=\sum_{j=0}^{2^{n}-1}|j+1\bmod 2^{n}\rangle\!\langle j|=\begin{bmatrix}&&&&1\\ 1&&&&\\ &1&&&\\ &&\ddots&&\\ &&&1&\end{bmatrix},\quad S_{n}^{\uparrow}:=\sum_{j=0}^{2^{n}-1}|j-1\bmod 2^{n}\rangle\!\langle j|=\begin{bmatrix}&1&&&\\ &&1&&\\ &&&\ddots&\\ &&&&1\\ 1&&&&\end{bmatrix}, \tag{4}\]
where the matrix size is \(2^{n}\times 2^{n}\). Note that these operations are inverses of each other and their action on the \(n\)-qubit basis state \(|j\rangle\) is
\[S_{n}^{\downarrow}\,|j\rangle=|j+1\bmod 2^{n}\rangle\,,\quad S_{n}^{ \uparrow}\,|j\rangle=|j-1\bmod 2^{n}\rangle\,. \tag{5}\]
Upon acting on a vector with \(2^{n}\) components, \(S_{n}^{\downarrow}/S_{n}^{\uparrow}\) shifts the vector's components one place downward/upward with wraparound. Similarly, when acting on a matrix with \(2^{n}\) rows from the left side, \(S_{n}^{\downarrow}/S_{n}^{\uparrow}\) shifts the rows of the matrix one place downward/upward with wraparound.
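For intuition, both permutations are one-line operations in numpy; the following minimal sketch (added for illustration) verifies the stated actions.

```python
import numpy as np

N = 8
S_down = np.roll(np.eye(N), 1, axis=0)  # subdiagonal ones plus top-right corner
S_up = S_down.T                         # superdiagonal ones plus bottom-left corner

v = np.arange(N)
assert np.array_equal(S_down @ v, np.roll(v, 1))   # components shift downward
assert np.array_equal(S_up @ v, np.roll(v, -1))    # components shift upward
assert np.allclose(S_up @ S_down, np.eye(N))       # inverses of each other
```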
To construct an LCU decomposition for the \(n\)-qubit unitary \(W\), the kernel matrix associated with a wavelet of order \(M=2\mathcal{K}\), first we transform it into another unitary \(U\) by \(\mathcal{K}-1\) downshift permutations of the rows in the lower half of \(W\). Specifically, we transform \(W\) as
\[W=\begin{bmatrix}H\\ G\end{bmatrix}\to U=\begin{bmatrix}H\\ G^{\prime}\end{bmatrix},\quad G^{\prime}:=(S_{n-1}^{\downarrow})^{\mathcal{K}- 1}G, \tag{6}\]
where \(G^{\prime}\) is obtained by \(\mathcal{K}-1\) downshift permutations of the rows of \(G\) and its elements are
\[G^{\prime}_{i,j}=(-1)^{j}h_{2i+1-j\,(\bmod\,2^{n})}. \tag{7}\]
Let us now represent \(\mathcal{K}-1\) upshift permutations on \(n\) qubits by ushift\({}_{n}\) with the action
\[\textsc{ushift}_{n}\,|j\rangle:=|j-\mathcal{K}+1\bmod 2^{n}\rangle \tag{8}\]
on \(n\)-qubit basis state \(|j\rangle\). Then we have
\[W=(|0\rangle\!\langle 0|\otimes\mathbb{1}_{n-1}+|1\rangle\!\langle 1|\otimes\textsc{ushift}_{n-1})U=\Lambda_{1}(\textsc{ushift}_{n-1})U, \tag{9}\]
i.e., \(W\) is obtained by \(\mathcal{K}-1\) upshift permutations of the rows in the lower half of \(U\).
We now decompose the unitary \(U\) as a linear combination of \(M\) unitaries as
\[U=\sum_{\ell=0}^{M-1}h_{\ell}U_{\ell},\quad U_{\ell}:=\begin{cases}P_{\ell}& \text{if $\ell$ is odd},\\ (Z\otimes\mathbb{1}_{n-1})P_{\ell}&\text{if $\ell$ is even},\end{cases} \tag{10}\]
where \(Z\) is the Pauli-\(Z\) operator and the unitary
\[P_{\ell}:=\sum_{j=0}^{N/2-1}|j\rangle\!\langle 2j+\ell\bmod N|+|N/2+j\rangle\! \langle 2j+1-\ell\bmod N| \tag{11}\]
is a permutation matrix that is obtained from \(U\) as follows: all entries of \(U\) with value \(\pm h_{\ell}\) are replaced with \(1\) and all other nonzero entries are replaced with \(0\). Because \(W\) is unitarily equivalent to \(U\) by Eq. (9), the LCU decomposition in Eq. (10) provides a similar LCU decomposition for \(W\).
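The following self-contained numpy sketch (again an illustration for the fourth-order Daubechies filter) verifies this chain of constructions numerically: it builds \(U\) from \(W\) by \(\mathcal{K}-1\) downshifts of the lower half as in Eq. (6), assembles the permutations \(P_{\ell}\) of Eq. (11), and checks both the LCU decomposition of Eq. (10) and the recovery of \(W\) in Eq. (9).

```python
import numpy as np

n = 3
N, M = 2 ** n, 4
K = M // 2
sq3 = np.sqrt(3.0)
h = np.array([1 + sq3, 3 + sq3, 3 - sq3, 1 - sq3]) / (4 * np.sqrt(2))

def filt(k):
    k %= N
    return h[k] if k < M else 0.0

W = np.zeros((N, N))
for i in range(N // 2):
    for j in range(N):
        W[i, j] = filt(j - 2 * i)
        W[N // 2 + i, j] = (-1) ** j * filt(M - 1 + 2 * i - j)

# Eq. (6): downshift the lower half of W by K-1 rows to obtain U.
U = W.copy()
U[N // 2:] = np.roll(W[N // 2:], K - 1, axis=0)

def P(l):
    """The permutation P_l of Eq. (11)."""
    mat = np.zeros((N, N))
    for j in range(N // 2):
        mat[j, (2 * j + l) % N] = 1
        mat[N // 2 + j, (2 * j + 1 - l) % N] = 1
    return mat

Z_top = np.diag([1.0] * (N // 2) + [-1.0] * (N // 2))  # Z on the MSB qubit
U_l = [P(l) if l % 2 else Z_top @ P(l) for l in range(M)]

assert np.allclose(U, sum(h[l] * U_l[l] for l in range(M)))  # Eq. (10)

# Eq. (9): K-1 upshifts of the lower half of U recover W.
W_rec = U.copy()
W_rec[N // 2:] = np.roll(U[N // 2:], -(K - 1), axis=0)
assert np.allclose(W_rec, W)
```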
### Probabilistic implementation for the single-level QWT
The decomposition in Eq. (10) enables a prepare-select-unprepare-style method [17] for probabilistic implementation of \(U\). To this end, let us define
\[\textsc{prep}\ket{0^{m}}:=\frac{1}{\sqrt[4]{2}}\sum_{\ell=0}^{M-1}\sqrt{h_{ \ell}}\ket{\ell}, \tag{12}\]
where \(m=\lceil\log_{2}M\rceil\) is the number of ancilla qubits, and let select be an operation such that
\[\textsc{select}\ket{\ell}\ket{j}:=\ket{\ell}U_{\ell}\ket{j} \tag{13}\]
with \(U_{\ell}\) defined in Eq. (10). Then for any \(n\)-qubit state we have
\[(\textsc{prep}^{\dagger}\otimes\mathbb{1}_{n})\,\textsc{select}\,(\textsc{ prep}\otimes\mathbb{1}_{n})\ket{0^{m}}\ket{\psi}\,=\frac{1}{\sqrt{2}}\ket{0^{m}}U \ket{\psi}+\frac{1}{\sqrt{2}}\ket{\bot}, \tag{14}\]
where \(\ket{\bot}\) is an (\(m+n\))-qubit state such that \((\bra{0^{m}}\otimes\mathbb{1}_{n})\ket{\bot}=0\). This equation follows as
\[\ket{0^{m}}\ket{\psi}\xrightarrow{\;\textsc{prep}\otimes\mathbb{1}_{n}\;}\frac{1}{\sqrt[4]{2}}\sum_{\ell}\sqrt{h_{\ell}}\ket{\ell}\ket{\psi} \tag{15}\]
\[\xrightarrow{\;\textsc{select}\;}\frac{1}{\sqrt[4]{2}}\sum_{\ell}\sqrt{h_{\ell}}\ket{\ell}U_{\ell}\ket{\psi} \tag{16}\]
\[\xrightarrow{\;\textsc{prep}^{\dagger}\otimes\mathbb{1}_{n}\;}\frac{1}{\sqrt{2}}\ket{0^{m}}U\ket{\psi}+\frac{1}{\sqrt{2}}\ket{\bot}, \tag{17}\]
where the last line follows by projecting the ancilla qubits onto the \(\ket{0^{m}}\) state, i.e.,
\[(\bra{0^{m}}\otimes\mathbb{1}_{n})(\textsc{prep}^{\dagger}\otimes\mathbb{1}_{n})\frac{1}{\sqrt[4]{2}}\sum_{\ell}\sqrt{h_{\ell}}\ket{\ell}U_{\ell}\ket{\psi}=\frac{1}{\sqrt{2}}\sum_{\ell^{\prime},\ell}\sqrt{h_{\ell^{\prime}}h_{\ell}}\,\langle\ell^{\prime}|\ell\rangle\,U_{\ell}\ket{\psi} \tag{18}\]
\[=\frac{1}{\sqrt{2}}\sum_{\ell}h_{\ell}U_{\ell}\ket{\psi}=\frac{1}{\sqrt{2}}U\ket{\psi}. \tag{19}\]
Equation (14) yields a probabilistic implementation for \(U\). Because \(U\) and \(W\) are unitarily equivalent, by Eq. (9), we also have a probabilistic implementation for \(W\) with the same success probability. In particular, let us define a probabilistic QWT as
\[\textsc{pqwt}:=(\mathbb{1}_{m}\otimes\Lambda_{1}(\textsc{ushift}_{n-1}))( \textsc{prep}^{\dagger}\otimes\mathbb{1}_{n})\,\textsc{select}\,(\textsc{ prep}\otimes\mathbb{1}_{n}) \tag{20}\]
then we have
\[\textsc{pqwt}\ket{0^{m}}\ket{\psi}=\frac{1}{\sqrt{2}}\ket{0^{m}}W\ket{\psi}+ \frac{1}{\sqrt{2}}\ket{\bot^{\prime}}, \tag{21}\]
with the \((m+n)\)-qubit state \(\ket{\bot^{\prime}}:=\mathbb{1}_{m}\otimes\Lambda_{1}(\textsc{ushift}_{n-1}) \ket{\bot}\) and the \(\ket{1}\)-controlled unitary \(\Lambda_{1}(\textsc{ushift})\) defined in Eq. (9).
### Reduction of success probability for perfect amplitude amplification
The success probability of the described probabilistic approach for implementing the single-level QWT is \(1/2\). For perfect amplitude amplification, we purposefully reduce the success probability to \(1/4\) using one extra ancilla qubit. This end is achieved by applying a Hadamard gate \(H\) on the extra qubit initialized in \(\ket{0}\). A single round of amplitude amplification then yields the success state with unit probability.
Specifically, by Eq. (21) and for any \(n\)-qubit state \(\ket{\psi}\), we have
\[(H\otimes\textsc{pqwt})\ket{0}\ket{0^{m}}\ket{\psi}=\sin(\pi/6)\ket{0^{m+1}}W \ket{\psi}+\cos(\pi/6)\ket{\bot^{\prime\prime}} \tag{22}\]
where \(\ket{\bot^{\prime\prime}}\) is an \((m+n+1)\)-qubit state that satisfies \((\bra{0^{m+1}}\otimes\mathbb{1}_{n})\ket{\bot^{\prime\prime}}=0\). The success probability is now \(1/4\), enabling perfect amplitude amplification. Indeed, by only a single amplitude amplification, \(W\) is applied on \(\ket{\psi}\) and all \(m+1\) ancilla qubits end up in the all-zero state.
We use the oblivious amplitude amplification because the input state \(\ket{\psi}\) is unknown. To this end, let \(R_{n}=2\ket{0^{n}}\!\!\bra{0^{n}}-\mathbb{1}_{n}\) be the \(n\)-qubit reflection operator with respect to the \(n\)-qubit zero state \(\ket{0^{n}}\) and let
\[\mathcal{A}:=-(H\otimes\textsc{pqwt})(R_{m+1}\otimes\mathbb{1}_{n})(H\otimes \textsc{pqwt})^{\dagger}(R_{m+1}\otimes\mathbb{1}_{n}), \tag{23}\]
be the amplitude amplification operator. Then the following holds [18, Lemma 2.2]
\[\mathcal{A}^{t}(H\otimes\textsc{pqwt})\ket{0}\ket{0^{m}}\ket{\psi}=\sin((2t+1) \pi/6)\ket{0^{m+1}}W\ket{\psi}+\cos((2t+1)\pi/6)\ket{\bot^{\prime\prime}}. \tag{24}\]
Therefore, the unit success probability is achieved by a single execution of amplitude amplification (\(t=1\)).
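A minimal end-to-end simulation of Eqs. (20)-(24) fits in a few dozen lines of numpy. The sketch below is an illustration under simplifying assumptions: it uses the Haar wavelet (\(M=2\), so \(m=1\), all \(h_{\ell}>0\), and \(\mathcal{K}-1=0\) makes the controlled-ushift trivial) on \(n=2\) qubits, and checks that a single amplification round maps \(\ket{0}\ket{0^{m}}\ket{\psi}\) to \(\ket{0^{m+1}}W\ket{\psi}\) exactly.

```python
import numpy as np

n, N, m, M = 2, 4, 1, 2
h = np.array([1.0, 1.0]) / np.sqrt(2)       # Haar filter; all entries positive

def P(l):                                   # Eq. (11)
    mat = np.zeros((N, N))
    for j in range(N // 2):
        mat[j, (2 * j + l) % N] = 1
        mat[N // 2 + j, (2 * j + 1 - l) % N] = 1
    return mat

Z_top = np.diag([1.0, 1.0, -1.0, -1.0])
U_l = [Z_top @ P(0), P(1)]                  # Eq. (10)
W = sum(h[l] * U_l[l] for l in range(M))    # for K = 1, W = U

c = np.sqrt(h) / 2 ** 0.25                  # prep amplitudes, Eq. (12)
prep = np.array([[c[0], -c[1]], [c[1], c[0]]])  # a unitary completion of column 0

select = np.zeros((M * N, M * N))           # Eq. (13), block diagonal
for l in range(M):
    select[l * N:(l + 1) * N, l * N:(l + 1) * N] = U_l[l]

# All matrices are real, so transpose serves as the adjoint below.
pqwt = np.kron(prep.T, np.eye(N)) @ select @ np.kron(prep, np.eye(N))

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
V = np.kron(H2, pqwt)                       # Eq. (22): success amplitude sin(pi/6)

e0 = np.zeros(2 ** (m + 1)); e0[0] = 1.0
R = 2 * np.outer(e0, e0) - np.eye(2 ** (m + 1))
A = -V @ np.kron(R, np.eye(N)) @ V.T @ np.kron(R, np.eye(N))   # Eq. (23)

rng = np.random.default_rng(7)
psi = rng.standard_normal(N); psi /= np.linalg.norm(psi)
start = np.zeros(2 ** (m + 1) * N); start[:N] = psi            # |0>|0^m>|psi>

out = A @ V @ start                         # one amplification round, t = 1
assert np.allclose(out[:N], W @ psi)        # Eq. (24): |0^{m+1}> (x) W|psi>
assert np.allclose(out[N:], 0.0)
```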
### Implementing select by modular quantum arithmetic
Here we describe our approach for implementing the select operation by simple modular arithmetic operations on a quantum computer. As per Eq. (13), select applies \(U_{\ell}\) on the second register \(\ket{j}\) based on the value of \(\ell\) encoded in the first register \(\ket{\ell}\). If \(\ell\) is odd, then \(U_{\ell}=P_{\ell}\) by Eq. (10). Otherwise, \(U_{\ell}\) is a product of \(P_{\ell}\) and a single Pauli-\(Z\) on the qubit representing the MSB of the second register. That is to say that \(U_{\ell}\) and \(P_{\ell}\) are equivalent up to a \(\ket{0}\)-controlled-\(Z\) operation; the control qubit is the qubit representing the least-significant bit (LSB) of \(\ell\) and the target qubit is the one representing the most-significant bit (MSB) of \(j\). Implementing select is therefore achieved by an implementation for \(P_{\ell}\).
The \(n\)-qubit permutation \(P_{\ell}\) in Eq. (11) transforms the basis state \(\ket{j}\) as
\[P_{\ell}:\ket{j}\mapsto\begin{cases}\left|\frac{j-\ell\bmod N}{2}\right\rangle &\text{if $j$ and $\ell$ have same parity,}\\ \left|\frac{N}{2}+\frac{j+\ell-1\bmod N}{2}\right\rangle&\text{if $j$ and $\ell$ have opposite parity.}\end{cases} \tag{25}\]
This transformation can be implemented by modular quantum addition add and subtraction sub defined as
\[\textsc{add}\ket{\ell}\ket{j}:=\ket{\ell}\ket{j+\ell\bmod N},\quad\textsc{ sub}\ket{\ell}\ket{j}:=\ket{\ell}\ket{j-\ell\bmod N}, \tag{26}\]
and the perfect shuffle transformation \(\Pi\) defined as
\[\textsc{shuffle}\ket{q_{n-1}\ldots q_{1}q_{0}}:=\ket{q_{0}q_{n-1}\ldots q_{1}}, \tag{27}\]
which performs \(\ket{q}\mapsto\ket{q/2}\) if \(q\) is an even number and \(\ket{q}\mapsto\ket{N/2+(q-1)/2}\) if \(q\) is odd. To implement \(P_{\ell}\) by these operations, we use a single ancilla qubit called the parity qubit and define the parity operation par as
\[\textsc{par}\ket{0}\ket{\ell}\ket{j}:=\begin{cases}\ket{0}\ket{\ell}\ket{j}& \text{if $j$ and $\ell$ have same parity,}\\ \ket{1}\ket{\ell}\ket{j}&\text{if $j$ and $\ell$ have opposite parity,}\end{cases} \tag{28}\]
which flips the parity qubit based on the parity of \(\ell\) and \(j\); the parity of a number is \(0\) if it is even and \(1\) otherwise. This operation can be implemented using two cnot gates, one controlled on the LSB of the register encoding \(\ell\) and the other controlled on the LSB of the register encoding \(j\). The target qubit for each cnot is the parity qubit.
Having computed the parity by par, we then apply sub to the last two registers if the parity qubit is \(\ket{0}\) and apply add to these registers if the parity is \(\ket{1}\), followed by the shuffle operation in Eq. (27) on the last register. By these operations, the state of the parity qubit and the other two registers transform as
\[\textsc{par}\ket{0}\ket{\ell}\ket{j}\xrightarrow{\;\Lambda_{0}(\textsc{sub}),\,\Lambda_{1}(\textsc{add})\;}\begin{cases}\ket{0}\ket{\ell}\ket{j-\ell\bmod N}&\text{same parity},\\ \ket{1}\ket{\ell}\ket{j+\ell\bmod N}&\text{opposite parity}\end{cases} \tag{29}\]
\[\xrightarrow{\;\mathbb{1}_{1}\otimes\mathbb{1}_{m}\otimes\textsc{shuffle}\;}\begin{cases}\ket{0}\ket{\ell}\ket{(j-\ell\bmod N)/2}&\text{same parity},\\ \ket{1}\ket{\ell}\ket{N/2+(j+\ell-1\bmod N)/2}&\text{opposite parity}.\end{cases} \tag{30}\]
We finally erase the parity qubit to achieve an implementation for \(P_{\ell}\). To this end, we note that the parity qubit is \(\ket{1}\) only if the value encoded in the last register is at least \(N/2\); see Eq. (25). Hence a cnot from the qubit representing the MSB of the value encoded in the system register to the parity qubit erases this qubit.
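At the level of classical index arithmetic, the route through par, sub/add and shuffle can be cross-checked directly against the permutation definition in Eq. (11); the short Python sketch below (with helper names of our choosing) does so.

```python
N, n = 16, 4

def shuffle(q):
    """Eq. (27): cyclic right-rotation of the n bits of q."""
    return (q >> 1) | ((q & 1) << (n - 1))

def P_action(l, j):
    """Image of basis index j under P_l, computed the par/sub/add/shuffle way."""
    parity = (j ^ l) & 1                         # par of Eq. (28): two CNOTs
    q = (j + l) % N if parity else (j - l) % N   # add if opposite parity, else sub
    return shuffle(q)                            # parity qubit then erased via MSB

# Cross-check against the direct definition of P_l in Eq. (11).
for l in range(4):
    for j in range(N // 2):
        assert P_action(l, (2 * j + l) % N) == j
        assert P_action(l, (2 * j + 1 - l) % N) == N // 2 + j
    assert sorted(P_action(l, j) for j in range(N)) == list(range(N))
```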
The quantum circuit in the dotted-line box in FIG. 1 gives an implementation for the select operation based on the described approach. The sequence of swap gates in this circuit gives a gate-level implementation for shuffle in Eq. (27).
### A compilation for prep by increment and rotation gates
We now provide an implementation for prep in Eq. (12) using the rotation gate, defined as
\[R_{\ell}:=\begin{bmatrix}\cos\theta_{\ell}&\sin\theta_{\ell}\\ -\sin\theta_{\ell}&\cos\theta_{\ell}\end{bmatrix} \tag{31}\]
for some known angle \(\theta_{\ell}\), and the increment gate that preforms the map \(\ket{\ell}\mapsto\ket{\ell+1}\) for \(\ket{\ell}\) an \(m\)-qubit basis state. Notice that the increment gate is indeed the downshift permutation \(S_{m}^{\downarrow}\) defined in Eq. (4) and its inverse is the upshift permutation \(S_{m}^{\uparrow}\).
The prep operation prepares a quantum state with amplitudes given by the wavelet filter \(\mathbf{h}=(h_{0},\dots,h_{M-1})^{\top}\), a column vector of \(M\) real numbers that satisfy Eq. (1). By the procedure given in Ref. [19], the wavelet filter vector \(\mathbf{h}\) of length \(M=2\mathcal{K}\) can be achieved by a sequence of \(\mathcal{K}\) unitaries \(U_{\ell}\) as \(\mathbf{h}=U_{\mathcal{K}-1}\cdots U_{1}U_{0}\,\mathsf{e}_{\mathcal{K}}\), where \(\mathsf{e}_{\ell}\) is the \(\ell\)th column of the \(M\)-by-\(M\) identity matrix and the unitary \(U_{\ell}\) is constructed from rotation gates \(R_{\ell}\) as illustrated in FIG. 2(a). As an example, for \(M=6\) we have
\[\begin{bmatrix}h_{0}\\ h_{1}\\ h_{2}\\ h_{3}\\ h_{4}\\ h_{5}\end{bmatrix}=\begin{bmatrix}c_{2}&s_{2}&&&&\\ -s_{2}&c_{2}&&&&\\ &&c_{2}&s_{2}&&\\ &&-s_{2}&c_{2}&&\\ &&&&c_{2}&s_{2}\\ &&&&-s_{2}&c_{2}\end{bmatrix}\begin{bmatrix}1&&&&&\\ &c_{1}&s_{1}&&&\\ &-s_{1}&c_{1}&&&\\ &&&c_{1}&s_{1}&\\ &&&-s_{1}&c_{1}&\\ &&&&&1\end{bmatrix}\begin{bmatrix}1&&&&&\\ &1&&&&\\ &&c_{0}&s_{0}&&\\ &&-s_{0}&c_{0}&&\\ &&&&1&\\ &&&&&1\end{bmatrix}\begin{bmatrix}0\\ 0\\ 0\\ 1\\ 0\\ 0\end{bmatrix} \tag{32}\]
Figure 1: Equivalent quantum circuits for executing a single-level QWT comprised of high-level operations. Three registers are used: the parity register par (one qubit), the ancilla register anc (\(m\) qubits), and system register sys (\(n\) qubits). The state of sys register is in a superposition of \(\ket{j}\) states for different values of \(j\) and the state of anc register, after applying prep with action given in Eq. (12), is in a superposition of \(\ket{\ell}\) states for different values of \(\ell\). The gates inside the dotted-line box implement the select operation in Eq. (13) as follows. The \(\ket{0}\)-controlled \(Z\) is applied as per Eq. (10). The first two cnots compute the parity of \(j\) and \(\ell\) by their LSB. Then controlled on the parity qubit, we apply sub (parity zero) or add (parity one). The sequence of swap gates implements the shuffle operation in Eq. (27). The subsequent cnot resets the parity qubit to \(\ket{0}\) because the state of par is flipped to \(\ket{1}\) only if \(j\) and \(\ell\) have opposite parity as per Eq. (28); otherwise it stays \(\ket{0}\). If par is \(\ket{1}\), the MSB of sys register is in the state \(\ket{1}\) as the value encoded in sys is at least \(N/2\) by Eq. (25), so the last cnot resets the parity qubit. The cnot has no action if par is \(\ket{0}\). This is because the value encoded in the system register is less than \(N/2\) by Eq. (25) when \(j\) and \(\ell\) have same parity. Consequently, the MSB of sys is \(\ket{0}\), making the last cnot inactive. The controlled-ushift operation, with ushift given in Eq. (8), maps the unitary implemented by select to the single-level QWT \(W\) as in Eq. (9). The Hadamard gate \(H\) is used for amplitude amplification \(\mathcal{A}\) given in Eq. (23). The bottom circuit follows from the top circuit. The amplitude amplification \(\mathcal{A}^{\prime}\) is unitarily equivalent to \(\mathcal{A}\).
where \(c_{\ell}:=\cos\theta_{\ell}\) and \(s_{\ell}:=\sin\theta_{\ell}\).
Having classically precomputed the rotation angles \((\theta_{0},\theta_{1},\dots,\theta_{\mathcal{K}-1})\) by the procedure in Ref. [19], we construct a quantum circuit for prep as follows. Let \(m=\lceil\log_{2}M\rceil\). For \(M\) that is not a power of \(2\), we pad \((2^{m}-M)/2\) zeros from left and right to the wavelet filter vector \(\mathbf{h}\) to have a vector as \((0,\dots,0,h_{0},\dots,h_{M-1},0\dots,0)^{\top}\). Then unitaries \(U_{\ell}\) are modified accordingly so that \(U_{\mathcal{K}-1}\cdots U_{1}U_{0}\,\mathsf{e}_{2^{m-1}}\) yields the modified wavelet filter vector. A diagrammatic representation of this approach is shown in FIG. 2(b) for \(M=6\). For each \(\theta_{\ell}\) with even \(\ell\), first we shift elements of the vector one place to the right, shown in FIG. 2(c) by the right arrow, to be able to apply the rotations in parallel on consequent pairs of the vector elements and then shift the vector elements one place to the left. Because the rotations are in parallel, we can decompose the associated unitary as a tensor product of an identity and a rotation gate as \(\mathbb{1}_{m-1}\otimes R_{\ell}\). Shifting to the right (left) is implemented by the increment gate (inverse of the increment gate) on a quantum computer. The inverse of the increment gate is applied \((2^{m}-M)/2\) times at the end to achieve the desired amplitudes as \((h_{0},\dots,h_{M-1},0\dots,0)^{\top}\). The quantum circuit in FIG. 2(d) illustrates the case where \(M=6\).
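For intuition, the cascade can be replayed numerically. In the sketch below, the block placement is the \(M=4\) analogue of the structure in Eq. (32), and the angles \(\theta_{0}=5\pi/12\) and \(\theta_{1}=\pi/6\) are values we find consistent with this convention rather than quoted from Ref. [19]; under these assumptions the product \(U_{1}U_{0}\,\mathsf{e}_{2}\) reproduces the fourth-order Daubechies filter.

```python
import numpy as np

def rot(theta):
    """The rotation gate of Eq. (31)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def block_diag(*blocks):
    size = sum(b.shape[0] for b in blocks)
    out = np.zeros((size, size))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

# M = 4 instance of the cascade h = U_{K-1}...U_0 e_K: U_0 carries the
# offset (even-l) rotation and U_1 the aligned ones, mirroring Eq. (32).
theta0, theta1 = 5 * np.pi / 12, np.pi / 6   # assumed angles for D4
I1 = np.eye(1)
U0 = block_diag(I1, rot(theta0), I1)
U1 = block_diag(rot(theta1), rot(theta1))

e2 = np.zeros(4); e2[2] = 1.0                # e_K with K = 2, zero indexed
h = U1 @ U0 @ e2

sq3 = np.sqrt(3.0)
d4 = np.array([1 + sq3, 3 + sq3, 3 - sq3, 1 - sq3]) / (4 * np.sqrt(2))
assert np.allclose(h, d4)                    # reproduces the D4 filter
```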
## III Multi-level and packet QWT
We now use our implementation for the single-level QWT as a subroutine and construct quantum algorithms for multi-level and packet QWTs. To this end, let \(W_{n}^{(d)}\) denote the \(d\)-level wavelet transform of size \(2^{n}\times 2^{n}\) and let \(P_{n}^{(d)}\) denote the \(d\)-level wavelet packet transform of the same size. Also let \(W_{n}^{(1)}=W_{n}\) for notational simplicity.
The \(d\)-level wavelet transform can be recursively decomposed as [7, Appendix A]
\[W_{n}^{(d)}=\left(W_{n-1}^{(d-1)}\oplus\mathbb{1}_{n-1}\right)W_{n}. \tag{33}\]
This decomposition follows from the notion of multi-level wavelet transform: at each level, the transformation is only applied on the low-frequency component (i.e., the top part) of the column vector it acts on. The wavelet packet transform, however, acts on both the low- and high-frequency components, so we have the decomposition
\[P_{n}^{(d)}=\left(P_{n-1}^{(d-1)}\oplus P_{n-1}^{(d-1)}\right)W_{n}=\left( \mathbb{1}_{1}\otimes P_{n-1}^{(d-1)}\right)W_{n} \tag{34}\]
for the wavelet packet transform. Equation (33) yields the decomposition
\[W_{n}^{(d)}=\Lambda_{0}^{d-1}(W_{n-d+1})\cdots\Lambda_{0}^{2}(W_{n-2})\Lambda _{0}^{1}(W_{n-1})W_{n}, \tag{35}\]
where
\[\Lambda_{0}^{s}(W_{n-s}):=|0^{s}\rangle\!\langle 0^{s}|\otimes W_{n-s}+( \mathbb{1}_{s}-|0^{s}\rangle\!\langle 0^{s}|)\otimes\mathbb{1}_{n-s} \tag{36}\]
is the \(|0^{s}\rangle\)-controlled unitary operation, for any \(s\in\{1,\dots,d-1\}\). Similarly, Eq. (34) yields the decomposition
\[P_{n}^{(d)}=(\mathbb{1}_{d-1}\otimes W_{n-d+1})\cdots(\mathbb{1}_{2}\otimes W_{n-2})(\mathbb{1}_{1}\otimes W_{n-1})W_{n} \tag{37}\]
for the \(d\)-level wavelet packet transform. These decompositions give a simple procedure for implementing a multi-level and packet QWT shown by the quantum circuits in FIG. 3.
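Classically, both recursions take only a few lines; the numpy sketch below (an illustration reusing the kernel of Eq. (2) and checking only norm preservation) mirrors Eqs. (33) and (34).

```python
import numpy as np

def kernel(n, h):
    """Kernel W of Eq. (2) on 2^n points for an order-M filter h."""
    M, N = len(h), 2 ** n
    W = np.zeros((N, N))
    for i in range(N // 2):
        for j in range(N):
            k, g = (j - 2 * i) % N, (M - 1 + 2 * i - j) % N
            if k < M:
                W[i, j] = h[k]
            if g < M:
                W[N // 2 + i, j] = (-1) ** j * h[g]
    return W

def dwt(v, h, d):
    """d-level wavelet transform: the recursion of Eq. (33)."""
    if d == 0:
        return v
    w = kernel(int(np.log2(len(v))), h) @ v
    half = len(v) // 2
    return np.concatenate([dwt(w[:half], h, d - 1), w[half:]])

def packet(v, h, d):
    """d-level packet wavelet transform: the recursion of Eq. (34)."""
    if d == 0:
        return v
    w = kernel(int(np.log2(len(v))), h) @ v
    half = len(v) // 2
    return np.concatenate([packet(w[:half], h, d - 1),
                           packet(w[half:], h, d - 1)])

haar = np.array([1.0, 1.0]) / np.sqrt(2)
v = np.arange(8, dtype=float)
for d in range(1, 4):                         # d <= n = 3 levels
    assert np.isclose(np.linalg.norm(dwt(v, haar, d)), np.linalg.norm(v))
    assert np.isclose(np.linalg.norm(packet(v, haar, d)), np.linalg.norm(v))
```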
Figure 2: (a) Diagrammatic representation of the procedure producing the wavelet filters for a wavelet of order \(M\) from a particular initial vector by a set of \(M/2\) rotations; \(M=6\) is illustrated here. (b) Zero padding for cases that \(M\) is not a power of two. (c) Rotations can be applied in parallel. The right (left) arrow represents shifting elements of the vector one place to the right (left). Because the initial vector is a particular vector, the rotations represented by white boxes do not affect the vector. (d) Quantum circuit for prep using rotation gates, the increment gate denoted by \(+1\) and its inverse denoted by \(-1\). The gate \(+1\) (\(-1\)) is applied before (after) each rotation gate \(R_{\ell}\) with even \(\ell\), as in dotted boxes.
The multi-level packet QWT is constructed from single-level QWTs that can be implemented by the method described in §II. In contrast, the multi-level QWT is constructed from multi-controlled single-level QWTs. As in FIG. 3(c), we break down these multi-controlled operations in terms of multi-bit Toffoli gates and controlled single-level QWTs. We discuss an implementation of a multi-bit Toffoli gate in §IV.1 and a controlled single-level QWT in §IV.3, where we analyze the complexities of these operations.
## IV Complexity analysis
In this section, we analyze the computational cost of executing single-level, multi-level and packet QWTs, thereby establishing Theorems 1-3. We begin by analyzing the computational cost of key subroutines in our algorithms in §IV.1. We then build upon them and provide cost analysis for the single-level QWT in §IV.2 and for the multi-level and packet QWTs in §IV.3.
In our cost analysis and in implementing the key operations, we use ancilla and "borrowed" qubits. In contrast to an ancilla qubit that starts from \(|0\rangle\) and returns to \(|0\rangle\), a borrowed qubit can start from any state and will return to its original state. The purpose of using borrowed qubits is that they enable simple implementation for complex multi-qubit operations. The availability of a sufficient number of qubits in our algorithm on which the key operations do not act allows us to use them as borrowed qubits in implementing such operations.
### Complexity of key subroutines
Here we analyze the cost of key subroutines used in our algorithm for a single-level QWT: prep, select and ushift, the latter of which adds a classically known constant value to the value encoded in a quantum register. We also analyze the cost of implementing a multi-qubit reflection, an operation used in the amplitude amplification part of our algorithm.
For simplicity of cost analysis, we state the cost of each key subroutine in a lemma and proceed with analyzing the cost in the proof. We begin with a lemma stating the cost of executing a multi-bit Toffoli gate, an operation frequently used in our algorithm that also provides an implementation for the multi-qubit reflection about the all-zero state.
**Lemma 1**.: _The \((m+1)\)-bit Toffoli gate with \(m\geq 3\), defined as \(\Lambda_{1}^{m}(X):=|1^{m}\rangle\!\langle 1^{m}|\otimes X+(\mathbb{1}_{m}-|1^{m} \rangle\!\langle 1^{m}|)\otimes\mathbb{1}_{1}\), can be implemented by either of the following computational resources:_
1. \(m-2\) _borrowed qubits and_ \(\mathcal{O}(m)\) _toffoli gates, or_
2. _one borrowed qubit and_ \(\mathcal{O}(m)\) _toffoli and elementary one- or two-qubit gates._
The implementation based on \(m-2\) borrowed qubits follows from Gidney's method [20] for implementing a multi-bit Toffoli gate, and the one using one borrowed qubit follows by the method given in Ref. [21, Corollary 7.4] and also in Ref. [22]. Notice that the gate cost of the two methods scales similarly, but one uses only a single borrowed qubit. However, we sometimes use the method with \(m-2\) borrowed qubits due to its simplicity in implementing a multi-bit Toffoli and the availability of a sufficient number of qubits in our algorithm that can be borrowed.
We proceed with the cost of select in the following lemma.
**Lemma 2**.: select _in Eq. (13) can be executed using one ancilla and one borrowed qubit, two Hadamard and \(\mathcal{O}(n)\) not, cnot and toffoli gates._
Figure 3: Quantum circuits for (a) the \(d\)-level packet QWT and (b) the \(d\)-level QWT using single-level QWTs; \(d=4\) is illustrated here. (c) An implementation of multi-controlled single-level QWTs needed for the multi-level QWT in (b) using multi-bit Toffoli gates, controlled single-level QWT and one ancilla qubit that starts and ends in the \(|0\rangle\) state.
Proof.: By FIG. 1, select is composed of one controlled-\(Z\) gate, three cnot gates, one controlled-sub, one controlled-add and \(n-1\) swap gates. The controlled-\(Z\) gate can be executed using two Hadamard gates and one cnot, and each swap can be executed using three cnots. By the compilation given in Ref. [23], the add itself can be implemented using one ancilla qubit and \(\mathcal{O}(n)\) not, cnot and toffoli gates. Hence the controlled-add can be compiled using \(\mathcal{O}(n)\) cnot, toffoli and four-bit Toffoli gates, the latter of which can be implemented using one borrowed qubit and four toffoli gates by Lemma 1.
In the next lemma, we show that the \(m\)-qubit reflection \(R_{m}\) about the all-zero state \(\ket{0^{m}}\) can be implemented using \(m-2\) borrowed qubits.
**Lemma 3**.: _The \(m\)-qubit reflection \(R_{m}:=2\ket{0^{m}}\!\!\bra{0^{m}}-\mathbb{1}_{m}\) can be executed using one ancilla and \(m-2\) borrowed qubits along with two Hadamard, \(2m+2\) not and \(\mathcal{O}(m)\) toffoli gates._
Proof.: Using the phase kickback trick and one ancilla qubit, we can implement \(R_{m}\) up to an irrelevant global \(-1\) phase factor as
\[(R_{m}\otimes\mathbb{1}_{1})\ket{\psi}\ket{0}=-X^{\otimes m+1}(\mathbb{1}_{m }\otimes H)\Lambda_{1}^{m}(X)(\mathbb{1}_{m}\otimes H)X^{\otimes m+1}\ket{ \psi}\ket{0}, \tag{38}\]
where \(\ket{\psi}\) is any \(m\)-qubit state and \(\Lambda_{1}^{m}(X)\) is the \((m+1)\)-bit Toffoli gate. The lemma then follows by Gidney's method [20] for implementing the \((m+1)\)-bit Toffoli using \(\mathcal{O}(m)\) toffoli gates and \(m-2\) borrowed qubits.
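Equation (38) can be verified numerically on states of the form \(\ket{\psi}\ket{0}\); the following numpy sketch (an illustration for \(m=3\)) does exactly that.

```python
import numpy as np

m = 3
dim = 2 ** m
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def kron_all(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

# (m+1)-bit Toffoli: X on the last qubit iff all m controls are |1>.
toff = np.eye(2 * dim)
toff[-2:, -2:] = X

Xall = kron_all(*([X] * (m + 1)))
rhs = -Xall @ kron_all(np.eye(dim), H) @ toff @ kron_all(np.eye(dim), H) @ Xall

R_m = -np.eye(dim)            # R_m = 2|0^m><0^m| - 1
R_m[0, 0] = 1.0

rng = np.random.default_rng(1)
psi = rng.standard_normal(dim); psi /= np.linalg.norm(psi)
state = np.kron(psi, np.array([1.0, 0.0]))       # |psi>|0>
lhs = np.kron(R_m @ psi, np.array([1.0, 0.0]))   # (R_m|psi>)|0>
assert np.allclose(lhs, rhs @ state)             # Eq. (38) holds
```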
We remark that the \((m+1)\)-bit Toffoli can be implemented using only one borrowed qubit and \(\mathcal{O}(m)\) toffoli and elementary one- or two-qubit gates by Lemma 1. However, we use the method with \(m-2\) borrowed qubits due to its simplicity in implementing a multi-bit Toffoli and the availability of a sufficient number of qubits in our algorithm that can be borrowed.
The following lemma states the cost of adding a known classical value to a quantum register. We use a controlled version of this operation in our algorithm, the cost of which is stated in the following corollary.
**Lemma 4**.: _Adding a classically known \(m\)-bit constant to an \(n\)-qubit register with \(m<n\) can be achieved using \(m+1\) ancilla qubits and \(\mathcal{O}(m)\) not, cnot and toffoli gates._
Proof.: First, prepare \(m\) ancillae in the computational state that encodes the \(m\)-bit constant. This preparation can be achieved by applying at most \(m\) not gates. Then add this state to the state of the \(n\)-qubit register by the add operation in Eq. (26). By \(m<n\) and the compilation given in Ref. [23], add can be implemented by one ancilla qubit and \(\mathcal{O}(m)\) not, cnot and toffoli gates.
The computational cost reported in Lemma 4 is indeed the cost of executing ushift in Eq. (8). We use a controlled version of this operation as in the circuit shown in FIG. 1. Because of the toffoli gates in Lemma 4, the controlled-ushift requires implementing a four-bit Toffoli gate, an operation that can be implemented using one borrowed qubit and four toffoli gates by Lemma 1. Therefore, we have the following cost for the controlled-ushift as a corollary of Lemma 4 and Lemma 1.
**Corollary 4**.: _The controlled-ushift operation can be executed by one borrowed qubit, \(m+1\) ancilla qubits, and \(\mathcal{O}(m)\) cnot and toffoli gates._
The final lemma states the cost of the prep operation. We remark that the cost of this operation is independent of \(n\) as prep generates a quantum state on a number of ancilla qubits that depends on the wavelet order \(M\).
**Lemma 5**.: prep _in Eq. (12) can be executed using \(\mathcal{O}(M\log_{2}M)\) elementary gates and \(\lceil\log_{2}M\rceil\) borrowed qubits._
Proof.: The prep can be implemented using \(\mathcal{O}(M)\) rotation gates and \(\mathcal{O}(M)\) increment and inverse of increment gates by the procedure given in §II.5. The increment gate on \(m=\lceil\log_{2}M\rceil\) qubits can be implemented using \(m\) borrowed qubits and \(\mathcal{O}(m)\) elementary gates [24], so the overall gate cost is \(\mathcal{O}(M\log_{2}M)\).
### Complexity of single-level QWT
We now build upon the computational cost of the key subroutines analyzed in the previous section to obtain the computational cost of executing a single-level QWT. To this end, we mainly use Eqs. (23) and (24). By these equations, a single-level QWT is achieved by performing three Hadamard gates and
* Two pqwt and one pqwt\({}^{\dagger}\), which by Eq. (20) requires performing two select and one select\({}^{\dagger}\); two prep and one prep\({}^{\dagger}\); and one controlled-ushift;
* Two \((m+1)\)-qubit reflections \(R_{m+1}\).
Therefore, by Lemmas 2, 3, 5 and Corollary 4, the gate cost \(\mathcal{G}(\texttt{1Qwt})\) for executing a single-level QWT is
\[\mathcal{G}(\texttt{1Qwt})=3\mathcal{G}(\texttt{select})+3\mathcal{G}(\texttt{prep})+\mathcal{G}(\texttt{controlled-ushift})+2\mathcal{G}(R_{m+1})+3\in\mathcal{O}(n)+\mathcal{O}(M\log_{2}M) \tag{39}\]
where \(m=\lceil\log_{2}M\rceil\) in our application; \(M\) is the wavelet order. The number of ancilla qubits used is \(m+1\): \(m\) ancillae are used for the state-preparation step, and one extra ancilla is the parity qubit \(\texttt{par}\), which is also used in the amplitude amplification step.
We remark that the borrowed qubits in executing prep, select, controlled-ushift and reflection operations, in Lemmas 2-5, are borrowed from the portion of quantum registers that these operations do not act on them. For instance, the \(m-2\) borrowed qubits in Lemma 3 for executing the \(m\)-qubit reflection \(R_{m}\) could be any \(m-2\) qubits of the \(n\) qubit register that \(R_{m}\) does not act on them. For select, the borrowed qubit is needed to implement the four-bit Toffoli gate, see proof of Lemma 2, and this qubit could be any qubit in the circuit that the four-bit Toffoli gate does not act on it. We also remark that the \(m+1\) ancilla qubits in Corollary 4 needed for controlled-ushift are qubits of the single-qubit \(\texttt{par}\) register and \(m\)-qubit anc register. This operation is executed after the amplitude amplification, see FIG. 1, when \(\texttt{par}\) and \(\texttt{anc}\) are in the all-zero state.
Putting it all together, the overall gate cost for implementing the single-level QWT is \(\mathcal{O}(n)+\mathcal{O}(\log_{2}M)\) and the number of ancilla qubits is \(\lceil\log_{2}M\rceil+1\). This is the computational cost reported in Theorem 1.
### Complexity of multi-level and packet QWTs
Here we analyze the complexity of implementing \(d\)-level and packet QWTs, thereby establishing Theorem 2 and Theorem 3. By FIG. 3, implementing a multi-level QWT amounts to implementing multiply-controlled single-level QWTs. Our strategy is to break down each multiply-controlled unitary in terms of multi-bit Toffoli gates and a single-controlled unitary. We then use a compilation for a controlled single-level QWT and an ancilla-friendly compilation for multi-bit Toffoli gates to achieve an efficient yet ancilla-friendly implementation for a multi-level QWT. The packet QWT, however, is achieved by a sequence of single-level QWTs without control qubits, as shown in FIG. 3.
Before describing the specifics of our implementation strategy, we first state the complexity of the \(|1\rangle\)-controlled single-level QWT in the following lemma. We then build upon this complexity to establish the complexity of multi-level QWT.
**Lemma 6**.: _The controlled single-level QWT on \(n\) qubits, associated with a wavelet of order \(M\), can be achieved using \(\lceil\log_{2}M\rceil+2\) ancilla qubits and \(\mathcal{O}(n)+\mathcal{O}(\log_{2}M)\) elementary gates._
Proof.: By the circuit in FIG. 1, a controlled single-level QWT requires performing double-controlled sub, add and ushift operations. Each cnot is transformed to a toffoli, each swap is transformed to three toffoli gates, and \(H\) is transformed to controlled-\(H\). However, prep and prep\({}^{\dagger}\) remain uncontrolled because they cancel each other if the control qubit is off. A double-controlled operation can be reduced to a single-controlled operation using two toffoli gates and one ancilla qubit. By the discussion in the proof of Lemma 2, the controlled-add (or -sub) can be compiled using \(\mathcal{O}(n)\) cnot and toffoli gates. The other ancilla qubits are the \(m\) qubits used for state preparation and the parity qubit. These observations, together with Corollary 4, prove the lemma.
We now proceed with the complexity of \(d\)-level QWT. Let the integer \(s\), with \(1\leq s\leq d\), represent the level of a QWT. Then for the level \(s=r+1\) we need to implement \(|0^{r}\rangle\)-controlled-\(W_{n-r}\), where \(W_{n-r}\) is the single-level QWT on \(n-r\) qubits. For simplicity of cost analysis, we map all \(|0\rangle\)-controlled operations in FIG. 3(b) to \(|1\rangle\)-controlled operations; this can be achieved by \(2(d-1)\) not gates for \(d\)-level QWT as in FIG. 3(c). For \(r\geq 2\), we implement \(|1^{r}\rangle\)-controlled-\(W_{n-r}\) by a single ancilla qubit, two \((r+1)\)-bit Toffoli gates and one controlled-\(W_{n-r}\) as shown in FIG. 3(c). Notice that \(s=1\) corresponds to a single-level QWT on \(n\) qubits and \(s=2\) corresponds to a controlled single-level QWT on \(n-1\) qubits.
The gate cost for the controlled single-level QWT on \(n-r\) qubits is \(\mathcal{O}(n-r)\) by Lemma 6, disregarding the cost with respect to \(M\), and the gate cost for the \((r+1)\)-bit Toffoli gate is \(\mathcal{O}(r)\) by Lemma 1. Hence the gate cost for each level, including the first and second levels, is \(\mathcal{O}(n)\). We also have an additional gate cost of \(\mathcal{O}(\log_{2}M)\) for each level associated with the cost of implementing prep and prep\({}^{\dagger}\). We remark that only a single ancilla qubit is used for all levels; the ancilla qubit starts and ends in \(|0\rangle\) for each level to be reused in the next level, as illustrated in FIG. 3(c). Putting it all together, we arrive at the computational cost stated in Theorem 2 for a \(d\)-level QWT.
Because the packet QWT does not have multi-controlled operations (see FIG. 3(a)), its gate cost simply follows from the cost of the single-level QWT. The single-level QWT acts on \(n-r\) qubits at level \(s=r+1\) and has the gate cost \(\mathcal{O}(n-r)\) by Theorem 1. The gate cost for all levels \(1\leq s\leq d\) is therefore \(\mathcal{O}(dn-d(d-1)/2)\). We also have an additional gate cost of \(\mathcal{O}(\log_{2}M)\) for each level associated with the cost of implementing prep and prep\({}^{\dagger}\), yielding the overall gate cost stated in Theorem 3. We note that the packet QWT does not need the extra ancilla qubit used in multi-level QWT for implementing the multi-controlled operations.
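As a classical point of reference for what the multi-level and packet transforms compute, the sketch below uses the PyWavelets package (our choice of illustration; it is not part of this work) to contrast a \(d\)-level decomposition, which recurses only on the approximation branch as in FIG. 3(b), with a packet decomposition, which recurses on every branch as in FIG. 3(a).

```
import numpy as np
import pywt  # PyWavelets: the classical reference, not the quantum circuit

x = np.random.default_rng(0).standard_normal(2 ** 6)  # N = 64, n = 6
d = 3

# d-level DWT: only the approximation (low-pass) branch is split further,
# mirroring the cascade of controlled single-level QWTs in FIG. 3(b)
coeffs = pywt.wavedec(x, "db2", level=d)
print([len(c) for c in coeffs])  # lengths of [cA_3, cD_3, cD_2, cD_1]

# Packet transform: every branch is split at every level, mirroring the
# uncontrolled single-level QWTs of FIG. 3(a); 2**d leaves at level d
wp = pywt.WaveletPacket(data=x, wavelet="db2", maxlevel=d)
print(len(wp.get_level(d)))  # 8
```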
## Discussion and conclusion
Wavelets and their associated transforms have been extensively used in classical computing. The basis functions of wavelet transforms have features that make such transforms advantageous for numerous applications over their established counterpart, the Fourier transform. However, prior works on developing a quantum analog for wavelet transforms were limited to a few representative cases. This paper presents quantum algorithms for executing any wavelet transform and a generalized version, the wavelet packet transform, on a quantum computer; the algorithm works for any wavelet of any order. We have established the computational complexity of our algorithms in terms of three parameters involved in wavelet transforms: the wavelet order \(M\), the level \(d\) of wavelet transform, and the number of qubits \(n=\log_{2}N\) the QWT acts on, with \(N\) the dimension of the kernel matrix associated with the wavelet transform.
The core idea of our approach is to express the kernel matrix as a linear combination of \(M\) unitary operations that are simple to implement and use the LCU technique to construct a probabilistic procedure for implementing the desired QWT. We then make the implementation deterministic using the known success probability of the probabilistic procedure by a single amplitude amplification. The gate cost of our algorithm for single-level QWT scales optimally with \(n\), the number of qubits, for the case that the wavelet order \(M\) is constant. Indeed, the order parameter used in practical applications is constant, typically in the range of \(2\leq M\leq 20\)[3; 25; 26]. In contrast, the transformation level \(d\) does scale linearly with the number of qubits, or \(\log_{2}N\), for practical applications. Because the value of \(d\) is upper-bounded by \(n\), the gate cost of multi-level and packet QWTs scales as \(\mathcal{O}(n^{2})\) in the worst case. Even for the worst case, our algorithm improves the gate cost of prior works on the second- and fourth-order Daubechies QWT from \(\mathcal{O}(n^{3})\) to \(\mathcal{O}(n^{2})\).
We remark that our approach requires a number of ancilla qubits that scales as \(\log_{2}M\) with the wavelet order. A potential area for further exploration is constructing ancilla-free quantum algorithms for all QWTs. More importantly, a primary area for future research is exploring the opportunities offered by quantum wavelet transforms in quantum algorithms, particularly in simulating quantum systems where wavelet transforms could be advantageous over the established Fourier transform.
## Acknowledgements
We acknowledge support from the Canada 150 Research Chairs program and NSERC-IRC. A.A.-G. also acknowledges the generous support of Anders G. Frøseth.
|
2309.16270 | Social Media Fashion Knowledge Extraction as Captioning | Social media plays a significant role in boosting the fashion industry, where
a massive amount of fashion-related posts are generated every day. In order to
obtain the rich fashion information from the posts, we study the task of social
media fashion knowledge extraction. Fashion knowledge, which typically consists
of the occasion, person attributes, and fashion item information, can be
effectively represented as a set of tuples. Most previous studies on fashion
knowledge extraction are based on the fashion product images without
considering the rich text information in social media posts. Existing work on
fashion knowledge extraction in social media is classification-based and
requires to manually determine a set of fashion knowledge categories in
advance. In our work, we propose to cast the task as a captioning problem to
capture the interplay of the multimodal post information. Specifically, we
transform the fashion knowledge tuples into a natural language caption with a
sentence transformation method. Our framework then aims to generate the
sentence-based fashion knowledge directly from the social media post. Inspired
by the big success of pre-trained models, we build our model based on a
multimodal pre-trained generative model and design several auxiliary tasks for
enhancing the knowledge extraction. Since there is no existing dataset which
can be directly borrowed to our task, we introduce a dataset consisting of
social media posts with manual fashion knowledge annotation. Extensive
experiments are conducted to demonstrate the effectiveness of our model. | Yifei Yuan, Wenxuan Zhang, Yang Deng, Wai Lam | 2023-09-28T09:07:48Z | http://arxiv.org/abs/2309.16270v1 | # Social Media Fashion Knowledge Extraction as Captioning
###### Abstract.
Social media plays a significant role in boosting the fashion industry, where a massive amount of fashion-related posts are generated every day. In order to obtain the rich fashion information from the posts, we study the task of social media fashion knowledge extraction. Fashion knowledge, which typically consists of the occasion, person attributes, and fashion item information, can be effectively represented as a set of tuples. Most previous studies on fashion knowledge extraction are based on the fashion product images without considering the rich text information in social media posts. Existing work on fashion knowledge extraction in social media is classification-based and requires manually determining a set of fashion knowledge categories in advance. In our work, we propose to cast the task as a captioning problem to capture the interplay of the multimodal post information. Specifically, we transform the fashion knowledge tuples into a natural language caption with a sentence transformation method. Our framework then aims to generate the sentence-based fashion knowledge directly from the social media post. Inspired by the great success of pre-trained models, we build our model based on a multimodal pre-trained generative model and design several auxiliary tasks for enhancing the knowledge extraction. Since there is no existing dataset that can be directly adapted to our task, we introduce a dataset consisting of social media posts with manual fashion knowledge annotation. Extensive experiments are conducted to demonstrate the effectiveness of our model.
fashion knowledge extraction, social media analysis, multimodal data mining

Footnote †: Information Retrieval in the Asia Pacific Region (SIGIR-AP ’23), November 26-28, 2023, Beijing, China.
## 1. Introduction

Most previous studies on fashion knowledge extraction (FKE) are conducted on fashion product images (Beng et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019), where a single image taken in a professional studio is provided. However, social media posts often contain information of different modalities, including both the image and text. As shown in Figure 1, apart from the post image, the corresponding post text also indicates essential information such as where the image is taken, who is in the post, and what the person is wearing, thus attaching great importance to extracting the fashion knowledge from the post. Therefore, how to make full use of the multimodal post information for the FKE task is underexplored.
To handle the multimodal information for harvesting the fashion knowledge, some initial attempts have been made. Ma et al. (Ma et al., 2018) propose a pipeline-based model which first extracts person and clothing boxes from the image, then classifies the detected regions into different attribute categories with the text as an additional input. However, the model merely incorporates the image and text features by simple concatenation, which fails to capture the deep interplay between different modalities. Moreover, similar to text-based structure prediction problems (Zhu et al., 2018; Wang et al., 2018), formulating the knowledge extraction task as a classification problem requires manually determining a set of fashion knowledge categories in advance. However, the format of fashion knowledge aspects is typically quite varied. For example, "muslin white" and "chiffon beige" can both be used to describe the appearance of the dress the woman wears in Figure 1. Besides, strong dependencies are often observed between different aspects in the fashion knowledge data (Ma et al., 2018). For instance, the person's clothes can be affected by the occasion of the post. Traditional classification-based models tend to determine the category of each fashion knowledge aspect separately, thus failing to capture such relationships. Furthermore, their method is pipeline-based and can give rise to the problem of error propagation. Some potential errors in the preceding steps, such as the inaccurate prediction of person boxes, could negatively influence not only the extraction of person attribute information but also all the fashion item knowledge corresponding to the person.
To tackle the research challenges discussed above, we propose to cast the social media based FKE task as a captioning problem. Inspired by the classic image captioning task (Zhu et al., 2018; Wang et al., 2018) that generates a natural language description for a given image, we transform the FKE task to a captioning problem for better modeling the interplay of the image and text information and alleviating the issues of the classification-based models. Specifically, given the multimodal social media post including an image and the corresponding text, we aim to generate a natural language caption for the post, which contains the key fashion information. The fashion knowledge tuples can then be easily extracted from the generated caption. During the training stage, we first transform the original fashion knowledge tuples into a pseudo caption with a sentence transformation method. Then the multimodal post and the pseudo caption can be paired as training instances to learn a multimodal generation model. With such caption generation formulation, we can tackle the FKE task in an end-to-end manner, alleviating the potential error propagation issue in pipeline-based solutions. Moreover, compared with existing classification-based models, our model incorporates the multimodal information from both the image and text as input and utilizes the natural language caption as the output, which can better capture the interactions between different modalities. In addition, the dependencies between different fashion knowledge aspects can also be fully exploited by learning to generate them in an autoregressive manner.
Motivated by the great success of pre-trained language models for various vision-language tasks such as image-text retrieval (Ma et al., 2018), we build our model based on a multimodal pre-trained generation model named VL-Bart (Chen et al., 2017) to utilize its rich knowledge of processing information from different modalities. We further design several auxiliary tasks, including visual question answering (VQA), sentence reconstruction, and image-text matching, to warm up the model. These tasks are designed to equip the model with fashion-related knowledge via different formats but under the same model architecture. After training with multiple relevant tasks, the model obtains some prior task-specific knowledge, which helps tackle the main concerned FKE task.
Since existing datasets used in previous studies are either single-modal with only fashion item information (Chen et al., 2018) or not publicly available (Ma et al., 2018), there is no dataset that can be directly adopted for the concerned task. Therefore, we introduce a large-scale fashion knowledge dataset based on user-generated social media fashion-related posts. For each post including an image and text, we manually annotate its corresponding occasion, person attributes, as well as the type and appearance of the fashion items they wear to construct the fashion knowledge tuples. We provide detailed statistics on this newly introduced dataset and conduct extensive experiments on it¹.
Footnote 1: The dataset and code are available in [https://github.com/yfyuan01/FKE](https://github.com/yfyuan01/FKE).
To sum up, the main contributions of our paper are as follows:
* We propose to tackle fashion knowledge extraction from multimodal social media posts as a captioning task, which effectively captures the interplay of different modalities via generating a natural language caption for extracting the fashion knowledge tuples in an end-to-end manner.
* To equip the model with fashion-related knowledge, we design several auxiliary tasks including sentence reconstruction, image-text matching, and visual question answering, which helps tackle the main concerned FKE task.
* We contribute a benchmark dataset and conduct extensive experiments to demonstrate the effectiveness of our model. We show that our method outperforms various state-of-the-art methods, especially under the difficult multi-person multi-fashion-item situation.
## 2. Related Work
### Fashion Knowledge Extraction
Fashion knowledge plays a vital role in fashion-related tasks such as clothing recognition (Beng et al., 2017; Chen et al., 2017), fashion trend forecasting (Zhu et al., 2018; Wang et al., 2018; Wang et al., 2018), fashion sentiment analysis (Wang et al., 2018), and fashion-related information retrieval (Wang et al., 2018; Wang et al., 2018). Therefore, there has been an increasing interest in knowledge extraction tasks in the fashion domain recently (Chen et al., 2017; Chen et al., 2017). Early studies mostly rely on handcrafted features and mainly focus on extracting simple clothing-related knowledge using techniques such as conditional random fields (Beng et al., 2017; Chen et al., 2017). Huang et al. (Huang et al., 2018) propose a Dual Attribute-aware Ranking Network (DARN) consisting of two sub-networks for retrieval feature learning. DeepFashion, first proposed by Liu et al. (Liu et al., 2018), is a dataset in which each picture is annotated with rich fashion item attributes. Jia et al. (Jia et al., 2019) propose a data-driven approach for recognizing fashion attributes, where a modified version of the Faster R-CNN model is trained. Furthermore, Wang et al. (Wang et al., 2019) solve the problem of fashion landmark localization and clothing category classification via a knowledge-guided fashion network. Yan et al. (Yan et al., 2019) address unconstrained fashion landmark detection, where clothing bounding boxes are not provided in either the training or testing phase. To the best of our knowledge, Ma et al. (Ma et al., 2019) are the first to focus on social media based fashion knowledge extraction, which aims to conduct automatic fashion knowledge extraction from social media posts by unifying occasion, person attribute, and clothing prediction in a contextualized module. Although the model incorporates multimodal information from the social media posts, it is pipeline-based and requires extracting all person boxes in advance. Moreover, they do not publish their dataset for safety reasons.
### Multimodal Pre-training
Following the success of large pre-trained models in natural language understanding (NLU) (Liu et al., 2018; Liu et al., 2019; Liu et al., 2019) and generation (NLG) tasks (Liu et al., 2018; Liu et al., 2019; Liu et al., 2019), some multimodal pre-trained models have shown their superiority over traditional non-pretrained methods in many tasks recently. Some of them mainly focus on video-text pretraining, such as VideoBERT (Wang et al., 2019), HERO (Liu et al., 2019), and MIL-NCE (Liu et al., 2019; Liu et al., 2019), while others focus on the image-text domain. Among these image-text pretrained methods, ViLBERT (Liu et al., 2019), LXMERT (Liu et al., 2019), and VL-BERT (Liu et al., 2019) are extensions of the popular BERT model (Liu et al., 2019) and are used for learning task-agnostic joint representations of the image content and natural language. Following this line, unified models are proposed to deal with both understanding and generation tasks. For example, Oscar (Oscar, 2019) leverages object tags detected in images as anchor points to significantly ease the learning of image-text alignments. CLIP (Liu et al., 2019) connects image and text representations by learning visual concepts from natural language supervision. Huo et al. (Huo et al., 2019) propose a two-tower pre-trained model named WenLan within the cross-modal contrastive learning framework. CogView (Huo et al., 2019) and DALL-E (DALLE, 2020) are powerful generative models that focus on text-to-image generation. Among them, VL-Bart (Chen et al., 2020) is the state-of-the-art model designed for vision-text generation and shows good generalization ability on different tasks. Therefore, we adopt it as the backbone of our model (to leverage its knowledge in processing information from different modalities) in this work.
## 3. Our Method
### Problem Definition
We aim to automatically extract fashion knowledge from a social media post, which is composed of an image and the post text content. Following the definition given in a previous study (Ma et al., 2019), the fashion knowledge is denoted as a set of tuples, where each tuple \(k\) is defined as the combination of the occasion, person attributes, and fashion item information: \(k=(o,p,f)\). Here \(o\) denotes the occasion category, which belongs to a set of occasions such as wedding, school, sports, etc. \(p=(age,gender)\) denotes the gender and age information of a specific person in the post, where \(gender\in\{Male,Female\}\) and \(age\in\{Kid,Youth,Mid,Old\}\). The fashion item information \(f=(type,app)\) contains the fashion item type \(type\) such as "pants" and the appearance \(app\) of the fashion item, where the appearance is usually a short text such as "lace white" describing its pattern, color, and style. Therefore, the fashion knowledge tuple \(k\) can also be unfolded and represented as \(k=(occ,age,gender,type,app)\).
Given a post \(x\) consisting of an image \(v\) and text \(t\) denoted as \(x=\{v,t\}\), the problem is to develop a framework which outputs \(N\) fashion knowledge tuples of the post, represented as \(K=\{k\}_{i=1}^{N}\), where the number of tuples \(N\) varies from post to post.
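For concreteness, a minimal sketch of this data structure in Python (the class and field names are ours and do not come from the released code):

```
from dataclasses import dataclass

@dataclass(frozen=True)
class FashionTuple:
    """One fashion knowledge tuple k = (occ, age, gender, type, app)."""
    occ: str     # one of six occasions, e.g. "wedding", "daily"
    age: str     # "Kid" | "Youth" | "Mid" | "Old"
    gender: str  # "Male" | "Female"
    type: str    # fashion item type, e.g. "pants"
    app: str     # free-form appearance, e.g. "lace white"

post_tuples = [
    FashionTuple("daily", "Kid", "Male", "upper", "black"),
    FashionTuple("daily", "Kid", "Male", "pants", "white"),
]
```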
### Framework Overview
Figure 2 presents an overview of our proposed method. In general, we formulate the concerned FKE task as a captioning problem. We tackle it via an encoder-decoder structure based on a pre-trained generative model named VL-Bart (Chen et al., 2020), as shown in the left part. By treating it as a multimodal generation problem, the interactions between different modalities can be effectively captured. Then the structured fashion knowledge tuples are recovered from the generated caption. Besides, as shown in the right part, before captioning, we leverage several fashion-related auxiliary tasks to warm up the pre-trained model and equip it with task-specific knowledge.
In detail, for the captioning phase in the left part, we fine-tune the model to generate the fashion knowledge captions. Instead of generating the tuple-like fashion knowledge directly, the model generates captions in a natural language manner. To facilitate such training, given the original training instance with the format of a post-tuple pair \((x,K)\), we transform the fashion knowledge tuples \(K\) to a pseudo caption \(y\) containing all the desired fashion knowledge elements of the post via a caption construction method. The transformed training instance can thus be represented as \((x,y)\) for learning a multimodal generative model. To add fashion information to the pre-trained model, as shown in the right part of Figure 2, we design various auxiliary tasks, including sentence reconstruction (SRC), visual question answering (VQA), and image-text matching (ITM), before fine-tuning. These tasks are designed to focus on one or several fashion knowledge aspects and equip the model with task-specific fashion knowledge.
### Image and Text Encoding
#### 3.3.1. Text Encoding
As shown in the bottom part of Figure 2, the input text \(t\) of our model consists of three parts: task prefix, task text, and post text. Post text is the original text content written by the user in the social media post. To facilitate auxiliary task training, we also include a task prefix, which indicates which task the model should perform, followed by the task text used as an additional textual input for a specific task (e.g. it can be a question in the visual question answering task). The three textual inputs are concatenated with a special token [SEP] and fed to the embedding layer to obtain the text embedding of the model. The positional embeddings for denoting the absolute token positions are added to the token embeddings and learned during the training. Then for each training instance, the text input \(t\) is encoded to a vector represented as \(e^{t}\).
#### 3.3.2. Image Encoding
To extract image features, we first detect several object regions from the image, denoted as Region of Interest (ROI). By utilizing ROI instead of the raw image pixels, we can align
the multimodal information between the image and text (Liu et al., 2017; Liu et al., 2018). To obtain ROI features, following previous studies (Liu et al., 2017; Liu et al., 2018), we generate \(r\) image object regions with Faster-RCNN (Wang et al., 2017). For each region, we also detect the object tag in the format of text such as "Upper", "Woman", etc. The final embedding is the sum of four types of features: ROI object features, ROI bounding box coordinates, image ids, and region ids. The ROI object feature is the encoding result from Faster-RCNN. The bounding box coordinate is the position vector of the ROI. Image id is set to be 1 in our task, and region id \(\in\{1,...,r\}\). The visual embedding of image \(v\) is represented as \(e^{v}\).
### FKE as Captioning
We cast the original FKE task as a captioning problem. We aim to train a generation model for learning the mapping function given the natural language caption \(y\) transformed from the fashion knowledge tuples \(K\) and the social media post \(x\).
#### 3.4.1. Caption Construction
To facilitate the training process of the generative model, we propose a strategy to construct the pseudo caption \(y\) from the \(N\) fashion knowledge tuples of a post represented as \(K=\{k\}_{i=1}^{N}\). For the caption construction, we wish to incorporate the major fashion knowledge elements into the caption while neglecting the unnecessary information. The rule of transforming the fashion knowledge tuples into the natural language caption is designed as follows.
As shown in Algorithm 1, since the occasion of all the fashion knowledge tuples corresponding to a certain post is the same, we first transform the occasion information into a sentence at the beginning of the target sequence with the template "The occasion is [occ]". In the example shown in Figure 2, the sentence saying "The occasion is daily" is constructed to incorporate the occasion category. We then group and gather all the fashion knowledge tuples by different persons. For each person, we write a sentence containing his/her gender and age information. With the same example, we write "The first youth female" at the beginning of the second sentence. We then list all the fashion items the person wears including their type and appearance and incorporate them into a fashion item description sentence. For different fashion items of the same person, we concatenate them with the word "and" to mimic the writing method the users often use. Therefore, for the girl in the Figure 2, we add the fashion item information by saying that she wears a black upper and a dark blue pants, etc. After the sentence transformation process that transforms the original tuple-like data into a natural language caption, the input-to-target generation can be modeled with a classic encoder-decoder architecture.
```
0:\(N\) fashion knowledge tuples of a post \(\{k\}_{i=1}^{N}\)
0: Each tuple \(k=(occ,gender,age,type,app)\)
0: Number of persons in the post \(n_{p}\); person \(m\) has \(N_{m}\) fashion items
0: Natural language sequence \(y\)
1:\(y\leftarrow\) "The occasion is " + \(occ\) + ". "
2:for \(m=1\) to \(n_{p}\) do
3:\(y\gets y\) + "The " + \(ord(m)\) + " " + \(age_{m}\) + " " + \(gender_{m}\) + " wears "
4:for \(n=1\) to \(N_{m}\) do
5:if \(n\neq N_{m}\) then
6:\(y\gets y\) + "a " + \(app\) + " " + \(type\) + " and "
7:else
8:\(y\gets y\) + "a " + \(app\) + " " + \(type\) + ". "
9:end if
10:end for
11:end for
```
**Algorithm 1** Caption Construction
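A runnable Python sketch of Algorithm 1 is given below; the grouping of tuples by person is assumed to be given, and the ordinal words and phrasing follow the example captions above.

```
ORDINALS = ["first", "second", "third", "fourth", "fifth"]

def build_caption(occ: str, persons: list) -> str:
    """persons[m] = {"age": ..., "gender": ..., "items": [(app, type), ...]}"""
    parts = [f"The occasion is {occ}."]
    for m, person in enumerate(persons):
        head = f"The {ORDINALS[m]} {person['age']} {person['gender']} wears "
        items = [f"a {app} {typ}" for app, typ in person["items"]]
        parts.append(head + " and ".join(items) + ".")
    return " ".join(parts)

caption = build_caption("daily", [
    {"age": "youth", "gender": "female",
     "items": [("black", "upper"), ("dark blue", "pants")]},
])
# -> "The occasion is daily. The first youth female wears a black upper
#     and a dark blue pants."
```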
#### 3.4.2. Encoder-Decoder Structure
We use a transformer (Wang et al., 2017) encoder-decoder to incorporate image and text features and generate the fashion knowledge caption. The encoder is composed of \(m\) transformer blocks, each of which consists of a self-attention layer and a fully-connected layer with residual connections. The decoder is also a stack of \(m\) transformers with an additional cross-attention layer in each block. Given the multimodal post input \(x\), the image \(v\) and text \(t\) are first fed into the bidirectional encoder and incorporated together into a contextualized sequence. Given the sequence, the decoder models the conditional probability distribution of the target sentence to generate caption \(y\). At each time step, the decoder iteratively predicts the probability of the current caption token based on previously generated tokens and the encoder output.

Figure 2. The overall architecture of our framework. The left part depicts transforming the FKE task as a captioning problem, where FKT denotes the fashion knowledge tuples. The right part shows the detail of the three auxiliary tasks.
#### 3.4.3. Training
Given a pretrained model with the encoder-decoder structure, we fine-tune our model parameters \(\theta\) on the input-target pairs. We utilize the standard sentence generation loss as our loss function. At each time step \(j\), the decoder output \(y_{j}\) is determined based on the caption generated at previous time steps \(y_{<j}\) and the text and image embeddings \(e^{t}\) and \(e^{v}\). We minimize the negative log-likelihood of generating the target caption \(y\) given the input text embedding \(e^{t}\) and image embedding \(e^{v}\):

\[\min_{\theta}\ -\log p_{\theta}(y|e^{t},e^{v})=-\sum_{j=1}^{|y|}\log p_{\theta}(y_{j}|y_{<j},e^{t},e^{v}) \tag{1}\]

where \(p_{\theta}\) is the likelihood of generating the target caption \(y\) given the image and text input, and \(|y|\) is the length of the target caption.
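In implementation terms, Eq. (1) is the usual token-level cross-entropy under teacher forcing. A minimal PyTorch sketch (with random tensors standing in for the decoder logits and the tokenized pseudo caption) is:

```
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 32128, 24, 2
# Placeholders for decoder logits p_theta(y_j | y_<j, e^t, e^v) and target y
logits = torch.randn(batch, seq_len, vocab_size, requires_grad=True)
labels = torch.randint(0, vocab_size, (batch, seq_len))
labels[0, -4:] = -100  # padding positions are ignored by the loss

# Negative log-likelihood over target positions; averaging over non-padding
# tokens matches Eq. (1) up to normalization
loss = F.cross_entropy(logits.reshape(-1, vocab_size), labels.reshape(-1),
                       ignore_index=-100)
loss.backward()
```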
#### 3.4.4. Inference and Tuple Recovery
During inference, we generate the target caption sequence \(y^{\prime}\) in an autoregressive manner given the post image and text pair. As in the training phase, the input text consists of the task prefix and the post text separated by the separation token [SEP].
At each time step, we choose the token with the highest probability over the vocabulary set to obtain the natural language caption. When recovering the fashion knowledge tuples from the caption, we first split the output sequence into several sentences. As shown in the top left part of Figure 2, the occasion information can then be extracted from the first sentence, which has the format "The occasion is ". For the remaining sentences, we extract the person attributes in each sentence and pair them with all the fashion item information, including the type and appearance, in that sentence. According to the figure, for the sentence "The first youth female wears a black upper", we can obtain the fashion knowledge tuple (youth, female, upper, black) following these rules. After extracting the fashion knowledge elements from the sequence, we compare them with the ground-truth label for evaluation. Notably, if the decoding fails, i.e., the generated sequence violates the format, we treat the prediction as null.
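A sketch of this recovery step, inverting the templates of Algorithm 1 with regular expressions, is shown below; the patterns are our own reading of the caption format, and a caption that matches no pattern yields the null prediction described above.

```
import re

OCC = re.compile(r"The occasion is (\w+)\.")
PERSON = re.compile(r"The \w+ (\w+) (\w+) wears ([^.]+)\.")

def recover_tuples(caption: str):
    m = OCC.search(caption)
    if m is None:
        return []  # decoding failed: treat the prediction as null
    occ, tuples = m.group(1), []
    for age, gender, items in PERSON.findall(caption):
        for item in items.split(" and "):
            app, typ = item.removeprefix("a ").rsplit(" ", 1)
            tuples.append((occ, age, gender, typ, app))
    return tuples

print(recover_tuples("The occasion is daily. "
                     "The first youth female wears a black upper."))
# [('daily', 'youth', 'female', 'upper', 'black')]
```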
### Auxiliary Task Training
To obtain task-specific knowledge, we further design several auxiliary tasks, including sentence reconstruction, visual question answering, and image-text matching. These tasks are designed to focus on one or several fashion knowledge aspects and warm up the pre-trained model before training on the main captioning task. To fit the different auxiliary tasks, we assign a different task prefix to each task and prepend it to the original task text. Examples of the task prefix, task text, and target output text of each task are listed in Figure 3.
#### 3.5.1. Sentence Reconstruction (SRC)
Based on the assumption that different aspects in the fashion knowledge data are not strictly independent but strongly related (e.g. the type and appearance of the fashion item can be affected by the occasion), the goal of the SRC task is to predict some masked tokens based on their surrounding tokens and the image feature. Therefore, we randomly mask out 30% of the input tokens and ask the model to predict and reconstruct the original sentence. The task text is the masked fashion knowledge sequence and the output is the original full text. For each masked token, we replace it with the special mask token <mask>.
#### 3.5.2. Image-Text Matching (ITM)
This task takes a pair of image and natural language text as input. The model needs to determine if the text corresponds to the image or not. In our setting, we aim to determine if the given fashion knowledge caption corresponds to the post or not. We transform the original binary classification task into a generation problem following the rule that if the text is the corresponding caption of the post, the model generates "true", while if not, the model generates "false". We consider the ground-truth post-caption pair as positive samples. To construct negative samples, with the probability of 50%, we randomly sample the pseudo caption from another post in the training dataset.
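A minimal sketch of this negative sampling rule (dataset access is abstracted away and the names are ours):

```
import random

def make_itm_example(index: int, posts: list, captions: list):
    """Pair post `index` with its own caption or, with probability 0.5,
    a caption sampled from a different post; target is "true"/"false"."""
    if random.random() < 0.5:
        return posts[index], captions[index], "true"
    other = random.choice([j for j in range(len(captions)) if j != index])
    return posts[index], captions[other], "false"
```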
#### 3.5.3. Visual Question Answering (VQA)
In the general visual question answering problem, a model is given an image and a natural language question and is required to produce the answer. In our setting, we design four fashion-related VQA subtasks over the post and train the model to generate the corresponding answers.
For each training step, we randomly sample a mini-batch from one of the three tasks. We differentiate the tasks by using different task prefixes. It is worth noting that the four subtasks of VQA are considered as the same task and share the same task prefix. Since we only change the input-output format without changing the pretrained model structure, we use the same loss function as in Section 3.4.3. We then assign a weight to each task's partial loss to form the final loss.
\[L_{all}=\sum_{i=1}^{|T|}w_{i}L_{i} \tag{2}\]
where \(|T|=3\) is the number of tasks, \(L_{i}\) is the partial loss of each task, and \(w_{i}\) is the hyperparameter representing the corresponding weight. After training with multiple relevant tasks to warm up the model, the model is equipped with fashion-related knowledge, which can help tackle the main concerned FKE task.
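A sketch of the warm-up loop implied by Eq. (2), where each step samples a mini-batch from one task and scales that task's loss by its weight; the uniform task sampling and the weight values are our assumptions:

```
import random

TASK_WEIGHTS = {"src": 1.0, "itm": 1.0, "vqa": 1.0}  # the w_i in Eq. (2)

def training_step(batches_by_task, model_loss_fn, optimizer):
    task = random.choice(list(TASK_WEIGHTS))  # one task per step
    batch = next(batches_by_task[task])       # task prefix already prepended
    loss = TASK_WEIGHTS[task] * model_loss_fn(batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task, float(loss)
```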
## 4. Experiment
### Dataset
Since there is no existing dataset that can be directly adopted to our setting, we collect and contribute a large-scale annotated dataset for the FKE task. Our dataset contains 9,272 posts with 32,439 fashion knowledge tuples in total, with an average of 3.5 fashion knowledge tuples per post and 2.7 fashion knowledge tuples per person. The detailed statistics of our dataset concerning different fashion knowledge aspects are reported in Figure 4.
**Post Collection and Preprocessing** Our dataset is collected from Instagram², which is a popular social media platform where a large number of posts are generated by users every day. To obtain the fashion-related posts, we first define six occasions, including school, graduation, sports, wedding, daily wear, and vacation. Under each occasion, we then choose some typical hashtags and crawl the related posts given the hashtag. After that, we filter out posts without any text or containing only emojis. Since the raw text in social media is often noisy, we employ several text cleaning methods to deal with the crawled texts. We first preprocess the texts by removing all the unnecessary tokens, including emojis, URLs, whitespace, HTML characters, punctuation marks, and mentions. We then detect and translate all the text into English using the Google Translate API³.
Footnote 3: [https://pypi.org/project/googletrans/](https://pypi.org/project/googletrans/)
**Fashion Knowledge Annotation** For the filtered fashion-related posts, we hire 10 fashion experts to manually annotate the fashion knowledge information for each post. The annotators first need to determine the occasion of the posts. Since similar images may sometimes correspond to different occasions, before making the choice, the annotators are asked to read both the text and the image to make sure that the occasion type is determined by both of them. After that, the annotators annotate the person attributes, including the gender and age group of the persons in the images, as well as the type and appearance of the fashion items they wear. Since the appearance has no fixed format, when annotating it, the annotators are asked to use two or three words to describe how the fashion item looks, including its color, pattern, and texture. After all the annotations are finished, we ask two annotators to check the completeness and correctness of the results, making sure that all the fashion knowledge is correctly annotated in each post.
### Comparison Methods
To validate the model effectiveness, we compare with both existing classification-based and generation-based methods. The first four are classification-based methods.
* **DARN**(Dalal and Triggs, 2017) is a Dual Attribute-aware Ranking Network originally used for retrieval feature learning. Same as in (Song et al., 2018), we also only keep one stream for our task.
* **FashionNet**(Krizhevsky et al., 2017) is a pipeline-based model which simultaneously predicts landmarks and attributes. It consists of a global appearance branch, a local appearance branch and a pose branch.
* **HDF**(Song et al., 2018) extracts the fashion knowledge from social media posts. It unifies three tasks of occasion, person and clothing discovery from multiple modalities of images, texts and metadata.
* **ViLBERT**(Song et al., 2018) directly takes image text features as inputs and treats the task as a classification task. For occasion prediction, the input is the post image and text, and the output is the occasion category. For fashion item information extraction, the image input is the fashion item box, and the output is the type and appearance classes of the fashion item.

Figure 4. Detailed information of our dataset w.r.t different fashion knowledge aspects, where NW, UW, BC, SS, FW are the abbreviations of nightwear, underwear, babyclothes, swimsuits, and footwear respectively. From bottom to top, we report the number of fashion items, persons, and posts.
To further evaluate the effectiveness of our proposed captioning method, we also adopt the following generation-based baselines:
* **Oscar**(Kumar et al., 2017). We utilize Oscar to generate the fashion knowledge tuples. Oscar is BERT-like and does not have an encoder-decoder structure. The model is pre-trained on several classification tasks and one generation task (COCOcaption). During fine-tuning, the words in the tuples serve as input and are masked randomly at the rate of 15%. During inference, the generation process terminates when the model outputs the [STOP] token.
* **VL-Bart**(Chen et al., 2017). We also construct a baseline which uses the same pre-trained VL-Bart model as ours but without the proposed captioning method. Specifically, we directly employ the fashion knowledge tuples in the natural language form as the target sequence, instead of transforming them into a natural language caption with the sentence transformation strategy.
### Experimental Setting
In our experiment, we randomly split the dataset into the training, testing, and validation sets with the percentages of 80%, 10%, and 10%. We conduct 5 runs for our experiment, each with a different random seed, and report the average score. When comparing our method with existing models, since most of the existing classification-based methods take the fashion item boxes as an input, we use an existing tool (Wang et al., 2017) to extract and predict all the person attributes in the post, following (Zhu et al., 2018). For each person box, we then use the same Faster-RCNN network mentioned in Section 3.3.2 to extract all the fashion items. Our code is based on PyTorch and Huggingface Transformers (Wang et al., 2017). We use AdamW (Kingma and Ba, 2014) with (\(\beta_{1}\),\(\beta_{2}\)) = (0.9, 0.999) and the learning rate 1e-4 with a 5% linear warmup schedule. By default, each training process is run for 40 epochs. We report the results from the top-20 fashion item boxes with a confidence score greater than 0.5 from the original extraction results.
We use several evaluation metrics in our experiment. At the fashion item level, we report the precision, recall, and F1 rate of each fashion item tuple. A tuple prediction is counted as correct only when all the elements are the same as the ground-truth label. We also report the post-wise accuracy score, which is the proportion of posts whose predictions our model gets entirely right. In addition, to measure the semantic similarity between the generated caption and the transformed gold standard, we also employ some caption evaluation metrics, including BLEU (Wang et al., 2017) and METEOR (Chen et al., 2017).
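For clarity, a sketch of the tuple-level metric; counting overlaps via multisets to handle duplicate tuples is our implementation choice.

```
from collections import Counter

def tuple_prf(pred, gold):
    """Exact-match precision/recall/F1 over fashion knowledge tuples."""
    overlap = sum((Counter(pred) & Counter(gold)).values())
    p = overlap / len(pred) if pred else 0.0
    r = overlap / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [("daily", "youth", "female", "upper", "black")]
pred = [("daily", "youth", "female", "upper", "grey")]  # wrong appearance
print(tuple_prf(pred, gold))  # (0.0, 0.0, 0.0): partial matches do not count
```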
### Main Experiment Results
Table 1 shows the main experiment results of our model and the baseline models. In addition to the overall fashion tuple prediction performance ("Overall"), we also report the performance of the occasion, fashion item category, and appearance prediction for a more comprehensive comparison. We have the following observations:
First of all, error propagation is the main problem in the pipeline-based methods. The inaccurate prediction of the person boxes can lead to the fashion item information prediction errors, affecting both the category, appearance and overall performance. Secondly, for pipeline-based models, there remains a gap between the precision and recall rate in most tasks. For example, the precision rate of HDF is 77.1 while the recall rate is 52.7 in the category prediction subtask. The gap mainly comes from the inaccurate fashion item box extraction, where many fashion items are not extracted or misextracted, thus leading to the low recall rate in the model result. In addition, some models take the dependencies between different fashion knowledge items into account (e.g. HDF), thus achieving better performance than those not (e.g. FashionNet). Moreover, it can also be observed that pre-trained models (e.g. ViLBERT) have a better performance than the non-pretrained models (e.g. DARN), showing the effectiveness of large pre-trained models in our task.
Our model achieves the best overall performance among all the methods. Oscar outperforms our method in the occasion prediction subtask, where each post contains only one occasion label belonging to one of the six categories and which therefore does not require a strong generation ability. However, Oscar has difficulty in generating the more complex fashion item information, achieving only 5.5 on the overall accuracy score. In addition, our model gets further improved by transforming the original tuple-like fashion knowledge into natural language sentences. Compared with directly generating the fashion knowledge tuples (i.e. VL-Bart), the overall F1 score improves from 27.1 to 35.4. The result proves that generating the natural language caption helps the generation model capture the dependencies between different fashion knowledge aspects, thus resulting in a better prediction.
To further study the role of different modalities in this task, we remove the post text and the image object tags respectively. Without post texts, the overall F1 score drops from 35.4 to 30.7. This result verifies that the post text contains rich fashion knowledge information with respect to where the post is located, who is in the post, and what the person is wearing. Besides, the image tags also play a vital role in our task, improving the overall performance from 31.4 to 35.4. The reason is that some tags (e.g. woman, dress) can be aligned to the corresponding image regions and provide hints for the fashion knowledge such as the person gender and fashion item categories. The results verify that essential information is contained in different modalities of the post, which can be effectively captured by our model.

\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{**Occasion**} & \multicolumn{4}{c|}{**Category**} & \multicolumn{4}{c|}{**Appearance**} & \multicolumn{4}{c}{**Overall**} \\ & Acc & Pre & Rec & F1 & Acc & Pre & Rec & F1 & Acc & Pre & Rec & F1 & Acc & Pre & Rec & F1 \\ \hline DARN (Kumar et al., 2017) & 44.1 & 40.2 & 42.6 & 42.4 & 25.1 & 73.2 & 47.1 & 57.4 & 10.3 & 64.2 & 40.8 & 50.0 & 9.1 & 23.5 & 14.4 & 17.9 \\ FashionNet (Kumar et al., 2017) & 43.2 & 40.5 & 43.6 & 42.0 & 26.3 & 72.8 & 46.3 & 56.6 & 10.6 & 62.9 & 41.2 & 49.8 & 8.7 & 22.4 & 14.5 & 17.6 \\ HDF (Zhu et al., 2018) & 50.3 & 47.4 & 43.7 & 45.5 & 29.4 & 77.1 & 52.7 & 62.8 & 14.3 & 68.8 & 44.0 & 53.7 & 12.1 & 27.9 & 17.6 & 21.6 \\ ViLBERT (Zhu et al., 2018) & 59.6 & 50.3 & 58.4 & 54.1 & 32.5 & 80.1 & 53.6 & 64.2 & 15.2 & 71.3 & 52.9 & 60.7 & 12.5 & 28.7 & 20.4 & 23.5 \\ Oscar (Kumar et al., 2017) & 75.6 & 75.2 & 76.0 & 75.5 & 7.7 & 21.1 & 33.7 & 26.0 & 7.1 & 20.1 & 23.2 & 21.5 & 5.5 & 15.2 & 17.4 & 16.2 \\ VL-Bart (Chen et al., 2017) & 75.2 & 69.1 & 74.9 & 71.4 & 30.8 & 80.9 & 48.6 & 60.7 & 17.8 & 52.9 & 31.6 & 39.6 & 15.4 & 33.6 & 21.9 & 27.1 \\ \hline Ours w/o text & 69.2 & 64.6 & 70.1 & 68.8 & 32.7 & 78.5 & 64.2 & 70.6 & 20.4 & 71.8 & 57.7 & 64.0 & 15.4 & 33.6 & 28.2 & 30.7 \\ Ours w/o img tags & 72.8 & 68.4 & 73.0 & 70.6 & 30.8 & 75.6 & 64.2 & 69.4 & 18.1 & 70.2 & 57.0 & 62.9 & 16.0 & 35.1 & 28.4 & 31.4 \\ Ours & **74.7** & **69.2** & **75.4** & **71.1** & **36.4** & **81.8** & **67.9** & **74.2** & **22.2** & **73.9** & **60.7** & **66.5** & **20.2** & **39.1** & **32.3** & **35.4** \\ \hline \hline \end{tabular}
\end{table}
Table 1. The experimental results of our model compared with the baseline methods, as well as the ablated results where text and image tags are removed from our model.
### Ablation Study
To evaluate the effect of different auxiliary tasks, we report the performance of our model with several variants. As shown in Table 2, we remove one or two auxiliary tasks at a time and report the corresponding accuracy and F1 scores. To further analyze the effects of those tasks on the generation results at the semantic level, BLEU and METEOR scores are also presented.
It can be noted that introducing auxiliary tasks improves the performance compared to directly fine-tuning the model on our dataset, as they enhance the model with fashion-related knowledge. Among the three tasks, ITM is the most beneficial to the performance improvement, improving the BLEU\({}_{1}\) and BLEU\({}_{2}\) scores by 2.13 and 1.54 percent. The reason is that the captions of different images are constructed by the same transformation method and share a similar structure, so recognizing the right caption among the negative image-caption pairs helps the model understand the fashion knowledge elements better. Compared with the other tasks, removing SRC (which corresponds to "+ITM+VQA" in the table) has the least influence on the F1 score. The reason is that when masking the caption, some less important tokens, which appear with high frequency, are masked with an equal probability to tokens containing rich fashion knowledge. For example, in the caption sentence "The woman wears a black upper", the token "The" has the same probability of being masked as the token "upper".
### Extensive Analysis
#### 4.6.1. Performance under Different Person and Fashion Item Numbers
We analyze the performance of our model compared with the baseline models under different person and fashion item numbers, and plot the performance change in Figure 5. We can see that although the F1 score decreases for every method as the number of persons and fashion items grows, our model shows a greater advantage when it comes to the multi-person or multi-fashion-item setting. Specifically, when the number of persons and fashion items is small, both the classification-based models and our model achieve a reasonable performance. However, as the case becomes more complicated, which means more persons are included in the image, traditional models often fail to extract all the fashion knowledge from the post. The performance gap between our method and the baseline HDF model reaches its largest when there are 3 persons and 4 fashion items in the post image. Such failure, on the one hand, may result from the error propagation of the pipeline-based methods, which means the inaccurate extraction of person boxes may give rise to the wrong prediction of all the fashion items associated with that person. On the other hand, compared with our model, most traditional models fail to capture the relationship between different fashion knowledge elements. For example, the occasion "wedding" can be related to a young woman wearing a lacy white dress and a young man wearing a black suit. Our model captures such correlation by generating a caption where the occasion and person attributes are generated first, which provides some prior hints for the upcoming fashion item knowledge generation.
#### 4.6.2. Generative vs. Discriminative Methods
As can be observed from Table 1, generative methods generally achieve better performance than the classification-type methods. To further investigate such a phenomenon, we break down the testing dataset into three groups, namely the common, rare, and unseen sets. Specifically, we define the testing fashion tuples appearing more than 5 times in the training set as the common set, and those contained in the training set but appearing less than 5 times as the rare set. For fashion knowledge tuples that never appear in the training set, we denote them as the unseen set. Table 3 shows the recall score of the three groups, which is the likelihood of the corresponding tuples being correctly predicted by the model.
As shown in the table, our model improves upon the discriminative baselines across all the tuple categories. This improvement is more significant on the rare data, where the recall score improves by 15.01 percent compared with ViLBERT. The result demonstrates the effectiveness of generative models in the FKE task, showing that when it comes to unfamiliar cases, generative models learn to describe the fashion items using the given knowledge compared to discriminative methods. What's more, by generating a natural language caption, the recall rate improves by 7.32 and 3.46 percent in rare and unseen cases, which proves that the interplay between image and text can be better captured compared with generating fashion knowledge tuples directly.

\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Setting & BLEU\({}_{1}\) & BLEU\({}_{2}\) & METEOR & Acc & F1 \\ \hline Base & 69.77 & 64.51 & 38.17 & 13.76 & 30.43 \\ \hline +SRC & 71.20 & 65.81 & 38.69 & 15.08 & 31.79 \\ +ITM & 71.90 & 66.05 & 38.44 & 15.31 & 32.04 \\ +VQA & 71.79 & 65.03 & 38.81 & 15.10 & 31.87 \\ +ITM+VQA & 72.49 & 66.68 & 38.90 & 18.23 & 34.01 \\ +SRC+VQA & 72.81 & 67.04 & 38.79 & 17.97 & 32.56 \\ +SRC+ITM & 73.46 & 67.61 & 38.86 & 18.54 & 33.87 \\ \hline Ours & 75.40 & 69.29 & 39.80 & 20.22 & 35.43 \\ \hline \end{tabular}
\end{table}
Table 2. Performance comparison regarding different auxiliary tasks, where base denotes directly fine-tuning on our dataset without post-training.

Figure 5. Performance comparison with respect to different person and fashion item numbers.
#### 4.6.3. Caption Construction Analysis
Our proposed sentence construction method transforms the original fashion knowledge tuples to a natural language caption for the sequence-to-sequence mapping. To verify the effectiveness of such design, we also perform experiments based on different caption construction strategies and report the accuracy in Table 4. We use three different caption construction rules in the experiment. Some rules are designed to combine the fashion knowledge tuples in a less compact way (e.g. Rule 1 and 2). For example, we use one sentence to describe each fashion knowledge tuple respectively. We also design some rules where different fashion knowledge aspects are separated (e.g. Rule 3), where we follow the order of occasion first, person next, fashion item last when constructing the caption.
To better illustrate these rules, we use one example. With the input of the three fashion knowledge tuples (daily, P1, male, kid, upper, black), (daily, P1, male, kid, pants, white), and (daily, P2, female, old, dress, blue), their outputs are as follows (a code sketch of our rule is given after the list):
* Rule 1 _The first male kid wears a black upper in daily. The first male kid wears a white pants in daily. The second female old wears a blue dress in daily._
* Rule 2 _The first male kid wears a black upper and a white pants in daily. The second female old wears a blue dress in daily._
* Rule 3 _The occasion is daily. The person is a male kid and a female old. The first person wears a black upper and a white pants. The second person wears a blue dress._
* Ours _The occasion is daily. The first male kid wears a black upper and a white pants. The second female old wears a blue dress._
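The sketch below reproduces our construction rule on the example above; the tuple layout (occasion, person id, gender, age, category, appearance) and the helper names are assumptions inferred from the examples, not a verbatim excerpt of our implementation.

```python
# A minimal sketch of our caption construction rule ("Ours").
ORDINALS = {"P1": "first", "P2": "second", "P3": "third"}

def build_caption(tuples):
    occasion = tuples[0][0]              # the occasion is shared by all tuples
    items_by_person = {}                 # group fashion items per person
    for occ, pid, gender, age, category, appearance in tuples:
        items_by_person.setdefault((pid, gender, age), []).append(
            f"a {appearance} {category}")
    sentences = [f"The occasion is {occasion}."]
    for (pid, gender, age), items in items_by_person.items():
        sentences.append(
            f"The {ORDINALS[pid]} {gender} {age} wears {' and '.join(items)}.")
    return " ".join(sentences)

print(build_caption([
    ("daily", "P1", "male", "kid", "upper", "black"),
    ("daily", "P1", "male", "kid", "pants", "white"),
    ("daily", "P2", "female", "old", "dress", "blue"),
]))
# The occasion is daily. The first male kid wears a black upper and
# a white pants. The second female old wears a blue dress.
```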
According to the results, our caption construction method achieves the best performance in all aspects. Rules 1 and 2 both put the occasion information at the end of each sentence; however, we find that this may negatively influence the occasion prediction. Compared with Rule 3, where the person and fashion item information is separated, our method has a more compact form and better captures the interplay between different aspects, improving the overall accuracy from 18.2 to 20.2.
### Case Study
We use some real cases to compare the performance of our model with the HDF model more vividly. As shown in Figure 6, there are more errors in the HDF extraction results than in ours. For example, the appearance of the upper in the first case is misclassified as grey. In addition, our model captures the interplay between the image and text information better: in the second case, the post text "_lacy dress_" corresponds to the dress in the image, and the hashtag indicates that the occasion should be vacation. What's more, our model provides more comprehensive results. For example, in the third case, the HDF model fails to extract the less obvious earring information in the image and also ignores the pants the man wears. Regarding the appearance of the fashion items, our model also outputs better descriptions than the HDF model: in the third case, our model describes the dress of the woman as "_lacy white_", while the HDF model only classifies it as "_white_".
## 5. Conclusion
We investigate the social-media-based FKE task. Given a social media post consisting of an image and text, we aim to elicit the occasion, person attributes, and fashion item information from the post. Specifically, we formulate this task as a captioning problem and transform the fashion knowledge tuples into a natural language caption. We also design several auxiliary tasks before captioning to warm up the model with task-specific knowledge. Since no existing dataset can be directly adapted to our task, we contribute a large-scale dataset with manual annotation. Extensive experiments demonstrate the effectiveness of our model.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline & Occ. & Cat. & App. & Overall \\ \hline
**Rule 1** & 72.5 & 36.0 & 21.8 & 18.0 \\
**Rule 2** & 73.6 & 36.1 & 22.0 & 19.1 \\
**Rule 3** & 74.2 & 35.8 & 22.1 & 18.2 \\ \hline
**Ours** & 74.7 & 36.4 & 22.2 & 20.2 \\ \hline \end{tabular}
\end{table}
Table 4. Analysis of different caption construction strategies.
Figure 6. Real-case results of our model and HDF. Different colors represent different persons' information. The underline in a tuple denotes an error in the prediction.
\begin{table}
\begin{tabular}{c c c c c} \hline Method & Common & Rare & Unseen & Overall \\ \hline
**Discriminative** & & & & \\ HDF & 18.32 & 6.07 & 1.79 & 17.62 \\ ViLBERT & 22.31 & 6.21 & 1.97 & 20.41 \\ \hline
**Generative** & & & & \\ VL-Bart & 26.21 & 13.90 & 3.51 & 21.85 \\ Ours & 34.63 & 21.22 & 6.97 & 32.28 \\ \hline \end{tabular}
\end{table}
Table 3. Recall rate of generative and discriminative methods on different test categories. |
2309.14301 | Existence of Eigenvalues for Anisotropic and Fractional Anisotropic
Problems via Ljusternik-Schnirelmann Theory | In this work, our interest lies in proving the existence of critical values
of the following Rayleigh-type quotients $$Q_{\mathbf p}(u) = \frac{\|\nabla
u\|_{\mathbf p}}{\|u\|_{\mathbf p}},\quad\text{and}\quad Q_{\mathbf s,\mathbf
p}(u) = \frac{[u]_{\mathbf s,\mathbf p}}{\|u\|_{\mathbf p}}, $$ where $\mathbf
p = (p_1,\dots,p_n)$, $\mathbf s=(s_1,\dots,s_n)$ and $$ \|\nabla u\|_{\mathbf
p} = \sum_{i=1}^n \|u_{x_i}\|_{p_i} $$ is an anisotropic Sobolev norm,
$[u]_{\mathbf s,\mathbf p}$ is a fractional version of the same anisotropic
norm, and $\|u\|_{\mathbf p}$ is an anisotropic Lebesgue norm.
Using the Ljusternik-Schnirelmann theory, we prove the existence of a
sequence of critical values and we also find an associated Euler-Lagrange
equation for critical points. Additionally, we analyze the connection between
the fractional critical values and its local counterparts. | I. Ceresa Dussel, J. Fernandez Bonder | 2023-09-25T17:16:47Z | http://arxiv.org/abs/2309.14301v1 | # Existence of eigenvalues for anisotropic and fractional anisotropic problems via
###### Abstract.
In this work, our interest lies in proving the existence of critical values of the following Rayleigh-type quotients
\[\mathcal{Q}_{\mathbf{p}}(u)=\frac{\|\nabla u\|_{\mathbf{p}}}{\|u\|_{\mathbf{ p}}},\quad\text{and}\quad\mathcal{Q}_{\mathbf{s},\mathbf{p}}(u)=\frac{[u]_{ \mathbf{s},\mathbf{p}}}{\|u\|_{\mathbf{p}}},\]
where \(\mathbf{p}=(p_{1},\ldots,p_{n})\), \(\mathbf{s}=(s_{1},\ldots,s_{n})\) and
\[\|\nabla u\|_{\mathbf{p}}=\sum_{i=1}^{n}\|u_{x_{i}}\|_{p_{i}}\]
is an anisotropic Sobolev norm, \([u]_{\mathbf{s},\mathbf{p}}\) is a fractional version of the same anisotropic norm, and
\[\|u\|_{\mathbf{p}}=\left(\int_{\mathbb{R}}\left(\ldots\left(\int_{\mathbb{R}}| u|^{p_{1}}dx_{1}\right)^{\frac{p_{2}}{p_{1}}}\,dx_{2}\ldots\right)^{p_{n}/p_{n-1}} dx_{n}\right)^{1/p_{n}}\]
is an anisotropic Lebesgue norm.
Using the Ljusternik-Schnirelmann theory, we prove the existence of a sequence of critical values and we also find an associated Euler-Lagrange equation for critical points. Additionally, we analyze the connection between the fractional critical values and its local counterparts.
Key words and phrases: Eigenvalues, mixed Lebesgue, anisotropic Sobolev spaces, Ljusternik-Schnirelmann.
The eigenvalue problem of the \(p\)-Laplacian, characterized by the nonlinearity introduced through power exponentiation, is both intriguing and demanding, as it entails solving the equation
\[-\Delta_{p}u=\lambda|u|^{p-2}u\]
Many authors have extensively explored this problem, as seen in [12, 17, 14, 2].
Even more, in recent decades, there has been a growing interest in fractional operators due to their applications in various natural sciences models [9, 16, 21]. One prominent exponent of this family of fractional operators is the fractional \(p\)-Laplacian, defined as
\[(-\Delta_{p})^{s}u(x)=\mathrm{p.v.}(1-s)K_{n,p}\int_{\mathbb{R}^{n}}\frac{|u(x)- u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+sp}}\,dy,\]
where \(K_{n,p}\) is a constant that depends only on \(n\) and \(p\).
The eigenvalue problem associated with this fractional operator,
\[(-\Delta_{p})^{s}u(x)=\lambda|u|^{p-2}u,\]
has also been studied extensively by several authors [5, 10, 22, 15].
The aim of this paper is to introduce an anisotropic feature to these eigenvalue problems. This choice is motivated by the substantial attention dedicated to investigating this phenomenon in signal processing and diffusion studies [8, 24]. For those not familiar with the term, anisotropy can be described as the characteristic of displaying directional dependence, where various attributes or qualities manifest differently in distinct directions. This stands in opposition to the isotropic nature of the Laplacian, \(p-\)Laplacian, and fractional \(p-\)Laplacian, where these properties remain uniform regardless of the direction.
On one hand, a strategy to address anisotropy involves emphasizing the integrability of individual partial derivatives of a function \(u\) by employing the sum of standard \(L^{p}\) norms,
\[\left\|\nabla u\right\|_{\mathbf{p}}=\sum_{i=1}^{n}\left\|u_{x_{i}}\right\|_{p _{i}},\]
see [19, 20, 23]. Hence, we naturally arrive at the following anisotropic pseudo-Laplace operator
\[-\widetilde{\Delta}_{\mathbf{p}}u:=-\operatorname{div}\left(\sum_{i=1}^{n}|u_{ x_{i}}|^{p_{i}-2}u_{x_{i}}\right)\]
On the other hand, Benedek & Panzone [3] present the anisotropic \(L^{\mathbf{p}}\) (\(\mathbf{p}=(p_{1},\ldots,p_{n})\)) space with a special norm to address the anisotropy of a function \(u\). The mixed Lebesgue space is constructed by considering different exponents for each coordinate in the norm
\[\|u\|_{\mathbf{p}}=\left(\int_{\mathbb{R}}\left(\ldots\left(\int_{\mathbb{R}} |u|^{p_{1}}dx_{1}\right)^{\frac{p_{2}}{p_{1}}}\,dx_{2}\ldots\right)^{p_{n}/p_{ n-1}}dx_{n}\right)^{1/p_{n}}.\]
By considering different exponents for each coordinate, the mixed Lebesgue norm accounts for the anisotropy of the function \(u\). It allows for a more flexible and nuanced characterization of the integrability and decay properties across different coordinates.
By combining this two perspective we can state the following eigenvalue problem
\[-\widetilde{\Delta}_{\mathbf{p}}u=\lambda\mathcal{F}_{\mathbf{p}}(u),\]
where \(\mathcal{F}_{\mathbf{p}}\) is a suitable functional related to \(\|u\|_{\mathbf{p}}\). See (3.3).
Unfortunately, this problem is hindered by its lack of homogeneity. It is important to observe that if \(v\) is an eigenfunction associated with \(\lambda\), then \(tv\) may fail to be an eigenfunction associated with \(\lambda\).
Note that a crucial approach to solving the Laplacian, \(p-\)Laplacian and fractional \(p-\)Laplacian eigenvalue problems involves finding the critical points of the Rayleigh quotient associated with each one, namely
\[\mathcal{Q}_{2}(u)=\frac{\|\nabla u\|_{2}^{2}}{\|u\|_{2}^{2}},\quad\mathcal{Q }_{p}(u)=\frac{\|\nabla u\|_{p}^{p}}{\|u\|_{p}^{p}}\quad\text{and }\mathcal{Q}_{sp}(u)=\frac{[u]_{sp}^{p}}{\|u\|_{p}^{p}}.\]
Therefore, it is recommended to explore the following homogeneous Rayleigh quotient
\[\mathcal{Q}_{\mathbf{p}}(u)=\frac{\|\nabla u\|_{\mathbf{p}}}{\|u\|_{\mathbf{ p}}}. \tag{1.1}\]
As we will observe in Section 3, the associated Euler-Lagrange equation of \(\mathcal{Q}_{\mathbf{p}}(u)\) is the following homogeneous eigenvalue problem,
\[-\mathcal{L}_{\mathbf{p}}u=-\operatorname{div}\left(\sum_{i=1}^{n}\left|\frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}\right|^{p_{i}-2}\frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}\right)=\lambda\mathcal{F}_{\mathbf{p}}(u). \tag{1.2}\]
In [7] fractional anisotropy is introduced through the utilization of integrability parameters \(\mathbf{p}=(p_{1},\dots,p_{n})\), \(1<p_{i}<\infty\), and fractional parameters \(\mathbf{s}=(s_{1},\dots,s_{n})\), \(0<s_{i}<1\), and the subsequent norm,
\[[u]_{\mathbf{s},\mathbf{p}}=\sum_{i=1}^{n}\left(\int_{\mathbb{R}^{n}}\int_{ \mathbb{R}}(1-s_{i})\frac{|u(x+he_{i})-u(x)|^{p_{i}}}{|h|^{1+s_{i}p_{i}}}\,dh \,dx\right)^{1/p_{i}}.\]
As in the non-fractional case, combining this perspective with the Benedek & Panzone's norm we arrive to the following eigenvalue problem
\[(-\widetilde{\Delta}_{\mathbf{p}})^{\mathbf{s}}u(x)=\lambda\mathcal{F}_{ \mathbf{p}}(u),\]
where \((-\widetilde{\Delta}_{\mathbf{p}})^{\mathbf{s}}\) is the _fractional pseudo p-Laplacian_ operator defined as
\[(-\widetilde{\Delta}_{\mathbf{p}})^{\mathbf{s}}u(x)=\mathrm{p.v.}\sum_{i=1}^{n}\int_{\mathbb{R}}\frac{(1-s_{i})}{p_{i}}\frac{|u(x+he_{i})-u(x)|^{p_{i}-2}(u(x+he_{i})-u(x))}{|h|^{1+s_{i}p_{i}}}\,dh.\]
Again, this is not an homogeneous problem, therefore we study the homogeneous Rayleigh quotient
\[\mathcal{Q}_{\mathbf{s},\mathbf{p}}(u)=\frac{[u]_{\mathbf{s},\mathbf{p}}}{\|u \|_{\mathbf{p}}}. \tag{1.3}\]
As we will see in Section 3 the Euler-Lagrange equation is
\[-\mathcal{L}_{\mathbf{s},\mathbf{p}}u=\lambda\mathcal{F}_{\mathbf{p}}(u), \tag{1.4}\]
where \(\mathcal{L}_{\mathbf{s},\mathbf{p}}\) is the fractional version of \(\mathcal{L}_{\mathbf{p}}\).
To address the problem of finding critical points of (1.1) and (1.3) and solving the eigenvalue problems (1.2) and (1.4), the Ljusternik-Schnirelmann theory serves as a powerful framework for exploring critical point theory and the existence of critical points of functionals, as we will see in Section 5. See [18].
The rest of the paper is organized as follows: in Section 2, we study anisotropic Sobolev spaces and fractional anisotropic Sobolev spaces in more detail and discuss some of their properties, such as the Poincare inequality and a Rellich-Kondrashov type theorem. In Section 3, we derive the Euler-Lagrange equations associated with the corresponding Rayleigh-type quotients. In Section 4 we study the asymptotic behavior of the sequence of eigenvalues as \(\mathbf{s}\to(1,\dots,1)\), and finally in Section 5 we use the Ljusternik-Schnirelmann theory to prove the existence of eigenvalues.
## 2. Mixed, anisotropic and fractional spaces
In this section, our objective is to establish the definition of the mixed Lebesgue space, as introduced in [3]. This space will serve as a fundamental building block for our analysis. Furthermore, we will define a suitable anisotropic Sobolev space, \(W_{0}^{1,\mathbf{p}}(\Omega)\), and a fractional anisotropic Sobolev space \(W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\).
### Mixed space
Let \(\mathbf{p}=(p_{1},p_{2},\dots,p_{n})\) with \(1<p_{i}<\infty\) for \(i=1,\dots,n\) be integrability parameters. Without loss of generality, we can assume that
\[1<p_{1}\leq p_{2}\leq\dots\leq p_{n}<\infty. \tag{2.1}\]
We define the _mixed Lebesgue space_ as
\[L^{\mathbf{p}}(\mathbb{R}^{n})=\{u\text{ measurable such that }\|u\|_{\mathbf{p}}<\infty\},\]
where
\[\|u\|_{\mathbf{p}}=\left(\int_{\mathbb{R}}\left(\dots\left(\int_{\mathbb{R}}| u|^{p_{1}}dx_{1}\right)^{\frac{p_{2}}{p_{1}}}\,dx_{2}\dots\right)^{p_{n}/p_{n-1}} dx_{n}\right)^{1/p_{n}}.\]
Furthermore, given \(\Omega\) an open bounded subset of \(\mathbb{R}^{n}\), we define
\[L^{\mathbf{p}}(\Omega)=\{u\in L^{\mathbf{p}}(\mathbb{R}^{n})\text{ such that }u=0\text{ in }\mathbb{R}^{n}\setminus\Omega\}.\]
Observe that \(L^{\mathbf{p}}(\Omega)\) is a closed subspace of \(L^{\mathbf{p}}(\mathbb{R}^{n})\). The space \(L^{\mathbf{p}}(\Omega)\) turns out to be a reflexive Banach space and its properties were studied in [3, 1].
_Remark 2.1_.: The \(\|.\|_{\mathbf{p}}\) norm can be defined by recurrence as
\[I_{1}(u) =\left(\int_{\mathbb{R}}|u|^{p_{1}}\,dx_{1}\right)^{1/p_{1}}\] \[I_{2}(u) =\left(\int_{\mathbb{R}}I_{1}(u)^{p_{2}}\,dx_{2}\right)^{1/p_{2}}\] \[\vdots\] \[I_{j}(u) =\left(\int_{\mathbb{R}}I_{j-1}(u)^{p_{j}}\,dx_{j}\right)^{1/p_{j}}\] \[\vdots\] \[I_{n}(u) =\left(\int_{\mathbb{R}}I_{n-1}(u)^{p_{n}}\,dx_{n}\right)^{1/p_{n}}\] \[I(u) =I_{n}(u)=\|u\|_{\mathbf{p}}\]
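A minimal numerical sketch of this recurrence for \(n=2\), assuming a uniform grid and an illustrative test function, reads as follows.

```python
import numpy as np

# Mixed Lebesgue norm by the recurrence of Remark 2.1: integrate x1 first.
p = (1.5, 3.0)                                   # p1 <= p2, as in (2.1)
x1 = np.linspace(-4.0, 4.0, 801); dx1 = x1[1] - x1[0]
x2 = np.linspace(-4.0, 4.0, 801); dx2 = x2[1] - x2[0]
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
u = np.exp(-(X1**2 + 2.0 * X2**2))               # illustrative test function

I1 = (np.sum(np.abs(u) ** p[0], axis=0) * dx1) ** (1.0 / p[0])  # function of x2
I2 = (np.sum(I1 ** p[1]) * dx2) ** (1.0 / p[1])                 # I2 = ||u||_p
```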
_Remark 2.2_.: Observe that, given \(u\in L^{\mathbf{p}}(\mathbb{R}^{n})\), \(I_{j}(u)\) is a function of \((x_{j+1},\ldots,x_{n})\).
Moreover, for almost every \((y_{j+2},\ldots,y_{n})\in\mathbb{R}^{n-j-1}\), the function \(I_{j}(u)\) (as a function of \(x_{j+1}\)) belongs to \(L^{p_{j+1}}(\mathbb{R})\).
Also, observe that if \(\{u_{k}\}_{k\in\mathbb{N}}\subset L^{\mathbf{p}}(\mathbb{R}^{n})\) is such that \(u_{k}\to u\in L^{\mathbf{p}}(\mathbb{R}^{n})\) as \(k\to\infty\), then \(I_{j}(u_{k})(\cdot,y_{j+2},\ldots,y_{n})\to I_{j}(u)(\cdot,y_{j+2},\ldots,y_{n})\) in \(L^{p_{j+1}}(\mathbb{R})\) for a.e. \((y_{j+2},\ldots,y_{n})\in\mathbb{R}^{n-j-1}\).
### Anisotropic Sobolev spaces
Our interest lies in functions whose partial derivatives have different integrability. With this fact in mind, given \(\mathbf{p}=(p_{1},\ldots,p_{n})\) with \(1<p_{i}<\infty\), the anisotropic Sobolev space is defined as follows:
\[W^{1,\mathbf{p}}(\mathbb{R}^{n}):=\left\{u\in L^{\mathbf{p}}(\mathbb{R}^{n}) \colon u_{x_{i}}\in L^{p_{i}}(\mathbb{R}^{n}),\ i=1,\ldots,n\right\},\]
equipped with the following norm
\[\|u\|_{1,\mathbf{p}}=\|u\|_{\mathbf{p}}+\sum_{i=1}^{n}\left\|u_{x_{i}}\right\| _{p_{i}}=\|u\|_{\mathbf{p}}+\|\nabla u\|_{\mathbf{p}}.\]
It is easy to prove that \(W^{1,\mathbf{p}}(\mathbb{R}^{n})\) is a separable, reflexive Banach space.
Now, given a bounded domain \(\Omega\subset\mathbb{R}^{n}\) we define \(W^{1,\mathbf{p}}_{0}(\Omega)\) as the closure of \(C^{\infty}_{c}(\Omega)\) in \(W^{1,\mathbf{p}}(\mathbb{R}^{n})\).
### Fractional space
Next we present the fractional anisotropic Sobolev space.
First, given \(i=1,\ldots,n\), \(s\in(0,1]\) and \(p\in(1,\infty)\), for any \(u\colon\mathbb{R}^{n}\to\mathbb{R}\) measurable we define the quantity
\[[u]_{s,p,i}=\left(\int_{\mathbb{R}^{n}}\int_{\mathbb{R}}\frac{|u(x+he_{i})-u(x )|^{p}}{|h|^{1+sp}}\,dhdx\right)^{\frac{1}{p}},\]
where \(e_{i}\) is the \(i^{\text{th}}-\)canonical vector base in \(\mathbb{R}^{n}\).
Now, given \(\mathbf{p}=(p_{1},\ldots,p_{n})\) and \(\mathbf{s}=(s_{1},\ldots,s_{n})\) with \(1<p_{i}<\infty\) and \(0<s_{i}<1\), for \(i=1,\ldots,n\), we define the anisotropic fractional order Sobolev space as
\[W^{\mathbf{s},\mathbf{p}}(\mathbb{R}^{n}):=\left\{u\in L^{\mathbf{p}}( \mathbb{R}^{n})\colon[u]_{s_{i},p_{i},i}<\infty,\ i=1,\ldots,n\right\}.\]
This space has a natural norm defined as
\[\|u\|_{\mathbf{s},\mathbf{p}}:=\|u\|_{\mathbf{p}}+\sum_{i=1}^{n}[u]_{s_{i},p_ {i},i}=\|u\|_{\mathbf{p}}+[u]_{\mathbf{s},\mathbf{p}}.\]
It is easy to see that \(W^{\mathbf{s},\mathbf{p}}(\mathbb{R}^{n})\) is a separable and reflexive Banach space. See [6, 7].
As before, given \(\Omega\subset\mathbb{R}^{n}\) a bounded domain, we define \(W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\) as the closure of \(C^{\infty}_{c}(\Omega)\) in \(W^{\mathbf{s},\mathbf{p}}(\mathbb{R}^{n})\).
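The directional seminorms \([u]_{s,p,i}\) can also be approximated numerically. The following sketch, for \(n=1\) (so \(i=1\)) with illustrative grids and a smooth, rapidly decaying test function, is only a rough quadrature check, not part of the theory.

```python
import numpy as np
from scipy.integrate import trapezoid

# Quadrature sketch of [u]_{s,p,i} in dimension n = 1.
s_i, p_i = 0.5, 2.0
x = np.linspace(-6.0, 6.0, 601)
h = np.sort(np.concatenate([-np.logspace(-4, 0.7, 300),
                            np.logspace(-4, 0.7, 300)]))
u = lambda t: np.exp(-t ** 2)

# |u(x+h) - u(x)|^p / |h|^{1+sp}; integrable near h = 0 for smooth u
diff = np.abs(u(x[None, :] + h[:, None]) - u(x[None, :])) ** p_i
integrand = diff / np.abs(h[:, None]) ** (1.0 + s_i * p_i)
inner = trapezoid(integrand, h, axis=0)          # integrate in h
seminorm = trapezoid(inner, x) ** (1.0 / p_i)    # then in x, p-th root
```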
The following two theorems represent analogs to the classical Poincare inequality and the Rellich-Kondrashov type theorem within the context of \(L^{\mathbf{p}}(\Omega)\) and anisotropic fractional Sobolev space.
**Proposition 2.3** (Poincare).: _Given \(\Omega\) an open bounded subset on \(\mathbb{R}^{n}\), there exists constants \(C_{1}(\Omega,\mathbf{p},n)>0\) and \(C_{2}(\Omega,\mathbf{p},\mathbf{s},n)>0\) such that for every \(u\) in \(W^{1,\mathbf{p}}_{0}(\Omega)\), the following inequality holds_
\[\|u\|_{\mathbf{p}}\leq C_{1}\|\nabla u\|_{\mathbf{p}}, \tag{2.2}\]
_and for every \(u\) in \(W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\), the following inequality holds:_
\[\|u\|_{\mathbf{p}}\leq C_{2}[u]_{\mathbf{s},\mathbf{p}}. \tag{2.3}\]
Proof.: Let \(u\) be a function in \(W^{1,\mathbf{p}}_{0}(\Omega)\). On one hand, observe that since \(p_{i}\leq p_{n}\) for every \(i=1,\ldots,n-1\) and \(|\Omega|<\infty\), it follows by Holder's inequality that \(L^{p_{n}}(\Omega)\) is continuously embedded in \(L^{\mathbf{p}}(\Omega)\), that is, there exists a positive constant \(C>0\) such that
\[\|u\|_{\mathbf{p}}\leq C\|u\|_{L^{p_{n}}}.\]
On the other hand, the Poincare inequality for functions in \(W^{1,p_{n}}_{0}(\Omega)\) gives
\[\|u\|_{L^{p_{n}}(\Omega)}\leq C\|u_{x_{n}}\|_{L^{p_{n}}(\Omega)}\leq C\sum_{i=1 }^{n}\|u_{x_{i}}\|_{L^{p_{i}}(\Omega)}\]
Therefore, by combining these results, we obtain (2.2).
For the second inequality, let \(u\) be a function in \(W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\); we can assume that there exists \(R>0\) such that \(\operatorname{supp}u\subset Q_{R}=[-R,R]^{n}\). Hence,
\[[u]^{p_{1}}_{s_{1},p_{1},1} =\int_{\mathbb{R}^{n}}\int_{\mathbb{R}}\frac{|u(x+he_{1})-u(x)|^{ p_{1}}}{|h|^{1+s_{1}p_{1}}}\,dh\,dx\] \[\geq\int_{Q_{R}}\int_{\mathbb{R}}\frac{|u(x+he_{1})-u(x)|^{p_{1} }}{|h|^{1+s_{1}p_{1}}}\,dh\,dx\] \[\geq\int_{Q^{\prime}_{R}}\int_{|x_{1}|\leq R}\int_{|x_{1}+he_{1}| \geq R}\frac{|u(x)|^{p_{1}}}{|h|^{1+s_{1}p_{1}}}\,dh\,dx_{1}\,dx^{\prime}\] \[\geq\int_{Q^{\prime}_{R}}\int_{|x_{1}|\leq R}|u(x)|^{p_{1}}\int_{ |h|\geq 2R}\frac{1}{|h|^{1+s_{1}p_{1}}}\,dh\,dx_{1}\,dx^{\prime}\] \[\geq C\|u\|_{p_{1}}^{p_{1}},\]
where \(Q^{\prime}_{R}=[-R,R]^{n-1}\) and \(dx^{\prime}=dx_{2}\cdots dx_{n}\).
Arguing in a similar fashion we conclude that there exists \(C_{i}(\Omega,s_{i},p_{i})\) such that
\[C_{i}[u]_{s_{i},p_{i},i}\geq\|u\|_{p_{i}}.\]
Therefore taking \(K=\max_{i}\{C_{i}\}\) we have that
\[K\sum_{i=1}^{n}[u]_{s_{i},p_{i},i}\geq\sum_{i=1}^{n}\|u\|_{p_{i}}\geq\|u\|_{p_{n}}\geq C\|u\|_{\mathbf{p}}.\]
This fact concludes the proof of (2.3).
The following notation will be used. Given a vector \(\mathbf{q}=(q_{1},\ldots,q_{n})\) with \(q_{i}>0\) for \(i=1,\ldots,n\), we denote by \(\bar{\mathbf{q}}\) the _harmonic mean_ of the vector \(\mathbf{q}\), i.e.
\[\bar{\mathbf{q}}:=\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{q_{i}}\right)^{-1}.\]
Next, given two vectors \(\mathbf{q}=(q_{1},\ldots,q_{n})\) and \(\mathbf{r}=(r_{1},\ldots,r_{n})\) with \(q_{i},r_{i}>0\) for \(i=1,\ldots,n\) we define the _product_\(\mathbf{qr}\) as
\[\mathbf{qr}=(q_{1}r_{1},\ldots,q_{n}r_{n}),\]
the coordinate by coordinate multiplication.
**Proposition 2.4** (Rellich-Kondrashov).: _Let \(\mathbf{p}=(p_{1},\ldots,p_{n})\) with \(1<p_{i}<\infty\), \(i=1,\ldots,n\) and be such that_
\[\bar{\mathbf{p}}\leq n. \tag{2.4}\]
_Define the critical exponent \(\mathbf{p}^{*}\) as_
\[\mathbf{p}^{*}:=\frac{n\bar{\mathbf{p}}}{n-\bar{\mathbf{p}}}. \tag{2.5}\]
_Then \(W_{0}^{1,\mathbf{p}}(\Omega)\subset L^{q}(\Omega)\) for all \(1\leq q\leq\mathbf{p}^{*}\). Moreover, \(W_{0}^{1,\mathbf{p}}(\Omega)\subset\subset L^{q}(\Omega)\) if \(1\leq q<\mathbf{p}^{*}\). In particular, \(W_{0}^{1,\mathbf{p}}(\Omega)\subset\subset L^{\mathbf{p}}(\Omega)\)._
_Now, let \(\mathbf{s}=(s_{1},\ldots,s_{n})\) with \(0<s_{i}<1\), for \(i=1,\ldots,n\) and \(\mathbf{p}\) be as before. Assume that_
\[\overline{\mathbf{sp}}<n \tag{2.6}\]
_and define the fractional critical exponent_
\[\mathbf{p}_{\mathbf{s}}^{*}=\frac{n\frac{\overline{\mathbf{sp}}}{\overline{ \mathbf{s}}}}{n-\overline{\mathbf{sp}}}. \tag{2.7}\]
_Moreover, assume that_
\[p_{n}<\mathbf{p}_{\mathbf{s}}^{*}. \tag{2.8}\]
_Then \(W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\subset L^{q}(\Omega)\) for all \(1\leq q\leq\mathbf{p}_{\mathbf{s}}^{*}\). Moreover, \(W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\subset\subset L^{q}(\Omega)\) for \(1\leq q<\mathbf{p}_{\mathbf{s}}^{*}\). In particular, \(W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\subset\subset L^{\mathbf{p}}(\Omega)\)._
Proof.: The proof that \(W_{0}^{1,\mathbf{p}}(\Omega)\subset\subset L^{q}(\Omega)\) for all \(1<q<\mathbf{p}^{*}\) can be found in [23, 11]. To prove that \(W_{0}^{1,\mathbf{p}}(\Omega)\subset\subset L^{\mathbf{p}}(\Omega)\), observe that since \(p_{n}<\mathbf{p}^{*}\), the embedding \(L^{p_{n}}(\Omega)\subset L^{\mathbf{p}}(\Omega)\) is continuous.
The proof of the fractional case follows immediately from [7, Theorem 2.1] and the previous idea.
Without loss of generality, we can always assume that (2.1) is satisfied.
In the rest of the paper, it will always be assumed that conditions (2.4), (2.6) and (2.8) hold.
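For a concrete feel of these conditions, the following sketch evaluates (2.4), (2.6) and (2.8) for an illustrative choice of \(\mathbf{p}\) and \(\mathbf{s}\); the numerical values are assumptions for demonstration only.

```python
import numpy as np

# Check of conditions (2.4), (2.6) and (2.8) for sample parameters.
p = np.array([2.0, 3.0, 4.0])           # p1 <= p2 <= p3, so n = 3
s = np.array([0.6, 0.7, 0.8])
n = len(p)

p_bar  = n / np.sum(1.0 / p)            # harmonic mean of p
sp_bar = n / np.sum(1.0 / (s * p))      # harmonic mean of the product sp
s_bar  = n / np.sum(1.0 / s)            # harmonic mean of s

p_star  = n * p_bar / (n - p_bar)                 # critical exponent (2.5)
ps_star = n * (sp_bar / s_bar) / (n - sp_bar)     # fractional exponent (2.7)

print(p_bar <= n, sp_bar < n, p[-1] < ps_star)    # (2.4), (2.6), (2.8)
```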
## 3. The Euler-Lagrange equation
### Non-fractional case
In this subsection we will establish the Euler-Lagrange equation associated to the Rayleigh-type quotient \(\mathcal{Q}_{\mathbf{p}}\) defined in (1.1). In fact, following ideas from [14] (see also [13]), we show that the EL equation turns out to be the following
\[\begin{cases}-\mathcal{L}_{\mathbf{p}}u=\lambda\mathcal{F}_{\mathbf{p}}(u)&\text{in }\Omega\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases} \tag{3.1}\]
where
\[-\mathcal{L}_{\mathbf{p}}u:=-\operatorname{div}\left(\sum_{i=1}^{n}\left| \frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}\right|^{p_{i}-2}\frac{u_{x_{i}}}{\|u_ {x_{i}}\|_{p_{i}}}\right) \tag{3.2}\]
and
\[\mathcal{F}_{\mathbf{p}}(u)=\prod_{i=1}^{n}I_{i}(u)^{p_{i+1}-p_{i}}|u|^{p_{1}- 2}u, \tag{3.3}\]
where \(p_{n+1}=1\).
**Definition 3.1**.: Let \(u\) be a function in \(W^{1,\mathbf{p}}_{0}(\Omega)\), then \(u\) is a weak solution of (3.1) if and only if \(u\) verifies
\[\int_{\Omega}\sum_{i=1}^{n}\left|\frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}\right| ^{p_{i}-2}\frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}v_{x_{i}}\,dx=\lambda\int_{ \Omega}\mathcal{F}_{\mathbf{p}}(u)v\,dx,\]
for all \(v\in W^{1,\mathbf{p}}_{0}(\Omega)\).
We will need the following lemma regarding the behavior of the functional \(\mathcal{F}_{\mathbf{p}}\).
**Lemma 3.2**.: _Let \(\mathbf{p}=(p_{1},\ldots,p_{n})\) be such that \(1<p_{i}<\infty\) and let \(\mathbf{p}^{\prime}=(p^{\prime}_{1},\ldots,p^{\prime}_{n})\), where \(p^{\prime}_{i}=p_{i}/(p_{i}-1)\) is the conjugate exponent of \(p_{i}\). Let \(\mathcal{F}_{\mathbf{p}}\) be the functional defined in (3.3)._
_Then \(\mathcal{F}_{\mathbf{p}}\colon L^{\mathbf{p}}(\mathbb{R}^{n})\to L^{\mathbf{p} ^{\prime}}(\mathbb{R}^{n})\) is continuous._
Proof.: To see that it is well defined, just observe that if \(u\in L^{\mathbf{p}}(\mathbb{R}^{n})\), then
\[\left(\int_{\mathbb{R}}|\mathcal{F}_{\mathbf{p}}(u)|^{p^{\prime}_{1}}\,dx_{1}\right)^{1/p^{\prime}_{1}} =\prod_{i=1}^{n}I_{i}(u)^{p_{i+1}-p_{i}}\left(\int_{\mathbb{R}}|u|^{(p_{1}-1)p^{\prime}_{1}}\,dx_{1}\right)^{1/p^{\prime}_{1}}\] \[=\prod_{i=1}^{n}I_{i}(u)^{p_{i+1}-p_{i}}I_{1}(u)^{p_{1}/p^{\prime}_{1}}\] \[=\prod_{i=2}^{n}I_{i}(u)^{p_{i+1}-p_{i}}I_{1}(u)^{p_{2}-1}.\]
Iterating this procedure, one easily concludes that
\[\|\mathcal{F}_{\mathbf{p}}(u)\|_{\mathbf{p}^{\prime}}=1.\]
In order to see the continuity of \(\mathcal{F}_{\mathbf{p}}\), let \(\{u_{k}\}_{k\in\mathbb{N}}\subset L^{\mathbf{p}}(\mathbb{R}^{n})\) be such that \(u_{k}\to u\) in \(L^{\mathbf{p}}(\mathbb{R}^{n})\). Then, define
\[\tilde{I}_{1}(k):=\left(\int_{\mathbb{R}}|\mathcal{F}_{\mathbf{p }}(u_{k})-\mathcal{F}_{\mathbf{p}}(u)|^{p^{\prime}_{1}}\,dx_{1}\right)^{1/p^{ \prime}_{1}}\] \[\tilde{I}_{i+1}(k):=\left(\int_{\mathbb{R}}\tilde{I}_{i}(k)^{p^{ \prime}_{i+1}}\,dx_{i+1}\right)^{1/p^{\prime}_{i+1}},\qquad i=1,\ldots,n-1.\]
Observe that \(\|\mathcal{F}_{\mathbf{p}}(u_{k})-\mathcal{F}_{\mathbf{p}}(u)\|_{\mathbf{p}^ {\prime}}=\tilde{I}_{n}(k)\), so it is enough to show that, up to a subsequence,
\[\tilde{I}_{i}(k)\to 0\text{ as }k\to\infty\quad\text{a.e. }(x_{i+1},\ldots,x_{n}), \qquad i=1,\ldots,n\] and \[\text{a.e. }(x_{i+2},\ldots,x_{n}),\ \tilde{I}_{i}(k)(x_{i+1})\leq h_{i}(x_{i+1}), \quad\text{with }h\in L^{p^{\prime}_{i+1}}(\mathbb{R}). \tag{3.4}\]
In fact, let us see (3.4) for \(i=1\) and the rest will follow by induction.
By Remark 2.2, it is easy to see that \(\mathcal{F}_{\mathbf{p}}(u_{k})\to\mathcal{F}_{\mathbf{p}}(u)\) a.e. So in order to see that \(\tilde{I}_{1}(k)\to 0\) for a.e. \(x^{\prime}=(x_{2},\ldots,x_{n})\), we need to find an integrable majorant for \(|\mathcal{F}_{\mathbf{p}}(u_{k})-\mathcal{F}_{\mathbf{p}}(u)|^{p^{\prime}_{1}}\) for a.e. \(x^{\prime}\in\mathbb{R}^{n-1}\).
Hence,
\[|\mathcal{F}_{\mathbf{p}}(u_{k})-\mathcal{F}_{\mathbf{p}}(u)|^{p^{\prime}_{1}} \leq C\left(\prod_{i=1}^{n}I_{i}(u_{k})^{(p_{i+1}-p_{i})p^{\prime}_{1}}|u_{k}| ^{p_{1}}+\prod_{i=1}^{n}I_{i}(u)^{(p_{i+1}-p_{i})p^{\prime}_{1}}|u|^{p_{1}}\right)\]
Since, by Remark 2.2, \(I_{i}(u_{k})(\cdot,x_{i+2},\ldots,x_{n})\to I_{i}(u)(\cdot,x_{i+2},\ldots,x_{n})\) in \(L^{p_{i+1}}(\mathbb{R})\) for a.e. \((x_{i+2},\ldots,x_{n})\), using [4, Theorem 4.9], there exists \(h_{i}=h_{i}(\cdot,x_{i+2},\ldots,x_{n})\in L^{p_{i+1}}(\mathbb{R})\) such that
\[I_{i}(u_{k})(x_{i+1},x_{i+2},\ldots,x_{n})\leq h_{i}(x_{i+1},x_{i+2},\ldots,x_{n}).\]
Moreover, since \(u_{k}(\cdot,x^{\prime})\to u(\cdot,x^{\prime})\) in \(L^{p_{1}}(\mathbb{R})\) we obtain the existence of \(h_{0}(x)\), \(h_{0}(\cdot,x^{\prime})\in L^{p_{1}}(\mathbb{R})\) such that
\[|u_{k}(x)|\leq h_{0}(x).\]
Hence
\[|\mathcal{F}_{\mathbf{p}}(u_{k})-\mathcal{F}_{\mathbf{p}}(u)|^{p^{\prime}_{1}}\leq C\left(\prod_{i=1}^{n}h_{i}^{(p_{i+1}-p_{i})p^{\prime}_{1}}h_{0}^{p_{1}}+\prod_{i=1}^{n}I_{i}(u)^{(p_{i+1}-p_{i})p^{\prime}_{1}}|u|^{p_{1}}\right)=:\Phi(x_{1},x^{\prime}).\]
Since \(\Phi(\cdot,x^{\prime})\in L^{1}(\mathbb{R})\) for a.e. \(x^{\prime}\in\mathbb{R}^{n-1}\) we obtain that \(\tilde{I}_{1}(k)\to 0\).
The proof of (3.4) now follows by induction and the details are left to the reader.
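As a numerical sanity check of the norm identity in this proof, the dual mixed norm of \(\mathcal{F}_{\mathbf{p}}(u)\) can be computed directly for \(n=2\); the grid and test function below are illustrative assumptions, and standard quadrature replaces the exact integrals.

```python
import numpy as np

# Verify that the dual mixed norm of F_p(u) is 1, independently of u.
p = (1.5, 3.0)
pp = (p[0] / (p[0] - 1.0), p[1] / (p[1] - 1.0))     # conjugate exponents
x1 = np.linspace(-4.0, 4.0, 801); dx1 = x1[1] - x1[0]
x2 = np.linspace(-4.0, 4.0, 801); dx2 = x2[1] - x2[0]
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
u = np.exp(-(X1**2 + 2.0 * X2**2)) + 0.5 * np.exp(-((X1 - 1.0)**2 + X2**2))

I1 = (np.sum(np.abs(u) ** p[0], axis=0) * dx1) ** (1.0 / p[0])  # function of x2
I2 = (np.sum(I1 ** p[1]) * dx2) ** (1.0 / p[1])                 # ||u||_p

# F_p(u) from (3.3) with p_3 = 1; I1 broadcasts along the x1 axis.
Fp = I1[None, :] ** (p[1] - p[0]) * I2 ** (1.0 - p[1]) \
     * np.abs(u) ** (p[0] - 2.0) * u

J1 = (np.sum(np.abs(Fp) ** pp[0], axis=0) * dx1) ** (1.0 / pp[0])
J2 = (np.sum(J1 ** pp[1]) * dx2) ** (1.0 / pp[1])
print(J2)   # close to 1 up to discretization error
```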
With the definition of weak solution at hand, we can state the main result of this section.
**Theorem 3.3**.: _Let \(u\) be a function in \(W^{1,\mathbf{p}}_{0}(\Omega).\) Then \(u\) is a critical point of (1.1) if and only if \(u\) is a weak solution of (3.1)._
To prove this theorem we will use the following notation
\[H(u)=\|\nabla u\|_{\mathbf{p}}\quad\text{and}\quad I(u)=\|u\|_{\mathbf{p}} \tag{3.5}\]
and we need to establish some lemmas that will facilitate the proof.
First, we have to show that the functionals \(H\) and \(I\) are Frechet differentiable.
**Lemma 3.4**.: \(I\colon L^{\mathbf{p}}(\Omega)\to\mathbb{R}\) _and \(H\colon W^{1,\mathbf{p}}_{0}(\Omega)\to\mathbb{R}\) are Gateaux differentiable away from zero and its derivatives are given by_
\[\frac{d}{dt}I(u+tv)|_{t=0}=\langle I^{\prime}(u),v\rangle=\int_{\mathbb{R}^{n }}\prod_{i=1}^{n}I_{i}(u)^{p_{i+1}-p_{i}}|u|^{p_{1}-2}uv\,dx, \tag{3.6}\]
_where \(p_{n+1}=1\) and_
\[\frac{d}{dt}H(u+tv)|_{t=0}=\langle H^{\prime}(u),v\rangle=\sum_{i=1}^{n}\int_{ \mathbb{R}^{n}}\left|\frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}\right|^{p_{i}-2} \frac{u_{x_{i}}}{\|u_{x_{i}}\|_{p_{i}}}v_{x_{i}}\,dx. \tag{3.7}\]
_That is, \(H^{\prime}=\mathcal{L}_{\mathbf{p}}\) and \(I^{\prime}=\mathcal{F}_{\mathbf{p}}.\)_
Proof.: To prove (3.6), let \(u,v\in L^{\mathbf{p}}(\Omega)\) and \(t\in\mathbb{R}\). Then, recalling Remark 2.1, we compute
\[\left.\frac{d}{dt}I_{1}(u+tv)\right|_{t=0}=I_{1}(u)^{1-p_{1}}\int_{\mathbb{R}}| u|^{p_{1}-2}uv\,dx_{1}.\]
Next,
\[\left.\frac{d}{dt}I_{2}(u+tv)\right|_{t=0} =I_{2}(u)^{1-p_{2}}\int_{\mathbb{R}}I_{1}(u+tv)^{p_{2}-1}\left. \frac{d}{dt}I_{1}(u+tv)\right|_{t=0}\,dx_{2}\] \[=\int_{\mathbb{R}^{2}}I_{2}(u)^{1-p_{2}}I_{1}(u)^{p_{2}-p_{1}}|u|^ {p_{1}-2}uv\,dx_{1}dx_{2}\]
Therefore, by induction, we arrive at
\[\left.\frac{d}{dt}I_{n}(u+tv)\right|_{t=0}=\int_{\mathbb{R}^{n}}\prod_{i=1}^{n}I_ {i}(u)^{p_{i+1}-p_{i}}|u|^{p_{1}-2}uv\,dx,\]
where \(p_{n+1}=1\) and the proof of (3.6) follows observing that \(I_{n}=I\).
The proof of (3.7) is standard and the details are left to the reader.
**Theorem 3.5**.: _The functionals \(I\) and \(H\) given in (3.5) are Frechet differentiable._
Proof.: The proof follows easily from Lemma 3.4, observing that \(I^{\prime}=\mathcal{F}_{\mathbf{p}}\) and \(H^{\prime}=\mathcal{L}_{\mathbf{p}}\) are continuous. In fact, the continuity of \(\mathcal{F}_{\mathbf{p}}\) is proved in Lemma 3.2, and the continuity of \(\mathcal{L}_{\mathbf{p}}\) is an easy exercise.
At this point we can give a rigorous proof of Theorem 3.3.
Proof of Theorem 3.3.: Recall that, since
\[\mathcal{Q}_{\mathbf{p}}(u)=\frac{H(u)}{I(u)},\]
using Lemma 3.4, one obtains that, if \(u\neq 0\),
\[\langle\mathcal{Q}^{\prime}_{\mathbf{p}}(u),v\rangle=\frac{1}{I(u)}\left( \langle H^{\prime}(u),v\rangle-\mathcal{Q}_{\mathbf{p}}(u)\langle I^{\prime}( u),v\rangle\right).\]
Hence, \(u\in W^{1,\mathbf{p}}_{0}(\Omega)\) is a critical point of \(\mathcal{Q}_{\mathbf{p}}\) if and only if
\[\langle H^{\prime}(u),v\rangle=\mathcal{Q}_{\mathbf{p}}(u)\langle I^{\prime}( u),v\rangle.\]
But this is the same as saying that \(u\) is a weak solution to (3.1) with \(\lambda=\mathcal{Q}_{\mathbf{p}}(u)\).
### The fractional case
Now, we will analyze the fractional case. So, we consider the Rayleigh-type quotient \(\mathcal{Q}_{\mathbf{s},\mathbf{p}}\) defined in (1.3), and look for the Euler-Lagrange equation associated to it.
The main result of this section is to show that the E-L equation is given by
\[\begin{cases}-\mathcal{L}_{\mathbf{s},\mathbf{p}}u=\lambda\mathcal{F}_{ \mathbf{p}}(u)&\text{in }\Omega\\ u=0&\text{in }\mathbb{R}^{n}\setminus\Omega,\end{cases} \tag{3.8}\]
where \(\mathcal{F}_{\mathbf{p}}\) is given by (3.3) and
\[\mathcal{L}_{\mathbf{s},\mathbf{p}}u=\text{p.v.}\sum_{i=1}^{n}\int_{\mathbb{R }^{n}}\int_{\mathbb{R}}\left|\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}} \right|^{p_{i}-2}\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}\frac{dh}{|h| ^{1+s_{i}}}\,dx,\]
\[=\lim_{\varepsilon\to 0}\sum_{i=1}^{n}\int_{\mathbb{R}^{n}}\int_{|h|> \varepsilon}\left|\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}\right|^{p_{ i}-2}\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}\frac{dh}{|h|^{1+s_{i}}}\,dx\]
and
\[D_{h}^{s,i}u(x)=\frac{u(x+he_{i})-u(x)}{|h|^{s}}.\]
It is shown in [6] that the operator \(\mathcal{L}_{\mathbf{s},\mathbf{p}}\) is the fractional version of \(\mathcal{L}_{\mathbf{p}}\). This operator \(\mathcal{L}_{\mathbf{s},\mathbf{p}}\) has to be understood in the weak sense, i.e. given \(u,v\in W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\),
\[\langle-\mathcal{L}_{\mathbf{s},\mathbf{p}}u,v\rangle=\sum_{i=1}^{n}\int_{ \mathbb{R}^{n}}\int_{\mathbb{R}}\left|\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{ i},i}}\right|^{p_{i}-2}\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}D_{h}^{s_{i},i }v(x)\frac{dh}{|h|}\,dx.\]
Again, we have to give a definition of weak solution.
**Definition 3.6**.: Let \(u\) be a function in \(W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\), then \(u\) is a weak solution of (3.8) if \(u\) verifies
\[\sum_{i=1}^{n}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}}\left|\frac{D_{h}^{s_{i},i} u(x)}{[u]_{s_{i},p_{i},i}}\right|^{p_{i}-2}\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}D_{h}^{s_{i},i}v(x)\frac{dh}{|h|}\,dx=\lambda\int_{\mathbb{R}^{n}}\mathcal{ F}_{\mathbf{p}}(u)v\,dx,\]
for all \(v\in W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\).
Again, we introduce the notation
\[H_{\mathbf{s}}(u)=[u]_{\mathbf{s},\mathbf{p}} \tag{3.9}\]
and in an analogous form as in Lemma 3.4 we have the following lemma, whose proof is left to the reader.
**Lemma 3.7**.: _The functional \(H_{\mathbf{s}}\colon W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\to\mathbb{R}\) is Gateaux differentiable away from zero and its derivative is given by_
\[\langle H^{\prime}_{\mathbf{s}}(u),v\rangle=\sum_{i=1}^{n}\int_{\mathbb{R}^{n }}\int_{\mathbb{R}}\left|\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}\right| ^{p_{i}-2}\frac{D_{h}^{s_{i},i}u(x)}{[u]_{s_{i},p_{i},i}}D_{h}^{s_{i},i}v(x) \frac{dh}{|h|}\,dx.\]
_That is \(H^{\prime}_{\mathbf{s}}=\mathcal{L}_{\mathbf{s},\mathbf{p}}\)._
Finally, we can state the Euler-Lagrange theorem for the fractional case; its proof is analogous to the non-fractional case and is therefore omitted.
**Theorem 3.8**.: _Let \(u\) be a function in \(W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\). Then \(u\) is a critical point of (1.3) if and only if \(u\) is a weak solution of the Euler-Lagrange equation (3.8)._
## 4. General properties of eigenvalues
After having derived the Euler-Lagrange equation for each case, it becomes evident that these are eigenvalue problems, for which we can explore some properties.
We say that \(\lambda\in\mathbb{R}\) is an eigenvalue of \(\mathcal{L}_{\mathbf{p}}\) under Dirichlet boundary conditions in the domain \(\Omega\) if problem (3.1) admits a nontrivial weak solution \(u\in W^{1,\mathbf{p}}_{0}(\Omega)\). Then \(u\) is called an eigenfunction of \(\mathcal{L}_{\mathbf{p}}\) corresponding to \(\lambda\). We will denote by \(\Sigma^{\mathbf{p}}\) the collection of these eigenvalues.
Similarly, we say that \(\lambda\in\mathbb{R}\) is an eigenvalue of \(\mathcal{L}_{\mathbf{s},\mathbf{p}}\) under Dirichlet boundary conditions in the domain \(\Omega\) if problem (3.8) admits a nontrivial weak solution \(u\in W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\). Then \(u\) is called an eigenfunction of \(\mathcal{L}_{\mathbf{s},\mathbf{p}}\) corresponding to \(\lambda\). We will denote by \(\Sigma^{\mathbf{s},\mathbf{p}}\) the collection of these eigenvalues.
We begin this section by collecting some simple properties for the eigensets \(\Sigma^{\mathbf{p}}\) and \(\Sigma^{\mathbf{s},\mathbf{p}}\).
**Proposition 4.1**.: \(\Sigma^{\mathbf{p}},\Sigma^{\mathbf{s},\mathbf{p}}\subset(0,\infty)\) _are closed sets._
Proof.: As we have done throughout the article, we will only provide the proof for the non-fractional case, leaving the fractional case for the reader.
First, let \(\lambda\in\Sigma^{\mathbf{p}}\) and \(u\in W^{1,\mathbf{p}}_{0}(\Omega)\) be an associated eigenfunction. So, if we take the same \(u\) as a test function in the weak formulation of (3.1), we obtain that
\[\|\nabla u\|_{\mathbf{p}}=\lambda\|u\|_{\mathbf{p}}.\]
Therefore, \(\lambda>0\).
Next, let us see that \(\Sigma^{\mathbf{p}}\) is closed. To this end, let \(\{\lambda_{k}\}\) be a sequence of eigenvalues such that \(\lambda_{k}\to\lambda\in\mathbb{R}\) as \(k\to\infty\) and \(\{u_{k}\}_{k\in\mathbb{N}}\subset W^{1,\mathbf{p}}_{0}(\Omega)\) be a corresponding sequence of \(L^{\mathbf{p}}\)-normalized eigenfunctions. Observe that
\[\|\nabla u_{k}\|_{\mathbf{p}}=\lambda_{k}\|u_{k}\|_{\mathbf{p}}=\lambda_{k},\]
from where it follows that \(\{u_{k}\}_{k\in\mathbb{N}}\) is a bounded sequence in \(W^{1,\mathbf{p}}_{0}(\Omega)\). Hence, passing to a subsequence, we get that
\[u_{k}\rightharpoonup u\text{ in }W^{1,\mathbf{p}}_{0}(\Omega)\text{ and }u_{k}\to u\text{ in }L^{\mathbf{p}}(\Omega).\]
From Lemma 3.2, \(\mathcal{F}_{\mathbf{p}}\) is continuous and we get that
\[\langle\mathcal{L}_{\mathbf{p}}u_{k},v\rangle=\lambda_{k}\langle\mathcal{F}_{\mathbf{p}}(u_{k}),v\rangle\to\lambda\langle\mathcal{F}_{\mathbf{p}}(u),v\rangle.\]
As each \(u_{k}\) is an \(L^{\mathbf{p}}\)-normalized eigenfunction, then
\[\langle\mathcal{L}_{\mathbf{p}}u_{k},u_{k}\rangle=\lambda_{k}\to\lambda.\]
Now, we make use of Lemma 5.3, which is proved in the next section, to obtain that \(u_{k}\to u\) in \(W^{1,\mathbf{p}}_{0}(\Omega)\).
Hence, we can pass to the limit \(k\to\infty\) in the weak formulation
\[\langle\mathcal{L}_{\mathbf{p}}u_{k},v\rangle=\lambda_{k}\langle\mathcal{F}_{\mathbf{p}}(u_{k}),v\rangle\]
and get that
\[\langle\mathcal{L}_{\mathbf{p}}u,v\rangle=\lambda\langle\mathcal{F}_{\mathbf{ p}}(u),v\rangle\]
for any \(v\in W^{1,\mathbf{p}}_{0}(\Omega)\). That is, \(u\) is an eigenfunction associated to \(\lambda\), and the proof is complete.
Now, we arrive at the main point of this section, that is the asymptotic behavior of the eigenset \(\Sigma^{\mathbf{s},\mathbf{p}}\) as the fractional parameters \(\mathbf{s}=(s_{1},\ldots,s_{n})\) verify that \(s_{i}\to 1\), \(i=1,\ldots,n\).
To this end, we will make use of the following result which is a particular case of [6, Theorem 3.3].
**Proposition 4.2**.: _[_6_, Theorem 3.3]_ _Let \(\{\mathbf{s}_{k}\}_{k\in\mathbb{N}}\) be a sequence of fractional parameters \(\mathbf{s}_{k}\to(1,\ldots,1)\) as \(k\to\infty\). Let \(\mathbf{p}=(p_{1},\ldots,p_{n})\) be such that \(1<p_{i}<\infty\) for each \(i=1,\ldots,n\) and for each \(k\in\mathbb{N}\) let \(u_{k}\in W^{\mathbf{s}_{k},\mathbf{p}}_{0}(\Omega)\) be such that_
\[\sup_{k\in\mathbb{N}}\|u_{k}\|_{\mathbf{s}_{k},\mathbf{p}}<\infty. \tag{4.1}\]
_Then, there exists a function \(u\in W^{1,\mathbf{p}}_{0}(\Omega)\) and a subsequence \(\{u_{k_{j}}\}_{j\in\mathbb{N}}\subset\{u_{k}\}_{k\in\mathbb{N}}\) such that_
\[u_{k_{j}}\to u\quad\text{in }L^{\mathbf{p}}(\Omega)\quad\text{and}\quad\| \nabla u\|_{\mathbf{p}}\leq\liminf_{k\to\infty}\left[u_{k}\right]_{\mathbf{s }_{k},\mathbf{p}}.\]
Moreover, we also need to borrow a Lemma from [6].
**Lemma 4.3**.: _[_6_, Lemma 5.7]_ _Let \(\{\mathbf{s}_{k}\}_{k\in\mathbb{N}}\) be a sequence of fractional parameters satisfying that \(\mathbf{s}_{k}\to(1,\ldots,1)\) as \(k\to\infty\) and let \(v_{k}\in W^{\mathbf{s}_{k},\mathbf{p}}_{0}(\Omega)\). Assume that \(\{v_{k}\}_{k\in\mathbb{N}}\) satisfy (4.1), and let \(u\in W^{1,\mathbf{p}}_{0}(\Omega)\) be fixed._
_Without loss of generality, we can assume that there exists \(v\in W^{1,\mathbf{p}}_{0}(\Omega)\) such that \(v_{k}\to v\) in \(L^{\mathbf{p}}(\Omega)\) as \(k\to\infty\)._
_Then_
\[\langle\mathcal{L}^{\mathbf{s}_{k}}_{\mathbf{p}}u,v_{k}\rangle\to\langle \mathcal{L}_{\mathbf{p}}u,v\rangle\quad\text{as }k\to\infty.\]
Hence we can state and prove the following theorem.
**Theorem 4.4**.: _Let \(\{\mathbf{s}_{k}\}_{k\in\mathbb{N}}\) be a sequence of fractional parameters \(\mathbf{s}_{k}\to(1,\ldots,1)\) as \(k\to\infty\). Let \(\mathbf{p}=(p_{1},\ldots,p_{n})\) be such that \(1<p_{i}<\infty\) for each \(i=1,\ldots,n\). Let \(\lambda_{k}\in\Sigma^{\mathbf{s}_{k},\mathbf{p}}\) be an eigenvalue of (3.8) such that \(\lambda_{k}\to\lambda\) as \(k\to\infty\). Then \(\lambda\in\Sigma^{\mathbf{p}}\). Moreover, if \(u_{k}\in W_{0}^{\mathbf{s}_{k},\mathbf{p}}(\Omega)\) is a normalized eigenfunction associated to \(\lambda_{k}\), then any \(L^{\mathbf{p}}(\Omega)\)-accumulation point \(u\) of the sequence \(\{u_{k}\}_{k\in\mathbb{N}}\) satisfies \(u\in W_{0}^{1,\mathbf{p}}(\Omega)\) and is an eigenfunction associated to \(\lambda\)._
Proof.: Let \(\{\mathbf{s}_{k}\}_{k\in\mathbb{N}}\) be a sequence of fractional parameters such that \(\mathbf{s}_{k}\to(1,\ldots,1)\) as \(k\to\infty\) and let \(\{\lambda_{\mathbf{s}_{k}}\}_{k\in\mathbb{N}}\) be a sequence of eigenvalues that converges to \(\lambda\) as \(k\to\infty\). For each \(\lambda_{\mathbf{s}_{k}}\) there is an eigenfunction \(u_{k}\), which we can assume to be \(L^{\mathbf{p}}\)-normalized.
Note that since \(\|u_{k}\|_{\mathbf{p}}=1\) and \(\lambda_{\mathbf{s}_{k}}\) is convergent, and therefore bounded, the sequence \(\{u_{k}\}_{k\in\mathbb{N}}\) satisfies (4.1). Therefore we can apply Proposition 4.2 and obtain a subsequence, still denoted \(\{u_{k}\}_{k\in\mathbb{N}}\), and a function \(u\in W_{0}^{1,\mathbf{p}}(\Omega)\) such that \(u_{k}\to u\) in \(L^{\mathbf{p}}(\Omega)\).
Now, using Lemma 4.3, the proof follows by using a classical monotonicity argument.
## 5. Existence of eigenvalues
In this section, we establish the existence of eigenvalues using the Ljusternik-Schnirelman theory. This is the main part of the article.
First we recall some abstract results from critical point theory that will be essential in our proof of the existence of eigenvalues.
Let \(X\) be a reflexive Banach space and maps \(\phi,\psi\in C^{1}(X,\mathbb{R})\). We will assume that \(\phi,\psi\) verify the assumptions (H1)-(H4) below:
* **(H1)** \(\phi,\psi\in C^{1}(X,\mathbb{R})\) are even maps with \(\phi(0)=\psi(0)=0\), and the level set \[\mathcal{M}=\{u\in X:\psi(u)=1\}\] is bounded.
* **(H2)** \(\phi^{\prime}\) is completely continuous. Moreover, for any \(u\in X\) it holds that \[\langle\phi^{\prime}(u),u\rangle=0\iff\phi(u)=0,\] where \(\langle\cdot,\cdot\rangle\) denotes the duality brackets for the pair \((X,X^{*})\).
* **(H3)** \(\psi^{\prime}\) is continuous, bounded and, as \(k\to\infty\), it holds that \[u_{k}\rightharpoonup u,\psi^{\prime}(u_{k})\rightharpoonup v\text{ and }\langle\psi^{\prime}(u_{k}),u_{k}\rangle\to\langle v,u\rangle\Rightarrow u_{k}\to u\text{ in }X.\]
* **(H4)** For every \(u\in X\setminus\{0\}\) it holds that \[\langle\psi^{\prime}(u),u\rangle>0,\ \lim_{t\to\infty}\psi(tu)=\infty\text{ and }\inf_{u\in\mathcal{M}}\langle\psi^{\prime}(u),u\rangle>0.\]
Now, for any \(n\in\mathbb{N}\), we define
\[\mathcal{K}_{n}=\{K\subset\mathcal{M}\colon K\text{ is symmetric, compact, with }\phi|_{K}>0\text{ and }\gamma(K)\geq n\},\]
where \(\gamma(K)\) is the Krasnoselskii genus of the set \(K\).
Finally, let
\[c_{n}=\begin{cases}\sup_{K\in\mathcal{K}_{n}}\min_{u\in K}\phi(u)&\text{if } \mathcal{K}_{n}\neq\emptyset\\ 0&\text{if }\mathcal{K}_{n}=\emptyset\end{cases}\]
The following general abstract result is proved in [25] (see also [18, Theorem 9.27]).
**Theorem 5.1**.: _Let \(X\) be a reflexive Banach space and \(\phi,\psi\in C^{1}(X,\mathbb{R})\). Assume that \(\phi,\psi\) satisfy (H1)-(H4). Then_
1. \(c_{1}<\infty\) _and_ \(c_{n}\to 0\) _as_ \(n\to\infty\)_._
2. _If_ \(c=c_{n}>0\)_, then we can find an element_ \(u\in\mathcal{M}\) _that is a solution of_ (5.1) \[\mu\psi^{\prime}(u)=\phi^{\prime}(u),\quad(\mu,u)\in\mathbb{R}\times\mathcal{ M},\] _for an eigenvalue_ \(\mu\neq 0\) _and such that_ \(\phi(u)=c\)_._
3. _More generally, if_ \(c=c_{n}=c_{n+k}>0\) _for some_ \(k\geq 0\)_, then the set of solutions_ \(u\in\mathcal{M}\) _of (_5.1_) such that_ \(\phi(u)=c\) _has genus_ \(\geq k+1\)_._
4. _If_ \(c_{n}>0\) _for all_ \(n\geq 1\)_, then there is a sequence_ \(\{(\mu_{n},u_{n})\}_{n\in\mathbb{N}}\) _of solutions of (_5.1_) with_ \(\phi(u_{n})=c_{n}\)_,_ \(\mu_{n}\neq 0\) _for all_ \(n\geq 1\)_, and_ \(\mu_{n}\to 0\) _as_ \(n\to\infty\)_._
5. _If we further require that_ \[\langle\phi^{\prime}(u),u\rangle=0\text{ if and only if }\phi(u)=0\text{ if and only if }u=0,\] _then,_ \(c_{n}>0\) _for all_ \(n\geq 1\)_, and there is a sequence_ \(\{(\mu_{n},u_{n})\}_{n\in\mathbb{N}}\) _of solutions of (_5.1_) such that_ \(\phi(u_{n})=c_{n}\)_,_ \(\mu_{n}\neq 0\)_,_ \(\mu_{n}\to 0\)_, and_ \(u_{n}\rightharpoonup 0\) _in_ \(X\) _as_ \(n\to\infty\)_._
Now we will apply Theorem 5.1 in the case where \(X=W_{0}^{1,\mathbf{p}}(\Omega)\) and \((\phi,\psi)=(I,H)\), and in the case where \(X=W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\) and \((\phi,\psi)=(I,H_{\mathbf{s}})\), where the operators \(I\), \(H\) and \(H_{\mathbf{s}}\) were introduced in (3.5) and (3.9).
For enhanced readability of the work, we will demonstrate the properties of \(H\) and \(H_{\mathbf{s}}\) through a series of lemmas.
**Lemma 5.2**.: _Let \(H\colon W_{0}^{1,\mathbf{p}}(\Omega)\to\mathbb{R}\) be defined in (3.5) and \(H_{\mathbf{s}}\colon W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\to\mathbb{R}\) be defined in (3.9)._
_Then \(H^{\prime}\colon W_{0}^{1,\mathbf{p}}(\Omega)\to W^{-1,\mathbf{p}^{\prime}}(\Omega)\) and \(H^{\prime}_{\mathbf{s}}\colon W_{0}^{\mathbf{s},\mathbf{p}}(\Omega)\to W^{- \mathbf{s},\mathbf{p}^{\prime}}(\Omega)\) are bounded and monotone._
Proof.: Let us first demonstrate that \(H^{\prime}\) is bounded. In fact
\[|\langle H^{\prime}(u),v\rangle| =\left|\sum_{i=1}^{n}\frac{1}{\|u_{x_{i}}\|_{p_{i}}^{p_{i}-1}} \int_{\mathbb{R}}|u_{x_{i}}|^{p_{i}-2}u_{x_{i}}v_{x_{i}}\,dx\right|\] \[\leq\sum_{i=1}^{n}\frac{1}{\|u_{x_{i}}\|_{p_{i}}^{p_{i}-1}}\int_{ \mathbb{R}}|u_{x_{i}}|^{p_{i}-1}|v_{x_{i}}|\,dx\] \[\leq\sum_{i=1}^{n}\frac{\|u_{x_{i}}\|_{p_{i}}^{p_{i}/p_{i}^{ \prime}}\|v_{x_{i}}\|_{p_{i}}}{\|u_{x_{i}}\|_{p_{i}}^{p_{i}-1}}\] \[\leq\sum_{i=1}^{n}\|v_{x_{i}}\|_{p_{i}}=\|\nabla v\|_{\mathbf{p}}.\]
Therefore, \(H^{\prime}\) is a bounded operator. Moreover, from this result we can obtain its monotonicity.
\[\langle H^{\prime}(u),v\rangle+\langle H^{\prime}(v),u\rangle\leq|\langle H^{ \prime}(u),v\rangle|+|\langle H^{\prime}(v),u\rangle|\leq\|\nabla v\|_{ \mathbf{p}}+\|\nabla u\|_{\mathbf{p}}.\]
Hence,
\[\langle H^{\prime}(u)-H^{\prime}(v),u-v\rangle =\langle H^{\prime}(u),u\rangle+\langle H^{\prime}(v),v\rangle-( \langle H^{\prime}(u),v\rangle+\langle H^{\prime}(v),u\rangle)\] \[=\|\nabla u\|_{\mathbf{p}}+\|\nabla v\|_{\mathbf{p}}-(\langle H^ {\prime}(u),v\rangle+\langle H^{\prime}(v),u\rangle)\geq 0.\]
The proof for \(H^{\prime}\) is complete.
The proof for \(H^{\prime}_{\mathbf{s}}\) is analogous and the details are left to the reader.
**Lemma 5.3**.: _The operators \(H\) and \(H_{\mathbf{s}}\) given in (3.5) and (3.9) respectively, verify hypothesis_ (H3)_._
_That is, \(H^{\prime}\colon W^{1,\mathbf{p}}_{0}(\Omega)\to W^{-1,\mathbf{p}^{\prime}}(\Omega)\) and \(H^{\prime}_{\mathbf{s}}\colon W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\to W^{-\mathbf{s},\mathbf{p}^{\prime}}(\Omega)\) are continuous and bounded operators, and moreover, as \(k\to\infty\), it holds that_
\[u_{k}\rightharpoonup u,\ H^{\prime}(u_{k})\rightharpoonup v\ \text{and}\ \langle H^{\prime}(u_{k}),u_{k}\rangle\to\langle v,u\rangle\Rightarrow u_{k}\to u\ \text{in}\ W^{1,\mathbf{p}}_{0}(\Omega) \tag{5.2}\]
\[u_{k}\rightharpoonup u,\ H^{\prime}_{\mathbf{s}}(u_{k})\rightharpoonup v\ \text{and}\ \langle H^{\prime}_{\mathbf{s}}(u_{k}),u_{k}\rangle\to\langle v,u\rangle\Rightarrow u_{k}\to u\ \text{in}\ W^{\mathbf{s},\mathbf{p}}_{0}(\Omega). \tag{5.3}\]
Proof.: In view of Lemma 5.2 it remains to see that \(H^{\prime}\) and \(H^{\prime}_{\mathbf{s}}\) are continuous and that verify (5.2) and (5.3) respectively.
First we claim that \(H^{\prime}\) is a continuous operator. In fact just observe that we can rewrite \(H^{\prime}\) as
\[H^{\prime}(u)=\sum_{i=1}^{n}\frac{J_{p_{i}}(u_{x_{i}})}{\|u_{x_{i}}\|_{p_{i}}^ {p_{i}-1}},\]
where \(J_{p}(u):=|u|^{p-2}u\).
Since \(J_{p}\colon L^{p}(\mathbb{R}^{n})\to L^{p^{\prime}}(\mathbb{R}^{n})\) is continuous, the claim follows.
To verify (5.2) let \(\{u_{k}\}_{k\in\mathbb{N}}\) be a sequence in \(W^{1,\mathbf{p}}_{0}(\Omega)\) such that \(u_{k}\rightharpoonup u\) in \(W^{1,\mathbf{p}}_{0}(\Omega)\), \(H^{\prime}(u_{k})\rightharpoonup v\) in \(W^{-1,\mathbf{p}^{\prime}}(\Omega)\) and \(\langle H^{\prime}(u_{k}),u_{k}\rangle\to\langle v,u\rangle\) then we need to show that \(u_{k}\to u\) in \(W^{1,\mathbf{p}}_{0}(\Omega)\).
Given \(w\in W^{1,\mathbf{p}}_{0}(\Omega)\) arbitrary, by the monotonicity of \(H^{\prime}\) (Lemma 5.2) we get that
\[0\leq\langle H^{\prime}(w)-H^{\prime}(u_{k}),w-u_{k}\rangle.\]
Taking the limit as \(k\to\infty\), we arrive at
\[0\leq\langle H^{\prime}(w),w-u\rangle-\langle v,w-u\rangle.\]
Now we can take \(w=u+tz\) with \(t>0\) and we find that
\[0\leq\langle H^{\prime}(u+tz)-v,tz\rangle\]
Dividing by \(t\) and taking the limit as \(t\to 0^{+}\), we get that for all \(z\in W^{1,\mathbf{p}}_{0}(\Omega)\),
\[0\leq\langle H^{\prime}(u)-v,z\rangle.\]
Therefore \(H^{\prime}(u)=v\). Moreover
\[\|\nabla u_{k}\|_{\mathbf{p}}=\langle H^{\prime}(u_{k}),u_{k}\rangle\to \langle v,u\rangle=\langle H^{\prime}(u),u\rangle=\|\nabla u\|_{\mathbf{p}}.\]
As the space \(W^{1,\mathbf{p}}_{0}(\Omega)\) is uniformly convex, weak convergence combined with convergence of the norms implies that \(u_{k}\to u\) strongly, as desired.
The proof for \(H^{\prime}_{\mathbf{s}}\) is analogous.
The upcoming theorem is the main result of this section, as it ensures the existence of eigenvalues.
**Theorem 5.4**.: _There exists a sequence \(\{u_{k}\}_{k\in\mathbb{N}}\subset W^{1,\mathbf{p}}_{0}(\Omega)\) of critical points of \(\mathcal{Q}_{\mathbf{p}}\) with associated critical values \(\{\lambda_{k}\}_{k\in\mathbb{N}}\subset\mathbb{R}\) such that \(\lambda_{k}\to\infty\) as \(k\to\infty\). Moreover, these critical values have the following variational characterization_
\[\lambda_{k}=\inf_{K\in\mathcal{K}_{k}}\sup_{u\in K}H(u) \tag{5.4}\]
_where, for any \(k\in\mathbb{N}\)_
\[\mathcal{K}_{k}=\{K\subset M\ \text{compact, symmetric with}\ H^{\prime}(u)>0\ \text{on}\ K\ \text{and}\ \gamma(K)\geq k\}\]
\[M=\{u\in W^{1,\mathbf{p}}_{0}(\Omega):I(u)=1\}\]
_and \(\gamma\) is the Krasnoselskii genus of \(K\)._
_In particular, \(u_{k}\) is a weak solution to (3.1) with eigenvalue \(\lambda_{k}\)._
Proof.: We must confirm that the functionals \(I\) and \(H\) satisfy the hypotheses of Theorem 5.1.
Note that conditions (H1) and (H3) are direct consequences of Lemmas 3.4 and 5.3, respectively. Condition (H4) follows directly from the definition of \(H^{\prime}(u)\).
In order to show that (H2) holds, just observe that if \(u_{k}\rightharpoonup u\) in \(W^{1,\mathbf{p}}_{0}(\Omega)\), by the compactness of the immersion \(W^{1,\mathbf{p}}_{0}(\Omega)\subset\subset L^{\mathbf{p}}(\Omega)\), it follows that \(u_{k}\to u\) in \(L^{\mathbf{p}}(\Omega)\) and using Lemma 3.2, we get that \(I^{\prime}(u_{k})\to I^{\prime}(u)\) in \(L^{\mathbf{p}^{\prime}}(\Omega)\subset W^{-1,\mathbf{p}^{\prime}}(\Omega)\).
Finally observe that \(\langle I^{\prime}(u),u\rangle=I(u)=\|u\|_{\mathbf{p}}\). Therefore each one is zero if and only if \(u=0\).
We then apply the Ljusternik-Schnirelman theory, Theorem 5.1, to the functionals \(I\) and \(H\) on the level set \(\mathcal{M}=\{u\in W^{1,\mathbf{p}}_{0}(\Omega)\colon H(u)=1\}\).
By Theorem 5.1 there exists a sequence of numbers \(\{\mu_{k}\}_{k\in\mathbb{N}}\searrow 0\) and a sequence of functions \(\{u_{k}\}_{k\in\mathbb{N}}\subset W^{1,\mathbf{p}}_{0}(\Omega)\), normalized such that \(H(u_{k})=1\), and
\[\mu_{k}\langle H^{\prime}(u_{k}),v\rangle=\langle I^{\prime}(u_{k}),v\rangle \quad\forall v\in W^{1,\mathbf{p}}_{0}(\Omega) \tag{5.5}\]
and \(I(u_{k})=c_{k}\) with
\[c_{k}=\sup_{K\subset\mathcal{K}_{k}}\min_{u\in K}I(u). \tag{5.6}\]
Using that \(\langle H^{\prime}(u),u\rangle=H(u)\) and \(\langle I^{\prime}(u),u\rangle=I(u)\), one immediately obtains that \(c_{k}=\mu_{k}\).
So, if we denote \(\lambda_{k}=\mu_{k}^{-1}\), using (5.5), we have that \(u_{k}\) is a weak solution to (3.1) with eigenvalue \(\lambda_{k}\), and from (5.6) one also obtains the validity of (5.4).
_Remark 5.5_.: The eigenvalues obtained in Theorem 5.4 are commonly called the Ljusternik-Schnirelman eigenvalues or simply the **LS-eigenvalues** and are denoted by \(\Sigma^{\mathbf{p}}_{LS}\).
Similarly, we state a Theorem for the fractional counterpart. The proof is a slight variation of Theorem 5.4 and is omitted.
**Theorem 5.6**.: _There exists a sequence \(\{u_{k}^{\mathbf{s}}\}_{k\in\mathbb{N}}\subset W^{\mathbf{s},\mathbf{p}}_{0}(\Omega)\) of critical points of \(\mathcal{Q}_{\mathbf{s},\mathbf{p}}\) with critical values \(\{\lambda_{k}^{\mathbf{s}}\}_{k\in\mathbb{N}}\subset\mathbb{R}\) such that \(\lambda_{k}^{\mathbf{s}}\to\infty\) as \(k\to\infty\). Moreover, these critical values have the following variational characterization_
\[\lambda_{k}^{\mathbf{s}}=\inf_{K\in\mathcal{K}_{k}^{\mathbf{s}}}\sup_{u\in K} H_{\mathbf{s}}(u) \tag{5.7}\]
_where, for any \(k\in\mathbb{N}\)_
\[\mathcal{K}_{k}^{\mathbf{s}}=\{K\subset M_{\mathbf{s}}\text{ compact, symmetric with }H_{\mathbf{s}}^{\prime}(u)>0\text{ on }K\text{ and }\gamma(K)\geq k\}\]
\[M_{\mathbf{s}}=\{u\in W^{\mathbf{s},\mathbf{p}}_{0}(\Omega):I(u)=1\}\]
_and \(\gamma\) is the Krasnoselskii genus of \(K\)._
These eigenvalues, related to the fractional problem, will be denoted by \(\Sigma^{\mathbf{s},\mathbf{p}}_{LS}\).
## Acknowledgements
This work was partially supported by UBACYT Prog. 2018 20020170100445BA, by ANPCyT PICT 2016-1022 and by PIP No. 11220150100032CO.
J. Fernandez Bonder is a member of CONICET and I. Ceresa Dussel is a doctoral fellow of CONICET.
|
2309.12770 | Radiative metamaterials based on effective-medium theory | Thermal metamaterials have made significant advancements in the past few
decades. However, the concept of thermal metamaterials is primarily rooted in
the thermal conduction mechanism, which has consequently restricted their
application scope. It is imperative to consider thermal radiation, another
crucial thermal transport mechanism, particularly in high-temperature regimes,
when designing thermal devices. In this review paper, we present the
advancements in this area, with a specific focus on research conducted using
the effective-medium theory. Additionally, we explore the potential
applications of radiative thermal metamaterials and discuss prospective
research directions from a microscopic perspective for future investigations. | Haohan Tan, Liujun Xu | 2023-09-22T10:25:07Z | http://arxiv.org/abs/2309.12770v1 | # Radiative metamaterials based on effective-medium theory
###### Abstract
Thermal metamaterials have made significant advancements in the past few decades. However, the concept of thermal metamaterials is primarily rooted in the thermal conduction mechanism, which has consequently restricted their application scope. It is imperative to consider thermal radiation, another crucial thermal transport mechanism, particularly in high-temperature regimes, when designing thermal devices. In this review paper, we present the advancements in this area, with a specific focus on research conducted using the effective-medium theory. Additionally, we explore the potential applications of radiative thermal metamaterials and discuss prospective research directions from a microscopic perspective for future investigations.
## 1 Background
The research on diffusion metamaterials dates back to 2008 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. In that year, Huang et al. proposed thermotics. It is worth noting that prior to the proposition of thermotics, Huang et al.'s research focus was not on metamaterials but on soft matter [13, 14, 15, 16], such as colloidal particles [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29], colloidal crystals [30, 31, 32, 33, 34, 35, 36], and colloidal ferrofluids [37, 38, 39, 40, 41], as well as electrorheological fluids [42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54], electrorheological solids [55, 56, 57], and magnetorheological fluids [58, 59, 60, 61]. They also ventured into the research of water, including the movement of water molecules [62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73] and phase transition [74, 75, 76], and of electromagnetism, including electrics [77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88], nonlinear response [89, 90, 91, 92, 93, 94, 95, 96], dispersion [97, 98, 99], electrorotation [100, 101, 102], transformation electrics [103, 104], ferrofluids [105, 106, 107, 108, 109, 110, 111], and other fields. Furthermore, they were also interested in sociology [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125], econophysics, including the stock market [126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137] and other aspects [138, 139, 140, 141, 142, 143], statistics [144, 145], condensed matter physics [146], and energy engineering [147, 148]. Perhaps it is the study of optics, including optical materials [149, 150, 151, 152, 153, 154, 155, 156, 157, 158] and nonlinear response [159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169], and of acoustics [170, 171, 172, 173] that paved the way for the eventual birth of thermotics.
Since then, the research in the field of diffusion metamaterials has made significant progress [174, 175, 176, 177, 178, 179, 180, 181, 182]. Researchers have designed numerous devices with novel functions, such as thermal cloaks [183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195], concentrators [196, 197], rotators [198], sensors [201, 202, 203, 204, 205, 206, 207, 208, 209], illusions [204, 205, 206, 207, 209], transparency [210, 211, 212, 211, 213, 214, 215], and expanders [216, 217]. The development of metamaterials in the thermal field began with research on thermal conduction [218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228]. However, thermal radiation [229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241], which is an important mode of thermal transport alongside conduction and convection [242, 243, 244], cannot be ignored in many cases. Finding ways to realize similar functions while considering this mechanism is an unavoidable problem. However, the nonlinearity of Stefan's law has posed significant obstacles to research in this field. Furthermore, it is unknown whether the dynamic equations for thermal conduction and radiation satisfy transformation invariance. Therefore, the design of thermal metamaterials considering conduction and radiation remains an unexplored topic, even many years after the proposition of thermal metamaterials. Xu et al. were the first to design diffusion metamaterials for manipulating thermal radiation. For convenience, they did not use Stefan's law but instead employed the Rosseland diffusion approximation. Based on this method, they designed devices with novel functions such as cloaks, transparency, and expanders.
## 2 Effective-medium theory under Rosseland approximation
Xu et al. investigated a passive and steady process of heat transfer, focusing on the total heat flux \(\mathbf{J}_{\text{total}}\), which comprises the conductive flux \(\mathbf{J}_{\text{con}}\) and the radiative flux \(\mathbf{J}_{\text{rad}}\). This heat flux satisfies the divergence-free condition:
\[\mathbf{\nabla}\cdot\mathbf{J}_{\rm total}=\mathbf{\nabla}\cdot\left(\mathbf{J}_{\rm con}+\mathbf{J}_{ \rm rad}\right)=0. \tag{1}\]
The conductive flux \(\mathbf{J}_{\rm con}\) is given by:
\[\mathbf{J}_{\rm con}=-\kappa\mathbf{\nabla}T, \tag{2}\]
where \(\kappa\) represents the thermal conductivity. On the other hand, based on the Rosseland diffusion approximation, the radiative flux \(\mathbf{J}_{\rm rad}\) is expressed as:
\[\mathbf{J}_{\rm rad}=-\gamma T^{3}\mathbf{\nabla}T, \tag{3}\]
Here, \(\gamma\) (given by \(\gamma=16\beta^{-1}n^{2}\sigma/3\)) can be considered as the radiative coefficient. In this expression, \(\beta\) corresponds to the Rosseland mean extinction coefficient, \(n\) represents the relative refractive index, and \(\sigma\) is the Stefan-Boltzmann constant (\(\sigma=5.67\times 10^{-8}\,{\rm Wm^{-2}K^{-4}}\)).
Figure 1: Schematic diagrams of (a) thermal transparency, (b) thermal cloak, and (c) thermal expander. (d) and (e) qualitatively show the radiative emittance \(j\), conductive flux \(J_{\rm con}\), and radiative flux \(J_{\rm rad}\) as a function of temperature \(T\). Adapted from Ref. [240]
Xu et al. further consider a three-dimensional core-shell structure (Fig. 1(a)) which consists of a core with thermal conductivity \(\kappa_{c}\), Rosseland mean extinction coefficient \(\beta_{c}\), and relative refractive index \(n_{c}\) (radiative coefficient \(\gamma_{c}\)), coated by a shell with corresponding parameters \(\kappa_{s}\), \(\beta_{s}\), and \(n_{s}\) (radiative coefficient \(\gamma_{s}\)). The subscript \(c\) (or \(s\)) denotes the core (or shell). The semi-axis lengths of the core and shell are \(\lambda_{ci}\) and \(\lambda_{si}\), respectively, where \(i=1,\ 2,\ 3\). Assuming that the ratio \(\gamma/\kappa\) of the core-shell structure is a constant \(\alpha\), specifically \(\gamma_{c}/\kappa_{c}=\gamma_{s}/\kappa_{s}=\alpha\), Equation (1) can be rewritten as
\[\mathbf{\nabla}\cdot\left(-\kappa\mathbf{\nabla}T-\alpha\kappa T^{3}\mathbf{\nabla}T \right)=\mathbf{\nabla}\cdot\left(-\kappa\left(1+\alpha T^{3}\right)\mathbf{\nabla}T \right)=\mathbf{\nabla}\cdot\left(-\kappa\mathbf{\nabla}\left(T+\alpha T^{4}/4\right) \right)=0. \tag{4}\]
By performing a variable substitution \(\varphi=T+\alpha T^{4}/4\), we obtain
\[\mathbf{\nabla}\cdot\left(-\kappa\mathbf{\nabla}\varphi\right)=0. \tag{5}\]
Therefore, the strongly nonlinear equation (Eq. (1)) can be transformed into a linear equation (Eq. (5)).
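The cancellation behind this linearization is easy to verify symbolically. A minimal sketch in Python with SymPy follows, assuming (for brevity) a one-dimensional temperature profile \(T(x)\) and constant \(\kappa\) and \(\alpha\); it checks that the total flux equals \(-\kappa\,\partial_{x}\varphi\):

```python
import sympy as sp

x = sp.symbols('x')
kappa, alpha = sp.symbols('kappa alpha', positive=True)
T = sp.Function('T')(x)

# Substituted variable from Eq. (4): phi = T + alpha*T^4/4
phi = T + alpha * T**4 / 4

# Total flux J_con + J_rad with gamma = alpha*kappa, cf. Eqs. (2)-(3)
J_total = -kappa * sp.diff(T, x) - alpha * kappa * T**3 * sp.diff(T, x)

# The same flux expressed through phi: -kappa * dphi/dx
J_phi = -kappa * sp.diff(phi, x)

# The difference vanishes identically, so Eq. (1) reduces to the
# linear equation (5) in the variable phi.
print(sp.simplify(J_total - J_phi))  # prints 0
```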
To proceed, Xu et al. introduced ellipsoidal coordinates (\(\rho\), \(\xi\), \(\eta\)), which are defined by the following equations:
\[\left\{\begin{array}{l}\frac{x^{2}}{\rho+\lambda_{1}^{2}}+\frac{y^{2}}{\rho+ \lambda_{2}^{2}}+\frac{z^{2}}{\rho+\lambda_{3}^{2}}=1\ \text{(confocal ellipsoids)}\\ \frac{x^{2}}{\xi+\lambda_{1}^{2}}+\frac{y^{2}}{\xi+\lambda_{2}^{2}}+\frac{z^{ 2}}{\xi+\lambda_{3}^{2}}=1\ \text{(hyperboloids of one sheet)}\\ \frac{x^{2}}{\eta+\lambda_{1}^{2}}+\frac{y^{2}}{\eta+\lambda_{2}^{2}}+\frac{z ^{2}}{\eta+\lambda_{3}^{2}}=1\ \text{(hyperboloids of two sheets)}\end{array}\right. \tag{6}\]
In these equations, \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are three constants that satisfy \(\rho>-\lambda_{1}^{2}>\xi>-\lambda_{2}^{2}>\eta>-\lambda_{3}^{2}\). Equation (5) can be expressed in ellipsoidal coordinates as:
\[\frac{\partial}{\partial\rho}\left(g\left(\rho\right)\frac{\partial\varphi}{ \partial\rho}\right)+\frac{g\left(\rho\right)}{\rho+\lambda_{i}^{2}}\frac{ \partial\varphi}{\partial\rho}=0, \tag{7}\]
where \(g\left(\rho\right)=\sqrt{\left(\rho+\lambda_{1}^{2}\right)\left(\rho+\lambda_{2}^{2}\right)\left(\rho+\lambda_{3}^{2}\right)}\). Eq. (7) has the solution:
\[\varphi=\left(u+v\int_{0}^{\rho}\left(\rho+\lambda_{i}^{2}\right)^{-1}g\left( \rho\right)^{-1}d\rho\right)x_{i}, \tag{8}\]
where \(u\) and \(v\) are constants, and \(x_{i}\) (\(i=1,\ 2,\ 3\)) denotes Cartesian coordinates. The temperatures of the core, shell, and background can be defined as \(\varphi_{c}\), \(\varphi_{s}\), and \(\varphi_{b}\), respectively. These temperatures can be obtained as follows:
\[\left\{\begin{array}{l}\varphi_{c}=u_{c}x_{i}\\ \varphi_{s}=\left(u_{s}+v_{s}\int_{\rho_{c}}^{\rho}\left(\rho+\lambda_{i}^{2} \right)^{-1}g\left(\rho\right)^{-1}d\rho\right)x_{i}\\ \varphi_{b}=u_{b}x_{i}\end{array}\right. \tag{9}\]
where \(u_{c}\), \(u_{s}\), and \(v_{s}\) are determined by the boundary conditions. The exterior surfaces of the core and shell are denoted by \(\rho_{c}\) and \(\rho_{s}\), respectively. The boundary conditions require continuity of temperatures and normal heat fluxes, leading to the following equations:
\[\left\{\begin{array}{l}u_{c}=u_{s}\\ u_{b}=u_{s}+v_{s}\int_{\rho_{c}}^{\rho_{s}}\left(\rho+\lambda_{i}^{2}\right)^{-1}g\left(\rho\right)^{-1}d\rho\\ u_{c}=2v_{s}\kappa_{s}\left(\kappa_{c}-\kappa_{s}\right)^{-1}g\left(\rho_{c}\right)^{-1}\\ u_{b}=2v_{s}\kappa_{s}\left(\kappa_{ei}-\kappa_{s}\right)^{-1}g\left(\rho_{s}\right)^{-1}\end{array}\right. \tag{10}\]
Here, \(\kappa_{ei}\) represents the effective thermal conductivity of the core-shell structure along the direction of \(x_{i}\). To obtain the expression for \(\kappa_{ei}\), one can solve Eq. (10). For this purpose, Xu et al. define the semi-axis lengths of the core, \(\lambda_{ci}\), and the shell, \(\lambda_{si}\), as follows:
\[\left\{\begin{array}{l}\lambda_{ci}=\sqrt{\lambda_{i}^{2}+\rho_{c}}\\ \lambda_{si}=\sqrt{\lambda_{i}^{2}+\rho_{s}}\end{array}\right. \tag{11}\]
Here, \(i=1,\,2,\,3\). Consequently, the volume fraction \(f\) can be expressed as:
\[f=\frac{\lambda_{c1}\lambda_{c2}\lambda_{c3}}{\lambda_{s1}\lambda_{s2}\lambda_ {s3}}=\frac{g\left(\rho_{c}\right)}{g\left(\rho_{s}\right)} \tag{12}\]
Xu et al. also introduce the shape factor \(d_{wei}\) along the direction of \(x_{i}\) as follows:
\[d_{wi}=\frac{\lambda_{w1}\lambda_{w2}\lambda_{w3}}{2}\int_{0}^{\infty}\left(\tau+\lambda_{wi}^{2}\right)^{-1}\left(\left(\tau+\lambda_{w1}^{2}\right)\left(\tau+\lambda_{w2}^{2}\right)\left(\tau+\lambda_{w3}^{2}\right)\right)^{-1/2}d\tau, \tag{13}\]
Here, the subscript \(w\) can take the values \(c\) or \(s\), representing the shape factor of the core or shell, respectively. Subsequently, they can obtain the following expression:
\[\begin{array}{l}\int_{\rho_{c}}^{\rho_{s}}\left(\rho+\lambda_{i}^{2} \right)^{-1}g\left(\rho\right)^{-1}d\rho=\int_{\rho_{c}}^{\infty}\left(\rho+ \lambda_{i}^{2}\right)^{-1}g\left(\rho\right)^{-1}d\rho-\int_{\rho_{s}}^{ \infty}\left(\rho+\lambda_{i}^{2}\right)^{-1}g\left(\rho\right)^{-1}d\rho\\ =2d_{ci}g\left(\rho_{c}\right)^{-1}-2d_{si}g\left(\rho_{s}\right)^{-1}.\end{array} \tag{14}\]
Finally, Xu et al. derive a concise expression for \(\kappa_{ei}\) as follows:
\[\kappa_{ei}=\kappa_{s}\left[\frac{f\left(\kappa_{c}-\kappa_{s}\right)}{\kappa _{s}+\left(d_{ci}-fd_{si}\right)\left(\kappa_{c}-\kappa_{s}\right)}+1\right]. \tag{15}\]
The method described above is the standard approach for solving the Laplace equation. The shape factors satisfy the sum rule \(d_{w1}+d_{w2}+d_{w3}=1\). In principle, the effective thermal conductivity of any core-shell structure can be obtained using Eq. (15) when the core-shell structure is confocal or concentric. In fact, Eq. (15) can be reduced to handle cylindrical (two-dimensional) cases by letting \(\lambda_{w3}\to\infty\), resulting in \(d_{w1}=\lambda_{w2}/\left(\lambda_{w1}+\lambda_{w2}\right)\), \(d_{w2}=\lambda_{w1}/\left(\lambda_{w1}+\lambda_{w2}\right)\), and \(d_{w3}=0\) (the sum rule \(d_{w1}+d_{w2}+d_{w3}=1\) is still satisfied).
Since \(\gamma/\kappa\) is a constant, the effective radiative coefficient can be expressed as:
\[\gamma_{ei}=\gamma_{s}\left[\frac{f\left(\gamma_{c}-\gamma_{s}\right)}{\gamma _{s}+\left(d_{ci}-fd_{si}\right)\left(\gamma_{c}-\gamma_{s}\right)}+1\right], \tag{16}\]
Here, \(\gamma_{ei}\) represents the effective radiative coefficient of the core-shell structure along the direction of \(x_{i}\). By using Eqs. (15) and (16), one can predict the effective thermal conductivity and effective radiative coefficient. However, in order to achieve the same effect of conduction and radiation, it is necessary to maintain \(\gamma/\kappa\) as a constant.
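As a concrete illustration, Eqs. (12)-(16) are straightforward to evaluate numerically. The following is a minimal sketch in Python with SciPy, computing the shape factors of Eq. (13) by quadrature and then the effective parameters; the geometry and material values below are illustrative assumptions, not parameters from the simulations discussed here:

```python
import numpy as np
from scipy.integrate import quad

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_coefficient(beta, n):
    """gamma = 16 n^2 sigma / (3 beta), from the Rosseland approximation."""
    return 16.0 * n**2 * SIGMA / (3.0 * beta)

def shape_factor(semi_axes, i):
    """Shape factor d_i of an ellipsoid with semi-axes (l1, l2, l3), Eq. (13)."""
    l1, l2, l3 = semi_axes
    integrand = lambda t: 1.0 / ((t + semi_axes[i]**2)
                                 * np.sqrt((t + l1**2) * (t + l2**2) * (t + l3**2)))
    val, _ = quad(integrand, 0.0, np.inf)
    return 0.5 * l1 * l2 * l3 * val

def effective_parameter(k_c, k_s, f, d_c, d_s):
    """Effective conductivity, Eq. (15), or radiative coefficient, Eq. (16)."""
    return k_s * (f * (k_c - k_s) / (k_s + (d_c - f * d_s) * (k_c - k_s)) + 1.0)

# Illustrative concentric core-shell geometry (semi-axes in metres)
core = (0.025, 0.025, 0.025)
shell = (0.030, 0.030, 0.030)
f = np.prod(core) / np.prod(shell)           # volume fraction, Eq. (12)

kappa_c, kappa_s = 1.0, 2.0                  # W m^-1 K^-1 (illustrative)
gamma_c = radiative_coefficient(beta=100.0, n=1.0)
gamma_s = radiative_coefficient(beta=50.0, n=1.0)

for i in range(3):
    d_c = shape_factor(core, i)
    d_s = shape_factor(shell, i)
    k_ei = effective_parameter(kappa_c, kappa_s, f, d_c, d_s)
    g_ei = effective_parameter(gamma_c, gamma_s, f, d_c, d_s)
    print(f"axis {i+1}: kappa_ei = {k_ei:.3f}, gamma_ei = {g_ei:.3e}")
```

For a sphere all three shape factors equal \(1/3\), and Eq. (15) then reduces to the familiar Maxwell-Garnett form.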
Furthermore, to validate the theoretical analyses, Xu et al. conducted finite-element simulations. The radiative emittance \(j\) is shown in Fig. 1(d) to be proportional to \(T^{4}\) according to the Stefan-Boltzmann law. The conductive flux \(J_{\text{con}}\) depends on the temperature gradient, while the radiative flux \(J_{\text{rad}}\) is proportional to \(T^{3}\), as depicted in Fig. 1(e). These qualitative analyses indicate the significant role of thermal radiation at high temperatures. Therefore, Xu et al. performed finite-element simulations at three temperature intervals: (I) 273-313 K, representing a small upper
temperature limit where conduction (Con.) dominates; (II) 273-673 K, representing a medium upper temperature limit where conduction and radiation (Rad.) are approximately equal; and (III) 273-4273 K, representing a large upper temperature limit where radiation is the dominant mode of heat transfer.
Thermal transparency involves the design of a shell tailored to the object, ensuring an undistorted temperature profile outside the shell, as depicted in Fig. 2. By selecting appropriate parameters based on Eqs. (15) and (16), Xu et al. demonstrated that the temperature profile outside the shell remains undistorted, as shown in Figs. 2(a)-2(c) or Figs. 2(d)-2(f). Consequently, the core-shell structure at the center becomes indistinguishable, as illustrated in Figs. 2(g)-2(i). Figs. 2(j)-2(l) present the corresponding theoretical results from the references using the same parameters as Figs. 2(g)-2(i). The matching temperature profiles in both simulations and theory validate the theoretical analyses.
Furthermore, Xu et al. demonstrated that with a small upper temperature limit where conduction dominates, the temperature gradient outside the shell remains nearly uniform, as depicted in the first row of Fig. 2. As the upper temperature limit increases, the effect of radiation becomes prominent, resulting in nonuniform temperature gradients outside the shell, as shown in the last two rows of Fig. 2.
A thermal cloak is capable of shielding any object within it from detection. Generally, an insulated layer is employed to prevent heat flux from reaching the object. Consequently, the object along with the insulated layer can be treated as an insulated core, where \(\kappa_{c}=\gamma_{c}=0\). Moreover, Xu et al. designed a shell based on Eqs. (15) and (16) to eliminate the influence of the insulated core. The simulation results are presented in Figs. 3(a)-3(c) and Figs. 3(d)-3(f). Evidently,
the isotherms remain separated from the object, indicating that the heat flux is unable to enter the object. Additionally, the temperature profiles of the background remain undistorted. This successful outcome demonstrates the expected cloaking effect.
Figure 3: Steady simulations of thermal cloak. An inner object is coated by an insulated layer with \(\kappa=10^{-5}\) Wm\({}^{-1}\)K\({}^{-1}\) and \(\beta=10^{5}\) m\({}^{-1}\). Since the heat flux cannot enter into the insulated layer, the inner object plus the insulated layer can be equivalently regarded as an insulated core with \(\kappa_{c}=10^{-5}\) Wm\({}^{-1}\)K\({}^{-1}\) and \(\beta_{c}=10^{5}\) m\({}^{-1}\). Other parameters are as follows. (a)-(c): \(\lambda_{c1}=\lambda_{c2}=2.5\) cm, \(\lambda_{s1}=\lambda_{s2}=3\) cm, \(\kappa_{s}=5.54\) Wm\({}^{-1}\)K\({}^{-1}\), and \(\beta_{s}=18.1\) m\({}^{-1}\). (d)-(f): \(\lambda_{c1}=2.5\) cm, \(\lambda_{c2}=1.25\) cm, \(\lambda_{s1}=3\) cm, \(\lambda_{s2}=2.08\) cm, \(\kappa_{s}=2.35\) Wm\({}^{-1}\)K\({}^{-1}\), and \(\beta_{s}=42.5\) m\({}^{-1}\). Adapted from Ref. [240].
The thermal expander concept aims to magnify a small heat source into a larger one by utilizing the design of two elliptical cloaks. To demonstrate this effect, Xu et al. assembled two elliptical cloaks together and extracted a quarter of the entire structure as the expander, as depicted in Fig. 1(c). According to the uniqueness theorem in thermotics, the temperature distribution of the background remains undistorted, thereby achieving the desired expander effect. Finite-element simulations were conducted and presented in Figs. 4(a)-4(c). Clearly, the isotherms of the background appear as straight lines, indicating the excellent performance of the proposed structure. For comparison, Xu et al. also
Figure 4: Steady simulations of thermal expander. The sizes are \(\lambda_{c1}=2.08\) cm, \(\lambda_{c2}=4.17\) cm, \(\lambda_{s1}=3.46\) cm, \(\lambda_{s2}=5\) cm, and the width between hot and cold sources is 6 cm. Other parameters are as follows. (a)-(c): \(\kappa_{s}=4.91\) Wm\({}^{-1}\)K\({}^{-1}\) and \(\beta_{s}=20.3\) m\({}^{-1}\). (d)-(f): pure background parameters. Adapted from Ref. [240]
provided simulation results for a pure background material. These results reveal that the isotherms of the background become distorted, as shown in Figs. 4(d)-4(f).
Xu et al. demonstrated that the theoretical analyses are applicable not only to steady states but also to transient cases. To illustrate this point, Xu et al. considered density and heat capacity. In order to design transient transparency and cloak, the value of heat diffusivity \(\kappa/\left(\rho C\right)\) was set as a constant. Based on the simulation results, the performance of this approach remained satisfactory. The results at \(t=10,\,20,\,60\) mins are depicted in Figs. 5(a)-5(c) and Figs. 5(d)-5(f), respectively.
To showcase the effect of transient expansion and achieve the optimal transient effect, Xu et al. employed an optimization method and set the diffusivity of the shell to be larger than that of the background. The results at \(t=6,\,10,\,20\) mins are presented in Figs. 5(g)-5(i).
## 3 Potential applications of radiative metamaterials: thermal camouflage and radiative cooler
The development of diffusion metamaterials for radiation control holds significant importance for our future lives, owing to its potential applications such as thermal camouflage [245, 246, 247, 248, 249, 250, 251, 252] and radiative cooling. Thermal camouflage
refers to a device that not only prevents the detection of objects but also generates deceptive signals. Researchers have designed systems that can mislead the detection of cloaked objects. Additionally, encrypted thermal printing has been achieved, enabling object detection only when an appropriate heat source is applied. While the original structure for thermal camouflage is two-dimensional, there are also studies focusing on realizing the same functionality using three-dimensional structures.
Moreover, radiative cooling is another application field worth mentioning. Radiative cooling involves the automatic achievement of cooling effects without the need for an external source. The concept was first proposed in 1978, but at that time, it remained primarily theoretical. It was in 2014 that a practical radiative cooler was designed based on the photonic approach. The device operates by selectively reflecting electromagnetic waves in the mid-infrared range. Since then, researchers have developed more efficient methods to achieve the same functionality. Experimental results have also confirmed the effectiveness of these devices. We believe that in the future, radiative coolers could find widespread use in our daily lives, provided that the costs are reduced to a certain level.
## 4 Outlook: radiative metamaterials from microscopic view
Over the past decades, research on diffusion metamaterials has undergone significant changes. Initially, the focus was on single-function devices, but it has since shifted towards multi-functionality [253, 254]. Similarly, there has been a transition from linear to nonlinear response [255, 256, 257, 258, 259, 260], from temperature-independent to temperature-dependent devices [261, 262, 263], from single thermal to thermoelectrical effects [264, 265, 266, 267], from spatial to spatiotemporal metamaterials [268, 269, 270, 271, 272], and from temperature diffusion to temperature waves [273, 274, 275, 276, 277]. Furthermore, the research methodology has been applied to other fields as well, including hydrodynamics [278, 279, 280, 281, 282, 283, 284], plasma physics [285], mass diffusion [286, 287], and topology [288]. Novel concepts such as programmable metamaterials [289, 290], nonreciprocity metamaterials [291], intelligent metamaterials [292, 293, 294, 295, 296, 297, 298, 299], dipole-assisted thermotics [300], and negative thermal transport [301] have also been proposed. Additionally, the research on networks [302, 303] and phase transitions [304, 305, 306, 307] is expected to make further progress. In the future, the impact of diffusion metamaterials on the development of thermal diodes [308, 309, 310, 311, 312] is eagerly anticipated. Machine learning [313, 314] is also increasingly being incorporated into the research methodology. While significant progress has been made in the study of diffusion metamaterials for conduction and radiation, the research has primarily focused on macroscopic scales. Exploring the same effects at the microscopic scale is an intriguing topic [315]. To validate the Fourier law from a microscopic perspective, researchers have designed various models. The simplest among them is the harmonic lattice model [316, 317, 318, 319, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329], which includes one-dimensional ordered cases [330], higher-dimensional ordered cases, one-dimensional disordered cases [331, 332, 333, 334, 335, 336, 337, 338, 339, 340], and two-dimensional disordered cases [341, 342, 343, 344, 345, 346, 347, 348]. Additionally, the harmonic lattices with self-consistent reservoirs have been studied [349, 350, 351, 352, 353, 354]. More complex models involve interacting systems [355, 356, 357, 358, 359, 360]. The research in this field can be divided into two parts: momentum-conserving models, such as the FPU model [361, 362, 363, 364, 365, 366, 367], and momentum-non-conserving models [368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382]. Some of these models provide essential insights into illustrating the Fourier law. Therefore, exploring the possibility of achieving novel functions such as cloaking and concentration in a similar manner is a worthwhile topic to explore. Furthermore, the experimental validation of microscopic theories poses a significant challenge. However, advancements in nanotechnology inspire us to move forward, and the combination of diffusion metamaterials with low-dimensional systems offers a new research field.
|
2310.00217 | Regularity criteria of 3D generalized magneto-micropolar fluid system in
terms of the pressure | This work focuses on regularity criteria of 3D generalized magneto-micropolar
fluid system in terms of the pressure in Lorentz spaces inspired by the recent
works in \cite{FS22} and \cite{LN22}. | Jae-Myoung Kim | 2023-09-30T01:42:55Z | http://arxiv.org/abs/2310.00217v1 | # Regularity criteria of 3D generalized magneto-micropolar fluid system in terms of the pressure
###### Abstract
This work focuses on regularity criteria of 3D generalized magneto-micropolar fluid system in terms of the pressure in Lorentz spaces inspired by the recent works in [10] and [15].
_Mathematics Subject Classification(2000):_ 35Q35, 35B65, 76D05
Key words: Regularity criteria; Weak solution, generalized magneto-micropolar fluid system
## 1 Introduction
This paper is concerned with regularity conditions for weak solutions to the generalized magneto-micropolar fluid equations in \(\mathbb{R}^{3}\), which are described by
\[\left\{\begin{aligned} \partial_{t}u+(u\cdot\nabla)u+\nabla \pi&=-(\mu+\chi)\Lambda^{2\alpha}u+\chi\nabla\times w+(b\cdot \nabla)b,\\ \partial_{t}w+(u\cdot\nabla)w&=-\kappa\Lambda^{2 \gamma}w+\eta\nabla(\nabla\cdot w)+\chi\nabla\times u-2\chi w,\\ \partial_{t}b+(u\cdot\nabla)b&=-\nu\Lambda^{2 \beta}b+(b\cdot\nabla)u,\\ \nabla\cdot u(\cdot,t)&=\nabla\cdot b(\cdot,t)\,=0, \end{aligned}\right. \tag{1.1}\]
where \(u=u(x,t)\), \(w=w(x,t)\), \(b=b(x,t)\) and \(\pi:=\pi(x,t)=\mathcal{P}+\frac{|b|^{2}}{2}\) denote the fluid velocity, the micro-rotation velocity (angular velocity of the rotation of the fluid particles), the magnetic field, and the total pressure, respectively. The notation \(\Lambda:=(-\Delta)^{1/2}\) stands for the square root of the Laplacian, and \(\mu\), \(\chi\), \(\kappa\), \(\eta\), \(\nu\) in (1.1) are positive constants. The constant \(\kappa\) corresponds to the angular
viscosity, \(\nu\) is the inverse of the magnetic Reynolds number and \(\chi\) is the micro-rotational viscosity. We consider the initial value problem of (1.1), which requires initial conditions
\[u(x,0)=u_{0}(x),\quad w(x,0)=w_{0}(x)\quad\text{and}\quad b(x,0)=b_{0}(x),\qquad x \in\mathbb{R}^{3}, \tag{1.2}\]
and we also assume that \(\text{div }u_{0}=0=\text{div }b_{0}\).
The authors in [4, Theorem 2.2] establish the existence of Leray-Hopf weak solutions on the whole space \(\mathbb{R}^{3}\times(0,T)\) for the generalized Navier-Stokes equations. It is worth pointing out that \(\dot{H}^{\frac{5-4\alpha}{2}}\) is a critical space, that is, the \(\dot{H}^{\frac{5-4\alpha}{2}}\) norm is invariant under the natural scaling of the equations: if \(u\) is a solution, then so is \(u_{\lambda}(x,t)=\lambda^{2\alpha-1}u(\lambda x,\lambda^{2\alpha}t)\) for any \(\lambda>0\), and \(\|u_{\lambda}(\cdot,0)\|_{\dot{H}^{\frac{5-4\alpha}{2}}}=\|u(\cdot,0)\|_{\dot{H}^{\frac{5-4\alpha}{2}}}\). By the Sobolev embedding theorem, it is checked that \(\dot{H}^{\frac{5-3\alpha}{2}}(\mathbb{R}^{3})\hookrightarrow L^{\frac{6}{3\alpha-2}}(\mathbb{R}^{3})\). Then
\[\|u\|_{L^{4}(0,T;L^{\frac{6}{3\alpha-2}})}\leq C\|u\|_{L^{\infty}(0,T;\dot{H}^{\frac{5-4\alpha}{2}})}^{1/2}\|u\|_{L^{2}(0,T;\dot{H}^{\frac{5-2\alpha}{2}})}^{1/2},\quad\frac{2\alpha}{4}+\frac{3}{\frac{6}{3\alpha-2}}=2\alpha-1.\]
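Indeed, since \(\widehat{u_{\lambda}}(\xi,0)=\lambda^{2\alpha-1-3}\,\widehat{u}(\xi/\lambda,0)\), a change of variables in the frequency integral gives

\[\|u_{\lambda}(\cdot,0)\|_{\dot{H}^{s}}=\lambda^{(2\alpha-1)+s-\frac{3}{2}}\|u(\cdot,0)\|_{\dot{H}^{s}},\]

and the exponent vanishes exactly when \(s=\frac{5-4\alpha}{2}\), confirming the criticality claimed above.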
For regularity issues of weak solutions to the generalized Navier-Stokes equations, we refer to [3], [8], [9].
Recently, Deng and Shang [6] obtained the global-in-time existence and uniqueness of smooth solutions to the problem (1.1)-(1.2) if \(\alpha\geq\frac{1}{2}+\frac{n}{4}\), \(\alpha+\gamma\geq\max(2,\frac{n}{2})\), and \(\alpha+\beta\geq 1+\frac{n}{2}\). On the other hand, Fan and Zhong [10] established the local-in-time existence and uniqueness of smooth solutions to the problem (1.1)-(1.2) for \(\alpha+\gamma>1\), and furthermore they gave some regularity criteria via the gradient of the velocity in meaningful and appropriate spaces.
For regularity criteria in Lorentz spaces, Li and Niu [15] proved that a weak solution \((u,\,w,\,b)\) of the standard 3D MHD equations becomes regular under scaling-invariant conditions on the total pressure, in particular, the so-called Serrin conditions \(\pi\in L^{q,\infty}(0,\,T;L^{p,\infty}(\mathbb{R}^{3}))\) with \(3/p+2/q\leq 2\) and \(p>\frac{3}{2}\) (compare to [1], [21], [22], [23], [2], [19], [20] for the Navier-Stokes equations). For \(p,q\in[1,\infty]\), we define
\[\|f\|_{L^{p,q}(\Omega)}=\begin{cases}\Big{(}p\int_{0}^{\infty}\alpha^{q}|\{x \in\Omega:|f(x)|>\alpha\}|^{\frac{q}{p}}\frac{d\alpha}{\alpha}\Big{)}^{\frac{ 1}{q}},\quad q<\infty,\\ \sup_{\alpha>0}\alpha\ |\{x\in\Omega:|f(x)|>\alpha\}|^{\frac{1}{p}}, \quad q=\infty.\end{cases}\]
(see, e.g., [17], [11], [14]). Motivated by the recent works in [10] and [15], the purpose of this note is to establish regularity criteria for the 3D generalized magneto-micropolar fluid system (1.1) in terms of the pressure in Lorentz spaces. In this paper, we assume that \(\gamma=\nu=\chi=1\) and \(\alpha=\beta\).
Our results are stated as follows.
**Theorem 1.1**.: _Let \(0<T<\infty\) and \(u_{0},b_{0},w_{0}\in H^{m}(\mathbb{R}^{3})\) with \(m>\frac{5}{2}\) and \(1\leq\alpha,\gamma\leq\frac{5}{4}\). There exists a sufficiently small constant \(\epsilon>0\) such that if \(\pi\) or \(\nabla\pi\) satisfies_
1. \(\pi\in L^{q,\infty}(0,T;L^{p,\infty}(\mathbb{R}^{3}))\) _and_ \[\|\pi\|_{L^{q,\infty}(0,T;L^{p,\infty}(\mathbb{R}^{3}))}\leq\varepsilon,\ \mbox{with}\ \ \frac{3}{p}+\frac{2\alpha}{q}=2(2\alpha-1),\ \frac{3}{2(2\alpha-1)}<p<\infty;\]
2. \(\nabla\pi\in L^{\frac{2r\alpha}{2r\alpha-3}}(0,T;L^{r,\infty}(\mathbb{R}^{3}))\) _with_ \(\frac{3}{2\alpha}<r\leq\infty,\)__
_then the weak solution \((u,w,b)\) is regular on \((0,T].\)_
**Theorem 1.2**.: _Let \(\alpha=\beta=\gamma=1\). Assume \((u_{0},b_{0})\in L^{2}_{\sigma}(\mathbb{R}^{3})\cap L^{4}_{\sigma}(\mathbb{R}^{3})\) and \(w_{0}\in L^{2}(\mathbb{R}^{3})\cap L^{4}(\mathbb{R}^{3})\). Let the triple \((u,b,w)\) be a weak solution to system (1.1) on some time interval \([0,T)\) with \(0<T<\infty\). There exists a sufficiently small constant \(\epsilon>0\) such that if \(\nabla\mathcal{P}\) and \(b\) satisfy \(\nabla\mathcal{P}\in L^{\frac{2r}{3r-3},\infty}(0,T;L^{r,\infty}(\mathbb{R}^{3}))\) with_
\[\|\nabla\mathcal{P}\|_{L^{\frac{2r}{3r-3},\infty}(0,T;L^{r,\infty})}\leq \epsilon,\quad\frac{3}{2}<r\leq\infty,\]
_and_
\[\|b\|_{L^{\frac{2a_{1}}{a_{1}-3},\infty}(0,T;L^{a_{1},\infty})}<\infty,\quad 3<a_{1}\leq\infty,\]
_then the weak solution \((u,b,w)\) is regular on \((0,T]\)._
## 2 Proof of Theorem 1.1
To control the fractional diffusion term, we recall the following result (see e.g. [5]).
**Lemma 2.1**.: _Let \(0<\alpha<2\) and \(v,\Lambda^{\alpha}v\in L^{p}(\mathbb{R}^{3})\) with \(p=2k\), \(k\in\mathbb{N}\). Then_
\[\int|v|^{p-2}v\Lambda^{\alpha}v\,dx\geq\frac{1}{p}\int|\Lambda^{\frac{\alpha} {2}}v^{\frac{p}{2}}|^{2}\,dx.\]
Also, we recall the following nonlinear Gronwall-type inequality established in [18] (see also [2] and [16]).
**Lemma 2.2**.: _Let \(T>0\) and \(\varphi\in L_{loc}([0,T))\) be a non-negative function. Assume further that_
\[\varphi(t)\leq C_{0}+C_{1}\int_{0}^{t}\mu(s)\varphi(s)\,ds+\kappa\int_{0}^{t} \lambda(s)^{1-\epsilon}\varphi(s)^{1+A(\epsilon)}\,ds,\quad\forall\ 0< \epsilon<\epsilon_{0}.\]
_where \(\kappa,\epsilon_{0}>0\) are constants, \(\mu\in L^{1}(0,T)\), and \(A(\epsilon)>0\) satisfies \(\lim_{\epsilon\to 0}\frac{A(\epsilon)}{\epsilon}=c_{0}>0\). Then \(\varphi\) is bounded on \([0,T]\) if \(\|\lambda\|_{L^{1,\infty}(0,T)}<c_{0}^{-1}\kappa^{-1}\)._
In order to derive the regularity criteria for weak solutions to system (1.1), we first rewrite the system in symmetrized variables. Let us denote
\[z^{+}=u+b,\quad z^{-}=u-b. \tag{2.1}\]
Then system (1.1) can be reformulated as
\[\left\{\begin{array}{c}\partial_{t}z^{+}+\Lambda^{2\alpha}z^{+}+(z^{-}\cdot\nabla)z^{+}-\chi\nabla\times w+\nabla\pi=0,\\ \partial_{t}z^{-}+\Lambda^{2\alpha}z^{-}+(z^{+}\cdot\nabla)z^{-}-\chi\nabla\times w+\nabla\pi=0,\\ \partial_{t}w+\kappa\Lambda^{2\gamma}w+(\frac{z^{+}+z^{-}}{2})\cdot\nabla w-\eta\nabla{\rm div}\;w+2\chi w-\chi\nabla\times(\frac{z^{+}+z^{-}}{2})=0,\\ \nabla\cdot z^{+}=\nabla\cdot z^{-}=0,\\ z^{+}(x,0)=z_{0}^{+}(x),\;\;z^{-}(x,0)=z_{0}^{-}(x),\end{array}\right. \tag{2.2}\]
It is easy to show the following global \(L^{2}\)-bound,
\[\|(u,b,w)(t)\|_{L^{2}}^{2}+\int_{0}^{t}\big(\|\Lambda^{\alpha}u(\tau)\|_{L^{2}}^{2}+\|\Lambda^{\gamma}w(\tau)\|_{L^{2}}^{2}+\|\Lambda^{\alpha}b(\tau)\|_{L^{2}}^{2}\big)\,d\tau\leq C.\]
**Proof:** Multiplying the first and the second equations of (2.2) by \(\left|z^{+}\right|^{2}z^{+}\) and \(\left|z^{-}\right|^{2}z^{-}\), respectively, integrating by parts and summing up, we have
\[\frac{1}{4}\frac{d}{dt}\big\|\big(z^{+},z^{-}\big)\big\|_{L^{4}}^{4}+\|\Lambda^{\alpha}\big(|z^{+}|^{2},|z^{-}|^{2}\big)\|_{L^{2}}^{2}\]
\[=-\underbrace{\int_{\mathbb{R}^{3}}\nabla\pi\cdot(z^{+}\left|z^{+}\right|^{2}+ z^{-}\left|z^{-}\right|^{2})dx}_{\mathcal{J}_{1}}+\underbrace{\int_{\mathbb{R}^{ 3}}(|z^{+}|^{2}z^{+}+|z^{-}|^{2}z^{-})\cdot(\nabla\times w)dx}_{\mathcal{J}_{2 }}. \tag{2.3}\]
Taking the operator \({\rm div}\) of the first equation of (2.2), and using the fact \({\rm div}(\nabla\times w)=0\), we see
\[-\Delta\pi={\rm div}{\rm div}(z^{+}\otimes z^{-}),\]
and thus,
\[||\pi||_{L^{q,2}}\leq C||z^{+}\otimes z^{-}||_{L^{q,2}}\leq C||z^{+}||_{L^{2q, 4}}||z^{-}||_{L^{2q,4}}=C|||z^{+}|^{2}||_{L^{q,2}}^{1/2}|||z^{-}|^{2}||_{L^{q,2 }}^{1/2}\]
\[\leq C(|||z^{+}|^{2}||_{L^{q,2}}+|||z^{-}|^{2}||_{L^{q,2}}).\]
Using integration by parts, Hölder's inequality in Lorentz spaces, Young's inequality, and the Sobolev embedding for fractional powers, we note that for \(p>1\), \(q>2\) and \(\frac{5}{2}>\alpha>0\) with
\[\frac{1}{q}+\frac{1}{2p}+\frac{5-2\alpha}{6}=1, \tag{2.4}\]
\[\int_{\mathbb{R}^{3}}\nabla\pi\cdot\left|z^{+}\right|^{2}z^{+}\,dx=-\int_{\mathbb{R}^{3}}\pi\,{\rm div}(z^{+}\left|z^{+}\right|^{2})\,dx\leq C\int_{\mathbb{R}^{3}}|z^{+}||\nabla|z^{+}|^{2}||\pi|\,dx\]

\[\leq C\|z^{+}\|_{L^{2q,4}}\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}\||\pi|^{1/2}\|_{L^{2p,\infty}}\||\pi|^{1/2}\|_{L^{2q,4}}\]

\[=C\||z^{+}|^{2}\|_{L^{q,2}}^{1/2}\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|_{L^{p,\infty}}^{1/2}\|\pi\|_{L^{q,2}}^{1/2}\]

\[\leq C\||z^{+}|^{2}\|_{L^{q,2}}^{1/2}\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|_{L^{p,\infty}}^{1/2}\Big(\||z^{+}|^{2}\|_{L^{q,2}}^{1/2}+\||z^{-}|^{2}\|_{L^{q,2}}^{1/2}\Big)\]

\[\leq C\||z^{+}|^{2}\|_{L^{q,2}}\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|_{L^{p,\infty}}^{1/2}\]

\[+C\||z^{+}|^{2}\|_{L^{q,2}}^{1/2}\||z^{-}|^{2}\|_{L^{q,2}}^{1/2}\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|_{L^{p,\infty}}^{1/2}\]

\[\leq C\|\pi\|_{L^{p,\infty}}^{1/2}\Big(\||z^{+}|^{2}\|_{L^{q,2}}+\||z^{-}|^{2}\|_{L^{q,2}}\Big)\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}.\]
In the same way, \(\int_{\mathbb{R}^{3}}\nabla\pi\cdot z^{-}\left|z^{-}\right|^{2}\,dx\) can be bounded by
\[C\|\pi\|_{L^{p,\infty}}^{1/2}\Big{(}\||z^{+}|^{2}\|_{L^{q,2}}+\||z^{-}|^{2}\| _{L^{q,2}}\Big{)}\|\nabla|z^{-}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}.\]
And thus, we get
\[\mathcal{J}_{1}\leq C\|\pi\|_{L^{p,\infty}}^{1/2}\Big(\||z^{+}|^{2}\|_{L^{q,2}}+\||z^{-}|^{2}\|_{L^{q,2}}\Big)\Big(\|\nabla|z^{+}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}+\|\nabla|z^{-}|^{2}\|_{L^{\frac{6}{5-2\alpha},2}}\Big)\]

\[\leq C\|\pi\|_{L^{p,\infty}}^{1/2}\Big(\||z^{+}|^{2}\|_{L^{2}}^{1-\big(\frac{3}{2\alpha}-\frac{3}{q\alpha}\big)}+\||z^{-}|^{2}\|_{L^{2}}^{1-\big(\frac{3}{2\alpha}-\frac{3}{q\alpha}\big)}\Big)\Big(\|\Lambda^{\alpha}|z^{+}|^{2}\|_{L^{2}}^{1+\big(\frac{3}{2\alpha}-\frac{3}{q\alpha}\big)}+\|\Lambda^{\alpha}|z^{-}|^{2}\|_{L^{2}}^{1+\big(\frac{3}{2\alpha}-\frac{3}{q\alpha}\big)}\Big)\]

\[\leq C\|\pi\|_{L^{p,\infty}}^{\frac{2\alpha p}{4\alpha p-2p-3}}\Big(\||z^{+}|^{2}\|_{L^{2}}^{2}+\||z^{-}|^{2}\|_{L^{2}}^{2}\Big)+\frac{1}{16}\Big(\|\Lambda^{\alpha}|z^{+}|^{2}\|_{L^{2}}^{2}+\|\Lambda^{\alpha}|z^{-}|^{2}\|_{L^{2}}^{2}\Big).\]
For \(\mathcal{J}_{2}\), integrating by parts, we note that
\[\int_{\mathbb{R}^{3}}|z^{+}|^{2}z^{+}\cdot(\nabla\times w)dx\leq\|w\|_{L^{4}}\|\nabla|z^{+}|^{2}\|_{L^{2}}\|z^{+}\|_{L^{4}}\leq\|w\|_{L^{4}}^{2}\|z^{+}\|_{L^{4}}^{2}+\frac{1}{16}\|\nabla|z^{+}|^{2}\|_{L^{2}}^{2}\]

\[\leq\|w\|_{L^{4}}^{2}\|z^{+}\|_{L^{4}}^{2}+\frac{1}{16}\||z^{+}|^{2}\|_{L^{2}}^{2\theta}\|\Lambda^{\alpha}|z^{+}|^{2}\|_{L^{2}}^{2(1-\theta)},\quad\theta=\frac{\alpha-1}{\alpha}\]

\[\leq C(\|w\|_{L^{4}}^{4}+\|z^{+}\|_{L^{4}}^{4})+\frac{C}{16}\||z^{+}|^{2}\|_{L^{2}}^{2}+\frac{1}{16}\|\Lambda^{\alpha}|z^{+}|^{2}\|_{L^{2}}^{2}\]

\[\leq C(\|w\|_{L^{4}}^{4}+\|z^{+}\|_{L^{4}}^{4})+\frac{1}{16}\|\Lambda^{\alpha}|z^{+}|^{2}\|_{L^{2}}^{2}.\]
And thus, \(\mathcal{J}_{2}\) is bounded by
\[\mathcal{J}_{2}\leq C(\|w\|_{L^{4}}^{4}+\|z^{+}\|_{L^{4}}^{4}+\|z^{-}\|_{L^{4}} ^{4})+\frac{1}{16}\Big{(}\|\Lambda^{\alpha}|z^{+}|^{2}\|_{L^{2}}^{2}+\| \Lambda^{\alpha}|z^{-}|^{2}\|_{L^{2}}^{2}\Big{)}.\]
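The interpolation exponent \(\theta\) used above is dictated by derivative counting:

\[\|\nabla f\|_{L^{2}}\leq C\|f\|_{L^{2}}^{\theta}\|\Lambda^{\alpha}f\|_{L^{2}}^{1-\theta},\qquad 1=(1-\theta)\alpha\ \Longleftrightarrow\ \theta=\frac{\alpha-1}{\alpha},\]

which is admissible precisely because \(\alpha\geq 1\).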
To get \(L^{4}\)-estimate for \(w\), as before, multiplying the third equation of (2.2) by \(\left|w\right|^{2}w\), integrating by parts and summing up, we have
\[\frac{1}{4}\frac{d}{dt}|\!|w|\!|_{L^{4}}^{4}+|\!|\Lambda^{\gamma}|w|^{2}|\!|_{L^ {2}}^{2}+2\chi|\!|w|\!|_{L^{4}}^{4}+\kappa||w|\!|\mathrm{div}\ w|\!|_{L^{2}}^{2}\]
\[=\underbrace{\frac{\chi}{2}\int_{\mathbb{R}^{3}}|w|^{2}w\cdot(\nabla\times(z^{ +}+z^{-}))dx}_{\mathcal{J}_{3}}-\underbrace{\int_{\mathbb{R}^{3}}\mathrm{div} \ w\ (w\cdot\nabla|w|^{2})dx}_{\mathcal{J}_{4}}. \tag{2.5}\]
In the same manner as for \(\mathcal{J}_{2}\), \(\mathcal{J}_{3}\) is bounded by
\[\mathcal{J}_{3}\leq C(|\!|w|\!|_{L^{4}}^{4}+|\!|z^{+}|\!|_{L^{4}}^{4}+|\!|z^{ -}|\!|_{L^{4}}^{4})+\frac{1}{16}|\!|\Lambda^{\gamma}|w|^{2}|\!|_{L^{2}}^{2}, \quad\gamma\geq 1.\]
In a similar way, \(\mathcal{J}_{4}\) is also bounded by
\[\mathcal{J}_{4}\leq\kappa\||w|\,{\rm div}\ w\|_{L^{2}}\|\nabla|w|^{2}\|_{L^{2}}\leq C\|\nabla|w|^{2}\|_{L^{2}}^{2}+\frac{\kappa}{16}\||w|\,{\rm div}\ w\|_{L^{2}}^{2}\]

\[\leq C\|w\|_{L^{4}}^{4}+\frac{1}{16}\|\Lambda^{\gamma}|w|^{2}\|_{L^{2}}^{2}+\frac{\kappa}{16}\||w|\,{\rm div}\ w\|_{L^{2}}^{2},\quad\gamma\geq 1.\]
Let \(Y(t):=\|(z^{+},z^{-},w)\|_{L^{4}(\mathbb{R}^{3})}^{4}\); then, combining (2.3) and (2.5) and absorbing the dissipative terms, we obtain
\[\frac{d}{dt}Y(t)\lesssim\|\pi\|_{L^{p,\infty}(\mathbb{R}^{3})}^{q}Y(t),\qquad q=\frac{2\alpha p}{4\alpha p-2p-3}. \tag{2.6}\]
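One can check directly that this pair \((p,q)\) is exactly the critical pair of Theorem 1.1: solving \(q=\frac{2\alpha p}{4\alpha p-2p-3}\) for \(\frac{2\alpha}{q}\) gives

\[\frac{2\alpha}{q}=\frac{4\alpha p-2p-3}{p}=2(2\alpha-1)-\frac{3}{p},\qquad\text{i.e.}\qquad\frac{3}{p}+\frac{2\alpha}{q}=2(2\alpha-1).\]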
Now, we use an argument similar to the one used in the work of Bosia et al. [2]. For small \(\kappa>0\), choose \(q_{\kappa}=q-\kappa\big(q+\frac{\alpha}{2\alpha-1}-\frac{3c_{0}}{4(2\alpha-1)}\big)\) and \(p_{\kappa}:=\frac{q-\kappa\left(q+\frac{\alpha}{2\alpha-1}-\frac{3c_{0}}{4(2\alpha-1)}\right)}{\frac{2}{3}\left(q(2\alpha-1)-\alpha\right)(1-\kappa)+\frac{c_{0}\kappa}{2}}\), so that
\[\begin{cases}\frac{3}{p_{\kappa}}+\frac{2\alpha}{q_{\kappa}}=2(2\alpha-1),\\ \frac{q_{\kappa}}{p_{\kappa}}=\frac{q\big{(}1-\kappa\big{)}}{p}+\frac{c_{0}\kappa}{2}.\end{cases}\]
Due to the above relation, we get
\[\|\pi\|_{L^{p_{\kappa},\infty}(\mathbb{R}^{3})}^{q_{\kappa}}\lesssim\|\pi\|_{ L^{p,\infty}(\mathbb{R}^{3})}^{q(1-\kappa)}\|\pi\|_{L^{2,\infty}}^{4\kappa} \lesssim\|\pi\|_{L^{p,\infty}(\mathbb{R}^{3})}^{q(1-\kappa)}\|\pi\|_{L^{2}( \mathbb{R}^{3})}^{4\kappa}\]
\[\lesssim\|\pi\|_{L^{p,\infty}(\mathbb{R}^{3})}^{q(1-\kappa)}\Big(\||u|^{2}\|_{L^{2}(\mathbb{R}^{3})}^{4\kappa}+\||b|^{2}\|_{L^{2}(\mathbb{R}^{3})}^{4\kappa}\Big). \tag{2.7}\]
The pair \((p_{\kappa},q_{\kappa})\) also satisfies \(3/p_{\kappa}+2\alpha/q_{\kappa}=2(2\alpha-1)\). Using the estimate (2.7), (2.6) becomes

\[\frac{d}{dt}Y(t)\lesssim\|\pi\|_{L^{p_{\kappa},\infty}(\mathbb{R}^{3})}^{q_{\kappa}}Y(t)\leq C\|\pi\|_{L^{p,\infty}(\mathbb{R}^{3})}^{q(1-\kappa)}Y(t)^{1+2\kappa}.\]
And then integrating with respect to time from \(0\) to \(t\) with \(0\leq t<T\),
\[Y(t)\leq CY(0)+C\int_{0}^{t}\|\pi\|_{L^{p,\infty}(\mathbb{R}^{3})}^{q(1-\kappa)}Y(s)^{1+2\kappa}\,ds,\]
or equivalently,
\[\|z^{+}(t)\|_{L^{4}(\mathbb{R}^{3})}^{4}+\|z^{-}(t)\|_{L^{4}(\mathbb{R}^{3})}^{4}\leq C\Big(\|z_{0}^{+}\|_{L^{4}(\mathbb{R}^{3})}^{4}+\|z_{0}^{-}\|_{L^{4}(\mathbb{R}^{3})}^{4}\Big)\]

\[+C\int_{0}^{t}\|\pi\|_{L^{p,\infty}(\mathbb{R}^{3})}^{q(1-\kappa)}\Big(\|z^{+}\|_{L^{4}(\mathbb{R}^{3})}^{4}+\|z^{-}\|_{L^{4}(\mathbb{R}^{3})}^{4}\Big)^{1+2\kappa}\,ds.\]
Due to Lemma 2.2, we are now able to complete the proof of Theorem 1.1 under assumption (1).
_Part (2): Indeed, the proof is almost the same as the argument in [7] or [13]; however, for the reader's convenience, a sketch of the proof is given. Multiplying both sides of (2.2)\({}_{1}\) by \(z^{+}|z^{+}|^{3r-4}\) and then integrating over \(\mathbb{R}^{3}\), we conclude that_
\[\frac{1}{3r-2}\frac{d}{dt}\int_{\mathbb{R}^{3}}|z^{+}|^{3r-2}dx+\frac{4(3r-4)} {(3r-2)^{2}}\int_{\mathbb{R}^{3}}|\Lambda^{\alpha}|z^{+}|^{\frac{3r-2}{2}}|^{2 }\,dx\]
\[\lesssim\underbrace{\int_{\mathbb{R}^{3}}\nabla\pi\cdot|z^{+}|^{3r-4}z^{+}dx} _{\mathcal{J}_{5}}+\frac{1}{2}\underbrace{\int_{\mathbb{R}^{3}}|z^{+}|^{3r-4} z^{+}\cdot(\nabla\times w)dx}_{\mathcal{J}_{6}} \tag{2.8}\]
_By integration by parts and the Hölder inequality, \(\mathcal{J}_{5}\) can also be bounded as_

\[\mathcal{J}_{5} \leq(3r-4)\int_{\mathbb{R}^{3}}|\pi||\nabla|z^{+}|||z^{+}|^{3r-4}\,dx \tag{2.9}\] \[\leq\frac{2(3r-4)}{(3r-2)}\Big{(}\int_{\mathbb{R}^{3}}|\pi|^{2}|z^{+}|^{3r-4}\,dx\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}^{3}}|\nabla|z^{+}|^{\frac{3r-2}{2}}|^{2}\,dx\Big{)}^{\frac{1}{2}}.\]
_Note that if \(0\leq I\leq a\) and \(0\leq I\leq b\), then \(I\leq\sqrt{ab}\). Combining the bounds on \(\mathcal{J}_{5}\) in (2.8) and (2.9), we get_
\[\mathcal{J}_{5}\lesssim\Big{(}\int_{\mathbb{R}^{3}}|\nabla\pi||z^{+}|^{3r-3}\,dx\Big{)}^{1/2}\Big{(}\int_{\mathbb{R}^{3}}|\pi|^{2}|z^{+}|^{3r-4}\,dx\Big{)}^{\frac{1}{4}}\Big{(}\int_{\mathbb{R}^{3}}|\nabla|z^{+}|^{\frac{3r-2}{2}}|^{2}\,dx\Big{)}^{\frac{1}{4}}\]
\[\leq C\Big{(}\int_{\mathbb{R}^{3}}|\nabla\pi|\big(|z^{+}|^{2}+|z^{-}|^{2}\big)^{\frac{3r-3}{2}}\,dx\Big{)}^{\frac{2}{3}}\Big{(}\int_{\mathbb{R}^{3}}|\pi|^{2}\big(|z^{+}|^{2}+|z^{-}|^{2}\big)^{\frac{3r-4}{2}}\,dx\Big{)}^{\frac{1}{3}}\]
_Due to_
\[\int_{\mathbb{R}^{3}}|\pi|^{2}\Big{(}|z^{+}|^{2}+|z^{-}|^{2}\Big{)}^{\frac{3r- 4}{2}}\,dx\lesssim\|\pi\|_{L^{\frac{3r}{2},6r-6}}^{2}\||z^{+}|^{2}+|z^{-}|^{2} \|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^{\frac{3r-4}{2}}\]
\[\lesssim\||z^{+}|^{2}+|z^{-}|^{2}\|^{2}_{L^{\frac{3r}{2},\frac{3r-3}{2}}}\||z^{+} |^{2}+|z^{-}|^{2}\|^{\frac{3r-4}{2}}_{L^{\frac{3r}{2},\frac{3r-3}{2}}}=\||z^{+}| ^{2}+|z^{-}|^{2}\|^{\frac{3r}{2}}_{L^{\frac{3r}{2},\frac{3r-3}{2}}},\]
_and_
\[\int_{\mathbb{R}^{3}}|\nabla\pi|\Big{(}|z^{+}|^{2}+|z^{-}|^{2}\Big{)}^{\frac{3r -3}{2}}\,dx\lesssim\|\nabla\pi\|_{L^{r,\infty}}\||z^{+}|^{2}+|z^{-}|^{2}\|^{ \frac{3r-3}{2}}_{L^{\frac{3r}{2},\frac{3r-3}{2}}},\]
\(\mathcal{J}_{5}\) _is estimated by_
\[\mathcal{J}_{5}\leq C\|\nabla\pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^{+}|^{2}+ |z^{-}|^{2}\|^{\frac{3r-2}{2}}_{L^{\frac{3r}{2},\frac{3r-2}{2}}}+\frac{3r-4}{( 3r-2)^{2}}\Big{(}\int_{\mathbb{R}^{3}}|\nabla|z^{+}|^{\frac{3r-2}{2}}|^{2}\,dx \Big{)}. \tag{2.10}\]
_Next, for \(\mathcal{J}_{6}\), using the Holder and Young inequalities, we have_
\[\mathcal{J}_{6}\lesssim\|w\|_{L^{3r-2}}\||z^{+}|^{\frac{3r-4}{2}}\|_{L^{\frac{ 2(3r-2)}{3r-4}}}\|\nabla|z^{+}|^{\frac{3r-2}{2}}\|_{L^{2}} \tag{2.11}\]
\[\lesssim\|w\|^{2}_{L^{3r-2}}\||z^{+}|^{\frac{3r-4}{2}}\|^{2}_{L^{\frac{2(3r-2) }{3r-4}}}+\|\nabla|z^{+}|^{\frac{3r-2}{2}}\|^{2}_{L^{2}}\lesssim(\|w\|^{3r-2} _{L^{3r-2}}+\||z^{+}\|^{3r-2}_{L^{3r-2}})+\|\nabla|z^{+}|^{\frac{3r-2}{2}}\|^{ 2}_{L^{2}}.\]
_And then, considering the estimates (2.10) and (2.11), (2.8) reduces to_
\[\frac{d}{dt}\int_{\mathbb{R}^{3}}|z^{+}|^{3r-2}dx+\int_{\mathbb{R}^{3}}|\Lambda ^{\alpha}|z^{+}|^{\frac{3r-2}{2}}|^{2}\,dx \tag{2.12}\]
\[\lesssim\|\nabla\pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^{+}|^{2}+|z^{-}|^{2}\| ^{\frac{3r-2}{2}}_{L^{\frac{3r}{2},\frac{3r-2}{2}}}+C(\|w\|^{3r-2}_{L^{3r-2}}+ \||z^{+}\|^{3r-2}_{L^{3r-2}})+\|\nabla|z^{+}|^{\frac{3r-2}{2}}\|^{2}_{L^{2}}.\]
\[\leq C\|\nabla\pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^{+}|^{2}+|z^{-}|^{2}\|^{ \frac{3r-2}{2}}_{L^{\frac{3r}{2},\frac{3r-2}{2}}}+C(\|w\|^{3r-2}_{L^{3r-2}}+ \||z^{+}\|^{3r-2}_{L^{3r-2}})+\frac{1}{256}\|\Lambda^{\alpha}|z^{+}|^{\frac{3r -2}{2}}\|^{2}_{L^{2}}.\]
_where we use the estimate_
\[\|\nabla|z^{+}|^{\frac{3r-2}{2}}\|_{L^{2}}^{2}\leq C\||z^{+}|^{\frac{3r-2}{2}}\|_{L^{2}}^{2\theta}\|\Lambda^{\alpha}|z^{+}|^{\frac{3r-2}{2}}\|_{L^{2}}^{2(1-\theta)}\leq C\|z^{+}\|_{L^{3r-2}}^{3r-2}+\frac{1}{256}\|\Lambda^{\alpha}|z^{+}|^{\frac{3r-2}{2}}\|_{L^{2}}^{2}.\]
_In a similar fashion, arguing for equation \((2.2)_{2}\), we have_
\[\frac{d}{dt}\int_{\mathbb{R}^{3}}|z^{-}|^{3r-2}dx+\int_{\mathbb{R}^{3}}|\Lambda ^{\alpha}|z^{-}|^{\frac{3r-2}{2}}|^{2}\,dx \tag{2.13}\]
\[\leq C\|\nabla\pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^{+}|^{2}+|z^{-}|^{2}\|^{ \frac{3r-2}{2}}_{L^{\frac{3r}{2},\frac{3r-2}{2}}}+C(\|w\|^{3r-2}_{L^{3r-2}}+ \||z^{-}\|^{3r-2}_{L^{3r-2}})+\frac{1}{256}\|\Lambda^{\alpha}|z^{-}|^{\frac{3r- 2}{2}}\|^{2}_{L^{2}}.\]
_After summing up (2.12) and (2.13), using Sobolev embedding and Young's inequality,_
_we obtain_
\[\begin{split}&\frac{d}{dt}\int_{\mathbb{R}^{3}}\Big(|z^{+}|^{3r-2}+|z^{-}|^{3r-2}\Big)dx+\int_{\mathbb{R}^{3}}\Big(|\Lambda^{\alpha}|z^{+}|^{\frac{3r-2}{2}}|^{2}+|\Lambda^{\alpha}|z^{-}|^{\frac{3r-2}{2}}|^{2}\Big)\,dx\\ &\lesssim\|\nabla\pi\|_{L^{r,\infty}}^{\frac{2}{3}}\Big(\||z^{+}|^{\frac{3r-2}{2}}\|_{L^{\frac{6r}{3r-2},1}}^{2}+\||z^{-}|^{\frac{3r-2}{2}}\|_{L^{\frac{6r}{3r-2},1}}^{2}\Big)+C\Big(\|w\|_{L^{3r-2}}^{3r-2}+\|z^{+}\|_{L^{3r-2}}^{3r-2}+\|z^{-}\|_{L^{3r-2}}^{3r-2}\Big).\end{split}\]
_Let \(\mathcal{Y}(t):=\|z^{+}\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|z^{-}\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|w\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}\); interpolating the Lorentz norms against the dissipation as before, the above inequality becomes_

\[\frac{d}{dt}\mathcal{Y}(t)\leq C\|\nabla\pi\|_{L^{r,\infty}(\mathbb{R}^{3})}^{\frac{2r\alpha}{3(r\alpha-1)}}\mathcal{Y}(t)+C\mathcal{Y}(t).\]
_Arguing as before, this allows us to finish the proof of Theorem 1.1. \(\Box\)_
## 3 Proof of Theorem 1.2
For this, following the argument in [22] or [12], we can establish a Serrin-type regularity criterion on the gradient of the pressure function \(\pi\). Indeed, by an \(L^{4}\)-energy estimate analogous to (2.3) and (2.5), we know
\[\frac{1}{4}\frac{d}{dt}\|(u,b,w)\|_{L^{4}}^{4}+\|\nabla(|u|^{2},|b|^{2},|w|^{2})\|_{L^{2}}^{2}\]

\[+\||u||\nabla u|\|_{L^{2}}^{2}+\||b||\nabla b|\|_{L^{2}}^{2}+\||w||\nabla w|\|_{L^{2}}^{2}+2\chi\|w\|_{L^{4}}^{4}+\kappa\||w|\,{\rm div}\ w\|_{L^{2}}^{2}\]

\[\lesssim\underbrace{\int_{\mathbb{R}^{3}}|\nabla\pi||u|^{3}\,dx}_{\mathcal{J}_{1}}+\underbrace{\int_{\mathbb{R}^{3}}(b\cdot\nabla)b\cdot|u|^{2}u\,dx}_{\mathcal{J}_{2}}\]

\[+\frac{1}{2}\underbrace{\int_{\mathbb{R}^{3}}\nabla(|b|^{2})\cdot|u|^{2}u\,dx}_{\mathcal{J}_{3}}+\underbrace{\int_{\mathbb{R}^{3}}(b\cdot\nabla)u\cdot|b|^{2}b\,dx}_{\mathcal{J}_{4}}+\frac{\chi}{2}\underbrace{\int_{\mathbb{R}^{3}}|u|^{2}u\cdot(\nabla\times w)dx}_{\mathcal{J}_{5}}\]

\[+\underbrace{\frac{\chi}{2}\int_{\mathbb{R}^{3}}|w|^{2}w\cdot(\nabla\times u)dx}_{\mathcal{J}_{6}}-\underbrace{\int_{\mathbb{R}^{3}}{\rm div}\ w\ (w\cdot\nabla|w|^{2})dx}_{\mathcal{J}_{7}} \tag{3.1}\]
Compared with Section 2, only the estimate of \(\mathcal{J}_{1}\) changes, as follows: for \(r>1\),

\[\int_{\mathbb{R}^{3}}\nabla\pi\cdot|u|^{2}u\,dx\leq\||\nabla\pi|^{1/2}\|_{L^{4,4}}\||\nabla\pi|^{1/2}\|_{L^{2r,\infty}}\||u|^{3}\|_{L^{\frac{4r}{3r-2},\frac{4}{3}}}\]

\[=\|\nabla\pi\|_{L^{2,2}}^{1/2}\|\nabla\pi\|_{L^{r,\infty}}^{1/2}\|u\|_{L^{\frac{12r}{3r-2},4}}^{3}\leq\frac{1}{4}\|\nabla\pi\|_{L^{2}}^{2}+C\|\nabla\pi\|_{L^{r,\infty}}^{2/3}\|u\|_{L^{\frac{12r}{3r-2},4}}^{4}\]

\[\leq\frac{1}{4}\|\nabla\pi\|_{L^{2}}^{2}+C\|\nabla\pi\|_{L^{r,\infty}}^{2/3}\||u|^{2}\|_{L^{\frac{6r}{3r-2},2}}^{2}\]

\[\leq\frac{1}{4}\|\nabla\pi\|_{L^{2}}^{2}+C\|\nabla\pi\|_{L^{r,\infty}}^{2/3}\||u|^{2}\|_{L^{2,2}}^{2(1-\frac{1}{r})}\|\nabla|u|^{2}\|_{L^{2,2}}^{\frac{2}{r}}\]

\[\leq\frac{1}{4}\|\nabla\pi\|_{L^{2}}^{2}+\frac{1}{8}\|\nabla|u|^{2}\|_{L^{2}}^{2}+C\|\nabla\pi\|_{L^{r,\infty}}^{\frac{2r}{3(r-1)}}\|u\|_{L^{4}}^{4},\]
and thus

\[\mathcal{J}_{1}\leq\frac{1}{4}\|\nabla\pi\|_{L^{2}}^{2}+\frac{1}{16}\|\nabla|u|^{2}\|_{L^{2}}^{2}+C\|\nabla\pi\|_{L^{r,\infty}}^{\frac{2r}{3(r-1)}}\|u\|_{L^{4}}^{4}.\]
Using the estimate

\[\|\nabla\pi\|_{L^{2}}^{2}\lesssim\|(u\cdot\nabla)u+(b\cdot\nabla)b\|_{L^{2}}^{2}\lesssim\||u||\nabla u|\|_{L^{2}}^{2}+\||b||\nabla b|\|_{L^{2}}^{2},\]
we get
\[\mathcal{J}_{1}\leq C\|\nabla\pi\|_{L^{r,\infty}}^{\frac{2r}{3(r-1)}}\|u\|_{L^{4}}^{4}+\frac{1}{8}\big(\||u||\nabla u|\|_{L^{2}}^{2}+\||b||\nabla b|\|_{L^{2}}^{2}\big).\]
Using integration by parts, \(\mathcal{J}_{2}\), \(\mathcal{J}_{3}\) and \(\mathcal{J}_{4}\) are bounded by

\[\int_{\mathbb{R}^{3}}|u||b|^{2}\big(|\nabla|u|^{2}|+|\nabla|b|^{2}|\big)dx\leq C\|(|u|^{2}+|b|^{2})b\|_{L^{2}}^{2}+\frac{1}{16}\big(\|\nabla|u|^{2}\|_{L^{2}}^{2}+\|\nabla|b|^{2}\|_{L^{2}}^{2}\big)\]

\[\leq C\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\big(\||u|^{2}\|_{L^{2}}^{2}+\||b|^{2}\|_{L^{2}}^{2}\big)+\frac{1}{16}\big(\|\nabla|u|^{2}\|_{L^{2}}^{2}+\|\nabla|b|^{2}\|_{L^{2}}^{2}\big),\]
where we use the following inequality:
\[\|b\|_{L^{a_{1}}}^{2}\||u|^{2}\|_{L^{\frac{2a_{1}}{a_{1}-2}}}^{2}\lesssim\|b\|_{L^{a_{1}}}^{2}\||u|^{2}\|_{L^{2}}^{2(1-\frac{3}{a_{1}})}\|\nabla|u|^{2}\|_{L^{2}}^{\frac{6}{a_{1}}}\leq C\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\||u|^{2}\|_{L^{2}}^{2}+\frac{1}{16}\|\nabla|u|^{2}\|_{L^{2}}^{2}.\]
In a similar way, for \(\mathcal{J}_{5}\) and \(\mathcal{J}_{6}\), it follows that

\[|\mathcal{J}_{5}|+|\mathcal{J}_{6}|\leq C\int_{\mathbb{R}^{3}}(|u|^{4}+|w|^{4})\,dx+\frac{1}{16}\int_{\mathbb{R}^{3}}||w||\nabla w||^{2}\,dx,\]

and, for \(\mathcal{J}_{7}\),

\[|\mathcal{J}_{7}|\leq C\int_{\mathbb{R}^{3}}||w|\,{\rm div}\ w|^{2}dx+\frac{1}{16}\int_{\mathbb{R}^{3}}|\nabla|w|^{2}|^{2}dx.\]
Plugging this into (3.1), we get
\[\begin{split}\frac{d}{dt}&\|(u,b,w)\|_{L^{4}}^{4}+\|\nabla(|u|^{2},|b|^{2},|w|^{2})\|_{L^{2}}^{2}+\||u||\nabla u|\|_{L^{2}}^{2}+\||b||\nabla b|\|_{L^{2}}^{2}+\||w||\nabla w|\|_{L^{2}}^{2}\\ &\lesssim\|\nabla\pi\|_{L^{q,\infty}(\mathbb{R}^{3})}^{p}\|(u,b,w)\|_{L^{4}}^{4}+\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\lesssim\|\nabla\pi\|_{L^{q,\infty}(\mathbb{R}^{3})}^{p(1-\kappa)}\|\nabla\pi\|_{L^{2}(\mathbb{R}^{3})}^{c_{1}\kappa}\|(u,b,w)\|_{L^{4}}^{4}+\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\leq C\|\nabla\pi\|_{L^{q,\infty}(\mathbb{R}^{3})}^{p(1-\kappa)}\Big{\|}|u||\nabla u|+|b||\nabla b|\Big{\|}_{L^{2}(\mathbb{R}^{3})}^{c_{1}\kappa}\|(u,b,w)\|_{L^{4}}^{4}+\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\leq C\|\nabla\pi\|_{L^{q,\infty}(\mathbb{R}^{3})}^{\frac{2p(1-\kappa)}{2-c_{1}\kappa}}\|(u,b,w)\|_{L^{4}(\mathbb{R}^{3})}^{\frac{8}{2-c_{1}\kappa}}+\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\|(u,b,w)\|_{L^{4}}^{4}+\frac{1}{8}\Big{(}\||u||\nabla u|\|_{L^{2}(\mathbb{R}^{3})}^{2}+\||b||\nabla b|\|_{L^{2}(\mathbb{R}^{3})}^{2}\Big{)}\\ &\leq C\Big(\|\nabla\pi\|_{L^{q,\infty}(\mathbb{R}^{3})}^{p(1-\delta)}+\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\Big)\|(u,b,w)\|_{L^{4}(\mathbb{R}^{3})}^{4(1+2\delta)}+\frac{1}{8}\Big{(}\||u||\nabla u|\|_{L^{2}}^{2}+\||b||\nabla b|\|_{L^{2}}^{2}\Big{)}\end{split}\]
Notice that \(2/p_{\kappa}+3/q_{\kappa}=3\). Choosing \(\delta=\frac{(2-c_{1})\kappa}{2-c_{1}\kappa},\ \ c_{1}=\frac{4}{3}\), it finally follows that
\[\frac{d}{dt}\|(u,b,w)\|_{L^{4}}^{4}\lesssim\Big(\|\nabla\pi\|_{L^{q,\infty}(\mathbb{R}^{3})}^{p(1-\delta)}+\|b\|_{L^{a_{1}}}^{\frac{2a_{1}}{a_{1}-3}}\Big)\|(u,b,w)\|_{L^{4}(\mathbb{R}^{3})}^{4(1+2\delta)}.\]
Arguing as before, this allows us to finish the proof of Theorem 1.2. \(\Box\) |
2309.10167 | Testaro: Efficient Ensemble Testing for Web Accessibility | As automated web accessibility testing tools become enriched with new and
improved tests, it can be impractical to leverage those advances. Each tool
offers unique benefits, but effectively using multiple tools would require
integrating them into a uniform testing and reporting scheme. Such integration
is complex, because tools vary in what they try to detect, what they actually
detect, and how they classify, describe, and report defects. Consequently,
testers typically use only one tool.
Testaro is a novel open-source NPM package that checks compliance with about
650 rules defined by an ensemble of 8 tools: alfa, Axe, Equal Access, HTML
CodeSniffer, Nu Html Checker, QualWeb, Testaro, and WAVE.
Attendees at the demonstration will, within 5 minutes, create jobs for
Testaro, run them, and generate unified reports documenting more accessibility
issues than any single tool can discover. | Jonathan Robert Pool | 2023-09-18T21:32:36Z | http://arxiv.org/abs/2309.10167v2 | # Testaro
###### Abstract.
As automated web accessibility testing tools become enriched with new and improved tests, it can be impractical to leverage those advances. Each tool offers unique benefits, but effectively using multiple tools would require integrating them into a uniform testing and reporting scheme. Such integration is complex, because tools vary in what they try to detect, what they actually detect, and how they classify, describe, and report defects. Consequently, testers typically use only one tool.
Testaro is a novel open-source NPM package that checks compliance with about 650 rules defined by an ensemble of 8 tools: alfa, Axe, Equal Access, HTML CodeSniffer, Nu Html Checker, QualWeb, Testaro, and WAVE.
Attendees at the demonstration will, within 5 minutes, create jobs for Testaro, run them, and generate unified reports documenting more accessibility issues than any single tool can discover.
web accessibility, accessibility testing, test automation, test efficiency
Until now, no project integrating multiple accessibility testing tools under programmatic control with standardized reporting has been discovered. Pa11y[(18)], kayle[(11)], and AAT[(9)] integrate 2 tools: Axe and HTML CodeSniffer. Although a11yTools[(1)] integrates 5 tools and 13 single-issue tests, it runs only one tool or test at a time, and only under human control.
## 3. Architecture
Testaro (in contrast with the Englefield _et al._ proposal) does not depend on cooperation from tool makers. It integrates existing tools as they are.
Testaro tests the way humans do. It launches web browsers, navigates to web pages, performs actions, checks whether the pages behave as expected, and notes the results. Hence, it runs on a Windows, MacOS, or Ubuntu workstation.
Testaro is an NPM package that performs its own tests and those of 7 other tools, of which one is a remote service and the others are installed dependencies. The tools integrated by Testaro are listed in Table 1. Among them, they check compliance with about 650 rules. Testaro uses Playwright[(12)] to launch and control Chromium, Webkit, and Firefox browsers.
## 4. Process
A _job_ is an object giving information and instructions to Testaro. The core of a job is its _acts_, an array of instructions to be executed. Version 18.0.0 of Testaro defines 19 act types, which include actions on a page, navigations among pages, and tool executions.
When an act tells Testaro to execute one of the 9 tools, the act can specify which rules of that tool the tool should test for, which of the 3 browser types the tool should use, how granular the output should be, and other options. Here is an example of an act, telling Testaro to make the alfa tool perform tests for two of its rules:
{ type: 'test', which: 'alfa', what: 'Siteimprove alfa tool', rules: ['r25', 'r71'] }
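Acts compose into whole jobs. The following is a hypothetical sketch of a job that runs two tools; the job-level id and what properties are illustrative placeholders rather than Testaro's documented schema, and in practice a navigation act (one of the other documented act types) would precede the tool acts to open the target page:

```js
{
  id: 'demo-job',                        // illustrative identifier
  what: 'run axe and nuVal on one page', // illustrative description
  acts: [
    // a navigation act would appear here to load the target page
    { type: 'test', which: 'axe', what: 'Deque axe-core tool' },
    { type: 'test', which: 'nuVal', what: 'Nu Html Checker' }
  ]
}
```

Because Testaro adds results to the acts as it performs them, the report for such a job contains one result per tool act, in act order.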
As it performs a job, Testaro adds results to the acts. At the end of the job, Testaro adds whole-job data to the job and returns this elaborated job as a _report_.
## 5. Efficiencies
Testaro is designed to streamline tool installation and configuration. It installs all the tools and provides a uniform configuration interface. The options made available by all the tools are documented in one location and selected in the job file with a uniform syntax.
Testaro simplifies the task of executing multiple tools. A single job file tells Testaro which tools to launch in which order, and Testaro runs them all. A job that includes all the tests of all the tools typically takes about 3 minutes. If that were not fast enough, execution could be further accelerated with job partitioning: installing Testaro on multiple workstations, having them perform complementary jobs in parallel, and combining their reports.
An instance of Testaro can be configured as an on-call agent. It polls a server for jobs. When the server replies by sending a job, Testaro performs it and sends the report to the server.
Finally, Testaro is designed to make the utilization of tool reports more efficient. For this purpose, Testaro translates the most common elements of native tool reports into standard results. Fully documented in the README.md file, the standard results uniformly present each tool's reports of violations of its rules, including what rule was violated, how serious the estimated impact is (on a 0-to-3 ordinal scale), what HTML element was involved, where on the page it appeared, and an excerpt from the HTML code.
Here is an example of an entry from a standard result:
{
  totals: [23, 11, 6, 8],
  instances: [
    {
      ruleID: 'image-no-alt',
      what: 'img element has no text alternative',
      count: 1,
      ordinalSeverity: 3,
      tagName: 'IMG',
      id: 'ocean-beach-sunset',
      location: {
        doc: 'dom',
        type: 'xpath',
        spec: '/html/body/div[4]/p[2]/img[1]'
      },
      excerpt: "<img src='images/obSunset.jpg'>"
    },
    ...
  ]
}
In this example, a tool reported 23 instances of rule violations at severity 0, 11 at severity 1, etc. The first reported instance was an img element that violated a rule named image-no-alt.
Given the diverse ontologies of the tools, any standardization reflects some judgment. An example is the ordinalSeverity property, which interprets and combines the tools' various classifications of severity, priority, and certainty. Users are free to rely on the standardization performed by Testaro to simplify report consumption, but, if they want more control, they may extract data from original tool results, too.
\begin{table}
\begin{tabular}{l l l} \hline \hline Code & Name & Creator \\ \hline alfa & alfa[(15)] & Siteimprove \\ axe & axe-core[(2)] & Deque \\ htmlcs & HTML CodeSniffer[(16)] & Squiz \\ ibm & Equal Access[(4)] & IBM \\ nuVal & Nu Html Checker[(21)] & W3C \\ qualWeb & QualWeb[(5)] & Universidade da Lisboa \\ testaro & Testaro[(7)] & Testaro \\ wave & WAVE[(6)] & WebAIM \\ \hline \hline \end{tabular}
\end{table}
Table 1. Tools integrated by Testaro
## 6. Customization
Effective accessibility management requires checking conformity not only to industry standards such as the Web Content Accessibility Guidelines (WCAG)(Han et al., 2017), but also to rules (brand standards, design systems, etc.) of one's own organization. In a multi-tool integrator, each tool is potentially a platform for the creation of custom rules, and the set of tools is extensible. Users can customize Testaro by any of these methods:
* creating a tool and adding it as an installed dependency
* creating a tool and adding it as a remote service
* extending any of the tools, if it permits, by adding new rules to it
The Testaro tool contains a template for the creation of custom rules. Existing Testaro rules are typically defined in 10 to 30 lines of code. A custom rule would likely require a similar amount of code.
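For orientation only, a custom rule of the scale described above might look roughly like the following. This is a hypothetical sketch assuming a Playwright-style page handle and an exported reporter function; it is not Testaro's actual rule API.

```javascript
// Hypothetical custom rule (illustrative only; not Testaro's API):
// flag img elements whose alt text merely repeats the file name.
exports.reporter = async page => {
  const instances = await page.$$eval('img[alt]', imgs =>
    imgs
      .filter(img => img.alt.trim() === img.src.split('/').pop())
      .map(img => ({
        ruleID: 'image-alt-filename', // hypothetical rule name
        what: 'img alt text repeats the file name',
        ordinalSeverity: 2,
        tagName: 'IMG',
        excerpt: img.outerHTML.slice(0, 100)
      }))
  );
  return { instances };
};
```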
## 7. Job Preparation
For routinized use of Testaro, job preparation can be partly automated. One package performing this function is Testilo (Testilo, 2017). The user can create files that answer the questions "What tests do you want to run?" and "What _targets_ do you want to test?". Testilo can convert those files to a job that Testaro will execute. Table 2 gives an example of data that might be in a target file.
## 8. Report Enhancement
A JSON report from Testaro narrows the gap between native tool reports and user-friendly reporting, but does not close that gap. A report contains standard results, but they are presented sequentially, tool by tool, and the result from each tool describes violations of that tool's rules, not of universally defined norms. Users will often want to:
* map the tool rules onto a set of tool-agnostic issues
* gather the complaints of all tools about each issue into one place
* aggregate the issue reports into a total accessibility score
* export scores for use in dashboards or reports
* summarize the JSON report in a developer- or manager-friendly HTML document
* collect scores from reports on related targets into a comparative report
To perform such functions, users can create procedures and/or use Testilo. To interpret tool rules, Testilo offers a rule classifier that maps the approximately 650 tool rules onto about 260 _issues_. For example, two tools have rules prohibiting broken same-page links. One is AAA_2_4_1.G1,G123,G124.NoSuchID from htmlcs, and the other is link_internal_broken from wave. Testilo maps both of these onto an internalLinkBroken issue and references WCAG Success Criterion 1.3.1 as the most relevant standard.
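One way to picture such a classifier is as a map from issue names to the tool rules they subsume. The following sketch of the internalLinkBroken entry uses the two rules named above; the surrounding structure and property names are illustrative assumptions.

```javascript
// Sketch of one entry in a tool-agnostic issue classification
// (structure and property names are illustrative assumptions).
const issues = {
  internalLinkBroken: {
    wcag: '1.3.1', // most relevant WCAG Success Criterion
    tools: {
      htmlcs: ['AAA_2_4_1.G1,G123,G124.NoSuchID'],
      wave: ['link_internal_broken']
    }
  }
};
```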
## 9. Demonstration
In the demonstration, a simple web service will ask each user for the URL of a web page to be tested. The service will use Testilo to create a job for Testaro. Testaro will perform the job and convert all the tools' results into standard results. As the final step, Testilo will convert the Testaro report to a human-oriented web page.
## 10. Future Work
Some engineers using Testaro for accessibility testing have requested richer and more tailored issue reports, with better identification of instance locations, consolidation of duplicates, and resolution of tool disagreements.
Such improvements will require more work on instance identification. Some tools may locate instances by line number, others by XPath, others by CSS selector, others by bounding box, and others only with an HTML code excerpt. Further work could aim to determine when instances reported by various tools are the same and to supply images of, and links to, instances.
Empirical data from the use of Testaro may facilitate rule sequencing, pruning, and deprecation; bug reports to tool makers; and the training of machine learners in the prediction of accessibility issues. Testaro welcomes contributions to improve functionality, reliability, and issue coverage.
## 11. Conclusion
As makers of testing tools innovate to narrow the gaps (Han et al., 2017) between formal and practical accessibility, tools continue to complement each other. Accessibility testing with an ensemble of tools is complex but valuable, and it can be made practical even without the cooperation of tool makers.
## Acknowledgments
I acknowledge valuable editorial comments and research from Susan M. Colowick.
My opinions expressed herein are my own views and do not necessarily reflect the views of CVS Health, its affiliates, or any of my colleagues at CVS Health or its affiliates.
|
2309.11292 | Multivariate Dirichlet Moments and a Polychromatic Ewens Sampling
Formula | We present an elementary non-recursive formula for the multivariate moments
of the Dirichlet distribution on the standard simplex, in terms of the pattern
inventory of the moments' exponents. We obtain analog formulas for the
multivariate moments of the Dirichlet-Ferguson and Gamma measures. We further
introduce a polychromatic analogue of Ewens sampling formula on colored integer
partitions, discuss its relation with suitable extensions of Hoppe's urn model
and of the Chinese restaurant process, and prove that it satisfies an adapted
notion of consistency in the sense of Kingman. | Lorenzo Dello Schiavo, Filippo Quattrocchi | 2023-09-20T13:19:25Z | http://arxiv.org/abs/2309.11292v1 | # Multivariate Dirichlet Moments and a Polychromatic Ewens Sampling Formula
###### Abstract
We present an elementary non-recursive formula for the multivariate moments of the Dirichlet distribution on the standard simplex, in terms of the pattern inventory of the moments' exponents. We obtain analog formulas for the multivariate moments of the Dirichlet-Ferguson and Gamma measures.
We further introduce a polychromatic analogue of Ewens sampling formula on colored integer partitions, discuss its relation with suitable extensions of Hoppe's urn model and of the Chinese restaurant process, and prove that it satisfies an adapted notion of consistency in the sense of Kingman.
**Keywords: Dirichlet distribution; Ewens sampling formula; Hoppe urn model; colored partitions. MSC2020 subject classifications: 60C05 (Primary), 60J10.**
## 1 Introduction
We present extensions of some celebrated models of random integer partitions to the case when such partitions are decorated by a subordinate specification, for simplicity described as a categorically distributed coloring. The common thread of our presentation is an algebraic approach to the count of integer partitions, which we draw from well-known connections among the _Dirichlet distribution_, the _Ewens sampling formula_ (ESF), _Hoppe's urn model_, the _Chinese restaurant process_ (CRP), etc.
Our starting point is the observation that univariate moments of the Dirichlet distribution are the generating functions of the (standard, 'monochromatic') ESF (cf. (3.3) below). Here, our goal is to describe the relation between _multivariate moments_ of the Dirichlet distribution and a 'polychromatic' ESF on colored partitions. A systematic treatment of the arising 'colored partition structure', including a representation theorem in the sense of Kingman [17], will be the subject of future work.
Denote by \(\Gamma\) the _Euler Gamma function_, by \(\left\langle\alpha\right\rangle_{k}\mathop{:=}\Gamma(\alpha+k)/\Gamma(\alpha)\) the _Pochhammer symbol_ of \(\alpha>0\), and by \(\mathrm{B}(x_{1},\ldots,x_{k})\mathop{:=}\Gamma(x_{1})\cdots\Gamma(x_{k})/ \Gamma(x_{1}+\cdots+x_{k})\) the _multivariate Euler Beta function_. For \(k\geq 1\) further let \(\Delta^{k-1}\) be the _standard simplex_ (3.1). For \(\boldsymbol{\alpha}\in\mathds{R}^{k}_{+}\), the _Dirichlet distribution_\(D_{\boldsymbol{\alpha}}\) is the probability measure with density
\[\frac{\boldsymbol{1}_{\Delta^{k-1}}(x_{1},\ldots,x_{k})}{\mathrm{B}(\alpha_{1}, \ldots,\alpha_{k})}x_{1}^{\alpha_{1}-1}\cdots x_{k}^{\alpha_{k}-1}\]
w.r.t. the standard Lebesgue measure on the hyperplane of equation \(x_{1}+\cdots+x_{k}=1\).
_Moments of Dirichlet measures._ To find useful representations for the moments of \(D_{\boldsymbol{\alpha}}\) is a difficult problem, of which we present a brief historical account in §3.1. As a first main result, we provide a simple, elementary, closed formula for all multivariate moments of \(D_{\boldsymbol{\alpha}}\). Precisely, fix integers \(q\in\mathds{N}_{1}\) and \(\mathbf{n}\coloneqq(n_{1},\ldots,n_{q})\in\mathds{N}_{1}^{q}\), and let \(\mathscr{Z}_{\mathbf{n}}\) be the _pattern inventory_ (2.6) of \(\mathbf{n}\); also see (2.9).
**Theorem 1** (see Thm. 3.1).: _For every \(\mathbf{s}_{1},\ldots,\mathbf{s}_{q}\in\mathds{C}^{k}\) and \(\boldsymbol{\alpha}\in\mathds{R}^{k}_{+}\),_
\[\int_{\Delta^{k-1}}\prod_{j=1}^{q}(\mathbf{s}_{j}\cdot\mathbf{y})^{n_{j}}\, \mathrm{d}D_{\boldsymbol{\alpha}}(\mathbf{y})=\frac{n_{1}!\cdots n_{q}!}{ \langle\alpha_{1}+\cdots+\alpha_{k}\rangle_{n_{1}+\cdots+n_{q}}}\,\mathscr{Z}_ {\mathbf{n}}[\mathbf{s}_{1},\ldots,\mathbf{s}_{q};\boldsymbol{\alpha}]\,. \tag{1.1}\]
By 'simple' we mean that our formula is not further simplifiable in terms of actions of the symmetric groups \(\mathfrak{S}_{n_{1}},\ldots,\mathfrak{S}_{n_{q}}\), by 'elementary' that it is expressed only in terms of elementary functions, and by 'closed' that it is both non-recursive and non-iterative.
Ewens Sampling Formula. For a permutation \(\pi\) in the symmetric group \(\mathfrak{S}_{n}\), denote by \(r\coloneqq r(\pi)\) the total number of its cycles (including fixed points). Let \(\theta>0\) and recall that a probability distribution on \(\mathfrak{S}_{n}\) is \(\theta\)-_biased_ if its value on each \(\pi\) is proportional to \(\theta^{r}\). The _Ewens Sampling Formula_ (ESF) with parameter \(\theta\) is the probability distribution
\[E_{\theta}(\boldsymbol{\lambda})\coloneqq\frac{n!}{\langle\theta\rangle_{n}} \prod_{i=1}^{n}\frac{\theta^{\lambda_{i}}}{i^{\lambda_{i}}\lambda_{i}!}\,, \qquad\boldsymbol{\lambda}\coloneqq(\lambda_{1},\ldots,\lambda_{n})\,\]
on the set of integer partitions \(\boldsymbol{\lambda}\) of \(n\), i.e. satisfying \(\sum_{i}i\lambda_{i}=n\). It is the probability that a \(\theta\)-biased permutation has given cycle structure \(\boldsymbol{\lambda}\), i.e. with \(\lambda_{1}\) fixed points, \(\lambda_{2}\) transpositions, \(\lambda_{3}\) 3-cycles, etc. In particular, the distribution \(E_{1}\) describes the frequency of a permutation in \(\mathfrak{S}_{n}\) with a given cycle structure.
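For instance, for \(n=3\) and \(\boldsymbol{\lambda}=(1,1,0)\), i.e. one fixed point and one transposition,

\[E_{\theta}\big{(}(1,1,0)\big{)}=\frac{3!}{\langle\theta\rangle_{3}}\cdot\frac{\theta}{1}\cdot\frac{\theta}{2}=\frac{3\theta}{(\theta+1)(\theta+2)}\,,\]

which for \(\theta=1\) equals \(1/2\), matching the three permutations with this cycle structure among the six elements of \(\mathfrak{S}_{3}\).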
We refer the reader to the recent surveys [3, 28] and references therein for a complete account of the history and importance of the ESF throughout mathematics and beyond.
A Polychromatic ESF. The proof of Theorem 1 will partly consist in counting the cardinality of the orbits of a certain group action with homogeneous space the symmetric group \(\mathfrak{S}_{n_{1}+\cdots+n_{q}}\). As a byproduct we derive a _polychromatic ESF_ which we now describe.
For positive integers \(q\) and \(\mathbf{n}\coloneqq(n_{1},\ldots,n_{q})\) we set \(n\coloneqq n_{1}+\cdots+n_{q}\) and consider the set \([n]\coloneqq\{1,\ldots,n\}\). We interpret \([q]\coloneqq\{1,\ldots,q\}\) as a set of colors --or, more generally, of categories-- and assign color \(c_{1}\) to \(n_{1}\) elements of \([n]\), color \(c_{2}\) to \(n_{2}\) elements, and so on, in a fixed deterministic way. Taking into account the coloring of the elements in \([n]\), one may ask for the following refinement of the standard ESF.
**Question 1**.: _What is the probability that a \(\theta\)-biased random permutation \(\pi\in\mathfrak{S}_{n}\), has a given cycle structure and each orbit of \(\pi\) has a given number of elements of color \(c_{j}\), \(j\in[q]\)?_
In order to answer Question 1, it is convenient to encode both the cycle structure of \(\pi\) and the number of \(c_{j}\)-colored elements in each cycle (orbit) of \(\pi\) into a multiset, namely a \(q\)_-colored partition_, which we now describe; also see Defn. 2.2 below. Suppose that \(\pi=\kappa_{1}\cdots\kappa_{r}\) is a permutation with cycles \(\kappa_{i}\), including (!) fixed points. To each cycle \(\kappa=(y_{1}\cdots y_{m})\) of \(\pi\) we associate its _color count_, i.e. the vector \(\mathbf{a}=(a_{1},\ldots,a_{q})\) where \(a_{j}\) is the number of elements of color \(c_{j}\) in \(\{y_{1},\ldots,y_{m}\}\subset[n]\). The colored partition associated to \(\pi\) is the function \(A\) assigning to each vector \(\mathbf{a}\) the number of cycles \(\kappa\) of \(\pi\) with color count \(\mathbf{a}\). We say that \(\pi\) has _(cycle structure and) coloring_ \(A\). As it turns out, the number of permutations with given coloring \(A\) is the multinomial coefficient (2.2) of \(A\).
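For example, let \(q=2\) and \(\mathbf{n}=(2,2)\), with elements \(1,2\) colored \(c_{1}\) and \(3,4\) colored \(c_{2}\). The permutation \(\pi=(1\,3)(2)(4)\) has three cycles with color counts \((1,1)\), \((1,0)\), and \((0,1)\), so its coloring is \(A=\mathbf{1}_{(1,1)}+\mathbf{1}_{(1,0)}+\mathbf{1}_{(0,1)}\).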
Now, let \(\theta>0\) be a rate parameter, and \(\mathbf{p}\in\Delta^{q-1}\) be the parameter of a categorical distribution on \([q]\). We define a probability measure \(E^{n}_{\theta,\mathbf{p}}\) (Defn. 4.1) on the set of all \(q\)-colored partitions of \(n\), the properties of which we collect hereafter.
**Theorem 2** (Polychromatic ESF).: _For every \(\theta>0\) and every \(\mathbf{p}\in\Delta^{q-1}\),_
1. _when_ \(q=1\)_, hence_ \(\mathbf{p}=p=1\)_,_ \(E^{n}_{\theta,1}\) _is the Ewens distribution_ \(E_{\theta}\) _on partitions of_ \(n\)_;_
2. _(Prop. 4.4)_ conditioning \(E^{n}_{\theta,\mathbf{p}}\) on a \(q\)-colored partition \(A\) coloring \(\mathbf{n}\) gives the probability that a \(\theta\)-biased random permutation \(\pi\) has cycle structure and coloring \(A\); (This answers Question 1.)
3. _(Prop. 4.7)_ \(E^{n}_{\theta,\mathbf{p}}\) is the marginal distribution at time \(n\) of the polychromatic Hoppe urn model described in §4.1 and of the extension of the CRP described below;
4. _(Thm. 4.10)_ the family \(E^{n}_{\theta,\mathbf{p}}\), \(n\in\mathds{N}_{1}\), is consistent in a suitable sense extending the notion of Kingman's consistency [17] to \(q\)-colored partitions.
The ESF appears in connection with a variety of models. In order to illustrate the analogies between \(E^{n}_{\theta,\mathbf{p}}\) and \(E_{\theta}\), let us briefly discuss two of them: Ewens' original allelic partition, and the CRP. In §4.1 we present in full detail the polychromatic analogue to Hoppe's urn model [13].
The ESF in population genetics. In the seminal work [9], W.J. Ewens introduced the formula later named after him, and showed that \(E_{\theta}\) is the joint probability distribution of the number of selectively neutral alleles \(A^{{(n)}}_{i}\) represented \(i\) times in a sample of \(n\) genes taken from a large (\(\gg n\)) population, viz.
\[\mathbf{P}[A^{{(n)}}_{1}=\lambda_{1},\ldots,A^{{(n)}}_{n}= \lambda_{n}]=E_{\theta}(\boldsymbol{\lambda})\,,\]
where the parameter \(\theta>0\) defines the rate \(\frac{\theta}{\theta+n}\) at which novel alleles appear.
The polychromatic analogue \(E^{n}_{\theta,\mathbf{p}}\) to the ESF is the distribution of the very same model, when alleles are additionally marked by a 'color' in \([q]\). Such a marking describes any of \(q\) (hereditary or non-hereditary) features specific to a given allele and which are not reflected by the sequence of its base pairs. This includes, for instance, in situ epigenetic modifications such as DNA-methylation.
Tourists at the Chinese restaurant. It would not be difficult to introduce polychromatic generalizations of many well-known problems and constructions in the theory, such as the Spaghetti Loop distribution, or the Feller coupling. For the sake of brevity, we only discuss the Chinese restaurant process (CRP). In [1], D.J. Aldous introduced\({}^{1}\) the CRP as a sequential description of the sampling of random partitions distributed according to the Poisson-Dirichlet distribution. The process (and many of its variations) has proven a very successful tool in the study of random partitions/permutations. Let us briefly discuss a variation\({}^{2}\) of the CRP well-suited to describe our colored partitions.
Footnote 1: In fact, Aldous credits the introduction of the CRP to J. Pitman, who in turn acknowledges the contribution of L. Dubins, see e.g. the attribution to Dubins and Pitman in [28, §4.1].
As usual, «[customers] \(1,2,\ldots,n\) arrive sequentially at an initially empty restaurant with a large number of large [circular] tables. [Customer] \(j\) either sits at the same table as [customer] \(i\), with probability \(1/(j-1+\theta)\) for each \(i<j\), or else sits at an empty table, with probability \(\theta/(j-1+\theta)\)» [1, (11.19), p. 91]. Additionally, however, each customer randomly chooses to order from one out of the \(q\) proposed menus, independently of the other customers and according to a fixed categorical distribution with parameter \(\mathbf{p}\). The colored partition 'people at each table ordering from each menu' is distributed according to \(E^{n}_{\theta,\mathbf{p}}\).
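As a concrete rendering of this sampling scheme, here is a short simulation sketch (illustrative code, assuming nothing beyond the description above): it seats \(n\) customers and records, per table, how many order from each menu.

```javascript
// Sketch: simulate the polychromatic CRP with rate theta and menu
// distribution p; returns one row per table, with per-menu head counts.
function samplePolychromaticCRP(n, theta, p) {
  const tables = [];
  const pickMenu = () => {
    let u = Math.random();
    for (let j = 0; j < p.length; j++) { u -= p[j]; if (u < 0) return j; }
    return p.length - 1;
  };
  for (let j = 1; j <= n; j++) {
    const menu = pickMenu();
    // Customer j joins a table of size s with probability s/(j-1+theta),
    // or opens a new table with probability theta/(j-1+theta).
    let u = Math.random() * (j - 1 + theta);
    let seated = false;
    for (const table of tables) {
      const size = table.reduce((a, b) => a + b, 0);
      if (u < size) { table[menu] += 1; seated = true; break; }
      u -= size;
    }
    if (!seated) {
      const table = Array(p.length).fill(0);
      table[menu] = 1;
      tables.push(table);
    }
  }
  return tables; // the multiset of these color counts is the q-colored partition
}
```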
Plan of the work. In §2.1 we introduce some necessary notation and define the pattern inventory \(\mathscr{Z}_{\mathbf{n}}\) in the right-hand side of (1.1). In §2.2 we show that \(\mathscr{Z}_{\mathbf{n}}\) coincides with a 'refined' cycle index polynomial \(Z_{\mathbf{n}}\) of a certain group action, counting \(q\)-colored partitions coloring \(\mathbf{n}\). We then move to prove Theorem 1 (§3.4) together with an overview of previously known results (§3.1), some corollaries (§3.2), and applications to other measures (§3.3). Finally, we study the polychromatic ESF by means of a polychromatic Hoppe urn model (§4.1) and discuss its consistency in the sense of Kingman (§4.2).
## 2 Counting pattern inventories
For \(n\in\mathds{N}_{1}\) let \([n]\!:=\!\{1,\ldots,n\}\), and \(\mathfrak{S}_{n}\) be the symmetric group of degree \(n\), naturally acting on \([n]\) by permutation of its elements.
Multisets. Given a set \(S\), an \(S\)_-multiset_ is any map \(m\colon S\to\mathds{N}_{0}\). We denote by \(\mathsf{supp}\,m\) the support of \(m\). The _cardinality_ \(\mathsf{card}(m)\) of \(m\) is the sum \(\sum_{s\in S}m(s)\) of all its values. Given a map \(f\colon S\to T\), the push-forward via \(f\) of an \(S\)-multiset \(m\) is the \(T\)-multiset
\[f_{*}m\coloneqq\sum_{s\in\mathsf{supp}\,m}m(s)\,\mathbf{1}_{f(s)}. \tag{2.1}\]
Vectors. Whenever no confusion may arise, we do not distinguish between row vectors and column vectors. When relevant, we write \(\mathbf{x}^{(n)}\) to indicate that \(\mathbf{x}\in\mathds{R}^{n}\) or, more generally, that \(\mathbf{x}\) has \(n\) entries. Let \(\mathbf{e}_{i}^{(n)}\) be the \(i^{\mathrm{th}}\) vector of the canonical basis of \(\mathds{R}^{n}\), and set \(\mathbf{1}^{(n)}\coloneqq(1)_{i\in[n]}\) and analogously for \(\mathbf{0}^{(n)}\). For vectors \(\mathbf{x},\mathbf{y}\in\mathds{R}^{n}\), write

\[\mathbf{x}\cdot\mathbf{y}\coloneqq x_{1}y_{1}+\cdots+x_{n}y_{n}\,,\qquad\mathbf{x}\circ\mathbf{y}\coloneqq\left(x_{1}y_{1},\ldots,x_{n}y_{n}\right)\,,\qquad\mathbf{x}_{\bullet}\coloneqq\mathbf{x}\cdot\mathbf{1}\,.\]
For any \(f\colon\mathds{C}\to\mathds{C}\) further write \(f(\mathbf{x})\coloneqq f(x_{1})\cdots f(x_{n})\).
Matrices. For a matrix \(\mathbf{M}\coloneqq\left[m_{i,j}\right]_{i\in[a],j\in[b]}\in\mathds{R}^{a\times b}\) (\(a\) rows, \(b\) columns) set

\[\mathbf{M}_{i}\coloneqq(m_{i,1},\ldots,m_{i,b})\,,\qquad\mathbf{M}^{j}\coloneqq(m_{1,j},\ldots,m_{a,j})\,,\]
\[\mathsf{row}(\mathbf{M})\coloneqq\big{(}(\mathbf{M}_{1})_{\bullet},\ldots,(\mathbf{M}_{a})_{\bullet}\big{)}\,,\qquad\mathsf{col}(\mathbf{M})\coloneqq\big{(}(\mathbf{M}^{1})_{\bullet},\ldots,(\mathbf{M}^{b})_{\bullet}\big{)}\,,\]
\[\mathbf{S}^{\mathbf{M}}\coloneqq\prod_{i\in[a],j\in[b]}s_{i,j}^{m_{i,j}}\,,\qquad\mathbf{M}!\coloneqq\prod_{i\in[a],j\in[b]}m_{i,j}!\,.\]
#### 2.1.2 Pattern inventory
Let \(G<\mathfrak{S}_{n}\) be a permutation group of degree \(n\). The _cycle index polynomial_\(Z^{G}\) of \(G\) is
\[Z^{G}(\mathbf{t})\coloneqq\frac{1}{|G|}\sum_{\pi\in G}\mathbf{t}^{\boldsymbol{ \lambda}(\pi)}\,,\qquad\mathbf{t}=(t_{1},\ldots,t_{n})\,\]
where \(\boldsymbol{\lambda}(\pi)\vdash n\) accounts for the number of cycles in \(\pi\) of given length, i.e. \(\lambda_{1}(\pi)\) is the number of fixed points of \(\pi\), \(\lambda_{2}(\pi)\) the number of \(2\)-cycles in \(\pi\), and so on. We denote by \(Z_{n}\coloneqq Z^{\mathfrak{S}_{n}}\) the cycle index polynomial of \(\mathfrak{S}_{n}\). It is not difficult to show that (cf. (2.3))
\[Z_{n}(\mathbf{t})=\frac{1}{n!}\sum_{\boldsymbol{\lambda}\vdash n}M_{2}( \boldsymbol{\lambda})\,\mathbf{t}^{\boldsymbol{\lambda}}\,,\qquad\mathbf{t}=( t_{1},\ldots,t_{n}). \tag{2.4}\]
Pattern inventory. We represent a permutation \(\pi\) in its cycle notation, viz.
\[\pi=(y_{1,1}y_{1,2}\cdots)(y_{2,1}y_{2,2}\cdots)\cdots(y_{r,1}y_{r,2}\cdots)\,. \tag{2.5}\]
Let \(\mathbf{S}\coloneqq(\mathbf{s}_{1},\ldots,\mathbf{s}_{q})\) be a \(k\times q\)-matrix of dummy variables. We denote by \(\mathbf{S}^{1}=\mathbf{s}_{1},\ldots,\mathbf{S}^{q}=\mathbf{s}_{q}\) the columns of \(\mathbf{S}\) and by \(\mathbf{S}_{1},\ldots,\mathbf{S}_{k}\) the rows of \(\mathbf{S}\). Further let \(\boldsymbol{\alpha}\in\mathbb{R}^{k}\).
The following definition is inspired by Pólya Enumeration Theory.
**Definition 2.5** (Pattern inventory).: The \(\mathbf{n}\)-_pattern_ of a permutation \(\pi\) is
\[w_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}](\pi)\coloneqq\prod_{i}^{r}\left(\mathbf{s}_{\mathfrak{c}_{\mathbf{n}}(y_{i,1})}\circ\mathbf{s}_{\mathfrak{c}_{\mathbf{n}}(y_{i,2})}\circ\cdots\right)\cdot\boldsymbol{\alpha}\,.\]
The pattern inventory of \(\mathbf{n}\) is the polynomial
\[\mathscr{Z}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\coloneqq\frac{1}{\mathbf{n}!}\sum_{\pi\in\mathfrak{S}_{\mathbf{n}_{\bullet}}}w_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}](\pi)\,. \tag{2.6}\]
Up to a different normalization, \(\mathscr{Z}_{\mathbf{n}}\) is a refinement of the cycle index polynomial of \(\mathfrak{S}_{\mathbf{n}_{\bullet}}\), in the sense that each monomial in \(\mathscr{Z}_{\mathbf{n}}\) depends not only on the cycle structure of a permutation, but also on its coloring. In order to simplify the expression of \(\mathscr{Z}_{\mathbf{n}}\), let
\[Z_{\mathbf{n}}(\mathbf{t})\coloneqq\frac{1}{\mathbf{n}!}\sum_{A\vdash\mathbf{n}}M_{\mathbf{n}}(A)\,\prod_{\mathbf{a}\in\mathsf{supp}A}t_{\mathbf{a}}^{A(\mathbf{a})}\,,\qquad\mathbf{t}\coloneqq(t_{\mathbf{a}})_{\mathbf{a}\leq_{\circ}\mathbf{n}}\,. \tag{2.7}\]
Finally, for every \(\mathbf{a}\leq_{\circ}\mathbf{n}\) set

\[\omega_{\mathbf{a}}[\mathbf{S};\boldsymbol{\alpha}]\coloneqq\left(\mathbf{s}_{1}^{\circ a_{1}}\circ\cdots\circ\mathbf{s}_{q}^{\circ a_{q}}\right)\cdot\boldsymbol{\alpha}\,,\quad\text{and}\quad\Omega_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\coloneqq\big{(}\omega_{\mathbf{a}}[\mathbf{S};\boldsymbol{\alpha}]\big{)}_{\mathbf{a}\leq_{\circ}\mathbf{n}}\,. \tag{2.8}\]
In Theorem 2.14 below, we will prove that
\[\mathscr{Z}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]=Z_{\mathbf{n}}( \Omega_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}])\,. \tag{2.9}\]
_Remark 2.6_ (\(q=1\)).: When \(q=1\), the polynomial \(Z_{\mathbf{n}}\) in (2.7) reduces to \(Z_{n}\) in (2.4).
### Group actions
In order to prove (2.9), we identify the algebraic meaning of \(\mathscr{Z}_{\mathbf{n}}\) in terms of the action of a certain group of permutations.
#### 2.2.1 Some bijections of the symmetric group
Let \(G\) be any finite group. For \(h\in G\) we denote by \(\tau_{h}\colon G\to G\) the conjugation map \(\tau_{h}\colon g\mapsto hgh^{-1}\). For each \(\pi\) in \(\mathfrak{S}_{n}\) and \(i,j\in[n]\) we write
\[i\underset{\pi}{\sim}j\quad\text{if}\quad j=\pi^{p}(i)\quad\text{for some }p\in\mathds{Z}\,\]
i.e., if \(i,j\in[n]\) belong to the same orbit (cycle) of \(\pi\). We note that \(\underset{\pi}{\sim}\) is an equivalence relation on \([n]\), and that
\[i\underset{\pi}{\sim}j\iff\sigma(i)\underset{\tau_{\sigma}(\pi)}{\sim}\sigma(j )\,,\qquad i,j\in[n]\,,\quad\pi,\sigma\in\mathfrak{S}_{n}\,. \tag{2.10}\]
Let \((B_{n},\circ)\) be the group of bijections of \(\mathfrak{S}_{n}\) leaving conjugacy classes invariant. That is, \(g\in B_{n}\) if and only if \(g(\pi)\) has the same cycle structure as \(\pi\) for every \(\pi\in\mathfrak{S}_{n}\). We have \(B_{n}\cong\prod_{\boldsymbol{\lambda}\vdash n}\mathfrak{S}_{C_{\boldsymbol{\lambda}}}\), where \(C_{\boldsymbol{\lambda}}\subset\mathfrak{S}_{n}\) denotes the conjugacy class of all permutations with cycle structure \(\boldsymbol{\lambda}\). Further let \(H_{n}<B_{n}\) be the subgroup of all \(h\in B_{n}\) additionally preserving orbits, i.e. satisfying
\[i\underset{\pi}{\sim}j\implies i\underset{h(\pi)}{\sim}j\,,\qquad i,j\in[n]\,,\quad\pi\in\mathfrak{S}_{n}\,, \tag{2.11}\]
and, for \(\sigma\in\mathfrak{S}_{n}\), set
\[\varphi_{\sigma}\coloneqq\tau_{\tau_{\sigma}}\colon h\longmapsto\tau_{\sigma}\circ h\circ\tau_{\sigma}^{-1}\,. \tag{2.14}\]
Then, \(\varphi_{\sigma}\) restricts to an automorphism of \(H_{n}\) for every \(\sigma\in\mathfrak{S}_{n}\), and \(\varphi_{\cdot}\colon\sigma\mapsto\varphi_{\sigma}\) is a group homomorphism \(\mathfrak{S}_{n}\to\operatorname{Aut}(H_{n})\).
Proof.: Since \(\tau_{\sigma}\) leaves conjugacy classes in \(\mathfrak{S}_{n}\) invariant, we have \(\tau_{\sigma}\in B_{n}\). Thus \(\varphi_{\sigma}\) is an inner automorphism of \(B_{n}\). Furthermore, since for every group \(G\) the map \(\tau^{G}\colon g\mapsto\tau_{g}\) is a group homomorphism \(G\to\operatorname{Aut}(G)\), the map \(\varphi_{\cdot}=\tau^{B_{n}}\circ\tau^{\mathfrak{S}_{n}}\) is a group homomorphism as well. Thus, it suffices to show that \(\varphi_{\sigma}(H_{n})\subset H_{n}\) for every \(\sigma\in\mathfrak{S}_{n}\). To this end, it suffices to verify (2.11) with \(\varphi_{\sigma}(h)\) in place of \(h\). Indeed, respectively by (2.10) with \(\sigma^{-1}\) in place of \(\sigma\), by (2.11), and by (2.10),
\[i\underset{\pi}{\sim}j \implies\,\sigma^{-1}(i)\underset{\tau_{\sigma^{-1}}(\pi)}{\sim} \sigma^{-1}(j)\implies\sigma^{-1}(i)\underset{(h\circ\tau_{\sigma^{-1}})(\pi)} {\sim}\sigma^{-1}(j)\] \[\implies\,\sigma\sigma^{-1}(i)\underset{(\tau_{\sigma}\circ h \circ\tau_{\sigma^{-1}})(\pi)}{\sim}\sigma\sigma^{-1}(j)\]
and the conclusion follows since \(\tau_{\sigma}^{-1}=\tau_{\sigma^{-1}}\).
#### 2.2.2 Semi-direct product and group action
Fix an \(\mathbf{n}\)-coloring \(\mathfrak{c}_{\mathbf{n}}\). All results in the following hold for every such coloring. Proposition 2.13 below will provide an algebraic interpretation of the multinomial coefficient \(M_{\mathbf{n}}\) in (2.2) by means of the surjective map \(\Pi=\Pi_{\mathfrak{c}_{\mathbf{n}}}\colon\mathfrak{S}_{\mathbf{n}_{\bullet}}\to\mathcal{A}_{\mathbf{n}}\) which we now define. Firstly, to every cycle \(\kappa=(y_{1}y_{2}\cdots)\) we associate a vector \(\varepsilon(\kappa)\) in \(\mathds{N}_{0}^{q}\) by
\[\varepsilon(\kappa)_{j}\,{:=}\,|\{h:\mathfrak{c}_{\mathbf{n}}(y_{h})=j\}|\,\qquad j \in[q]\,. \tag{2.15}\]
For \(\pi=\kappa_{1}\cdots\kappa_{r}\in\mathfrak{S}_{\mathbf{n}_{\bullet}}\), with cycles \(\kappa_{1},\dots,\kappa_{r}\) (including fixed points), we then set
\[\Pi\colon\pi\longmapsto\sum_{i=1}^{r}\mathbf{1}_{\varepsilon(\kappa_{i})}. \tag{2.16}\]
Semi-direct product. In the following, we regard
\[\mathfrak{S}_{\mathbf{n}}\coloneqq\mathfrak{S}_{\mathfrak{c}_{\mathbf{n}}^{-1}(1)}\times\cdots\times\mathfrak{S}_{\mathfrak{c}_{\mathbf{n}}^{-1}(q)}\cong\mathfrak{S}_{n_{1}}\times\cdots\times\mathfrak{S}_{n_{q}}\]
as a subgroup of \(\mathfrak{S}_{\mathbf{n}_{\bullet}}\).
**Definition 2.11**.: Let \((G_{\mathbf{n}},\star)\coloneqq H_{\mathbf{n}_{\bullet}}\rtimes\mathfrak{S}_{\mathbf{n}}\) be the semi-direct product induced by the group homomorphism \(\varphi_{\cdot}\) defined by (2.14), that is
\[(h_{1},\sigma_{1})\star(h_{2},\sigma_{2})\,{\coloneqq}\,(h_{1}\circ\varphi_{ \sigma_{1}}(h_{2}),\sigma_{1}\sigma_{2})\,.\]
**Lemma 2.12**.: _The function \(\,{\odot}\,:G_{\mathbf{n}}\times\mathfrak{S}_{\mathbf{n}_{\bullet}}\to \mathfrak{S}_{\mathbf{n}_{\bullet}}\) given by_
\[{\odot}\,:\,\big{(}(h,\sigma),\pi\big{)}\longmapsto(h,\sigma).\pi\,{\coloneqq }\,(h\circ\tau_{\sigma})(\pi) \tag{2.17}\]
_defines a group action of \(G_{\mathbf{n}}\) on \(\mathfrak{S}_{\mathbf{n}_{\bullet}}\), which is faithful if \(\mathbf{n}_{\bullet}\geq 3\)._
Proof.: In order to show that \(\,{\odot}\,\) is a group action it suffices to verify that
\[\big{(}(h_{1},\sigma_{1})\star(h_{2},\sigma_{2})\big{)}.\pi =\big{(}h_{1}\circ\varphi_{\sigma_{1}}(h_{2})\big{)}(\sigma_{1} \sigma_{2}\pi\sigma_{2}^{-1}\sigma_{1}^{-1})\] \[=h_{1}\big{(}\sigma_{1}h_{2}(\sigma_{2}\pi\sigma_{2}^{-1})\sigma_{ 1}^{-1}\big{)}\] \[=(h_{1},\sigma_{1}).(h_{2},\sigma_{2}).\pi\,.\]
In order to show faithfulness, it suffices to prove that \((h,\sigma)=(\operatorname{id},e)\) whenever
\[(h,\sigma).\pi=\pi\,,\qquad\pi\in\mathfrak{S}_{\mathbf{n}_{\bullet}}\,. \tag{2.18}\]
If \(\sigma=e\), since \(B_{\mathbf{n}_{\bullet}}\) (hence \(H_{\mathbf{n}_{\bullet}}\)) acts faithfully on \(\mathfrak{S}_{\mathbf{n}_{\bullet}}\), (2.18) implies \(h=\operatorname{id}\). If \(\sigma\neq e\), since \(\mathbf{n}_{\bullet}\geq 3\), there exist mutually different \(i,j,k\in[n]\) with \(\sigma(i)=j\). Choosing \(\pi\,{\coloneqq}\,(ik)\),
\[(h,\sigma).\pi=h\big{(}(\sigma(i),\sigma(k))\big{)}=h\big{(}(j,\sigma(k))\big{)} =(j,\sigma(k))\neq\pi\,,\]
where the last equality follows again from Remark 2.9.
**Proposition 2.13**.: _The orbit space \(\mathfrak{S}_{\mathbf{n}_{\bullet}}/G\) is (parametrized by) the set \(\mathcal{A}_{\mathbf{n}}\) of all \(q\)-colored partitions, and \(|G.\pi|=M_{\mathbf{n}}(\Pi(\pi))\) for every \(\pi\in\mathfrak{S}_{\mathbf{n}_{\bullet}}\)._

Proof.: For every \(\pi,\pi^{\prime}\in\mathfrak{S}_{\mathbf{n}_{\bullet}}\), let us prove that \(\Pi(\pi)=\Pi(\pi^{\prime})\) if and only if \(\pi\in G.\pi^{\prime}\). Let \(\pi=\kappa_{1}\cdots\kappa_{r}\) and \(\pi^{\prime}=\kappa_{1}^{\prime}\cdots\kappa_{r^{\prime}}^{\prime}\) be cycle decompositions. If \(\Pi(\pi)=\Pi(\pi^{\prime})\), then \(r=r^{\prime}\) and, up to reordering the cycles, we may assume without loss of generality that \(\Pi(\kappa_{i})=\Pi(\kappa_{i}^{\prime})\) for every \(i\). Therefore, there exists \(\sigma\in\mathfrak{S}_{\mathbf{n}}\) such that for every \(i\) the cycles \(\kappa_{i}\) and \(\sigma\kappa_{i}^{\prime}\sigma^{-1}\) transitively permute the same set of numbers. Equivalently,

\[i\underset{\pi}{\sim}j\iff i\underset{\tau_{\sigma}(\pi^{\prime})}{\sim}j\,,\qquad i,j\in[\mathbf{n}_{\bullet}]\,.\]

Hence, the map \(h\in B_{\mathbf{n}_{\bullet}}\) that swaps \(\pi\) and \(\tau_{\sigma}(\pi^{\prime})\), and fixes every other element of \(\mathfrak{S}_{\mathbf{n}_{\bullet}}\), is in \(H_{\mathbf{n}_{\bullet}}\). We can thus write \(\pi=(h,\sigma).\pi^{\prime}\). Conversely, if \(\pi=(h,\sigma).\pi^{\prime}\) holds for some \(h\) and \(\sigma\), then we can rearrange the cycle decompositions \(\pi=\kappa_{1}\cdots\kappa_{r}\) and \(\tau_{\sigma}(\pi^{\prime})=\tau_{\sigma}(\kappa_{1}^{\prime})\cdots\tau_{\sigma}(\kappa_{r}^{\prime})\) in such a way that \(\kappa_{i}\) and \(\sigma\kappa_{i}^{\prime}\sigma^{-1}\) transitively permute the same set of numbers for every \(i\). Therefore, \(\Pi(\kappa_{i})=\Pi(\sigma\kappa_{i}^{\prime}\sigma^{-1})\). Furthermore, since \(\sigma\in\mathfrak{S}_{\mathbf{n}}\), we have \(\Pi(\sigma\kappa_{i}^{\prime}\sigma^{-1})=\Pi(\kappa_{i}^{\prime})\), whence \(\Pi(\kappa_{i})=\Pi(\kappa_{i}^{\prime})\) as desired.
Cardinality of the orbits. Let \(A\in\mathcal{A}_{\mathbf{n}}\). We aim to show that \(\big{|}\Pi^{-1}(A)\big{|}=M_{\mathbf{n}}(A)\). In order to do so, it is convenient to introduce some new sets and maps, as schematized in Figure 1 below, and compute the cardinality of their fibers.
\((a)\): Firstly, given a vector \(\mathbf{c}\coloneqq(c_{1},c_{2},\dots)\) with entries in \([q]\) and arbitrary (possibly zero) length, we consider the \(\mathbb{N}_{0}^{q}\)-valued map \(\boldsymbol{\varepsilon}\) defined by
\[\boldsymbol{\varepsilon}(\mathbf{c})_{j}\coloneqq|\{h:c_{h}=j\}|\,\qquad j\in[q]\,. \tag{2.19}\]
\((b)\): We denote by \(\#\mathbf{M}\) the number of rows of a matrix \(\mathbf{M}\). The map \(\#\) is naturally extended to matrix-valued functions by post-composition.
\((c)\): Let \(\mathcal{Y}\) be the space of all matrix-valued functions \(Y\) on \(\mathbb{N}_{*}^{q}\) satisfying, for all \(\mathbf{a}\in\mathbb{N}_{*}^{q}\),
\[Y(\mathbf{a})_{i}\in\boldsymbol{\varepsilon}^{-1}(\mathbf{a})\,,\quad i\in[ \#Y(\mathbf{a})]\,,\qquad\text{and}\qquad\#\circ Y\in\mathcal{A}_{\mathbf{n}}\,.\]
We explicitly allow for \(Y(\mathbf{a})\) to possibly be the empty matrix for some \(\mathbf{a}\in\mathbb{N}_{*}^{q}\).
\((d)\): Denote by \(\mathfrak{c}_{\mathbf{n}}^{\circ}\) the entry-by-entry extension of \(\mathfrak{c}_{\mathbf{n}}\) to vectors and matrices. We define \(\mathcal{X}\) as the set of all matrix-valued functions \(X\) on \(\mathbb{N}_{*}^{q}\),
\[X(\mathbf{a})=\begin{bmatrix}y_{\mathbf{a},1,1}&y_{\mathbf{a},1,2}&\dots\\ y_{\mathbf{a},2,1}&y_{\mathbf{a},2,2}&\dots\\ \vdots&\vdots&\ddots\end{bmatrix}\,,\]
satisfying, for all \(\mathbf{a}\in\mathbb{N}_{*}^{q}\),
\[X(\mathbf{a})_{i}\in(\boldsymbol{\varepsilon}\circ\mathfrak{c}_{\mathbf{n}}^{ \circ})^{-1}(\mathbf{a})\,,\quad i\in[\#X(\mathbf{a})]\,,\] \[\{y_{\mathbf{a},i,j}\}_{\mathbf{a},i,j}=[\mathbf{n_{s}}]\,,\qquad \text{and}\qquad y_{\mathbf{a},i,j}\neq y_{\mathbf{a}^{\prime},i^{\prime},j^{ \prime}}\,,\quad(\mathbf{a},i,j)\neq(\mathbf{a}^{\prime},i^{\prime},j^{\prime })\.\]
\((e)\): Denote by \(\mathcal{Z}\) the family of set-valued functions of the form
\[Z\colon\mathbf{a}\longmapsto\big{\{}\,(y_{\mathbf{a},1,1},y_{\mathbf{a},1,2}, \dots)\,,(y_{\mathbf{a},2,1},y_{\mathbf{a},2,2},\dots)\,,\dots\,\big{\}} \tag{2.20}\]
additionally so that
\[\left(\mathbf{a}\longmapsto\begin{bmatrix}y_{\mathbf{a},1,1}&y_{\mathbf{a},1,2}& \dots\\ y_{\mathbf{a},2,1}&y_{\mathbf{a},2,2}&\dots\\ \vdots&\vdots&\ddots\end{bmatrix}\right)\in\mathcal{X}\,.\]
* Finally let \(f_{1}\colon\mathcal{X}\to\mathcal{Z}\) and \(f_{2}\colon\mathcal{Z}\to\mathfrak{S}_{\mathbf{n_{n}}}\) be maps _forgetting_ part of the structure: \[f_{1}(X)(\mathbf{a})\coloneqq\{X(\mathbf{a})_{i}\}_{i\in[\#X(\mathbf{a})]}\, \qquad\mathbf{a}\in\mathds{N}_{*}^{q}\,,\] and, using the notation of (2.20), \[f_{2}\colon Z\longmapsto\pi\coloneqq\prod_{\begin{subarray}{c}\mathbf{a}\in \mathds{N}_{*}^{q}\\ Z(\mathbf{a})\neq\varnothing\end{subarray}}(y_{\mathbf{a},1,1}\ y_{\mathbf{a}, 1,2}\ \cdots)\,(y_{\mathbf{a},2,1}\ y_{\mathbf{a},2,2}\ \cdots)\cdots\in\mathfrak{S}_{\mathbf{n_{n}}}\,.\]
It is a tedious verification that the diagram in Figure 1 commutes.
Now, let \(\pi=(y_{1,1}y_{1,2}\cdots)\cdots(y_{r,1}y_{r,2}\cdots)\in\Pi^{-1}(A)\) and define \(\mathbf{a}_{i}\leq_{\circ}\mathbf{n}\) by
\[\Pi\big{(}(y_{i,1}y_{i,2}\cdots)\big{)}=\mathbf{1}_{\mathbf{a}_{i}}\,\qquad i\in[r]\,.\]
The fiber \(f_{2}^{-1}(\pi)\) consists of all the (distinct) set-valued functions
\[Z_{k_{1},\ldots,k_{r}}\colon\mathbf{a}\longmapsto\big{\{}\big{(}\pi^{k_{i}}(y_{ i,1}),\pi^{k_{i}}(y_{i,2}),\ldots\big{)}\big{\}}_{i\colon\mathbf{a}=\mathbf{a}_{i}},\qquad k_{1}\in[\mathbf{a}_{1\,\bullet}],\ldots,k_{r}\in[\mathbf{a}_{r\bullet}]\,,\]
and has therefore cardinality \(|f_{2}^{-1}(\pi)|=\mathbf{a}_{1\bullet}\cdots\mathbf{a}_{r\bullet}=\prod_{ \mathbf{a}\in\mathsf{supp}A}\mathbf{a}_{\bullet}^{A(\mathbf{a})}\). As for the fibers of \(f_{1}\), given \(Z\in(\Pi\circ f_{2})^{-1}(A)\) and \(X\in f_{1}^{-1}(Z)\), every element of \(f_{1}^{-1}(Z)\) is induced by a permutation-valued function \(\varsigma\) on \(\mathds{N}_{*}^{q}\) such that
\[\varsigma\colon\mathbf{a}\longmapsto\varsigma_{\mathbf{a}}\in\mathfrak{S}_{A (\mathbf{a})}\,,\qquad\mathbf{a}\in\mathds{N}_{*}^{q}\,,\]
via the formula
\[X_{\varsigma}\colon\mathbf{a}\longmapsto P_{\varsigma_{\mathbf{a}}}X( \mathbf{a})\,.\]
where \(P_{\varsigma_{\mathbf{a}}}\) is the permutation matrix induced by \(\varsigma_{\mathbf{a}}\). It follows that \(\big{|}f_{1}^{-1}(Z)\big{|}=\prod_{\mathbf{a}\in\mathsf{supp}A}A(\mathbf{a})!\). It is easy to see that the fibers of \(\mathfrak{c}_{\mathbf{n}}^{\circ}\colon\mathcal{X}\to\mathcal{Y}\) all have cardinality \(\mathbf{n}!\). Lastly, the computation of the cardinality of the fibers of \(\#\colon\mathcal{Y}\to\mathcal{A}_{\mathbf{n}}\) can be performed \(\mathbf{a}\) by \(\mathbf{a}\) and, thanks to the properties of the multinomial coefficient,
\[\big{|}\#^{-1}(A)\big{|}=\prod_{\mathbf{a}\in\mathsf{supp}A}\big{|}\mathbf{ \varepsilon}^{-1}(\mathbf{a})\big{|}^{A(\mathbf{a})}=\prod_{\mathbf{a}\in \mathsf{supp}A}\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}\,.\]
In conclusion,
\[\mathbf{n}!\prod_{\mathbf{a}\in\mathsf{supp}A}\binom{\mathbf{a}_{ \bullet}}{\mathbf{a}}^{A(\mathbf{a})} =\big{|}(\#\circ\mathfrak{c}_{\mathbf{a}}^{\circ})^{-1}(A)\big{|}= \sum_{\pi\in\Pi^{-1}(A)}\big{|}(f_{2}\circ f_{1})^{-1}(\pi)\big{|}\] \[=\big{|}\Pi^{-1}(A)\big{|}\prod_{\mathbf{a}\in\mathsf{supp}A} \mathbf{a}_{\bullet}^{A(\mathbf{a})}A(\mathbf{a})!\,,\]
which yields the desired identity.
We conclude this section with the proof of (2.9).
**Theorem 2.14**.: _The polynomial \(Z_{\mathbf{n}}\) in (2.7) is the orbit generating function of the action (2.17). Furthermore, (2.9) holds._
Proof.: It suffices to collect all terms in \(\mathscr{Z}_{\mathbf{n}}\) with the same monomials. By Proposition 2.13, for each \(\pi\in\mathfrak{S}_{\mathbf{n}_{\bullet}}\) there are exactly \(|G.\pi|=M_{\mathbf{n}}(\Pi(\pi))\) monomials indexed by \(A=\Pi(\pi)\), and the conclusion follows using that \(w_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}](\pi)=\prod_{\mathbf{a}\in\mathsf{supp}A}\omega_{\mathbf{a}}[\mathbf{S};\boldsymbol{\alpha}]^{A(\mathbf{a})}\).
Figure 1: Auxiliary maps and sets in the proof of Proposition 2.13.
#### Necklaces
Theorem 2.14 provides an algebraic interpretation for (2.9). Let us now give a combinatorial interpretation of the same formula, i.e. of the multinomial coefficient \(M_{\mathbf{n}}\), in terms of necklaces, which will in turn provide a connection to the ESF via the extension of the CRP discussed in §1.
On the one hand, waiters in our busy restaurant take care to remember, for every table, which clients order from each menu. The arrangement of the customers around the table is important in serving them efficiently. All the information the waiters need about the customers' arrangement is thus contained in a \(q\)-colored necklace. On the other hand, chefs in the restaurant only care about how many customers at each table order from each menu, so that customers at the same table may be served at the same time. All the information the chefs need about the customers' arrangement is thus contained in a \(q\)-colored partition. Let us now count \(q\)-colored partitions by collecting together \(q\)-colored necklaces with the same occurrences of each color.
For integer \(q\in\mathds{N}_{1}\) denote by \([q]^{*}\) the free monoid generated by \([q]\). Elements of \([q]^{*}\) are called _(\(q\)-)words_. Two words \(u,v\) are _conjugate_ if there exist words \(s,t\) so that \(u=st\) and \(v=ts\). Two conjugate words are cyclic shifts of one another. Thus, conjugacy is an equivalence relation on words. Its equivalence classes are called _(\(q\)-)necklaces_.
Let \(\nu=\llbracket w\rrbracket\) be a necklace and \(w=c_{1}c_{2}\cdots c_{\ell}\) be any of its representatives. The _length_ \(\ell_{\nu}\) of \(\nu\) is the total number \(\ell\) of characters in \(w\). The _period_ \(p_{\nu}\) of \(\nu\) is the minimal integer \(p\geq 1\) such that \(c_{i}=c_{((i+p-1)\bmod\ell)+1}\) for every \(i\in[\ell]\). Clearly, \(p_{\nu}\) divides \(\ell_{\nu}\).
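For example, the \(2\)-necklace \(\llbracket 1212\rrbracket\) has length \(4\) and period \(2\), whereas \(\llbracket 1122\rrbracket\) has length \(4\) and period \(4\).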
\((a)\): Let \(w=c_{1}c_{2}\cdots\in[q]^{*}\). Consistently with (2.19), we denote by \(\varepsilon(w)\in\mathds{N}_{0}^{q}\) the vector of occurrences of its characters, viz.
\[\varepsilon(w)_{j}\coloneqq\left|\{h:c_{h}=j\}\right|\,.\]
It is readily seen that \(\varepsilon\) descends to a (non-relabeled) map on necklaces.
\((b)\): Let \(\mathcal{N}_{\mathbf{n}}\) be the family of all multisets \(N\) of \(q\)-necklaces satisfying \(\varepsilon_{*}N\in\mathcal{A}_{\mathbf{n}}\), cf. (2.1).
\((c)\): Define a map \(\mathfrak{c}_{\mathbf{n}}^{\circ}\) on \(\mathfrak{S}_{\mathbf{n_{\bullet}}}\) in the following way. For a cyclic permutation \(\kappa=(y_{1}y_{2}\cdots)\) let \(\nu\) be the necklace \(\llbracket\mathfrak{c}_{\mathbf{n}}(y_{1})\,\mathfrak{c}_{\mathbf{n}}(y_{2})\ \cdots\rrbracket\) and set \(\mathfrak{c}_{\mathbf{n}}^{\circ}(\kappa)\coloneqq\mathfrak{1}_{\nu}\). Extend \(\mathfrak{c}_{\mathbf{n}}^{\circ}\) by
\[\mathfrak{c}_{\mathbf{n}}^{\circ}\colon\pi\coloneqq\kappa_{1}\cdots\kappa_{r} \longmapsto\sum_{i=1}^{r}\mathfrak{c}_{\mathbf{n}}^{\circ}(\kappa_{i})\,.\]
\((d)\): It is readily verified that \(\Pi=\varepsilon_{*}\circ\mathfrak{c}_{\mathbf{n}}^{\circ}\colon\mathfrak{S}_ {\mathbf{n_{\bullet}}}\to\mathcal{A}_{\mathbf{n}}\) factors over \(\mathcal{N}_{\mathbf{n}}\).
**Proposition 2.15**.: _It holds that_
\[\left|(\mathfrak{c}_{\mathbf{n}}^{\circ})^{-1}(N)\right|=\mathbf{n}!\prod_{\nu\in\mathsf{supp}N}\frac{p_{\nu}/\ell_{\nu}}{N(\nu)!}\qquad\text{and}\qquad M_{\mathbf{n}}(A)=\mathbf{n}!\sum_{\begin{subarray}{c}N\in\mathcal{N}_{\mathbf{n}}\\ \varepsilon_{*}N=A\end{subarray}}\prod_{\nu\in\mathsf{supp}N}\frac{p_{\nu}/\ell_{\nu}}{N(\nu)!}\,.\]
Proof.: We provide a sketch of the proof, the details being similar to Proposition 2.13.
\((a)\): A word in \([\mathbf{n_{\bullet}}]^{*}\) is simple if each of its characters appears exactly once. Two words in \([\mathbf{n_{\bullet}}]^{*}\) are _disjoint_ if they share no common character. We denote by \(\ell_{w}\) the length of \(w\in[\mathbf{n_{\bullet}}]^{*}\). Further set
\[\mathcal{X}\coloneqq\big{\{}(w_{1},\ldots,w_{r}):r\in\mathds{N}_{1}\,,\ w_{i}\in[\mathbf{n}_{\bullet}]^{*}\text{ simple and pairwise disjoint}\,,\ \ell_{w_{1}}+\cdots+\ell_{w_{r}}=\mathbf{n}_{\bullet}\big{\}}\,,\]
\[\mathcal{Z}\coloneqq\big{\{}\{w_{1},\ldots,w_{r}\}:(w_{1},\ldots,w_{r})\in\mathcal{X}\big{\}}\,.\]
* Let \(\mathfrak{c}_{\mathbf{n}}^{*}\colon[\mathbf{n}_{\bullet}]^{*}\to[q]^{*}\) be defined by \(\mathfrak{c}_{\mathbf{n}}^{*}\colon w=y_{1}\cdots y_{\ell}\longmapsto\mathfrak{c}_{\mathbf{n}}(y_{1})\cdots\mathfrak{c}_{\mathbf{n}}(y_{\ell})\), and denote again by \(\mathfrak{c}_{\mathbf{n}}^{*}\) its component-wise extension to \(\mathcal{X}\).
* Set \(\mathcal{V}\mathbin{\vcentcolon}\mathsf{=}\mathfrak{c}_{\mathfrak{n}}^{*}( \mathcal{X})\), denote again by \([\![\,\cdot\,]\!]\) the component-wise extension to \(\mathcal{V}\) of the quotient map \([\![\,\cdot\,]\!]\) from \([q]^{*}\) to necklaces, and \(\mathcal{U}\mathbin{\vcentcolon}\mathsf{=}\left\{([\![v_{1}]\!],\ldots,[\![v _{r}]\!]):(v_{1},\ldots,v_{r})\in\mathcal{V}\right\}\).
* Define a map \(\boldsymbol{\nu}\) on \(\mathcal{U}\) by \(\boldsymbol{\nu}\colon\left([\![v_{1}]\!],\ldots,[\![v_{r}]\!]\right)\longmapsto\sum_{i=1}^{r}\boldsymbol{1}_{[\![v_{i}]\!]}\).
* Finally, define maps \(f\colon\mathcal{X}\to\mathcal{Z}\) and \([\![\,\cdot\,]\!]^{*}\colon\mathcal{Z}\to\mathfrak{S}_{\mathfrak{n}_{\bullet}}\) by \[f\colon\left(w_{1},\ldots,w_{r}\right)\longmapsto\left\{w_{1},\ldots,w_{r} \right\}\,,\qquad\llbracket\,\cdot\,\rrbracket^{*}\colon\,\left\{w_{1},\ldots,w_{r}\right\}\longmapsto\llbracket w_{1}\rrbracket\cdots\llbracket w_{r} \rrbracket\,.\]
It is a tedious verification that the diagram in Figure 2 commutes, and a simple computation of the cardinality of the fibers of the maps involved yields the conclusion.

Figure 2: Auxiliary maps and sets in the proof of Proposition 2.15.
## 3 Multivariate moments
For \(k\geq 1\) let \(\Delta^{k-1}\) be the standard simplex
\[\Delta^{k-1}\mathbin{\vcentcolon}\mathsf{=}\left\{\mathbf{x}\in\mathds{R}^{ k}:x_{i}\geq 0\,,\;x_{1}+\cdots+x_{k}=1\right\}, \tag{3.1}\]
and recall the definition (1) of the Dirichlet distribution \(D_{\boldsymbol{\alpha}}\).
Our main result in this section is a formula for the multivariate moments of \(D_{\boldsymbol{\alpha}}\).
**Theorem 3.1** (Multivariate moments of \(D_{\boldsymbol{\alpha}}\)).: _The following identity holds_
\[\mu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\coloneqq\int_{\Delta^{k-1}}\prod_{j}^{q}(\mathbf{s}_{j}\cdot\mathbf{x})^{n_{j}}\,\mathrm{d}D_{\boldsymbol{\alpha}}(\mathbf{x})=\frac{\mathbf{n}!}{\langle\boldsymbol{\alpha}_{\bullet}\rangle_{\mathbf{n}_{\bullet}}}\,Z_{\mathbf{n}}\big{(}\Omega_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\big{)}\eqqcolon\zeta_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\,. \tag{3.2}\]
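As a sanity check, for \(q=1\) and \(\mathbf{n}=(1)\) the only colored partition coloring \(\mathbf{n}\) is \(A=\mathbf{1}_{(1)}\), so that \(Z_{(1)}\big{(}\Omega_{(1)}[\mathbf{s}_{1};\boldsymbol{\alpha}]\big{)}=\omega_{(1)}[\mathbf{s}_{1};\boldsymbol{\alpha}]=\mathbf{s}_{1}\cdot\boldsymbol{\alpha}\), and (3.2) reduces to the familiar formula for the Dirichlet mean,

\[\int_{\Delta^{k-1}}\mathbf{s}_{1}\cdot\mathbf{x}\,\mathrm{d}D_{\boldsymbol{\alpha}}(\mathbf{x})=\frac{\mathbf{s}_{1}\cdot\boldsymbol{\alpha}}{\boldsymbol{\alpha}_{\bullet}}\,.\]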
In order to put Theorem 3.1 into context, we briefly survey previously known results on moments of Dirichlet and related measures.
### Overview on Dirichlet measures
Moment and Laplace/Fourier-transform methods for \(D_{\boldsymbol{\alpha}}\) and for its infinite-dimensional counterpart, the Dirichlet-Ferguson measure \(\mathcal{D}_{\alpha}\) [12] over a measure space \((X,\alpha)\), are notoriously difficult, as we briefly summarize below.
Transforms. It is well-known that the Fourier transform \(\widehat{D_{\boldsymbol{\alpha}}}\) of the Dirichlet distribution \(D_{\boldsymbol{\alpha}}\) may be expressed in terms of \({}_{k}\Phi_{2}\), the \(k\)-variate confluent hypergeometric Lauricella function of type \(D\) [20, 10]. The power-series representation of \({}_{k}\Phi_{2}\) is «_inconvenient_ for numerical calculations when \(k>2\)» [22, p. 4]. Instead, the complex-contour-integral representation [7, Eqn. (7)] is preferred, but its treatment remains quite involved, see e.g. [25]. In particular, differentiating \(\widehat{D_{\boldsymbol{\alpha}}}\) in this form does not provide any useful representation for the moments of the measure.
For decades the Fourier transform \(\widehat{\mathcal{D}_{\alpha}}\) of \(\mathcal{D}_{\alpha}\) was widely considered intractable [14], which led to the introduction of other characterizing transforms, such as the Markov-Krein transform [15] and the \(c\)-transform [14]. These methods too are unsatisfactory, since there is no counterpart for such transforms of foundational results available for the Fourier transform, such as, for instance, Bochner-Minlos-Sazonov's (BMS) or Lévy's Continuity Theorem. The Fourier transform \(\widehat{\mathcal{D}_{\alpha}}\) was eventually computed in [4] by methods in combinatorics and representation theory.
Moments. Multivariate moments of \(D_{\boldsymbol{\alpha}}\) are virtually well-known in the form (3.5) below, which may be regarded as an extension of the ESF. Whereas easily computable for small \(k\), this form is unsuitable for letting \(k\to\infty\) and thus provides no insight on multivariate moments of \(\mathcal{D}_{\alpha}\).
Partially in order to overcome this issue, other expressions have been considered: \((a)\) the univariate moment \(\mathcal{D}_{\alpha}(f^{n})\) has appeared in [24] in terms of incomplete Bell polynomials, solely in the case \(X\Subset\mathbb{R}\) and \(f=\operatorname{id}_{\mathbb{R}}\); \((b)\) more general univariate moments for \(D_{\alpha}\) have implicitly appeared in [21, proof of Prop. 3.3] in iterative form; \((c)\) univariate moments for both \(D_{\alpha}\) and \(\mathcal{D}_{\alpha}\) have appeared in full generality in [4] in terms of the cycle index polynomials \(Z_{n}\), which allowed the aforementioned computation of \(\widehat{D_{\alpha}}\). As for multivariate moments, they have appeared: \((d)\) in [15, Prop. 7.4], in terms of summations over constrained permutations, only in the case \(\boldsymbol{\alpha}_{\bullet}=1\); \((e)\) in [8, Eqn. (4.20)], [11, Lem. 5.2], and [6, Cor. 3.5], in terms of summations over constrained set partitions.
Other measures. The measure \(\mathcal{D}_{\alpha}\) is the simplicial part of other known measures on the space \(\mathscr{M}^{+}\) of non-negative Borel measures on \(X\). Among them are: the law \(\mathcal{G}_{\alpha}\) of the _\(\gamma\)-point process_ [18] with intensity \(\alpha\), and A.M. Vershik's multiplicative infinite-dimensional Lebesgue measure \(\mathcal{L}_{\alpha}\) [30, 31] with intensity \(\alpha\). Together with \(\mathcal{D}_{\alpha}\), these measures have a wide range of applications, from the theory of point processes and of measure-valued Markov diffusions, see [5, §1] and references therein, to the representation theory of infinite-dimensional Lie groups of currents/multipliers, see [29], or [4, 61] for a unified treatment.
In §3.3 we give moment formulas for \(\mathcal{D}_{\alpha}\) and \(\mathcal{G}_{\alpha}\) analogous to the one in Theorem 3.1.
Relations to the ESF. One relation between the Dirichlet distribution and the ESF is made apparent by the expression of the generating function of \(E_{\theta}\) in the dummy variables \(\mathbf{t}\coloneqq(t_{1},\ldots,t_{n})\) in terms of the cycle index polynomial \(Z_{n}\) (2.4) of \(\mathfrak{S}_{n}\), viz.
\[\sum_{\boldsymbol{\lambda}\vdash n}E_{\theta}(\boldsymbol{\lambda})\,\mathbf{ t}^{\boldsymbol{\lambda}}=\frac{n!}{\langle\theta\rangle_{n}}Z_{n}[\theta\, \mathbf{t}]\,,\qquad\mathbf{t}^{\boldsymbol{\lambda}}\!\coloneqq\!t_{1}^{ \lambda_{1}}\cdots t_{n}^{\lambda_{n}}\,. \tag{3.3}\]
### Some corollaries
Let us collect some corollaries and special cases of Theorem 3.1.
**Corollary 3.2**.: _Let \(P_{\pi}\) be the permutation matrix of a permutation \(\pi\in\mathfrak{S}_{q}\). Then,_
\[Z_{\mathbf{n}}[\boldsymbol{\alpha};\mathbf{S}]=Z_{P_{\pi}\mathbf{n}}[ \boldsymbol{\alpha};\mathbf{S}P_{\pi}]\,,\qquad\mathbf{S}\in\mathds{R}^{k\times q }\,.\]
**Corollary 3.3**.: _For every \(\mathbf{n}\in\mathds{N}_{*}^{q}\) and every \(\boldsymbol{\lambda}\vdash\mathbf{n}_{\bullet}\) we have_

\[\sum_{\begin{subarray}{c}A\vdash\mathbf{n}\\ \operatorname{shape}(A)=\boldsymbol{\lambda}\end{subarray}}M_{\mathbf{n}}(A)=M_{2}(\boldsymbol{\lambda})\,.\]
Proof.: In (3.2), choose \(\mathbf{s}_{1}=\ldots=\mathbf{s}_{q}\coloneqq\mathbf{s}\) and \(\boldsymbol{\alpha}\) with \(\boldsymbol{\alpha}_{\bullet}=1\), and set \(n\coloneqq\mathbf{n}_{\bullet}\). Then, the left-hand side of (3.2) becomes the \(n^{\text{th}}\) moment of the linear functional \(\mathbf{x}\mapsto\mathbf{s}\cdot\mathbf{x}\) of \(D_{\boldsymbol{\alpha}}\) and is thus equal to \(Z_{n}[\mathbf{s}\cdot\boldsymbol{\alpha},\mathbf{s}^{\circ 2}\cdot\boldsymbol{\alpha},\ldots,\mathbf{s}^{\circ n}\cdot\boldsymbol{\alpha}]\) by [4, Thm. 3.2]. As for the right-hand side, for the above choice of the \(\mathbf{s}_{i}\)'s the monomials \(\omega_{\mathbf{a}}[\mathbf{S};\boldsymbol{\alpha}]\) satisfy \(\omega_{\mathbf{a}}[\mathbf{S};\boldsymbol{\alpha}]=\omega_{\mathbf{a}^{\prime}}[\mathbf{S};\boldsymbol{\alpha}]\) whenever \(\mathbf{a}_{\bullet}=\mathbf{a}^{\prime}_{\bullet}\). Collecting terms in the right-hand side and equating the coefficients of the corresponding monomials on both sides yields the assertion.
**Corollary 3.4**.: _The following identity holds_
\[\sum_{\begin{subarray}{c}\mathbf{n}\in\mathds{N}_{*}^{q}\\ \mathbf{n}_{\bullet}=n\end{subarray}}Z_{\mathbf{n}}(\Omega_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}])=Z_{n}[\mathsf{row}(\mathbf{S})\cdot\boldsymbol{\alpha},\mathsf{row}(\mathbf{S})^{\circ 2}\cdot\boldsymbol{\alpha},\ldots,\mathsf{row}(\mathbf{S})^{\circ n}\cdot\boldsymbol{\alpha}]\,. \tag{3.4}\]
Proof.: Set
\[\Phi[\boldsymbol{\alpha};\mathbf{S}]\coloneqq\sum_{\mathbf{M}\in\mathds{N}_{0}^{k\times q}}\frac{\langle\boldsymbol{\alpha}\rangle_{\mathsf{row}(\mathbf{M})}}{\langle\boldsymbol{\alpha}_{\bullet}\rangle_{\mathsf{col}(\mathbf{M})_{\bullet}}}\frac{\mathbf{S}^{\mathbf{M}}}{\mathbf{M}!}\,.\]
By definition of \(\Phi[\boldsymbol{\alpha};\mathbf{S}]\) and Lemma 3.8 below, and by [4, Eqn. (2.9)], for every \(t\in\mathds{R}\),
\[\Phi[\boldsymbol{\alpha};t\,\mathbf{S}]=\int_{\Delta^{k-1}}e^{t(\mathbf{s}_{1 }+\cdots+\mathbf{s}_{q})\cdot\mathbf{x}}\,\mathrm{d}D_{\boldsymbol{\alpha}}( \mathbf{x})\mathrel{\mathop{:}}=\widehat{D_{\boldsymbol{\alpha}}}\big{(} \text{row}(\mathbf{S})\big{)}={}_{k}\Phi_{2}[\boldsymbol{\alpha};\boldsymbol{ \alpha}_{\bullet};t\,\text{row}(\mathbf{S})]\,.\]
Expanding the left-hand side as a series in \(n\in\mathds{N}_{0}\), each summand is the left-hand side of (3.4) by Theorem 3.1. Expanding the right-hand side as a series in \(n\in\mathds{N}_{0}\), each summand is the right-hand side of (3.4) by [4, Prop. 3.5]. Since, for the same \(n\), the summands in each of these expansions are polynomials of the same degree \(n\) in the variables \(\mathbf{S}\), we may equate the summands one by one, which yields (3.4).
### Dirichlet-Ferguson and Gamma measures
Let \(X\) be a second countable locally compact Hausdorff space, and \(\mathscr{P}\) be the space of all Borel probability measures on \(X\), endowed with the Borel \(\sigma\)-algebra of the narrow topology. For any finite Borel measure \(\eta\) on \(X\) and any bounded Borel \(f\colon X\to\mathds{R}\) we set \(\eta f\mathrel{\mathop{:}}=\int f\,\mathrm{d}\eta\).
Dirichlet-Ferguson measures. For \(\beta>0\) and \(\sigma\in\mathscr{P}\), let \(\alpha\coloneqq\beta\sigma\) be the finite Borel measure on \(X\) with total mass \(\beta\) and shape (also: simplicial part) \(\sigma\). The Dirichlet-Ferguson measure \(\mathcal{D}_{\alpha}\) with intensity (measure) \(\alpha\) is the unique Borel probability measure on \(\mathscr{P}\) with Fourier transform [4, Thm. 3.10]
\[\widehat{\mathcal{D}_{\alpha}}(f)\coloneqq\int_{\mathscr{P}}e^{\mathrm{i}\,\eta f}\,\mathrm{d}\mathcal{D}_{\alpha}(\eta)=\sum_{n=0}^{\infty}\frac{\mathrm{i}^{n}}{\langle\beta\rangle_{n}}Z_{n}\big{(}\alpha f,\alpha f^{2},\ldots,\alpha f^{n}\big{)}\,,\qquad f\in\mathcal{C}_{b}\,.\]
For continuous bounded \(f_{1},\ldots,f_{q}\colon X\to\mathds{R}\), set
\[\Omega_{\mathbf{n}}[f_{1},\ldots,f_{q};\alpha]\coloneqq\Big{(}\alpha\big{(}f_{1}^{a_{1}}\cdots f_{q}^{a_{q}}\big{)}\Big{)}_{\mathbf{a}\leq_{\circ}\mathbf{n}}\,.\]
By a straightforward adaptation of the proof for the univariate case [4, Thm. 3.10], as a corollary of Theorem 3.1 we obtain an explicit expression for the moments of \(\mathcal{D}_{\alpha}\).
**Corollary 3.5** (Multivariate moments of \(\mathcal{D}_{\alpha}\)).: _We have_
\[\int_{\mathscr{P}}\prod_{j}^{q}(\eta f_{j})^{n_{j}}\,\mathrm{d}\mathcal{D}_{ \alpha}(\eta)=\frac{\mathbf{n}!}{\langle\beta\rangle_{\mathbf{n}_{\bullet}}}Z _{\mathbf{n}}\big{(}\Omega_{\mathbf{n}}[f_{1},\ldots,f_{q};\alpha]\big{)}\,.\]
We recover Theorem 3.1 by choosing a Borel partition \((X_{i})_{i}^{k}\) of \(X\) with \(\alpha_{i}\coloneqq\alpha X_{i}\), and simple functions \(f_{1},\ldots,f_{q}\) constantly equal to, respectively, \(s_{1,i},\ldots,s_{q,i}\) on each set \(X_{i}\), \(i\in[k]\).
Gamma measures. Let \(\mathcal{G}_{\alpha}\) be the law of the Gamma point process with intensity \(\alpha\), e.g. [18].
**Corollary 3.6** (Multivariate moments of \(\mathcal{G}_{\alpha}\)).: _We have_
\[\int_{\mathscr{M}^{+}}\prod_{j}^{q}(\eta f_{j})^{n_{j}}\,\mathrm{d}\mathcal{G}_{\alpha}(\eta)=\mathbf{n}!\,Z_{\mathbf{n}}\big{(}\Omega_{\mathbf{n}}[f_{1},\ldots,f_{q};\alpha]\big{)}\,.\]
Remark 3.7.: Alternative expressions for the multivariate moments of the Gamma measure may be obtained by differentiating its characteristic functional (e.g. [6, p. 5]). Such expressions are however not informative on their algebraic and combinatorial meaning in connection with \(Z_{\mathbf{n}}\), as they rather rely on the multivariate multi-factor Leibniz rule. A similar approach does not apply to the Dirichlet-Ferguson measure, due to the convoluted form of its characteristic functional.
### Proof of Theorem 3.1
**Lemma 3.8**.: _The following identity holds_
\[\mu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]=\frac{\mathbf{n}!}{\langle \boldsymbol{\alpha_{\bullet}}\rangle_{\mathbf{n_{\bullet}}}}\sum_{\begin{subarray} {c}\mathbf{M}\in\mathbb{N}_{0}^{k\times q}\\ \text{col}(\mathbf{M})=\mathbf{n}\end{subarray}}\langle\boldsymbol{\alpha} \rangle_{\text{row}(\mathbf{M})}\,\frac{\mathbf{S}^{\mathbf{M}}}{\mathbf{M}! }\eqqcolon\nu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\,. \tag{3.5}\]
Proof.: By the Multinomial Theorem and by properties of the Dirichlet distribution
\[\begin{split}\mu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]&=\frac{1}{\mathrm{B}[\boldsymbol{\alpha}]}\int_{\Delta^{k-1}}\left(\prod_{j}^{q}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}_{0}^{k}\\ \mathbf{m}_{\bullet}=n_{j}\end{subarray}}\binom{n_{j}}{\mathbf{m}}\,\mathbf{s}_{j}^{\mathbf{m}}\,\mathbf{x}^{\mathbf{m}}\right)\mathbf{x}^{\boldsymbol{\alpha}-\mathbf{1}}\,\mathrm{d}\mathbf{x}\\ &=\frac{1}{\mathrm{B}[\boldsymbol{\alpha}]}\int_{\Delta^{k-1}}\left(\sum_{\begin{subarray}{c}\mathbf{m}_{1},\dots,\mathbf{m}_{q}\in\mathbb{N}_{0}^{k}\\ \mathbf{m}_{1\bullet}=n_{1},\dots,\mathbf{m}_{q\bullet}=n_{q}\end{subarray}}\prod_{j}^{q}\binom{n_{j}}{\mathbf{m}_{j}}\,\mathbf{s}_{j}^{\mathbf{m}_{j}}\mathbf{x}^{\mathbf{m}_{j}}\right)\mathbf{x}^{\boldsymbol{\alpha}-\mathbf{1}}\,\mathrm{d}\mathbf{x}\\ &=\sum_{\begin{subarray}{c}\mathbf{m}_{1},\dots,\mathbf{m}_{q}\in\mathbb{N}_{0}^{k}\\ \mathbf{m}_{1\bullet}=n_{1},\dots,\mathbf{m}_{q\bullet}=n_{q}\end{subarray}}\frac{1}{\mathrm{B}[\boldsymbol{\alpha}]}\left(\prod_{j}^{q}\binom{n_{j}}{\mathbf{m}_{j}}\,\mathbf{s}_{j}^{\mathbf{m}_{j}}\right)\int_{\Delta^{k-1}}\mathbf{x}^{\mathbf{m}_{1}+\dots+\mathbf{m}_{q}+\boldsymbol{\alpha}-\mathbf{1}}\,\mathrm{d}\mathbf{x}\\ &=\sum_{\begin{subarray}{c}\mathbf{m}_{1},\dots,\mathbf{m}_{q}\in\mathbb{N}_{0}^{k}\\ \mathbf{m}_{1\bullet}=n_{1},\dots,\mathbf{m}_{q\bullet}=n_{q}\end{subarray}}\frac{\mathrm{B}[\mathbf{m}_{1}+\dots+\mathbf{m}_{q}+\boldsymbol{\alpha}]}{\mathrm{B}[\boldsymbol{\alpha}]}\prod_{j}^{q}\binom{n_{j}}{\mathbf{m}_{j}}\,\mathbf{s}_{j}^{\mathbf{m}_{j}}\,.\end{split}\]
Reindexing the summation over \(\mathbf{M}=(\mathbf{m}_{1},\dots,\mathbf{m}_{q})\in\mathds{N}_{0}^{k\times q}\) and expanding the Beta functions into Pochhammer symbols, we conclude that
\[\mu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]=\frac{\mathbf{n}!}{\langle \boldsymbol{\alpha_{\bullet}}\rangle_{\mathbf{n_{\bullet}}}}\sum_{ \begin{subarray}{c}\mathbf{M}\in\mathbb{N}_{0}^{k\times q}\\ \text{col}(\mathbf{M})=\mathbf{n}\end{subarray}}\langle\boldsymbol{\alpha} \rangle_{\text{row}(\mathbf{M})}\,\frac{\mathbf{S}^{\mathbf{M}}}{\mathbf{M}!}\,.\qed\]
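The identity (3.5) is easy to test numerically. The following minimal sketch (not part of the original argument; all helper names and parameter values are ours) compares a Monte Carlo estimate of the mixed Dirichlet moment \(\mu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\) with the explicit sum \(\nu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\):

```python
import itertools
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def rising(x, n):
    """Pochhammer symbol <x>_n = x (x+1) ... (x+n-1)."""
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def compositions(n, k):
    """All m in N_0^k with m_bullet = n (stars and bars)."""
    for cuts in itertools.combinations(range(n + k - 1), k - 1):
        prev, parts = -1, []
        for c in cuts:
            parts.append(c - prev - 1)
            prev = c
        parts.append(n + k - 2 - prev)
        yield tuple(parts)

def nu(S, alpha, n):
    """Right-hand side of (3.5): sum over matrices M with col(M) = n."""
    k = S.shape[0]
    total = 0.0
    for cols in itertools.product(*(list(compositions(nj, k)) for nj in n)):
        M = np.array(cols).T                       # k x q, column sums = n
        rows = M.sum(axis=1)
        term = np.prod([rising(alpha[i], int(rows[i])) for i in range(k)])
        term *= np.prod(S**M) / np.prod([factorial(int(m)) for m in M.flat])
        total += term
    pref = np.prod([factorial(nj) for nj in n]) / rising(alpha.sum(), sum(n))
    return pref * total

k, q = 3, 2
alpha = np.array([0.7, 1.3, 2.0])
S = rng.uniform(0.1, 1.0, size=(k, q))             # columns are s_1, ..., s_q
n = (2, 3)

x = rng.dirichlet(alpha, size=200_000)             # Monte Carlo for mu_n
mc = np.mean(np.prod((x @ S) ** np.array(n), axis=1))
print(mc, nu(S, alpha, n))   # should agree to two or three decimal places
```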
Let us recall the following fact, e.g. [27, Eqn. (8), p. 1060], or [2, Eqn. (10), p. 39].
**Lemma 3.9**.: _For every \(k\in\mathds{N}\), every \(\mathbf{v}\in\mathds{N}_{0}^{k}\), and every integer \(0\leq m\leq\mathbf{v}_{\bullet}\),_
\[\binom{\mathbf{v}_{\bullet}}{\mathbf{v}}=\sum_{\begin{subarray}{c}\mathbf{w}\leq\mathbf{v}\\ \mathbf{w}_{\bullet}=m\end{subarray}}\binom{\mathbf{w}_{\bullet}}{\mathbf{w}}\binom{\mathbf{v}_{\bullet}-\mathbf{w}_{\bullet}}{\mathbf{v}-\mathbf{w}}\,. \tag{3.6}\]
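A brute-force numerical check of (3.6), a minimal sketch with an arbitrarily chosen \(\mathbf{v}\):

```python
import itertools
from math import factorial, prod

def multinom(n, v):
    """Multinomial coefficient binom(n; v); zero unless v >= 0 and v_bullet = n."""
    if any(w < 0 for w in v) or sum(v) != n:
        return 0
    return factorial(n) // prod(factorial(w) for w in v)

v = (3, 1, 2)                                      # arbitrary v in N_0^3
for m in range(sum(v) + 1):
    rhs = sum(multinom(m, w)
              * multinom(sum(v) - m, tuple(a - b for a, b in zip(v, w)))
              for w in itertools.product(*(range(a + 1) for a in v))
              if sum(w) == m)
    assert rhs == multinom(sum(v), v)
print("Lemma 3.9 checked for v =", v)
```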
Proof of Theorem 3.1.: Set
\[\tilde{\nu}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\coloneqq\frac{\langle\boldsymbol{\alpha}_{\bullet}\rangle_{\mathbf{n}_{\bullet}}}{\mathbf{n}!}\,\nu_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\qquad\text{and}\qquad\tilde{\zeta}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\coloneqq\frac{\langle\boldsymbol{\alpha}_{\bullet}\rangle_{\mathbf{n}_{\bullet}}}{\mathbf{n}!}\,\zeta_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\,.\]
We show that \(\tilde{\nu}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]=\tilde{\zeta}_{ \mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\) and conclude the assertion by Lemma 3.8.
Step 1.: We claim that
\[\tilde{\nu}_{\mathbf{n}-\mathbf{e}_{j}}[\mathbf{S};\boldsymbol{\alpha}+\mathbf{e}_{\ell}]=\sum_{\mathbf{e}_{j}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}}\mathbf{S}_{\ell}^{\mathbf{h}-\mathbf{e}_{j}}\frac{(\mathbf{h}_{\bullet}-1)!}{(\mathbf{h}-\mathbf{e}_{j})!}\,\tilde{\nu}_{\mathbf{n}-\mathbf{h}}[\mathbf{S};\boldsymbol{\alpha}]\,, \tag{3.7}\]
where, conventionally,
\[\tilde{\nu}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]=0\quad\text{whenever}\quad\mathbf{n}\not\geq\mathbf{0}_{q}\,. \tag{3.8}\]
We argue by induction on \(\mathbf{n}_{\bullet}\) with trivial (i.e. \(1=1\)) base step for \(\mathbf{n}_{\bullet}=1\).
Inductive step.: Let \(\partial_{a}^{b}\coloneqq\partial_{s_{a}^{b}}\), set \(\mathbf{E}_{a}^{b}\coloneqq[\delta_{ai}\delta_{bj}]_{i}^{j}\in\{0,1\}^{k\times q}\), and note that
\[\begin{split}\partial_{a}^{b}\tilde{\nu}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]&=\sum_{\begin{subarray}{c}\mathbf{M}\in\mathbb{N}_{0}^{k\times q}\\ \mathrm{col}(\mathbf{M})=\mathbf{n}\end{subarray}}\langle\boldsymbol{\alpha}\rangle_{\mathrm{row}(\mathbf{M})}\,\partial_{a}^{b}\frac{\mathbf{S}^{\mathbf{M}}}{\mathbf{M}!}=\sum_{\begin{subarray}{c}\mathbf{E}_{a}^{b}\leq\mathbf{M}\in\mathbb{N}_{0}^{k\times q}\\ \mathrm{col}(\mathbf{M})=\mathbf{n}\end{subarray}}\alpha_{a}\,\langle\boldsymbol{\alpha}+\mathbf{e}_{a}\rangle_{\mathrm{row}(\mathbf{M}-\mathbf{E}_{a}^{b})}\,\frac{\mathbf{S}^{\mathbf{M}-\mathbf{E}_{a}^{b}}}{(\mathbf{M}-\mathbf{E}_{a}^{b})!}\\ &=\sum_{\begin{subarray}{c}\mathbf{M}\in\mathbb{N}_{0}^{k\times q}\\ \mathrm{col}(\mathbf{M})=\mathbf{n}-\mathbf{e}_{b}\end{subarray}}\alpha_{a}\,\langle\boldsymbol{\alpha}+\mathbf{e}_{a}\rangle_{\mathrm{row}(\mathbf{M})}\,\frac{\mathbf{S}^{\mathbf{M}}}{\mathbf{M}!}\\ &=\alpha_{a}\tilde{\nu}_{\mathbf{n}-\mathbf{e}_{b}}[\mathbf{S};\boldsymbol{\alpha}+\mathbf{e}_{a}]\,.\end{split} \tag{3.9}\]
Applying the inductive hypothesis to \(\mathbf{n}-\mathbf{e}_{b}\) with \(\boldsymbol{\alpha}+\mathbf{e}_{a}\) in place of \(\boldsymbol{\alpha}\), we have
\[\alpha_{a}\tilde{\nu}_{\mathbf{n}-\mathbf{e}_{j}-\mathbf{e}_{b}}[\mathbf{S};\boldsymbol{\alpha}+\mathbf{e}_{\ell}+\mathbf{e}_{a}]=\alpha_{a}\sum_{\mathbf{e}_{j}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}-\mathbf{e}_{b}}\mathbf{S}_{\ell}^{\mathbf{h}-\mathbf{e}_{j}}\frac{(\mathbf{h}_{\bullet}-1)!}{(\mathbf{h}-\mathbf{e}_{j})!}\,\tilde{\nu}_{\mathbf{n}-\mathbf{e}_{b}-\mathbf{h}}[\mathbf{S};\boldsymbol{\alpha}+\mathbf{e}_{a}]\,,\]
from which, in view of (3.9), the claim (3.7) at level \(\mathbf{n}\) follows by collecting, for each \(i\), the terms with \(\mathbf{h}_{\bullet}=i\).
The latter is implied by the equality of each of the summands, viz.
\[\frac{(\mathbf{n}_{\bullet}-1)!}{(\mathbf{n}-\mathbf{e}_{j})!}\frac{1}{(\mathbf{n}_{\bullet}-i)!}=\sum_{\begin{subarray}{c}\mathbf{e}_{j}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}\\ \mathbf{h}_{\bullet}=i\end{subarray}}\frac{(\mathbf{h}_{\bullet}-1)!}{(\mathbf{h}-\mathbf{e}_{j})!}\frac{1}{(\mathbf{n}-\mathbf{h})!}\,,\]
which is in turn a consequence of Lemma 3.9, applied with \(\mathbf{v}=\mathbf{n}-\mathbf{e}_{j}\), \(\mathbf{w}=\mathbf{h}-\mathbf{e}_{j}\), and \(m=i-1\).
Step 2. We now verify that \(\tilde{\nu}_{\mathbf{n}}=\tilde{\zeta}_{\mathbf{n}}\). We argue by strong induction on \(\mathbf{n}_{\bullet}\) with trivial (i.e. \(1=1\)) base step \(\mathbf{n}_{\bullet}=0\). Inductive step. Assume for every \(\boldsymbol{\alpha}\in\mathds{R}_{+}^{k}\) that \(\tilde{\nu}_{\mathbf{n}-\mathbf{h}}[\mathbf{S};\boldsymbol{\alpha}]=\tilde{\zeta}_{\mathbf{n}-\mathbf{h}}[\mathbf{S};\boldsymbol{\alpha}]\) for every \(\mathbf{h}\leq_{\circ}\mathbf{n}\) with \(\mathbf{h}\neq\mathbf{0}\). Now,
\[\begin{split}\mathbf{n}!\,\partial_{p}^{b}\tilde{\zeta}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]&=\sum_{A\vdash\mathbf{n}}M_{\mathbf{n}}(A)\,\partial_{p}^{b}\underbrace{\prod_{\mathbf{a}\in\operatorname{supp}A}\left(\left(\mathsf{s}_{1}^{\circ a_{1}}\circ\cdots\circ\mathsf{s}_{q}^{\circ a_{q}}\right)\cdot\boldsymbol{\alpha}\right)^{A(\mathbf{a})}}_{\eqqcolon J(\mathbf{S};\boldsymbol{\alpha};A)}\\ &=\sum_{A\vdash\mathbf{n}}M_{\mathbf{n}}(A)\sum_{\begin{subarray}{c}\mathbf{a}\in\operatorname{supp}A\\ \mathbf{a}\geq_{\circ}\mathbf{e}_{b}\end{subarray}}\frac{A(\mathbf{a})\,a_{b}\,\alpha_{p}\,\mathbf{S}_{p}^{\mathbf{a}-\mathbf{e}_{b}}}{\left(\mathsf{s}_{1}^{\circ a_{1}}\circ\cdots\circ\mathsf{s}_{q}^{\circ a_{q}}\right)\cdot\boldsymbol{\alpha}}\,J(\mathbf{S};\boldsymbol{\alpha};A)\\ &=\alpha_{p}\sum_{\mathbf{e}_{b}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}}\sum_{A\vdash\mathbf{n}}M_{\mathbf{n}}(A)\,\frac{A(\mathbf{h})\,h_{b}\,\mathbf{S}_{p}^{\mathbf{h}-\mathbf{e}_{b}}}{\left(\mathsf{s}_{1}^{\circ h_{1}}\circ\cdots\circ\mathsf{s}_{q}^{\circ h_{q}}\right)\cdot\boldsymbol{\alpha}}\,J(\mathbf{S};\boldsymbol{\alpha};A)\,.\end{split} \tag{3.13}\]
For each \(\mathbf{h}\leq_{\circ}\mathbf{n}\) and each \(A\vdash\mathbf{n}\) with \(A\geq\mathbf{1}_{\mathbf{h}}\), set \(C\coloneqq A-\mathbf{1}_{\mathbf{h}}\). Note that
\[\begin{split}M_{\mathbf{n}}(A)&=\mathbf{n}!\prod_{\mathbf{a}\in\operatorname{supp}A}\frac{\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}}{\mathbf{a}_{\bullet}^{A(\mathbf{a})}\,A(\mathbf{a})!}=\frac{\mathbf{n}!}{(\mathbf{n}-\mathbf{h})!}\,\frac{\binom{\mathbf{h}_{\bullet}}{\mathbf{h}}}{\mathbf{h}_{\bullet}\,A(\mathbf{h})}\,(\mathbf{n}-\mathbf{h})!\prod_{\mathbf{a}\in\operatorname{supp}C}\frac{\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{C(\mathbf{a})}}{\mathbf{a}_{\bullet}^{C(\mathbf{a})}\,C(\mathbf{a})!}\\ &=\frac{M_{\mathbf{n}-\mathbf{h}}(C)}{A(\mathbf{h})}\,\frac{(\mathbf{h}_{\bullet}-1)!}{\mathbf{h}!}\,\frac{\mathbf{n}!}{(\mathbf{n}-\mathbf{h})!}\end{split} \tag{3.14}\]
and
\[J(\mathbf{S};\boldsymbol{\alpha};A) =J(\mathbf{S};\boldsymbol{\alpha};C)\left(\mathsf{s}_{1}^{\circ h _{1}}\circ\cdots\circ\mathsf{s}_{q}^{\circ h_{q}}\right)\cdot\boldsymbol{ \alpha}\,. \tag{3.15}\]
Substituting (3.14) and (3.15) in (3.13) above, and simplifying \(\mathbf{n}!\),
\[\begin{split}\partial_{p}^{b}\tilde{\zeta}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]&=\alpha_{p}\sum_{\mathbf{e}_{b}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}}\sum_{C\vdash\mathbf{n}-\mathbf{h}}\frac{M_{\mathbf{n}-\mathbf{h}}(C)}{(\mathbf{n}-\mathbf{h})!}\,\frac{(\mathbf{h}_{\bullet}-1)!}{(\mathbf{h}-\mathbf{e}_{b})!}\,\mathbf{S}_{p}^{\mathbf{h}-\mathbf{e}_{b}}\,J(\mathbf{S};\boldsymbol{\alpha};C)\\ &=\alpha_{p}\sum_{\mathbf{e}_{b}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}}\frac{(\mathbf{h}_{\bullet}-1)!}{(\mathbf{h}-\mathbf{e}_{b})!}\,\mathbf{S}_{p}^{\mathbf{h}-\mathbf{e}_{b}}\,\tilde{\zeta}_{\mathbf{n}-\mathbf{h}}[\mathbf{S};\boldsymbol{\alpha}]\,.\end{split}\]
Combining the inductive hypothesis with (3.7) (applied with \(\ell=p\) and \(j=b\)) and with (3.9) (applied with \(a=p\)),
\[\partial_{p}^{b}\tilde{\zeta}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]=\alpha_{p}\sum_{\mathbf{e}_{b}\leq_{\circ}\mathbf{h}\leq_{\circ}\mathbf{n}}\frac{(\mathbf{h}_{\bullet}-1)!}{(\mathbf{h}-\mathbf{e}_{b})!}\,\mathbf{S}_{p}^{\mathbf{h}-\mathbf{e}_{b}}\,\tilde{\nu}_{\mathbf{n}-\mathbf{h}}[\mathbf{S};\boldsymbol{\alpha}]=\alpha_{p}\,\tilde{\nu}_{\mathbf{n}-\mathbf{e}_{b}}[\mathbf{S};\boldsymbol{\alpha}+\mathbf{e}_{p}]=\partial_{p}^{b}\tilde{\nu}_{\mathbf{n}}[\mathbf{S};\boldsymbol{\alpha}]\,.\]
By arbitrariness of \(p\) and \(b\) we conclude that \(\tilde{\zeta}_{\text{n}}[\mathbf{S};\boldsymbol{\alpha}]-\tilde{\nu}_{\text{n}}[ \mathbf{S};\boldsymbol{\alpha}]\) is constant as a function of \(\mathbf{S}\), hence vanishing by choosing \(\mathbf{S}=\mathbf{0}\).
## 4 A Polychromatic ESF
Let \(r\) be the number of cycles of a random permutation \(\pi\in\mathfrak{S}_{n}\). Assume that \(\pi\) is chosen with a probability proportional to \(\theta^{r}\) for some \(\theta>0\). Then, the probability that \(\pi\) has cycle structure \(\boldsymbol{\lambda}\vdash n\) is precisely the Ewens distribution \(E_{\theta}(\boldsymbol{\lambda})\). We provide a generalization of this statement to the case of colored permutations, with coloring and cycle structure indexed by a \(q\)-colored partition.
Let
\[\mathcal{A}_{n}\coloneqq\bigcup_{\mathbf{n}\in\mathds{N}_{0}^{q}:\mathbf{n}_{\bullet}=n}\mathcal{A}_{\mathbf{n}} \tag{4.1}\]
be the family of all multisets \(A\) on \(\mathds{N}_{*}^{q}\) with \(\mathsf{shape}(A)\vdash n\).
**Definition 4.1** (Polychromatic ESF).: Fix \(n,q\in\mathds{N}_{1}\), \(\theta>0\), and \(\mathbf{p}\in\Delta^{q-1}\). The polychromatic ESF \(E^{n}_{\theta,\mathbf{p}}\) is the probability distribution on \(\mathcal{A}_{n}\) given by
\[E^{n}_{\theta,\mathbf{p}}(A)\coloneqq\frac{n!}{\langle\theta\rangle_{n}}\, \theta^{\mathsf{card}(A)}\,\frac{\mathbf{p}^{\mathsf{col}(A)}}{\mathsf{col}(A)! }M_{\mathsf{col}(A)}(A)\,,\qquad A\in\mathcal{A}_{n}\,. \tag{4.2}\]
Proof.: Let us verify that \(E^{n}_{\theta,\mathbf{p}}\) is indeed a probability distribution on \(\mathcal{A}_{n}\). For fixed \(k>n\) set \(\mathbf{s}_{j}\coloneqq p_{j}\mathbf{1}^{(k)}\), \(j\in[q]\), and \(\boldsymbol{\alpha}\coloneqq(\theta/k)\,\mathbf{1}^{(k)}\). Respectively by: the Multinomial Theorem, Theorem 3.1, and the definition (2.7) of \(Z_{\mathbf{n}}\),
\[1 =\sum_{\mathbf{n}\in\mathds{N}_{0}^{q}:\mathbf{n}_{\bullet}=n} \binom{n}{\mathbf{n}}\mathbf{p}^{\mathbf{n}}=\sum_{\mathbf{n}\in\mathds{N}_{0} ^{q}:\mathbf{n}_{\bullet}=n}\binom{n}{\mathbf{n}}\int_{\Delta^{k-1}}\prod_{j }^{q}(\mathbf{s}_{j}\cdot\mathbf{x})^{n_{j}}\,\mathrm{d}D_{\boldsymbol{ \alpha}}(\mathbf{x})\] \[=\sum_{\mathbf{n}\in\mathds{N}_{0}^{q}:\mathbf{n}_{\bullet}=n} \binom{n}{\mathbf{n}}\frac{\mathbf{n}!}{\langle\theta\rangle_{n}}\frac{1}{ \mathbf{n}!}\sum_{A\vdash\mathbf{n}}M_{\mathbf{n}}(A)\prod_{\mathbf{a}\in \mathsf{supp}A}(\theta\,\mathbf{p}^{\mathbf{a}})^{A(\mathbf{a})}\] \[=\sum_{A\in\mathcal{A}_{n}}\frac{\theta^{\mathsf{card}(A)}}{ \langle\theta\rangle_{n}}\binom{n}{\mathsf{col}(A)}\mathbf{p}^{\mathsf{col}(A )}M_{\mathsf{col}(A)}(A)\,.\qed\]
Remark 4.2 (\(q=1\)).: When \(q=1\), we have \(\mathbf{p}=p=1\) and \(\mathsf{col}(A)=n\) for every \(A\in\mathcal{A}_{n}\), thus (4.2) reduces to the standard ESF by Remark 2.4.
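For small \(n\) and \(q\), Definition 4.1 can also be verified by brute force. The sketch below (helper names and parameter values are ours) enumerates all \(q\)-colored partitions of weight \(n\), evaluates (4.2) using \(M_{\mathbf{n}}(A)=\mathbf{n}!\prod_{\mathbf{a}}\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}/(\mathbf{a}_{\bullet}^{A(\mathbf{a})}A(\mathbf{a})!)\) (cf. (3.14)), and checks that the masses sum to one:

```python
import itertools
from math import factorial, prod

def rising(x, n):
    return prod(x + i for i in range(n))

def vectors(weight, q):
    """Nonzero color vectors a in N_0^q with a_bullet = weight."""
    return [v for v in itertools.product(range(weight + 1), repeat=q)
            if sum(v) == weight]

def colored_partitions(n, q, lo=None):
    """All multisets of nonzero vectors with total weight n, as sorted tuples."""
    if n == 0:
        yield ()
        return
    for w in range(1, n + 1):
        for v in vectors(w, q):
            if lo is None or v >= lo:
                for rest in colored_partitions(n - w, q, lo=v):
                    yield (v,) + rest

def mass(A, theta, p, n):
    """E^n_{theta,p}(A) of Eq. (4.2); A is a tuple of color vectors."""
    q = len(p)
    card = len(A)                                    # number of cycles
    col = [sum(a[j] for a in A) for j in range(q)]   # col(A)
    out = factorial(n) / rising(theta, n) * theta**card
    out *= prod(pj**cj for pj, cj in zip(p, col))
    for a in set(A):
        m, s = A.count(a), sum(a)
        binom = factorial(s) // prod(factorial(x) for x in a)
        out *= binom**m / (s**m * factorial(m))
    return out

n, theta, p = 4, 1.5, (0.3, 0.7)
total = sum(mass(A, theta, p, n) for A in colored_partitions(n, len(p)))
print(total)    # should equal 1 up to rounding
```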
**Lemma 4.3** (Conditioning).: Fix \(\mathbf{n}\in\mathds{N}_{0}^{q}\) with \(\mathbf{n}_{\bullet}=n\). Then, the conditional probability \(E^{n}_{\theta,\mathbf{p}}\left[\,\cdot\,|\mathsf{col}(\,\cdot\,)=\mathbf{n}\right]\) satisfies
\[E^{n}_{\theta,\mathbf{p}}\left[A\,\middle|\,\mathsf{col}(\,\cdot\,)=\mathbf{n}\right]=\frac{\theta^{\mathsf{card}(A)}}{\langle\theta\rangle_{n}}M_{\mathbf{n}}(A)\,,\qquad A\in\mathcal{A}_{\mathbf{n}}\,. \tag{4.3}\]
Proof.: For fixed \(k>n\) set \(\mathbf{s}_{1}=\cdots=\mathbf{s}_{q}\coloneqq\mathbf{1}^{{{{(k)}}}}\), and \(\boldsymbol{\alpha}\coloneqq\!(\theta/k)\mathbf{1}^{{{{(k)}}}}\). By Theorem 3.1 and by the definition (2.7) of \(Z_{\mathbf{n}}\),
\[1=\int_{\Delta^{k-1}}\prod_{j}^{q}(\mathbf{s}_{j}\cdot\mathbf{x})^{n_{j}}\, \mathrm{d}D_{\boldsymbol{\alpha}}=\frac{\mathbf{n}!}{\langle\theta\rangle_{n} }\frac{1}{\mathbf{n}!}\sum_{A\vdash\mathbf{n}}M_{\mathbf{n}}(A)\prod_{\mathbf{ a}\in\mathsf{supp}A}\theta^{A(\mathbf{a})}\,,\]
hence
\[\sum_{A\vdash\mathbf{n}}\theta^{\mathsf{card}(A)}M_{\mathbf{n}}(A)=\langle \theta\rangle_{n}. \tag{4.4}\]
Now,
\[E^{n}_{\theta,\mathbf{p}}\left[A|\mathsf{col}(A)=\mathbf{n}\right]=\frac{E^{n }_{\theta,\mathbf{p}}(A)}{E^{n}_{\theta,\mathbf{p}}\left[\mathsf{col}(\,\cdot\, )=\mathbf{n}\right]}\quad\text{if}\quad\mathsf{col}(A)=\mathbf{n} \tag{4.5}\]
and \(0\) otherwise. Furthermore,
\[\begin{split} E^{n}_{\theta,\mathbf{p}}\left[\mathsf{col}(\,\cdot\,)=\mathbf{n}\right]&=\sum_{A\vdash\mathbf{n}}E^{n}_{\theta,\mathbf{p}}(A)=\sum_{A\vdash\mathbf{n}}\frac{n!}{\langle\theta\rangle_{n}}\,\theta^{\mathsf{card}(A)}\,\frac{\mathbf{p}^{\mathsf{col}(A)}}{\mathsf{col}(A)!}M_{\mathsf{col}(A)}(A)\\ &=\frac{n!}{\langle\theta\rangle_{n}}\frac{\mathbf{p}^{\mathbf{n}}}{\mathbf{n}!}\sum_{A\vdash\mathbf{n}}\theta^{\mathsf{card}(A)}M_{\mathbf{n}}(A)=n!\,\frac{\mathbf{p}^{\mathbf{n}}}{\mathbf{n}!}\end{split} \tag{4.6}\]
by (4.4). Combining (4.5), (4.6), and (4.2) thus yields
\[E^{n}_{\theta,\mathbf{p}}\left[A|\mathsf{col}(A)=\mathbf{n}\right]=\frac{ \theta^{\mathsf{card}(A)}}{\langle\theta\rangle_{n}}M_{\mathbf{n}}(A)\quad \text{if}\quad\mathsf{col}(A)=\mathbf{n}\]
and \(0\) otherwise.
Since \(E^{n}_{\theta,\mathbf{p}}\left[\,\cdot\,|\mathsf{col}(\,\cdot\,)=\mathbf{n}\right]\) does not depend on \(\mathbf{p}\), let us set
\[E^{\mathbf{n}}_{\theta}\coloneqq E^{n}_{\theta,\mathbf{p}}\left[\,\cdot\,| \mathsf{col}(\,\cdot\,)=\mathbf{n}\right]\quad\text{on}\quad\mathcal{A}_{ \mathbf{n}}\,.\]
In analogy with the standard ESF, the conditional probability \(E^{\mathbf{n}}_{\theta}\) counts \(\theta\)-biased \(q\)-colored permutations, as we now show.
**Proposition 4.4**.: Fix \(\theta>0\) and let \(\pi\in\mathfrak{S}_{\mathbf{n}_{\bullet}}\) be a \(\theta\)-biased random permutation. Then,
\[\mathbf{P}\big{[}\Pi(\pi)=A\big{]}=E_{\theta}^{\mathbf{n}}(A)\,,\qquad A\in \mathcal{A}_{\mathbf{n}}\,. \tag{4.7}\]
Proof.: Let \(r\) be the number of cycles of \(\pi\) including fixed points. Since \(\pi\) is \(\theta\)-biased and applying Proposition 2.13, we have
\[\mathbf{P}\big{[}\Pi(\pi)=A\big{]}=C_{\theta}\,\theta^{r}\,\big{|}\Pi^{-1}(A) \big{|}=C_{\theta}\,\theta^{r}M_{\text{col}(A)}(A)\,.\]
The conclusion follows since \(E_{\theta}^{\mathbf{n}}\) is a probability measure by Lemma 4.3.
_Remark 4.5_.: We can rephrase Proposition 4.4 by saying that \(E_{\theta}^{\mathbf{n}}\) is the push-forward via \(\Pi\) of the law \(\mathbf{P}\) of a \(\theta\)-biased random permutation in \(\mathfrak{S}_{\mathbf{n}_{\bullet}}\). Furthermore, as a consequence of Lemma 4.3 and Corollary 3.3, we see that
\[E_{\theta}(\boldsymbol{\lambda})=\sum_{\begin{subarray}{c}A\vdash\mathbf{n}\\ \mathsf{shape}(A)=\boldsymbol{\lambda}\end{subarray}}E_{\theta}^{\mathbf{n}}(A)\,,\qquad\boldsymbol{\lambda}\vdash n\,. \tag{4.8}\]
That is, \(E_{\theta}\) is the push-forward of \(E_{\theta}^{\mathbf{n}}\) via the function shape. In this sense, the newly defined measure \(E_{\theta}^{\mathbf{n}}\) can be seen as 'intermediate' between \(\mathbf{P}\) and \(E_{\theta}\).
Finally, let us collect here the main properties of \(E_{\theta,\mathbf{p}}^{n}\) with respect to manipulations of \(\mathbf{p}\). For each set partition \(\mathbf{L}\coloneqq\{L_{1},\ldots,L_{r}\}\vdash[q]\) denote by \(s_{\mathbf{L}}\colon[q]\to[r]\) the \(\mathbf{L}\)-_degeneracy map_ defined by \(s_{\mathbf{L}}^{-1}(k)=L_{k}\) for \(k\in[r]\). Further let \(\mathbf{S}_{\mathbf{L}}\in\{0,1\}^{r\times q}\) be the matrix \([\mathbf{S}_{\mathbf{L}}]_{i}^{j}\coloneqq\mathbf{1}_{j\in s_{\mathbf{L}}^{-1}(i)}\) and note that \(\mathbf{S}_{\mathbf{L}}\colon\mathds{N}_{*}^{q}\to\mathds{N}_{*}^{r}\) and \(\mathbf{S}_{\mathbf{L}}\colon\Delta^{q-1}\to\Delta^{r-1}\).
Arguing as in the proof following Definition 4.1, but choosing \(\mathbf{s}_{j}=\mathbf{s}_{j^{\prime}}\) in (3.2) whenever \(j,j^{\prime}\in L_{i}\) for some \(i\), we have the following.
**Proposition 4.6** (Aggregation).: Let \(n,q\in\mathbb{N}_{1}\), \(\theta>0\), and \(\mathbf{p}\in\Delta^{q-1}\). Then, cf. (2.1),
\[(\mathbf{S}_{\mathbf{L}})_{*_{2}}E_{\theta,\mathbf{p}}^{n}=E_{\theta,\mathbf{S}_{\mathbf{L}}\mathbf{p}}^{n}\,,\qquad\mathbf{L}\vdash[q]\,.\]
### A Hoppe-type urn model
In [13], F. M. Hoppe showed that the ESF \(E_{\theta}\) is the marginal distribution of a discrete-time Markov process \(\left(\Pi_{t}\right)_{t}\) of integer partitions \(\Pi_{t}\vdash t\) obtained from the sampling process \(\left(X_{t}\right)_{t}\) of what is now known as _Hoppe's urn model_. We adapt his construction to a similar urn model, resulting in a Markov process with values in the space of colored integer partitions and with marginal distribution \(E_{\theta,\mathbf{p}}^{t}\) at time \(t\).
Denote by \(\operatorname{Cat}_{\mathbf{p}}\) the categorical distribution on \([q]\) with parameters \(\mathbf{p}\in\Delta^{q-1}\).
Consider a process \(Y_{\circ}\coloneqq\left(Y_{t}\right)_{t}\) generated by sampling from an urn containing one cube and various numbers of labelled colored balls. At time \(0\), the urn contains only the cube. At every (integer) time \(t\), the labels are consecutive and ranging in \(\mathds{N}_{1}\), while the colors range in \([q]\). The cube has mass \(\theta\) and every ball has mass \(1\). At time \(t\), an object in the urn is selected at random with a probability proportional to its mass. If it is a ball, it is returned together with one additional ball of the same label and of a color chosen according to \(\operatorname{Cat}_{\mathbf{p}}\) independently of the label. If it is the cube, it is returned together with a ball with the smallest label previously not present in the urn and of a color chosen according to \(\operatorname{Cat}_{\mathbf{p}}\). We define random variables \(r_{t}\in\mathds{N}_{1}\) and \(Y_{t}\in\mathds{N}_{1}\times[q]\) as, respectively, the number of distinct labels (i.e. the maximal label) present in the urn after the \(t^{\text{th}}\) drawing, and the label and color of the additional ball returned at that drawing. Observe that, for every \(T\in\mathds{N}_{1}\), the process \(Y_{\circ}\) defines a random \(q\)-colored partition \(\mathscr{A}_{T}\) by letting
\[\mathbf{a}_{T}(i)\coloneqq\left(a_{T,1}(i),\ldots,a_{T,q}(i)\right)\,,\;a_{T,j} (i)\coloneqq\left|\{t\in[T]:Y_{t}=(i,j)\}\right|\,,\qquad\mathscr{A}_{T} \coloneqq\sum_{i}^{r_{T}}\mathbf{1}_{a_{T}(i)}. \tag{4.9}\]
As a consequence, in the notation of [13], the first component \(Y_{t,1}\) of \(Y_{t}\) satisfies \(Y_{t,1}=X_{t}\), while \(\mathsf{shape}(\mathscr{A}_{T})\) coincides with \(\Pi_{T}\). We call the Markov process \(Y_{\circ}\) the _polychromatic Hoppe urn_ (PHU), and the process \(\mathscr{A}_{\circ}\coloneqq\left(\mathscr{A}_{T}\right)_{T}\) the _PHU_-partition process.
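The urn is straightforward to simulate. The sketch below (parameter values and variable names are ours) draws from the PHU and tests one easy consequence of the construction: since \(\mathsf{shape}(\mathscr{A}_{T})=\Pi_{T}\), the number of distinct labels \(r_{T}=\mathsf{card}(\mathscr{A}_{T})\) must follow Hoppe's law, \(\mathbf{E}[r_{T}]=\sum_{t=1}^{T}\theta/(\theta+t-1)\):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_phu(T, theta, p):
    """One run of the polychromatic Hoppe urn up to time T; returns the
    q-colored partition A_T as {color vector (tuple): multiplicity}."""
    q = len(p)
    balls = []                                    # label of each ball in the urn
    colors = {}                                   # label -> accumulated color vector
    for _ in range(T):
        if rng.uniform() < theta / (theta + len(balls)):
            label = len(colors) + 1               # cube drawn: smallest unused label
        else:
            label = balls[rng.integers(len(balls))]   # ball drawn uniformly
        j = rng.choice(q, p=p)                    # color ~ Cat_p, independent
        balls.append(label)
        colors.setdefault(label, [0] * q)[j] += 1
    A = {}
    for vec in colors.values():
        A[tuple(vec)] = A.get(tuple(vec), 0) + 1
    return A

T, theta, p = 12, 1.5, [0.3, 0.7]
r = [sum(sample_phu(T, theta, p).values()) for _ in range(20_000)]
print(np.mean(r), sum(theta / (theta + t) for t in range(T)))  # should agree
```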
**Proposition 4.7**.: \(\mathscr{A}_{\circ}\) _is a Markov process with marginal distribution_
\[\mathbf{P}[\mathscr{A}_{T}=A]=E_{\theta,\mathbf{p}}^{T}(A)\,,\qquad A\in \mathcal{A}_{T}\,. \tag{4.10}\]
Proof.: The Markov property is trivially satisfied. With the notation of (4.9), the random variables \(\left(\mathbf{a}_{T}(i)_{\bullet}\right)_{i}\) are \(\left(Y_{t,1}\right)_{t\leq T}\)-measurable. In order to compute the marginal distribution at time \(T\), fix \(A\in\mathcal{A}_{T}\), and set \(\boldsymbol{\lambda}\coloneqq\mathsf{shape}(A)\) and \(r\coloneqq\boldsymbol{\lambda}_{\bullet}\).
We introduce two families of functions:
\[\mathcal{F}\coloneqq\left\{\mathbf{f}:[r]\rightarrow\mathsf{supp }(A):\,\left|\mathbf{f}^{-1}(\mathbf{a})\right|=A(\mathbf{a})\,,\quad \mathbf{a}\in\mathsf{supp}(A)\right\},\] \[\mathcal{G}\coloneqq\left\{g=\left(\,\cdot\,\right)_{\bullet} \circ\mathbf{f}=\mathbf{f}(\,\cdot\,)_{\bullet}:\,\mathbf{f}\in\mathcal{F} \right\}.\]
Since the colors \(Y_{t,2}\) are chosen independently of one another and of the labels \(Y_{t,1}\),
\[\begin{split}\mathbf{P}\big{[}\mathscr{A}_{T}=A\,\big{|}\,(Y_{t,1})_{t\leq T}\big{]}&=\sum_{\mathbf{f}\in\mathcal{F}}\mathbf{P}\left[\mathbf{f}(\,\cdot\,)=\mathbf{a}_{T}(\,\cdot\,)\,\big{|}\,(Y_{t,1})_{t\leq T}\right]=\sum_{\mathbf{f}\in\mathcal{F}}\prod_{i=1}^{r}\mathbf{P}\big{[}\mathbf{f}(i)=\mathbf{a}_{T}(i)\,\big{|}\,(Y_{t,1})_{t\leq T}\big{]}\\ &=\left|\left\{\mathbf{f}\in\mathcal{F}:\,\mathbf{f}(\,\cdot\,)_{\bullet}=\mathbf{a}_{T}(\,\cdot\,)_{\bullet}\right\}\right|\,\mathbf{p}^{\mathsf{col}(A)}\prod_{\mathbf{a}\in\mathsf{supp}(A)}\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}\,.\end{split}\]
It can be easily checked that for every \(g\in\mathcal{G}\) the following identities hold:
\[\left|\left\{\mathbf{f}\in\mathcal{F}:g=\left(\,\cdot\,\right)_{\bullet}\circ\mathbf{f}\right\}\right|=\prod_{i}\binom{\lambda_{i}}{\left(A(\mathbf{a})\right)_{\mathbf{a}\in\mathsf{supp}(A)\,:\,\mathbf{a}_{\bullet}=i}}=\frac{\boldsymbol{\lambda}!}{\prod_{\mathbf{a}\in\mathsf{supp}(A)}A(\mathbf{a})!}\,.\]
Thus,
\[\begin{split}\mathbf{P}\big{[}\mathscr{A}_{T}=A\,\big{|}\,(Y_{t,1})_{t\leq T}\big{]}&=\left|\left\{g\in\mathcal{G}:g(\,\cdot\,)=\mathbf{a}_{T}(\,\cdot\,)_{\bullet}\right\}\right|\,\boldsymbol{\lambda}!\,\mathbf{p}^{\mathsf{col}(A)}\prod_{\mathbf{a}\in\mathsf{supp}(A)}\frac{\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}}{A(\mathbf{a})!}\\ &=\mathbf{1}_{\left\{\mathsf{shape}(\mathscr{A}_{T})=\boldsymbol{\lambda}\right\}}\,\boldsymbol{\lambda}!\,\mathbf{p}^{\mathsf{col}(A)}\prod_{\mathbf{a}\in\mathsf{supp}(A)}\frac{\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}}{A(\mathbf{a})!}\,.\end{split} \tag{4.11}\]
Taking the expectation over \(\left(Y_{t}\right)_{t\leq T}\) on both sides of (4.11), we infer that
\[\mathbf{P}[\mathscr{A}_{T}=A]=\mathbf{P}[\mathsf{shape}(\mathscr{A}_{T})=\boldsymbol{\lambda}]\,\boldsymbol{\lambda}!\,\mathbf{p}^{\mathsf{col}(A)}\prod_{\mathbf{a}\in\mathsf{supp}(A)}\frac{\binom{\mathbf{a}_{\bullet}}{\mathbf{a}}^{A(\mathbf{a})}}{A(\mathbf{a})!}\,. \tag{4.12}\]
By the formula for the marginal distribution of Hoppe's urn model, [13, Eqn. (1)],
\[\mathbf{P}[\mathsf{shape}(\mathscr{A}_{T})=\boldsymbol{\lambda}]=\frac{T!}{\langle\theta\rangle_{T}}\,\frac{\theta^{\mathsf{card}(A)}}{\boldsymbol{\lambda}!}\prod_{\mathbf{a}\in\mathsf{supp}A}\frac{1}{\mathbf{a}_{\bullet}^{A(\mathbf{a})}}\,. \tag{4.13}\]
Combining (4.12) and (4.13), the identity (4.10) follows.
### Consistency
In [16, 17], J.F.C. Kingman introduced a celebrated notion of consistency for stochastic processes on partitions, and showed that a sequence of random partitions \(\left(\boldsymbol{\lambda}_{n}\right)_{n}\) with \(\boldsymbol{\lambda}_{n}\vdash n\) distributed according to \(E_{\theta}\), satisfies this notion. Precisely, if \(n\) objects are partitioned into classes with sizes given by \(\boldsymbol{\lambda}_{n}\), and one object is deleted uniformly at random, independently
of \(\boldsymbol{\lambda}_{n}\), the partition of the \(n-1\) remaining objects has class sizes distributed as \(\boldsymbol{\lambda}_{n-1}\), cf. e.g. [23, p. 146].
In this section, we show that the polychromatic ESF satisfies a similar consistency property. Denote by \(\mathcal{A}\!:=\bigcup_{n}\mathcal{A}_{n}\) the family of all finite multisets on \(\mathds{N}_{*}^{q}\), and set
\[A_{\setminus\mathbf{a},j}\!:=\!\begin{cases}A-\boldsymbol{1}_{\mathbf{a}}& \text{if }\mathbf{a}=\mathbf{e}_{j}\,,\\ A-\boldsymbol{1}_{\mathbf{a}}+\boldsymbol{1}_{\mathbf{a}-\mathbf{e}_{j}}&\text{ otherwise}\end{cases}\,,\qquad\mathbf{a}\in\mathsf{supp}A\,,\ j\in[q]\,.\]
Following [17], we define a system \(S=S_{nm}\), \(n\in\mathds{N}_{1}\), \(m\leq n\), of probability kernels on \(\mathcal{A}\). Firstly, set
\[S(A,B) :=\,\boldsymbol{1}_{A=B}\, A,B\in\mathcal{A}_{n}\,, \tag{4.14a}\] \[S(A,B) :=\begin{cases}\frac{a_{j}A(\mathbf{a})}{n}&\text{if }B=A_{ \setminus\mathbf{a},j}\,,\\ 0&\text{otherwise}\end{cases}\,, A\in\mathcal{A}_{n}\,,\ B\in\mathcal{A}_{n-1}\,, \tag{4.14b}\]
and note that \(S(A,\,\cdot\,)\) is a probability on \(\mathcal{A}_{n-1}\) for every \(A\in\mathcal{A}_{n}\). Secondly, let \(S\) be the unique system of kernels extending (4.14) and satisfying the cocycle relation
\[S(A,C)=\sum_{B\in\mathcal{A}_{m}}S(A,B)\,S(B,C)\,,\qquad A\in\mathcal{A}_{n}\,\ C\in\mathcal{A}_{\ell}\,,\quad\ell<m<n\,. \tag{4.15}\]
Note that \(S_{nm}(A,\,\cdot\,)\) is a probability on \(\mathcal{A}_{m}\) for every \(m\) and every \(A\in\mathcal{A}_{n}\), since it is so for \(m=n-1\) as noted above, and in light of (4.15).
Remark 4.8. Analogously to the case of usual integer partitions, the system \(S\) may be interpreted as the selection of a random sampling (uniform, without replacement) of \(m\) elements from a given \(q\)-colored partition \(A\in\mathcal{A}_{n}\), resulting in the \(q\)-colored partition \(B\in\mathcal{A}_{m}\). The cocycle relation (4.15) is then a consequence of the consistency of random sub-sampling.
Let us now turn to probability measures on \(\mathcal{A}\). For \(n\in\mathds{N}_{1}\) let \(\mathscr{P}(\mathcal{A}_{n})\) be the set of all probability measures on \(\mathcal{A}_{n}\). Define a system \(\sigma\) of maps \(\sigma_{nm}\colon\mathscr{P}(\mathcal{A}_{n})\to\mathscr{P}(\mathcal{A}_{m})\) by
\[\big{(}\sigma_{nm}\mathbf{P}\big{)}(B)\coloneqq\mathbf{P}[S_{nm}(\,\cdot\,,B)]\,,\]
and note that \(\sigma\) satisfies the cocycle relation
\[\sigma_{n\ell}=\sigma_{m\ell}\circ\sigma_{nm}\,,\qquad\ell<m<n\,. \tag{4.16}\]
**Definition 4.9** (Consistency).: We say that a family \(\left(\mathbf{P}_{n}\right)_{n}\) of probability measures \(\mathbf{P}_{n}\) on \(\mathcal{A}_{n}\) is _consistent_ (w.r.t. the system \(\sigma\)) if \(\mathbf{P}_{m}=\sigma_{nm}\mathbf{P}_{n}\) for every \(m\leq n\).
**Theorem 4.10**.: _For every \(\theta>0\) and \(\mathbf{p}\in\Delta^{q-1}\) the family \(\big{(}E_{\theta,\mathbf{p}}^{n}\big{)}_{n}\) is consistent._
Proof.: In light of (4.16), it suffices to verify that \(\sigma_{nm}E_{\theta,\mathbf{p}}^{n}=E_{\theta,\mathbf{p}}^{m}\) for \(m=n-1\) and for every \(n\). To this end, let \(\mathbf{Q}\) be the law of the PHU-partition process \(\mathscr{A}_{\circ}\) on its path space. By Bayes' formula and Proposition 4.7,
\[\mathbf{Q}[\mathscr{A}_{n-1}=B\mid\mathscr{A}_{n}=A] =\frac{\mathbf{Q}[\mathscr{A}_{n}=A\mid\mathscr{A}_{n-1}=B] \mathbf{Q}[\mathscr{A}_{n-1}=B]}{\mathbf{Q}[\mathscr{A}_{n}=A]}\] \[=\frac{\mathbf{Q}[\mathscr{A}_{n}=A\mid\mathscr{A}_{n-1}=B]\,E_{ \theta,\mathbf{p}}^{n-1}(B)}{E_{\theta,\mathbf{p}}^{n}(A)}\,. \tag{4.17}\]
Furthermore, it follows from the definition of \(\mathscr{A}_{\circ}\) that
\[\mathbf{Q}[\mathscr{A}_{n}=A\mid\mathscr{A}_{n-1}=B]=\sum_{\mathbf{a}\in\mathsf{supp}A}\,\sum_{\begin{subarray}{c}j\in[q]:\\ \mathbf{e}_{j}\leq\mathbf{a},\,\mathbf{e}_{j}\neq\mathbf{a}\end{subarray}}\mathbf{1}_{A=B+\mathbf{1}_{\mathbf{a}}-\mathbf{1}_{\mathbf{a}-\mathbf{e}_{j}}}\,\frac{(\mathbf{a}_{\bullet}-1)\,B(\mathbf{a}-\mathbf{e}_{j})\,p_{j}}{\theta+n-1}+\sum_{j=1}^{q}\mathbf{1}_{A=B+\mathbf{1}_{\mathbf{e}_{j}}}\,\frac{\theta\,p_{j}}{\theta+n-1}\,. \tag{4.18}\]
On the other hand, by definition (4.2) of \(E^{n}_{\theta,\mathbf{p}}\),
\[\frac{E^{n-1}_{\theta,\mathbf{p}}(B)}{E^{n}_{\theta,\mathbf{p}}(A)}=\begin{cases}\dfrac{\theta+n-1}{np_{j}}\dfrac{a_{j}}{\mathbf{a}_{\bullet}-1}\dfrac{A(\mathbf{a})}{A(\mathbf{a}-\mathbf{e}_{j})+1}&\text{if }A=B+\mathbf{1}_{\mathbf{a}}-\mathbf{1}_{\mathbf{a}-\mathbf{e}_{j}}\,,\\[1ex] \dfrac{\theta+n-1}{\theta np_{j}}\,A(\mathbf{e}_{j})&\text{if }A=B+\mathbf{1}_{\mathbf{e}_{j}}\,.\end{cases} \tag{4.19}\]
Combining (4.17)-(4.19), we thus have
\[\begin{split}\mathbf{Q}[\mathscr{A}_{n-1}=B\mid\mathscr{A}_{n}=A]&=\sum_{\mathbf{a}\in\mathsf{supp}A}\sum_{\begin{subarray}{c}j\in[q]:\\ \mathbf{e}_{j}\leq\mathbf{a},\,\mathbf{e}_{j}\neq\mathbf{a}\end{subarray}}\mathbf{1}_{A=B+\mathbf{1}_{\mathbf{a}}-\mathbf{1}_{\mathbf{a}-\mathbf{e}_{j}}}\,\frac{a_{j}A(\mathbf{a})}{n}+\sum_{j=1}^{q}\mathbf{1}_{A=B+\mathbf{1}_{\mathbf{e}_{j}}}\,\frac{A(\mathbf{e}_{j})}{n}\\ &=\sum_{\mathbf{a}\in\mathsf{supp}A}\sum_{j}^{q}\mathbf{1}_{A_{\setminus\mathbf{a},j}=B}\,\frac{a_{j}A(\mathbf{a})}{n}=S_{n\ n-1}(A,B)\,.\end{split}\]
Finally, respectively by: the definition of \(\sigma\), the previous equality and Proposition 4.7, the law of total probability, and again Proposition 4.7,
\[\big{(}\sigma_{n\ n-1}E^{n}_{\theta,\mathbf{p}}\big{)}(B) =\sum_{A\in\mathcal{A}_{n}}S_{n\ n-1}(A,B)\,E^{n}_{\theta,\mathbf{ p}}(A)=\sum_{A\in\mathcal{A}_{n}}\mathbf{Q}[\mathscr{A}_{n-1}=B\mid \mathscr{A}_{n}=A]\,\mathbf{Q}[\mathscr{A}_{n}=A]\] \[=\mathbf{Q}[\mathscr{A}_{n-1}=B]=E^{n-1}_{\theta,\mathbf{p}}(B)\,.\qed\]
|
2307.16737 | Nonstandard Hubbard model and electron pairing | We present a non-standard Hubbard model applicable to arbitrary
single-particle potential profiles and inter-particle interactions. Our
approach involves a novel treatment of Wannier functions, free from the
ambiguities of conventional methods and applicable to finite systems without
periodicity constraints. To ensure the consistent evaluation of Wannier
functions, we develop a perturbative approach, utilizing the barrier
penetration coefficient as a perturbation parameter. With the newly defined
Wannier functions as a basis, we derive the Hubbard Hamiltonian, revealing the
emergence of density-induced and pair tunneling terms alongside standard
contributions. Our investigation demonstrates that long-range inter-particle
interactions can induce a novel mechanism for repulsive particle pairing. This
mechanism relies on the effective suppression of single-particle tunneling due
to density-induced tunneling. Contrary to expectations based on the standard
Hubbard model, an increase in inter-particle interaction does not lead to an
insulating state. Instead, our proposed mechanism implies the coherent motion
of correlated electron pairs, similar to bound states within a multi-well
system, resistant to decay from single-electron tunneling transitions. These
findings carry significant implications for various phenomena, including the
formation of flat bands, the emergence of superconductivity in twisted bilayer
graphene, and the possibility of a novel metal-insulator transition. | M. Zendra, F. Borgonovi, G. L. Celardo, S. Gurvitz | 2023-07-31T15:01:16Z | http://arxiv.org/abs/2307.16737v3 | # Non-standard Hubbard model and two-electron pairing
###### Abstract
We study electron correlations in a multi-well system, focusing on the density-induced and pair tunneling terms of the non-standard Hubbard model. These terms are evaluated analytically using a newly developed, ambiguity-free, perturbative approach to the Wannier functions. We show that the density-induced tunneling generated by finite-range repulsive electron interaction can match and eventually suppress the free single-electron tunneling. However, this suppression does not lead to an insulating state, but rather to the propagation of a correlated electron pair due to the pair tunneling term of the non-standard Hubbard model. This pair can be considered as a bound state in the multi-well system, since it cannot decay due to single-electron tunneling transitions. We illustrate this by analyzing the motion of two electrons in a triple-well potential. We expect that such a pairing mechanism can be realized by finite-range repulsive interaction in many other systems.
The standard Hubbard model is represented by a tight-binding Hamiltonian with a single-particle tunneling coupling (\(\Omega\)) and an on-site two-particle interaction energy (\(U\)). It can possibly be extended to include interactions between neighboring sites (the so-called _extended_ Hubbard model). Apart from these, the standard Hubbard model neglects all other interaction terms, including _density-induced tunneling_ (DT) and _pair tunneling_ (PT) terms. The former, often referred to as the _bond-charge interaction_ [1; 2; 3], involves a modification of \(\Omega\) by an effective mean field created by the other particles. The latter is similar to the Cooper PT and contributes to the elastic two-particle tunneling process (co-tunneling), where the total energy of the initial and final states is conserved.
These two terms are potentially important in electron dynamics, both in magnitude and in sign. Indeed, the DT term can even suppress the single-particle coupling, clearly having a crucial impact on electron transport. Moreover, previous studies have mostly treated the co-tunneling process within the framework of the standard Hubbard Hamiltonian [4; 5]. In this context, it arises as a second-order process in \(\Omega\), resulting from two virtual single-particle tunnelings. Consequently, the co-tunneling frequency, \(\sim 2\,\Omega^{2}/U\), vanishes for large interaction \(U\). On the contrary, the PT process keeps the two particles together, preserving their total energy, and it does not vanish with \(U\), becoming the main contribution to the co-tunneling process [6]. For all these reasons, the DT and PT terms cannot be neglected and must be included in the Hubbard Hamiltonian. Such an extension of the standard Hubbard model was investigated in [7], where the possible significant influence of additional terms on the behavior of strongly correlated systems was highlighted. However, it is only in recent years that non-standard Hubbard models have attracted attention, both from experiments with ultracold atoms in optical lattices [1; 3] and from the theoretical side [8].
Currently, the accurate evaluation of non-standard Hubbard terms and their influence on the dynamics of correlated systems remain open problems. Indeed, these terms are closely related to the overlap of Wannier functions from adjacent sites, often accurately well represented by the orbital wave functions within their own sites. However, their overlap strongly depends on the presence of their tails, situated in the neighboring sites. Understanding their precise behavior is crucial, since it can significantly affect both the magnitude and the sign of the non-standard Hubbard terms, on which there is no general consensus yet [1; 7; 9].
In this work, we develop a numerical method for the exact evaluation of the dynamics of a few particles in an \(N\)-well potential. Specifically, we include non-standard terms in the Hubbard Hamiltonian, truncating the Hilbert space to the lowest energy band. We propose an effective method for the evaluation of Wannier functions that involves a modification of the orbital functions. It is based on the two-potential approach (TPA) to tunneling problems, originally developed for tunneling to a continuum [10; 11; 12]. To this end, we consider a particle placed in an \(N\)-site potential chain, given by \(V(x)=\sum_{j=1}^{N}V_{j}(x)\), with \(V(x)\to 0\) in the limit \(x\rightarrow\pm\infty\). The exact eigenstates of the system are obtained from the Schrödinger equation \((\mathcal{K}+V)|\psi_{k}\rangle=\mathcal{E}_{k}\left|\psi_{k}\right\rangle\), where \(\mathcal{K}\) is the kinetic term. We assume that the \(N\) lowest bound states of the system (\(\mathcal{E}_{k}<0\)) form a band well separated from the other eigenvalues. The tight-binding tunneling Hamiltonian \(H\) describing this
band is given by
\[\hat{H}=\sum_{j=1}^{N}\overline{E}_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}+\sum_{j=1} ^{N-1}\overline{\Omega}_{j-1}(\hat{a}_{j}^{\dagger}\hat{a}_{j+1}+\hat{a}_{j+1}^ {\dagger}\hat{a}_{j})\,, \tag{1}\]
where \(\hat{a}_{j}^{\dagger}\,(\hat{a}_{j})\) creates (destroys) a particle at site \(j\) in the basis of the Wannier functions \(\Psi_{j}(x)=\langle x|\hat{a}_{j}^{\dagger}|0\rangle\). By diagonalizing the tunneling Hamiltonian in Eq. (1) through a unitary transformation, we obtain its eigenvalues and eigenvectors, which must be identical to \(\mathcal{E}_{k}\) and \(|\psi_{k}\rangle\), respectively. Therefore, the site energies \(\overline{E}_{j}\), the tunneling energies \(\overline{\Omega}_{j}\), and the Wannier functions \(\Psi_{j}(x)\) are uniquely defined.
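As an illustration of this identification, a minimal numerical sketch (parameter values are arbitrary): build the tridiagonal matrix of Eq. (1) and diagonalize it; the eigenvalues must reproduce the band energies \(\mathcal{E}_{k}\), and the resulting unitary relates the Wannier states to the eigenstates.

```python
import numpy as np

N = 4
E_site = np.full(N, -1.0)             # site energies \bar E_j (arbitrary)
Omega = np.full(N - 1, -0.05)         # tunneling energies \bar Omega_j

H = np.diag(E_site) + np.diag(Omega, 1) + np.diag(Omega, -1)
energies, U = np.linalg.eigh(H)       # must coincide with the band spectrum E_k
print(energies)
# Column k of U expands eigenstate k over the Wannier basis; equivalently,
# Psi_j(x) = sum_k U[j, k] * psi_k(x) recovers the Wannier functions.
```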
We emphasize that we do not require any periodicity of the potential \(V(x)\) and that this approach is valid for any \(N\). In the limit \(N\to\infty\), the procedure is similar to the standard one for constructing Wannier states from the continuous spectrum of Bloch functions subject to periodic boundary conditions. However, our procedure imposes specific boundary conditions on all the eigenstates \(\psi_{k}(x)\) belonging to the bound state spectrum: \(\psi_{k}(x)\sim e^{-\sqrt{-2m\mathcal{E}_{k}}|x|}\) as \(x\to\pm\infty\) (\(\hbar=1\)). These boundary conditions uniquely define the solution of the Schrödinger equation for any energy \(\mathcal{E}_{k}\), thus preventing the ambiguity of the resulting Wannier functions caused by the phase indeterminacy [13].
To determine the tunneling Hamiltonian parameters and the corresponding Wannier functions, we find the eigenvalues and the eigenfunctions of the lowest band spectrum, namely \(\mathcal{E}_{k}\) and \(\psi_{k}(x)\). We perform the calculation using the TPA for tunneling to the continuum [10; 11], extended to a discrete eigenstate spectrum of a multi-well system. For clarity, we demonstrate our method on the symmetric double-well potential \(V(x)=V_{1}(x)+V_{2}(x)\), shown in Fig. 1. Each of the single-well potentials \(V_{1,2}(x)\) vanishes beyond the separation point (\(x_{0}=0\)) [12; 14]. The spectrum of the double-well system consists of two levels \(\mathcal{E}_{1,2}<0\), while each single well contains only one bound state with energy \(E_{0}<0\). By identifying the spectrum of the Hamiltonian in Eq. (1) with that of the double-well system (\(\mathcal{E}_{1,2}\) and \(\psi_{1,2}(x)\)), we obtain the parameters of the tunneling Hamiltonian and the Wannier functions \(\Psi_{j=L,R}(x)\) via the unitary transformation
\[\overline{E}_{0}=(\mathcal{E}_{1}+\mathcal{E}_{2})/2\,,\qquad \overline{\Omega}_{0}=(\mathcal{E}_{1}-\mathcal{E}_{2})/2\,, \tag{2a}\] \[\Psi_{L,R}(x)=[\psi_{1}(x)\pm\psi_{2}(x)]/\sqrt{2}\,. \tag{2b}\]
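The recipe of Eq. (2) is simple to realize on a grid. The following sketch (our own construction, with the same square-well parameters as in Fig. 2 and units \(\hbar=m=1\)) solves the Schrödinger equation by finite differences and assembles the Wannier functions from Eq. (2b):

```python
import numpy as np

# grid and three-point finite-difference kinetic term (hbar = m = 1)
x, dx = np.linspace(-10, 10, 1600, retstep=True)
K = (np.diag(np.full(x.size, 2.0))
     - np.eye(x.size, k=1) - np.eye(x.size, k=-1)) / (2 * dx**2)

# symmetric double square well: wells of depth V0 and width L, barrier width b
V0, L, b = 5.0, 2.0, 0.5
V = np.where((np.abs(x) > b / 2) & (np.abs(x) < b / 2 + L), -V0, 0.0)

E, psi = np.linalg.eigh(K + np.diag(V))
psi = psi / np.sqrt(dx)                   # normalize to unit integral
E1, E2 = E[0], E[1]                       # the two bound states of the band
print("E_bar =", (E1 + E2) / 2, " Omega_bar =", (E1 - E2) / 2)   # Eq. (2a)

# Eq. (2b); overall signs of psi[:, 0] and psi[:, 1] are fixed by the solver
Psi_L = (psi[:, 0] + psi[:, 1]) / np.sqrt(2)
Psi_R = (psi[:, 0] - psi[:, 1]) / np.sqrt(2)
print("<Psi_L|Psi_R> =", np.sum(Psi_L * Psi_R) * dx)             # ~ 0
```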
In general, the spectrum of a system with an arbitrary \(N\)-well potential is obtained by numerical calculations. However, we perturbatively evaluate the eigenspectrum in terms of single orbital functions. For example, for \(N=2\) the orbitals \(\Phi_{0}^{(1,2)}(x)\) are the ground states of the left- and right-well Hamiltonian, respectively
\[H_{1,2}\,\Phi_{0}^{(1,2)}(x)=E_{0}\,\Phi_{0}^{(1,2)}(x)\,, \tag{3}\]
where \(H_{1,2}=\mathcal{K}+V_{1,2}(x)\). Note that \(\Phi_{0}^{(1,2)}(x)=\Phi_{0}(0)\,e^{\mp\sqrt{-2m\,E_{0}}\,x}\) for \(x\gtrless 0\), since \(V_{1,2}(x)=0\) beyond the separation point \(x=0\), see Fig. 1 (b,c). These orbitals can be used as a basis to obtain the eigenstates and the Wannier functions through a perturbative approach. For example, we can consider the left-site orbital \(\Phi_{0}^{(1)}(x)\) as the unperturbed state and the right-well potential \(V_{2}(x)\) as the perturbation (or vice versa). However, this perturbative approach does not include a small parameter, which makes the corresponding expansion unusable.
However, the problem can be solved using the TPA method, which relies on an alternative expansion in powers of the overlap \(\beta\equiv\langle\Phi_{0}^{(1)}|\Phi_{0}^{(2)}\rangle\). In fact, \(\beta\) is a small parameter, since it is proportional to the barrier penetration coefficient \(T_{0}=\exp\left(-\int_{-\overline{x}}^{\overline{x}}\!\left|p(x^{\prime})\right|dx^{\prime}\right)\ll 1\), where \(p(x)\) is the (imaginary) momentum under the barrier and \(\pm\,\overline{x}\) are the classical turning points, indicated in Fig. 1 (more details can be found in [15]). Using this approach, we obtain for the tunneling Hamiltonian parameters in Eq. (2a): \(\overline{E}_{0}=E_{0}+\mathcal{O}(\beta^{2})\), where \(E_{0}\) is given by Eq. (3), and \(\overline{\Omega}_{0}=\Omega_{0}+\mathcal{O}(\beta^{2})\), where \(\Omega_{0}=-\sqrt{2|E_{0}|/m}\,[\Phi_{0}(0)]^{2}\propto T_{0}\) is a simplified (1D) version of the well-known Bardeen formula [16]. Similarly, the eigenstate energies \(\mathcal{E}_{1,2}=E_{\pm}+\mathcal{O}(\beta^{2})\), where \(E_{\pm}=E_{0}\pm\Omega_{0}\). Thus, the parameters of the tunneling Hamiltonian are completely determined by the single-well orbitals.
At first sight we might expect that the eigenfunctions \(\psi_{1,2}(x)\equiv\psi_{\pm}(E_{\pm},x)\) can be obtained from Eq. (2b) by replacing the Wannier functions with the corresponding orbitals \(\Phi_{0}^{(1,2)}(x)\equiv\Phi_{0}^{(1,2)}(E_{0},x)\) in Eq. (3), as follows:
\[\psi_{\pm}(E_{\pm},x)\simeq[\Phi_{0}^{(1)}(E_{0},x)\pm\Phi_{0}^{(2)}(E_{0},x)] /\sqrt{2}\,. \tag{4}\]
However, Eq. (4) shows an inconsistency between the energy arguments of \(\psi_{\pm}(E_{\pm},x)\) and \(\Phi_{0}^{(1,2)}(E_{0},x)\). To overcome this problem, we introduce an energy shift in the orbital functions by replacing the ground state energy \(E_{0}\) with a free parameter \(E<0\). The resulting modified orbitals \(\overline{\Phi}^{(1,2)}(E,x)\) (normalized to unity) are obtained
Figure 1: (a) Symmetric double-well potential \(V(x)\), given by the sum of single-well potentials \(V_{1,2}(x)\) in (b) and (c), where \(\pm\,\overline{x}\) are the classical turning points.
from the Schrödinger equation in Eq. (3), with boundary conditions \(\overline{\Phi}^{(1,2)}(E,x\to\mp\infty)\propto e^{\pm\sqrt{-2m\,E}\,x}\). However, unlike \(\Phi_{0}^{(1,2)}(E_{0},x)\), the modified orbitals \(\overline{\Phi}^{(1,2)}(E,x)\) are defined on two different segments \(\mathcal{X}_{1}=(-\infty,0)\) and \(\mathcal{X}_{2}=(0,\infty)\), respectively, and vanish elsewhere. As a result, they are _non-overlapping_, and therefore _orthogonal_. Replacing \(\Phi_{0}^{(1,2)}(E_{0},x)\) in Eq. (4) with \(\overline{\Phi}^{(1,2)}(E_{\pm},x)\), we obtain
\[\psi_{\pm}(E_{\pm},x)=\left[\overline{\Phi}^{(1)}(E_{\pm},x)\pm\overline{\Phi }^{(2)}(E_{\pm},x)\right]/\sqrt{2}\,, \tag{5}\]
which gives the _exact_ result for \(\psi_{\pm}(E_{\pm},x)\), in contrast with Eq. (4). Indeed, the exact treatment of the Schrödinger equation involves solving it on the two segments and combining the results by imposing the continuity condition at the separation point (\(x=0\)). This condition is automatically satisfied if \(E_{\pm}\) are the energies of the symmetric and anti-symmetric states, respectively. Substituting Eq. (5) into Eq. (2b), we obtain the exact Wannier functions in terms of the modified orbitals:
\[\Psi_{L,R}(x)=\left[\overline{\Phi}_{+}^{(1)}(x)+\overline{\Phi}_{+}^{(2)}(x )\pm\overline{\Phi}_{-}^{(1)}(x)\mp\overline{\Phi}_{-}^{(2)}(x)\right]/2\,, \tag{6}\]
where \(\overline{\Phi}_{\pm}^{(1,2)}(x)\equiv\overline{\Phi}^{(1,2)}(E_{0}+\Omega_{0},x)\). We can expand these wavefunctions in powers of \(\Omega_{0}\) by neglecting \(\mathcal{O}(\Omega_{0}^{2})\) terms (recall that \(\Omega_{0}\propto\beta\propto T_{0}\)), finding
\[\overline{\Phi}_{\pm}^{(1,2)}(x)=\overline{\Phi}_{0}^{(1,2)}(x)\pm\Omega_{0} \,\partial_{E}\overline{\Phi}_{0}^{(1,2)}(x)\,, \tag{7}\]
where \(\overline{\Phi}_{0}^{(1,2)}(x)\equiv\Phi_{0}^{(1,2)}(E_{0},x)\) for \(x\in\mathcal{X}_{1,2}\) and \(0\) elsewhere, and \(\partial_{E}\overline{\Phi}_{0}^{(1,2)}(x)\equiv[\partial\overline{\Phi}^{(1,2)}(E,x)/\partial E]_{E\to E_{0}}\). Substituting Eq. (7) into Eq. (6), we get
\[\Psi_{L,R}(x)=\overline{\Phi}_{0}^{(1,2)}(x)+\Omega_{0}\,\partial_{E} \overline{\Phi}_{0}^{(2,1)}(x)\,. \tag{8}\]
Eq. (8) consists of two _non-overlapping_ terms. The first one describes the Wannier function inside the respective well (\(x\in\mathcal{X}_{1,2}\)), where it is given by the orbital function. The second one describes the tail penetrating into the neighboring well, and it is proportional to \(\Omega_{0}\) and therefore much smaller than the first. Since \(\overline{\Phi}^{(1,2)}(E,x)\) are normalized to unity for any \(E\), we can explicitly prove the orthogonality of the Wannier functions by using \(\partial_{E}\int_{-\infty}^{0}[\overline{\Phi}^{(1)}(E,x)]^{2}\,dx=0\), obtaining
\[\langle\Psi_{L}|\Psi_{R}\rangle=2\,\Omega_{0}\int\limits_{-\infty}^{0}\, \overline{\Phi}_{0}^{(1)}(x)\,\partial_{E}\overline{\Phi}_{0}^{(1)}(x)\,dx=0\,. \tag{9}\]
Note that the orbitals \(\overline{\Phi}_{0}^{(1,2)}(x)\) are nodeless, while the Wannier function tail must change its sign, Eq. (9). Eq. (8) represents our main result for the Wannier functions, which is valid for a general multi-coupled-well system. For example, for the triple-well potential system shown in the inset of Fig. 2, Eqs. (8) become (up to \(\mathcal{O}(\Omega_{0}^{2})\) terms)
\[\Psi_{L,R}(x)=\overline{\Phi}_{0}^{(1,3)}(x)+\Omega_{0}\,\partial _{E}\overline{\Phi}_{0}^{(2)}(x)\,, \tag{10a}\] \[\Psi_{M}(x)=\overline{\Phi}_{0}^{(2)}(x)+\Omega_{0}\,\partial_{E}[ \overline{\Phi}_{0}^{(1)}(x)+\overline{\Phi}_{0}^{(3)}(x)]\,. \tag{10b}\]
Here \(\overline{\Phi}_{0}^{(1,2,3)}(x)\) denote the left-, middle- and right-well modified orbitals, defined on the intervals \((-\infty,x_{1})\), \((x_{1},x_{2})\) and \((x_{2},\infty)\) respectively, and vanishing elsewhere. The separation points \(x_{1,2}\) (such as \(x=0\) for \(N=2\), see Fig. 1) are taken at the centers of the barriers, see Fig. 2. Looking at Eqs. (10), the terms with derivatives are proportional to \(\Omega_{0}\), and describe the Wannier function tail in the neighboring well, while the contribution in the next-to-neighbor well is neglected, as it is of the order of \(\Omega_{0}^{2}\propto T_{0}^{2}\). A detailed derivation for general multi-well systems will be given in a separate work.
In Fig. 2, we compare the Wannier functions \(\Psi_{L,M}(x)\) obtained from Eqs. (10) with both the orbital functions \(\Phi_{0}^{(1,2)}(x)\) from Eq. (3) and the numerical solution of the Schrödinger equation (by symmetry, \(\Psi_{R}(x)=\Psi_{L}(-x)\)). We see that \(\Phi_{0}^{(1,2)}(x)\) approximate very well the corresponding Wannier functions \(\Psi_{L,M}(x)\) within each well, even though they differ significantly on the tails into the neighboring wells.
Now, let us consider two particles, each with coordinates \(x,y\), interacting through a two-body _repulsive_ potential \(V(x-y)>0\). In the basis of the tunneling Hamiltonian, its matrix elements are given by:
\[V_{i^{\prime}j^{\prime},ij}=\int\Psi_{i^{\prime}}(x)\Psi_{j^{\prime}}(y)V(x-y) \Psi_{i}(x)\Psi_{j}(y)\,dx\,dy\,, \tag{11}\]
Figure 2: Left- and middle-well Wannier functions for the squared triple-well potential shown in the inset. Red solid curves correspond to exact calculations, blue dot-dashed curves show the TPA results in Eqs. (10), and black dashed curves show the orbital functions \(\Phi_{0}^{(1,2)}(x)\). The parameters of the triple-well potential are \(L=2\), \(b=0.5\) and \(V_{0}=5\).
where \(\Psi_{i}(x)\) is the Wannier function at site \(i=L,R\) (for simplicity, we consider the symmetric double-well potential in Fig. 1). The interaction potential in Eq. (11) can be decomposed into Hubbard and non-Hubbard terms, corresponding to diagonal (\(i=i^{\prime}\), \(j=j^{\prime}\)) and off-diagonal (\(ij\neq i^{\prime}j^{\prime}\)) matrix elements, respectively. The Hubbard terms can be further separated into the standard Hubbard on-site interaction term \(V_{ii,ii}\equiv U\) (for \(i=j\)) and the _extended_ Hubbard term \(V_{ii,jj}\equiv\overline{U}\) (for \(i\neq j\)). Similarly, the non-Hubbard terms can be separated into the DT and PT terms, denoted as \(\Omega_{1,2}\):
\[\Omega_{1} =\int[\Psi_{L}(x)]^{2}\,\Psi_{L}(y)\,V(x-y)\,\Psi_{R}(y)\,dx\,dy, \tag{12a}\] \[\Omega_{2} =\int\Psi_{L}(x)\Psi_{L}(y)\,V(x-y)\,\Psi_{R}(x)\Psi_{R}(y)\,dx\,dy. \tag{12b}\]
The physical meaning of these terms is evident. Indeed, the DT term in Eq. (12a) represents a single-particle hopping caused by the interaction with a non-tunneling particle, to be added to the single-particle tunneling, resulting in an effective tunneling \(\widetilde{\Omega}\equiv\Omega_{0}+\Omega_{1}\). The PT term in Eq. (12b) describes the direct (\(\Psi_{LL}\to\Psi_{RR}\)) and exchange (\(\Psi_{LR}\to\Psi_{RL}\)) two-particle hopping.
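On a grid, the double integrals in Eqs. (12) reduce to matrix quadratures. A minimal sketch, assuming the arrays `x`, `dx`, `Psi_L`, `Psi_R` produced by the earlier double-well snippet, and written for a generic two-body kernel \(V(x-y)\); the finite-range square kernel used as an example anticipates the one discussed below in the text:

```python
import numpy as np

def pair_terms(x, dx, Psi_L, Psi_R, V):
    """Density-induced (12a) and pair (12b) tunneling amplitudes by quadrature;
    V must be a vectorized two-body kernel V(x - y)."""
    Vxy = V(x[:, None] - x[None, :])     # matrix of V(x_i - y_j)
    rho = Psi_L**2                       # density of the non-tunneling particle
    hop = Psi_L * Psi_R                  # L -> R hopping density
    Omega1 = dx * dx * rho @ Vxy @ hop   # Eq. (12a)
    Omega2 = dx * dx * hop @ Vxy @ hop   # Eq. (12b)
    return Omega1, Omega2

# finite-range square kernel with normalization V_delta = 2 * dbar * Vbar = 1,
# matching the convention used for Fig. 3 (dbar chosen arbitrarily)
dbar = 1.2
Vbar = 1.0 / (2 * dbar)
O1, O2 = pair_terms(x, dx, Psi_L, Psi_R, lambda r: Vbar * (np.abs(r) <= dbar))
print("Omega1 =", O1, " Omega2 =", O2)
```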
We first consider a contact interaction, \(V(x-y)=V_{\delta}\,\delta(x-y)>0\), and evaluate the DT term accordingly. Substituting Eq. (8) into Eq. (12a), we get
\[\Omega_{1}=\Omega_{0}V_{\delta}\,\int\limits_{-\infty}^{0}[\overline{\Phi}_{0 }^{(1)}(x)]^{3}\,\partial_{E}\overline{\Phi}_{0}^{(1)}(x)\,dx\,. \tag{13}\]
As expected, the DT term is \(\propto\Omega_{0}\). Similarly, we find that the PT term is \(\propto\Omega_{0}^{2}\). Note that, if \(\Omega_{1}/\Omega_{0}<0\), the effective tunneling coupling \(\widetilde{\Omega}\) would be suppressed by a sufficiently large \(V_{\delta}\). However, this cannot happen for a contact interaction. We prove this by analyzing Eq. (13) and Eq. (9). The latter is the integral of the product of \(\overline{\Phi}_{0}^{(1)}(x)>0\) (the orbital function) and \(\Omega_{0}\partial_{E}\overline{\Phi}_{0}^{(1)}(x)\) (the tail). These terms coincide at \(x=0\), where both are positive. For \(x<0\), the orbital function starts to increase, while the tail becomes negative. From Eq. (9), positive and negative contributions to the integral must cancel each other. Since the difference between Eq. (9) and Eq. (13) is in the third power of the orbital function \([\overline{\Phi}_{0}^{(1)}(x)]^{3}\), the negative contribution to the integral is amplified with respect to the positive one. Indeed, the value of the orbital \(\overline{\Phi}_{0}^{(1)}(x)\) always decreases when \(x\to 0\). Then \(\Omega_{1}<0\), so that the DT term can never suppress the effective single-particle tunneling \(\widetilde{\Omega}\).
This result changes dramatically for _finite-range_ potential \(V(x-y)=\overline{V}\,\Theta(\overline{d}-|x-y|)\), where \(\overline{d}\) is the interaction range and \(\Theta\) is the Heaviside step function (the contact interaction is recovered for \(\overline{d}\to 0\) and \(\overline{V}\to\infty\), at constant \(V_{\delta}=2\overline{d}\,\overline{V}\)). For simplicity, in the following only this potential is considered, even if similar results can be obtained by using a more physical _screened Coulomb_ potential, as in [17]. From Eq. (12a), we see that \(\Omega_{1}\), as a function of the interaction range, becomes positive for \(\overline{d}\sim L/2\), where \(L\) is the width of the well. Indeed, the main contribution to the integral in Eq. (12a) comes from \(x\sim-L/2\), at the maximum of the orbital function. In this case, \(\Omega_{1}\propto\int_{y_{1}}^{y_{2}}\Psi_{L}(y)\Psi_{R}(y)\,dy\), where \(y_{1,2}=-L/2\mp\overline{d}\). As a result, \(\Omega_{1}\simeq 0\) for \(\overline{d}=L/2\) due to orthogonality, see Eq. (9), and then it increases for \(\overline{d}\gtrsim L/2\), since the interaction would connect the central parts of the two Wannier functions.
We analyze the effect of a finite-range potential on two particles placed in a squared double-well potential. The results for \(\Omega_{1,2}\) are shown in Fig. 3 as a function of the interaction range, for fixed \(V_{\delta}\). As expected, \(\Omega_{1}\) changes its sign for \(\overline{d}\sim L/2\). Since \(\Omega_{0}<0\), the effective single-particle tunneling \(\widetilde{\Omega}\) can in principle be suppressed for some finite interaction range \(\overline{d}\gtrsim L/2\).
Obviously, the disappearance of the single-particle tunneling coupling in the Hubbard model would disrupt electron transport. This could be considered as a 1D analog of a flat band in twisted bilayer graphene systems [17; 18; 19], although with a different origin. In our model, this is caused by a combined effect of repulsive electron interaction and lattice potential, which generates the tunneling transition. However, in contrast to other treatments, the transport will occur via the PT of localized electron pairs that cannot "decay" by single-electron tunneling. We expect such a non-standard PT process to occur whenever the single-particle tunneling is suppressed to zero (as in [18]).
A signature of this process would be a vanishing probability of finding a delocalized electron pair initially localized in one of the two wells. However, since the
Figure 3: DT amplitude \(\Omega_{1}\) (blue curve) and PT amplitude \(\Omega_{2}\) (red curve) as a function of the interaction range. The amplitudes are evaluated by using Eqs. (12), considering the finite-range potential. The parameters of the squared double-well potential are: \(L=2\), \(b=0.5\) and \(V_{0}=5\). The interaction strength is \(V_{\delta}=1\).
energy of such a pair is large, \(U\gg\Omega_{0}\), its decay due to single-electron tunneling would be suppressed even with the standard Hubbard model. Therefore, it may be difficult to distinguish the disappearance of the single-electron occupancy due to the DT term.
The situation is different for two electrons with parallel spins, which cannot occupy the same well, as shown in Fig. 4(a). In this case, only the finite-range part (\(\overline{U}\ll U\)) of the diagonal terms of the electron interaction in Eq. (11) survives. As a result, the single-electron tunneling from the middle to the right (and left) well is not suppressed in the standard Hubbard model. However, if \(\Omega_{0}\) is exactly matched by the non-standard DT term, the electron pair occupying two adjacent wells becomes stable and will move coherently due to the PT term. To show this explicitly, let us consider the quantum dynamics described by the Hamiltonian \(\hat{H}+\hat{V}\), where \(\hat{H}\) is given by Eq. (1) for \(N=3\), and the interaction term \(\hat{V}\) can be written as
\[\begin{split}\hat{V}&=\overline{U}(\hat{n}_{L}\hat{n}_{M}+\hat{n}_{M}\hat{n}_{R})+\Omega_{1}\big[\hat{n}_{L}(\hat{a}_{M}^{\dagger}\hat{a}_{R}+\hat{a}_{R}^{\dagger}\hat{a}_{M})\\&\quad+\hat{n}_{R}(\hat{a}_{M}^{\dagger}\hat{a}_{L}+\hat{a}_{L}^{\dagger}\hat{a}_{M})\big]+\Omega_{2}\big[\hat{n}_{M}(\hat{a}_{L}^{\dagger}\hat{a}_{R}+\hat{a}_{R}^{\dagger}\hat{a}_{L})\big]\,,\end{split}\tag{14}\]
where \(\hat{n}_{i}=\hat{a}_{i}^{\dagger}\hat{a}_{i}\) with \(i=L,M,R\) (equivalently \(1,2,3\); we have omitted spin indices). The first term (\(\overline{U}\)) in Eq. (14) describes the inter-dot Coulomb interaction (_extended_ Hubbard model), while the second and third terms (\(\Omega_{1}\), \(\Omega_{2}\)) describe the DT and PT processes, see Eqs. (12). In the presence of finite-range interaction, the behavior of \(\Omega_{1}\) and \(\Omega_{2}\) for the triple well is the same as in Fig. 3 for the double well.
Consider two electrons, initially occupying two neighboring wells \(\overline{j}\) and \(\overline{j}^{\prime}\), with a wave function written in the basis of the Wannier functions, \(|\Psi^{(\overline{j}\,\overline{j}^{\prime})}(t)\rangle=\sum_{j<j^{\prime}}b_{jj^{\prime}}^{(\overline{j}\,\overline{j}^{\prime})}(t)\,\hat{a}_{j}^{\dagger}\hat{a}_{j^{\prime}}^{\dagger}|0\rangle\), where the upper indices denote the initial state. By solving the time-dependent Schrödinger equation (more details can be found in [20]), we get the different occupancy probabilities \(P_{jj^{\prime}}(t)\). Specifically, the probability of finding both electrons in the wells \(j,j^{\prime}\) is \(P_{jj^{\prime}}(t)=|b_{jj^{\prime}}^{(\overline{j}\,\overline{j}^{\prime})}(t)|^{2}\), while the probability of finding an electron occupying the well \(j\) is \(P_{j}(t)=\sum_{j^{\prime}}|b_{jj^{\prime}}^{(\overline{j}\,\overline{j}^{\prime})}(t)|^{2}\). As the initial state, we choose two electrons placed in the left and middle wells of the triple-well system in Fig. 4 (a), such that \(\overline{j},\overline{j}^{\prime}=L,M\). In Fig. 4 (b,c) we show these probabilities for two different well and barrier widths. In Fig. 4 (c), we choose the geometry of the system so that \(\Omega_{1}\approx-\Omega_{0}\). Therefore, \(P_{LR}(t)\) (blue curve) is strongly suppressed compared to Fig. 4 (b).
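This dynamics can be illustrated with a short numerical sketch. The code below assumes an effective \(3\times 3\) Hamiltonian in the ordered pair basis \(\{|LM\rangle,|LR\rangle,|MR\rangle\}\), with pair-breaking amplitude \(\Omega_{0}+\Omega_{1}\) (cf. Eq. (14)), pair-tunneling amplitude \(\Omega_{2}\), and inter-well interaction \(\overline{U}\) on the adjacent-pair states; the parameter values are illustrative rather than the ones used for Fig. 4, and fermionic sign factors are omitted for this qualitative illustration.

```python
import numpy as np
from scipy.linalg import expm

# Two spin-parallel electrons in three wells (L, M, R); ordered pair basis
# |LM>, |LR>, |MR>.  Illustrative parameters (hbar = 1), not fitted values.
Om0, Om1, Om2, Ubar = -0.05, 0.05, 0.01, 0.2
t_eff = Om0 + Om1                    # effective pair-breaking hop; ~0 here

H = np.array([[Ubar,  t_eff, Om2  ],
              [t_eff, 0.0,   t_eff],
              [Om2,   t_eff, Ubar ]])

psi0 = np.array([1.0, 0.0, 0.0])     # pair initially in the (L, M) wells
for t in np.linspace(0.0, 300.0, 4):
    psi = expm(-1j * H * t) @ psi0   # unitary time evolution
    P = np.abs(psi) ** 2
    print(f"t={t:6.1f}  P_LM={P[0]:.3f}  P_LR={P[1]:.3f}  P_MR={P[2]:.3f}")
# With t_eff = 0, the pair oscillates coherently between |LM> and |MR> via
# Omega_2, while P_LR stays zero (the stable-pair transport of Fig. 4(c)).
```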
In conclusion, we have investigated the conditions under which the total single-electron tunneling coupling in periodic systems can be suppressed by repulsive electron-electron interaction in the framework of a generalized Hubbard model, including density-induced and pair tunneling terms. We found that this cancellation can never occur in the case of contact repulsive interaction. However, for a _finite-range_ repulsive interaction, with a sufficiently large interaction range, the single-particle tunneling transitions can be suppressed by the density-induced term. This leads to the appearance of a _stable_ electron pair that can move along the system due to the pair tunneling term of the non-standard Hubbard model. Since such a stable pair creation mechanism is quite general, we might expect it to appear in higher-dimensional systems with different geometries [21]. In perspective, it would also be interesting to study how pair tunneling affects the Mott-insulator transition in different dimensionalities [22].
FB, MZ and GLC acknowledge the support of the Iniziativa Specifica INFN-DynSysMath. This work has been financially supported by the Catholic University of Sacred Heart and by M.I.U.R. within the Project No. PRIN 20172H2SC4. MZ acknowledges the Ermenegildo Zegna's Group for the financial support.
|
2307.00155 | Modeling, Characterization, and Control of Bacteria-inspired
Bi-flagellated Mechanism with Tumbling | Multi-flagellated bacteria utilize the hydrodynamic interaction between their
filamentary tails, known as flagella, to swim and change their swimming
direction in low Reynolds number flow. This interaction, referred to as
bundling and tumbling, is often overlooked in simplified hydrodynamic models
such as Resistive Force Theories (RFT). However, for the development of
efficient and steerable robots inspired by bacteria, it becomes crucial to
exploit this interaction. In this paper, we present the construction of a
macroscopic bio-inspired robot featuring two rigid flagella arranged as
right-handed helices, along with a cylindrical head. By rotating the flagella
in opposite directions, the robot's body can reorient itself through repeatable
and controllable tumbling. To accurately model this bi-flagellated mechanism in
low Reynolds flow, we employ a coupling of rigid body dynamics and the method
of Regularized Stokeslet Segments (RSS). Unlike RFT, RSS takes into account the
hydrodynamic interaction between distant filamentary structures. Furthermore,
we delve into the exploration of the parameter space to optimize the propulsion
and torque of the system. To achieve the desired reorientation of the robot, we
propose a tumble control scheme that involves modulating the rotation direction
and speed of the two flagella. By implementing this scheme, the robot can
effectively reorient itself to attain the desired attitude. Notably, the
overall scheme boasts a simplified design and control as it only requires two
control inputs. With our macroscopic framework serving as a foundation, we
envision the eventual miniaturization of this technology to construct mobile
and controllable micro-scale bacterial robots. | Zhuonan Hao, Sangmin Lim, M. Khalid Jawed | 2023-06-30T22:12:01Z | http://arxiv.org/abs/2307.00155v1 | # Modeling, Characterization, and Control of Bacteria-inspired Bi-flagellated Mechanism with Tumbling
###### Abstract
Multi-flagellated bacteria utilize the hydrodynamic interaction between their filamentary tails, known as flagella, to swim and change their swimming direction in low Reynolds number flow. This interaction, referred to as bundling and tumbling, is often overlooked in simplified hydrodynamic models such as Resistive Force Theories (RFT). However, for the development of efficient and steerable robots inspired by bacteria, it becomes crucial to exploit this interaction. In this paper, we present the construction of a macroscopic bio-inspired robot featuring two rigid flagella arranged as right-handed helices, along with a cylindrical head. By rotating the flagella in opposite directions, the robot's body can reorient itself through repeatable and controllable tumbling. To accurately model this bi-flagellated mechanism in low Reynolds flow, we employ a coupling of rigid body dynamics and the method of Regularized Stokeslet Segments (RSS). Unlike RFT, RSS takes into account the hydrodynamic interaction between distant filamentary structures. Furthermore, we delve into the exploration of the parameter space to optimize the propulsion and torque of the system. To achieve the desired reorientation of the robot, we propose a tumble control scheme that involves modulating the rotation direction and speed of the two flagella. By implementing this scheme, the robot can effectively reorient itself to attain the desired attitude. Notably, the overall scheme boasts a simplified design and control as it only requires two control inputs. With our macroscopic framework serving as a foundation, we envision the eventual miniaturization of this technology to construct mobile and controllable micro-scale bacterial robots.
bio-inspired robot, tumble control scheme
## I Introduction
The study of flagellated bacteria and microorganisms has provided valuable insights into the development of flagellated robots [1, 2, 3]. These robots mimic the locomotion of flagellated organisms, which rely on the intricate interaction between their helical structures, known as flagella, and the surrounding viscous fluid. By understanding and replicating these propulsion mechanisms, flagellated robots can achieve functional movements such as running, turning, and stopping. Additionally, natural observations have highlighted that different types of bacteria, including uni-flagellated and multi-flagellated species, rely on distinct propulsion mechanisms to achieve specific forms of locomotion [4, 5].
Uni-flagellated bacteria possess a single flagellum filament protruding from the side of the body, enabling them to swim through the rotary motion of the flagellum relative to the cell body [5, 6]. Notably, previous investigations have revealed that when the rotational frequency of the flagella exceeds a threshold, buckling instability occurs, resulting in highly nonlinear swimming trajectories [7, 8]. The body orientation of these robots can be controlled simply by adjusting the spin speed of the flagellum, a mechanism that has been widely employed in uni-flagellated robot designs to achieve motility [9, 10]. However, research on robotic propulsion inspired by multi-flagellated bacteria is relatively limited.
Multi-flagellated bacteria exhibit locomotion through the interplay of their flagella, involving phenomena such as bundle formation, tumbling, and polymorphic transformations, all of which arise from different flagella actuation [6, 11, 12]. Bundle formation occurs when two or more flagella spin in the same direction, generating efficient longitudinal propulsion. The propulsive force is approximately linearly related to the spin speed, and this observation has been effectively utilized in bi-flagellated robots to enable single-direction mobility [13, 14]. The presence of multiple flagella in these robots offers benefits, suggesting alternative approaches for speed enhancement beyond flagellum geometry optimization. However, a major limitation arises in the area of turning or reorienting the body, preventing these robots from swimming freely in space. Recent research has explored changing the spin direction of one or more flagella, gradually reducing the propulsion thrust and generating a turnover torque [4]. This results in rapid tumble events and seemingly erratic body reorientation. Our study models the tumbling event as a predictable phenomenon and aims to incorporate the tumbling mechanism into a bi-flagellated robot to enhance steerability.
To imitate the fluid-structure interplay between the flagellum and low-Reynolds-number flow, computational fluid dynamics models including Resistive Force Theory (RFT), Slender Body Theory (SBT) [15], and Regularized Stokeslets Segments (RSS) [16] are used to predict the motion of uni- and multi-flagellated robots. RFT introduces drag coefficients along the tangential and perpendicular directions of the flagellum. The method is computationally inexpensive but neglects the hydrodynamic interactions between flows induced by different parts of the flagellum. An accurate quantitative analysis requires a non-local hydrodynamic force model that accounts for the interaction between the flows induced by distant parts of the filament. Both SBT and RSS rely on the linearity of the Stokes equations for low-Reynolds-number flow, which
can accurately describe the evolution of flagellum dynamics with long-ranged interactions in a viscous fluid.
To understand the physical phenomenon of flagellated locomotion, we couple the rigid body dynamics with the hydrodynamics model to simulate the robot's trajectory. Position and orientation are observable states of the system, which allows us to study periodic locomotion. As shown in Figure 1, the experiment and simulation show good agreement. The proposed simulation framework successfully reveals the long-ranged hydrodynamic interaction between the two flagella.
Our contributions are as follows. We model and create a macroscopic bi-flagellated robot to study how different actuation modes can switch robot locomotion patterns. A simple bi-flagellated robot that exploits variation in viscosity and structure in its tails can effectively reorient the body. A framework comprising experiments and simulations is developed to study the robot's locomotion. The simulation tool can be used to generate data to explore the parameter space of the tumbling phenomenon. Meanwhile, the robot's dynamics are fully described and can be used to formulate the control scheme. The physics behind the tumbling locomotion is elaborated in detail. The simplicity of the robot and the small number of moving parts can eventually lead to the miniaturization of this robot.
The paper is organized as follows. In Section II, we demonstrate the structural design and experimental setup of the bi-flagellated robot. Section III presents a computational framework to describe the robot dynamics in a viscous fluid. Section IV explores the optimal robot geometry for the best dynamic performance, and we validate our simulation results against the experiments. Section V concludes our work and proposes potential directions for future study.
## II Experimental design
### _Robotic structure_
The robot depicted in Figure 2 (a) consists of a cylindrical head and two right-handed helical flagella with plates attached to the motor shafts. The head has a radius \(r_{h}\) of 2.5 cm and a height \(h\) of 4.3 cm. Inside the head, two tiny brushed DC geared motors are located, each rated at 6 V and capable of a stall current of 1.5 A. The motors are equipped with magnetic encoders and an IMU module. The motor shaft protrudes from the head, and its rotation direction and speed are controlled via PWM using a microcontroller. The flagella are manufactured using rapid prototyping with Polylactic acid (PLA), a type of 3D-printing material. The PLA flagella have a fixed cross-sectional radius \(r_{0}\) of 1.58 mm and a helix radius \(R\) of 6.36 mm. To generate sufficient experimental data for investigating the tumbling mechanism, the helix length \(l\) is varied between 63.6 and 127.2 mm, while the helix pitch \(\lambda\) is varied between 15.9 and 63.6 mm. The PLA material used for the flagella is considered non-deformable, with a Young's modulus \(E\) of 4.107\(\times 10^{9}\) Pa. These design and parameter variations allow for the exploration of different configurations of the flagellated robot and provide a range of experimental data to study the underlying mechanisms of tumbling.
### _Experimental setup_
Our experiments are designed to serve two main purposes: (i) to explore the optimal structure of the flagellum and robot that generates maximum steering efficiency, and (ii) to achieve controllable direction changes. The details of our experimental setup are as follows.
Figure 2 (b) illustrates the experimental apparatus used to validate the numerical simulations developed in the subsequent section. The platform comprises four components: (i) a glycerin tank with dimensions of 122 cm (length) \(\times\) 45 cm (width) \(\times\) 51.5 cm (height), (ii) the bi-flagellated robot, (iii) a steering joint, and (iv) a positioning frame. Glycerin, with a density \(\rho\) of 1.26 g/ml and a viscosity \(\mu\) of 1 Pa\(\cdot\)s at 25\({}^{\circ}\)C, is chosen as the surrounding viscous environment. To facilitate quantitative comparison, we restrict the robot's degree of freedom (DOF) to a single-axis rotation using the steering joint. This setup enables us to characterize the tumbling behaviors associated with different flagellum designs. The DC geared motors located inside the robot's head are connected to an external microcontroller, allowing us to adjust the rotational
Fig. 1: Snapshots from (a) experiments and (b) simulations. Side view of the bi-flagellated rotating around an axis at \(t\in\{0,5,10\}\)s. Two identical flagella rotate in opposite directions, i.e., clockwise (CW) and counterclockwise (CCW), at an angular velocity of \(\omega=280\) rpm.
Fig. 2: Robot schematic and experimental setup. (a) The bio-inspired robot is comprised of two components: (i) a cylindrical head with radius \(r_{h}\) and height \(h\), and (ii) two helical flagellum tails with radius \(R\), length \(l\), pitch \(\lambda\), and cross-section radius \(r_{0}\). (b) The bi-flagellated robot is immersed in glycerin and rotates its body around a steering joint (1-DOF), i.e., the y-axis. Rotations around the other axes and all translations are constrained.
speed and direction. The test platform permits us to vary the rotation speed \(\omega\) within the range of 0 to 30 rad/s, ensuring that the low-Reynolds-number condition is satisfied, i.e., \(Re=\rho\omega Rr_{0}/\mu\leq 0.37\). This condition guarantees that the fluid flow is predominantly governed by viscosity rather than inertia. By operating within the low-Reynolds-number regime, we can accurately investigate the hydrodynamic interactions between the flagella and the surrounding fluid.
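As a quick sanity check of this bound, the Reynolds number can be evaluated directly from the quoted quantities; the back-of-the-envelope sketch below differs from 0.37 only by rounding.

```python
# Low-Reynolds-number check: Re = rho * omega * R * r0 / mu
rho, mu = 1260.0, 1.0          # glycerin density [kg/m^3] and viscosity [Pa.s]
R, r0 = 6.36e-3, 1.58e-3       # helix radius and cross-section radius [m]
omega = 30.0                   # maximum rotation speed [rad/s]

Re = rho * omega * R * r0 / mu
print(f"Re = {Re:.2f}")        # ~0.38, i.e., viscosity-dominated flow
```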
## III Numerical method of bi-flagellated locomotion
The bi-flagellated robot consists of two main components: the helical flagellum and the cylindrical head. In order to simulate the locomotion of this robot in a viscous fluid medium, we develop a numerical model that combines three key components: (i) a kinematic representation for the bi-flagellated mechanism, (ii) Regularized Stokeslet Segments (RSS) to model the long-ranged hydrodynamics forces, and (iii) a forced-head hydrodynamics model to capture the interaction between the flagellum and the head. This section is structured as follows. In Section III-A, we provide a description of the kinematic representation of the helical flagellum and cylindrical head. This representation allows us to characterize the motion of the robot in terms of its shape and orientation. Then, in Section III-B, we explain how we integrate the kinematic representation with the RSS method to compute the hydrodynamic forces exerted on the flagellum. The RSS method considers the interactions between different segments of the flagellum and the surrounding fluid. Next, in Section III-C, we detail the equations of motion (EOM) that govern the dynamics of the bi-flagellated robot. These EOM are derived from the theory of rigid body dynamics, taking into account the forces and torques acting on the flagellum and the head. By solving these equations, we can simulate the motion and behavior of the robot in response to the hydrodynamic forces. Finally, in Section III-E, we discuss the geometric and physical conditions of the problem, including the dimensions and material properties of the flagellum and head. These conditions play a crucial role in determining the behavior and performance of the bi-flagellated robot in the simulated fluid environment.
### _Kinematic representation_
#### III-A1 Helical flagellum
We model the flagellar filament as a perfect helix with radius \(R\), pitch \(\lambda\), and axial length \(l\) (see Figure 2 (a)). A right-handed helix in the Cartesian coordinate system is parameterized as a function of \(s\), i.e.,
\[\mathbf{r}(s)=\left[R\cos\frac{2\pi s}{L},R\sin\frac{2\pi s}{L},\frac{\lambda s }{L}\right],0\leq s\leq l, \tag{1}\]
where \(L=\sqrt{(2\pi R)^{2}+\lambda^{2}}\) is the contour length of one helical turn. In the case of a left-handed helix, the second term has a negative sign. We employ a discretization method to model the kinematics of the helical filament. In the schematic of Figure 3 (a), each discrete helical curve consists of \(N+1\) nodes, i.e., \(\mathbf{n}=[\mathbf{n}_{0},\mathbf{n}_{1},\cdots,\mathbf{n}_{N}]\). We take the first two nodes \(\mathbf{n}_{0}\) and \(\mathbf{n}_{1}\) as the connection to the head. Starting from \(i=2\), the coordinates of the remaining nodes are calculated by taking \(s=(i-2)l/(N-2)\) in Equation 1.
The \(N+1\) nodes correspond to \(N\) edge vectors \(\mathbf{e}^{0},\cdots,\mathbf{e}^{N-1}\), such that \(\mathbf{e}^{i}=\mathbf{x}_{i+1}-\mathbf{x}_{i}\), where \(i=0,\cdots,N-1\). Hereby, we denote edge-associated quantities by superscripts and node-associated quantities by subscripts. Nodal positions constitute the \(3N\)-sized DOF vector, i.e., \(\mathbf{X}=[\mathbf{x}_{0},\cdots,\mathbf{x}_{N}]^{T}\), where the superscript \(T\) denotes transposition.
Because the rigid flagellum can only rotate around a single fixed axis, i.e., z-axis, the angular velocity vector is specified as \(\boldsymbol{\omega}=[0,0,\omega_{z}]\). By defining the rotation axis \(\mathbf{x}_{\text{rotate}}=[0,0,1]\), we can obtain the linear velocity of each node by \(\dot{\mathbf{x}}_{i}=\boldsymbol{\omega}\times(\mathbf{x}_{i}-\mathbf{x}_{ \text{rotate}})\), where \(\times\) denotes the cross product of two vectors. With nodal velocities, we can update the nodal positions at each time step. Rearranging the derivative of DOF vector as \(\mathbf{U}=\dot{\mathbf{X}}=[\dot{\mathbf{x}}_{0},\cdots,\dot{\mathbf{x}}_{N} ]^{T}\), the variable is used to formulate the drag force in RSS (see details in Section III-B).
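The kinematics above translate directly into code. Below is a minimal sketch of the node placement of Equation 1 and the rigid-body nodal velocities; the specific \(\lambda\), \(l\), and \(N\) values are example choices within the ranges of Section II, not the paper's exact configuration.

```python
import numpy as np

def helix_nodes(R, lam, l, N):
    """Nodes n_2..n_N of the discrete right-handed helix of Equation 1
    (the head-connection nodes n_0, n_1 are handled separately)."""
    L = np.sqrt((2.0 * np.pi * R) ** 2 + lam ** 2)  # contour length per turn
    s = np.linspace(0.0, l, N - 1)                  # s = (i - 2) l / (N - 2)
    return np.stack([R * np.cos(2.0 * np.pi * s / L),
                     R * np.sin(2.0 * np.pi * s / L),
                     lam * s / L], axis=1)

def nodal_velocities(x, omega_z, x_rotate=np.zeros(3)):
    """Rigid rotation about the z-axis: xdot_i = omega x (x_i - x_rotate)."""
    omega = np.array([0.0, 0.0, omega_z])
    return np.cross(omega, x - x_rotate)

X = helix_nodes(R=6.36e-3, lam=31.8e-3, l=95.4e-3, N=52)  # example values
U = nodal_velocities(X, omega_z=20.94)                    # feeds the RSS drag model
```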
#### III-A2 Cylindrical head
Without loss of generality, we use a single head node \(\mathbf{n}^{\text{head}}\) in Figure 3 (a.1) to represent the spatial configuration of the bi-flagellated system. With respect to a prescribed fixed coordinate system, denoted as the inertial frame \(\mathbf{x}^{I}:x^{I}-y^{I}-z^{I}\), we can describe the translation and rotation in the rotated coordinate system, designated as the body frame \(\mathbf{x}^{B}:x^{B}-y^{B}-z^{B}\), attached to the head (see Figure 3 (b)). We take the Euler angles to represent the orientation of the head, typically denoted as yaw \(\alpha\), pitch \(\beta\), and roll \(\gamma\) in the \(Z-Y-X\) convention. In the steering-joint setup, we define the pitch angle \(\beta\) as the angle between the axis \(z_{I}\) and the axis \(z_{B}\) to describe the orientation of the bi-flagellated system. Further, to model the free-swimming motion after removing the steering joint, we introduce a quaternion for orientation \(\mathbf{q}=(q_{0},q_{1},q_{2},q_{3})\) (convertible to Euler angles) and axial angular velocities in the body frame \(\boldsymbol{\omega}^{B}=(\omega_{x}^{B},\omega_{y}^{B},\omega_{z}^{B})\), and
define DOF vector \(\mathbf{Q}=[\mathbf{x}^{I},\dot{\mathbf{x}}^{I},\mathbf{q},\boldsymbol{\omega}^{B}]\) to represent spatial information.
### _Regularized Stokeslets Segments_
We use the Regularized Stokeslets Segments (RSS) method to model the viscous drag force experienced by a helical flagellum in motion within a viscous fluid. The relation between the velocity vector \(\mathbf{U}\) (of size \(3N\)) at the node set \(\mathbf{n}\) and the hydrodynamic force vector \(\mathbf{F}\) (of size \(3N\)) applied on them is linear, configured by a geometry-associated matrix \(\mathbf{A}\) (of size \(3N\times 3N\)), i.e.,
\[\mathbf{U}=\mathbf{A}\mathbf{F}, \tag{2}\]
We describe the formulation of matrix \(\mathbf{A}\) on the discretized helical flagellum as follows.
The primary Green's function of Stokes flow is the Stokeslet, which describes the flow associated with a singular point force. Referring to Figure 3 (a), RSS provides a relationship between the velocity \(\dot{\mathbf{x}}_{m}\) at a point \(\mathbf{x}_{m}\) and the forces applied by each node on the fluid such that
\[8\pi\mu\dot{\mathbf{x}}_{m}=\sum_{k=0}^{N-2}\left(\mathbf{A}_{1}^{k}\mathbf{f} _{k}^{h}+\mathbf{A}_{2}^{k}\mathbf{f}_{k+1}^{h}\right), \tag{3}\]
where \(\mathbf{f}_{k}\) is the force vector of size 3 that represents the force applied by the \(k\)-th node onto the fluid. This is equal and opposite to the hydrodynamic force onto the \(k\)-th node. The matrices \(\mathbf{A}_{1}^{k}\) and \(\mathbf{A}_{2}^{k}\) are
\[\begin{split}\mathbf{A}_{2}^{k}&=\left|\mathbf{s}_{k}\right|\Big(\big(T_{1,-1}^{k,k+1}+c^{2}T_{1,-3}^{k,k+1}\big)\mathbf{I}+T_{1,-3}^{k,k+1}\big(\mathbf{r}_{k}\mathbf{r}_{k}^{T}\big)\\&\quad+T_{2,-3}^{k,k+1}\big(\mathbf{r}_{k}\mathbf{s}_{k}^{T}+\mathbf{s}_{k}\mathbf{r}_{k}^{T}\big)+T_{3,-3}^{k,k+1}\big(\mathbf{s}_{k}\mathbf{s}_{k}^{T}\big)\Big),\\\mathbf{A}_{1}^{k}&=\left|\mathbf{s}_{k}\right|\Big(\big(T_{0,-1}^{k,k+1}+c^{2}T_{0,-3}^{k,k+1}\big)\mathbf{I}+T_{0,-3}^{k,k+1}\big(\mathbf{r}_{k}\mathbf{r}_{k}^{T}\big)\\&\quad+T_{1,-3}^{k,k+1}\big(\mathbf{r}_{k}\mathbf{s}_{k}^{T}+\mathbf{s}_{k}\mathbf{r}_{k}^{T}\big)+T_{2,-3}^{k,k+1}\big(\mathbf{s}_{k}\mathbf{s}_{k}^{T}\big)\Big)-\mathbf{A}_{2}^{k},\end{split}\]
where \(\mathbf{x}_{m}\) is the point of measurement, \(c\) is the regularization parameter (from the analysis in [16], \(c=1.031\cdot r_{0}\)), \(\mathbf{I}\) is the 3-by-3 identity matrix, \(\mathbf{r}_{k}=\mathbf{x}_{m}-\mathbf{x}_{k}\) and \(\mathbf{r}_{k+1}=\mathbf{x}_{m}-\mathbf{x}_{k+1}\) are the position vectors from the segment endpoints to the point of measurement, \(\mathbf{s}_{k}=\mathbf{x}_{k+1}-\mathbf{x}_{k}\) is the segment (edge) vector, and the scalar quantities denoted by \(T\), e.g., \(T_{0,-1}^{k,k+1}\), are expressed as follows
\[\begin{split}T_{0,-1}^{k,k+1}&=\frac{1}{\left|\mathbf{s}_{k}\right|}\log\Bigl|\left|\mathbf{s}_{k}\right|R+(\mathbf{x}_{\alpha}\cdot\mathbf{s}_{k})\Bigr|\,\Big|_{0}^{1},\\T_{0,-3}^{k,k+1}&=-\frac{1}{R\left[\left|\mathbf{s}_{k}\right|R+(\mathbf{x}_{\alpha}\cdot\mathbf{s}_{k})\right]}\Big|_{0}^{1},\\T_{1,-1}^{k,k+1}&=\frac{R}{\left|\mathbf{s}_{k}\right|^{2}}\Big|_{0}^{1}-\frac{(\mathbf{x}_{0}\cdot\mathbf{s}_{k})}{\left|\mathbf{s}_{k}\right|^{2}}\,T_{0,-1}^{k,k+1},\\T_{1,-3}^{k,k+1}&=-\frac{1}{R\left|\mathbf{s}_{k}\right|^{2}}\Big|_{0}^{1}-\frac{(\mathbf{x}_{0}\cdot\mathbf{s}_{k})}{\left|\mathbf{s}_{k}\right|^{2}}\,T_{0,-3}^{k,k+1},\\T_{2,-3}^{k,k+1}&=-\frac{\alpha}{R\left|\mathbf{s}_{k}\right|^{2}}\Big|_{0}^{1}+\frac{1}{\left|\mathbf{s}_{k}\right|^{2}}\,T_{0,-1}^{k,k+1}-\frac{(\mathbf{x}_{0}\cdot\mathbf{s}_{k})}{\left|\mathbf{s}_{k}\right|^{2}}\,T_{1,-3}^{k,k+1},\\T_{3,-3}^{k,k+1}&=-\frac{\alpha^{2}}{R\left|\mathbf{s}_{k}\right|^{2}}\Big|_{0}^{1}+\frac{2}{\left|\mathbf{s}_{k}\right|^{2}}\,T_{1,-1}^{k,k+1}-\frac{(\mathbf{x}_{0}\cdot\mathbf{s}_{k})}{\left|\mathbf{s}_{k}\right|^{2}}\,T_{2,-3}^{k,k+1},\end{split}\]
where \(\mathbf{x}_{\alpha}=\mathbf{x}_{k}-\alpha\mathbf{s}_{k}\), and \(R=\sqrt{\left|\mathbf{x}_{\alpha}\right|^{2}+c^{2}}\).
The geometry matrix \(\mathbf{A}\) is formulated by a rearrangement of the block matrices \(\mathbf{A}_{1}^{k}\) and \(\mathbf{A}_{2}^{k}\) at the corresponding node indices. At each time step in the simulation, knowing the position \(\mathbf{X}\) and velocity \(\mathbf{U}\) of the node set \(\mathbf{n}\), we can construct the geometry matrix \(\mathbf{A}\). We then employ the force-velocity relationship in Equation 2 to evaluate the hydrodynamic forces by solving the linear system, \(\mathbf{F}=\mathbf{A}^{-1}\mathbf{U}\) (i.e., \(\mathbf{F}=\mathbf{A}\backslash\mathbf{U}\) in MATLAB notation).
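For intuition, the assembly-and-solve step can be sketched with regularized *point* Stokeslets (the Oseen-type blob of Cortez) in place of the full segment blocks \(\mathbf{A}_{1}^{k},\mathbf{A}_{2}^{k}\); this is a simplified stand-in for illustration, not the paper's exact RSS implementation.

```python
import numpy as np

def geometry_matrix(x, c, mu):
    """Assemble a 3N x 3N matrix A such that U = A F (Equation 2), using
    regularized point Stokeslets as a simplified stand-in for the segment
    blocks A1^k, A2^k above.  x: (N, 3) nodal positions."""
    N = x.shape[0]
    A = np.zeros((3 * N, 3 * N))
    for m in range(N):
        for k in range(N):
            r = x[m] - x[k]
            R = np.sqrt(r @ r + c ** 2)
            block = ((R ** 2 + c ** 2) / R ** 3) * np.eye(3) \
                    + np.outer(r, r) / R ** 3
            A[3 * m:3 * m + 3, 3 * k:3 * k + 3] = block / (8.0 * np.pi * mu)
    return A

# Knowing the nodal velocities U (size 3N), the hydrodynamic forces follow
# from the dense linear solve F = A^{-1} U, i.e., the F = A \ U step:
# F = np.linalg.solve(geometry_matrix(X, c=1.031 * 1.58e-3, mu=1.0), U.ravel())
```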
### _Forced-head hydrodynamics model_
To study the locomotion of a bi-flagellated system under external forces, we build the forced-head hydrodynamics model using the aforementioned kinematic representation. A single head node \(\mathbf{n}^{\text{head}}\) and two interconnected nodes from the helical flagella, \(\mathbf{n}_{0}^{1}\) and \(\mathbf{n}_{0}^{2}\), in Figure 3 describe the configuration of the dynamical system. The head node \(\mathbf{n}^{\text{head}}\) accounts for the hydrodynamic drag force and torque induced by the translation and rotation of the head. The two connection nodes \(\mathbf{n}_{0}^{1}\) and \(\mathbf{n}_{0}^{2}\), at equal distance \(d/2\) from \(\mathbf{n}^{\text{head}}\), account for the resultant hydrodynamic forces and torque generated by the two helical flagella. In this section, we analyze the forces and torques applied to the system and formulate the equation of motion of bi-flagellated locomotion.
Fig. 3: Kinematic representation of the bi-flagellated system. (a) Long-range hydrodynamics. (a.1) Discrete schematic of the bi-flagellated robot. Each helical flagellum is discretized into \(N+1\) nodes. The superscript denotes the helix index, and the subscript denotes the node index. \(\mathbf{n}_{0}^{1}\) and \(\mathbf{n}_{0}^{2}\) connect the flagella with the head node, acting as the application points of the forces generated by each helix and interacting with the head node in the rigid body dynamics. Inset: Notations associated with the flow \(\dot{\mathbf{x}}_{m}\) at point \(\mathbf{x}_{m}\) generated by a line segment \(\mathbf{e}_{k}=\mathbf{x}_{k+1}-\mathbf{x}_{k}\). (a.2) Time series of the normalized hydrodynamic forces \(\hat{F}\) along the x, y, and z directions with flagella spacing distance \(d=0.022\) m and rotation speeds \(\omega_{1}=1\) rad/s and \(\omega_{2}=-1\) rad/s. The sinusoidal wave pattern arises due to the long-ranged hydrodynamic interactions between flows induced by different segments of the flagella. (b) The description of the robot coordinates in the body frame and inertial frame with the Euler angle representation. The green arrow line \(\mathbf{N}\) indicates the line of nodes.
**Hydrodynamics force on head.** We use Stokes' law to compute the hydrodynamic force on the robot head. As the head translates with velocity \(\mathbf{\dot{x}}^{\text{head}}\), the viscous fluid exerts a drag force to resist the translation. Likewise, when the head rotates with angular velocity \(\mathbf{\omega}^{\text{head}}\), the viscous fluid applies a torque to resist that rotation. Stokes' law models the hydrodynamic drag by considering the object as a small sphere. For the cylindrical head, we introduce two prefactors \(C_{t},C_{r}\) to account for the non-sphericity of the object. Therefore, we model the translational drag force as
\[\mathbf{f}_{t}^{\text{head}}\ =-C_{t}\cdot 6\pi\mu r_{h}\mathbf{\dot{x}}^{\text{ head}}, \tag{4}\]
where \(r_{h}\) is the radius of the head, and the rotation drag torque as
\[\mathbf{T}_{r}^{\text{head}}=-C_{r}\cdot 8\pi\mu d_{r}^{3}\mathbf{\omega}^{\text{head}}, \tag{5}\]
where \(d_{r}\) is a reference dimension that accounts for the non-spherical shape. The values of \(C_{t}\) and \(C_{r}\) are determined by drop and rotation tests. In our model, \(\mathbf{f}_{t}^{\text{head}}\) and \(\mathbf{T}_{r}^{\text{head}}\) are applied on the head node \(\mathbf{n}^{\text{head}}\).
**Righting moment due to mass distribution.** In our bi-flagellated system, the head is conditioned to be neutrally buoyant. Therefore, the gravitational and buoyancy forces are balanced, i.e., \(mg=\rho Vg\), where \(m\) is the mass of the head, \(V\) is the volume of the head, \(\rho\) is the density of the fluid medium, and \(g\) is the gravitational acceleration. However, the mass is not uniformly distributed along the robot head. When the center of mass (COM) and the center of geometry (COG) are offset by a displacement \(\mathbf{r}_{m}\), a righting moment tends to restore the robot to its previous attitude after any rotational displacement. The moment can be modeled as
\[\mathbf{T}_{m}^{\text{head}}=mg\mathbf{r}_{m}\sin\beta, \tag{6}\]
where \(\mathbf{r}_{m}\) is the displacement vector pointing from the COM to the COG, and \(\beta\) is the pitch angle.
**Propulsive force from flagellum.** In Section III-B, we evaluate the hydrodynamic forces of each node along the discrete helical flagellum by the method of RSS. In the bi-flagellated system, the two flagella provide the propulsion for the head. The propulsive force of each flagellum is equivalent to the resultant of all its nodal forces, applied on the two connection nodes \(\mathbf{n}_{0}^{1}\) and \(\mathbf{n}_{0}^{2}\), i.e., \(\mathbf{f}_{\mathbf{p}}^{\text{tail}}=\sum_{k=1}^{N}\mathbf{f}_{\mathbf{k}}\).
For simplicity, we denote the resultant forces for the two flagella as \(\mathbf{f}_{\mathbf{p}}^{1}\) and \(\mathbf{f}_{\mathbf{p}}^{2}\). Figure 3(b) provides the time evolution of the resultant forces when the two flagella rotate in opposite directions. The force amplitude shows a sinusoidal pattern resulting from the long-ranged coupling between the two flagella. The opposite signs of the forces along the z-axis cancel the propulsive effect but instead generate a turnover torque because of the spacing distance \(d\), which is the fundamental mechanism of the tumbling phenomenon. The torque applied equivalently on the head node \(\mathbf{n}^{\text{head}}\) is given by
\[\mathbf{T}^{\text{tail}}=\mathbf{r}_{1}\times\mathbf{f}_{\mathbf{p}}^{1}+\mathbf{r}_{2}\times\mathbf{f}_{\mathbf{p}}^{2}, \tag{7}\]
where \(\mathbf{r}_{1}=\mathbf{x}_{0}^{1}-\mathbf{x}^{\text{head}}\) and \(\mathbf{r}_{2}=\mathbf{x}_{0}^{2}-\mathbf{x}^{\text{head}}\).
In summary, the external forces and torques applied on the head include \(\mathbf{f}_{t}^{\text{head}},\mathbf{T}_{r}^{\text{head}},\mathbf{T}_{m}^{\text{head}},\mathbf{f}_{\mathbf{p}}^{\text{tail}},\mathbf{T}^{\text{tail}}\). The governing equation of pivot steering in terms of the pitch angle \(\beta\) is:
\[I_{y}\ddot{\beta}=T_{rz}^{\text{head}}+T_{mz}^{\text{head}}+T_{z}^{\text{tail}}, \tag{8}\]
where the subscript \(z\) denotes the torque component along the z direction, and the governing equation of free swimming in terms of the DOF vector \(\mathbf{Q}\) is
\[m\begin{bmatrix}0\\ \ddot{\mathbf{x}}^{I}\end{bmatrix} =\mathbf{q}\otimes\begin{bmatrix}0\\ \mathbf{f}_{t}^{\text{head}}+\mathbf{f}_{\mathbf{p}}^{1}+\mathbf{f}_{\mathbf{p} }^{2}\end{bmatrix}\otimes\mathbf{q}^{*}, \tag{9}\] \[\dot{\mathbf{q}} =\frac{1}{2}\mathbf{q}\otimes\begin{bmatrix}0\\ \omega^{B}\end{bmatrix},\] \[\mathbf{J}\dot{\omega}^{B} =-\omega^{B}\times\mathbf{J}\omega^{B}+\mathbf{T}_{r}^{\text{head }}+\mathbf{T}_{m}^{\text{head}}+\mathbf{T}^{\text{tail}},\]
where \(\mathbf{q}^{*}\) is the conjugate of \(\mathbf{q}\), and \(\mathbf{J}=\text{diag}(I_{x},I_{y},I_{z})\) is the moment-of-inertia matrix.
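To make the pivot-steering dynamics of Equation 8 concrete, the sketch below integrates \(I_{y}\ddot{\beta}=T_{rz}^{\text{head}}+T_{mz}^{\text{head}}+T_{z}^{\text{tail}}\) with a prescribed constant force couple; all numerical values are illustrative placeholders rather than the calibrated parameters of the robot.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only)
Iy, m, g, r_m = 1e-4, 0.2, 9.81, 5e-3    # inertia, mass, gravity, COM-COG offset
Cr, mu, h, d = 1.0, 1.0, 4.3e-2, 2.2e-2  # drag prefactor, viscosity, head size, spacing
f_couple = 0.02                          # constant (f_p^1 - f_p^2)_z from the flagella

def rhs(t, state):
    beta, beta_dot = state
    T_tail = f_couple * d / 2.0                        # turnover torque from the flagella
    T_r = -8.0 * np.pi * Cr * mu * h ** 3 * beta_dot   # rotational drag, Eq. (5)
    T_m = -m * g * r_m * np.sin(beta)                  # righting moment, Eq. (6)
    return [beta_dot, (T_tail + T_r + T_m) / Iy]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=1e-2)
print(f"steady pitch ~ {np.degrees(sol.y[0, -1]):.2f} deg")
# The final value matches beta_ss = (f_p^1 - f_p^2) d / (2 m g r_m) of Eq. (10).
```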
### _Control scheme of pivot tumbling_
To study the controllability of tumbling, we rewrite Equation 8 as a state-space model by defining the state vector \(\mathbf{x}\triangleq[\beta,\dot{\beta}]\) and the control input vector \(\mathbf{u}\triangleq[\mathbf{f}_{\mathbf{p}}^{1},\mathbf{f}_{\mathbf{p}}^{2}]\) (assuming small \(\beta\), such that \(\sin(\beta)\approx\beta\) holds):
\[\dot{\mathbf{x}}(t)=\mathbf{A}\mathbf{x}(t)+\mathbf{B}\mathbf{u}(t),\quad\mathbf{x}(0)=\mathbf{x}_{0},\]
where
\[\mathbf{A}=\begin{bmatrix}0&1\\ -\dfrac{mgr_{m}}{I_{y}}&-\dfrac{8\pi C_{r}\mu h^{3}}{I_{y}}\end{bmatrix},\quad\mathbf{B}=\frac{d}{2I_{y}}\begin{bmatrix}0&0\\ 1&-1\end{bmatrix}\]
The system is asymptotically stable because the eigenvalues of matrix \(\mathbf{A}\) have negative real parts. The states of the system asymptotically converge to the steady condition
\[\beta_{\text{ss}}=\frac{(\mathbf{f}_{\mathbf{p}}^{1}-\mathbf{f}_{\mathbf{p}}^{2 })d}{2mg\mathbf{r}_{m}},\quad\dot{\beta}_{\text{ss}}=0 \tag{10}\]
Therefore, to realize the desired pitch angle \(\beta_{\text{ref}}\), we require \(\mathbf{f}_{\mathbf{p}}^{1}\) and \(\mathbf{f}_{\mathbf{p}}^{2}\) to satisfy the conditions below
\[\begin{split}\mathbf{f}_{\mathbf{p}}^{1}+\mathbf{f}_{\mathbf{p}}^{2 }&=0,\quad\text{(max torque)}\\ \dfrac{(\mathbf{f}_{\mathbf{p}}^{1}-\mathbf{f}_{\mathbf{p}}^{2})d}{ 2mg\mathbf{r}_{m}}&=\beta_{\text{ref}},\quad\text{(steady state value)}\\ \|\mathbf{f}_{\mathbf{p}}^{1}\|,\|\mathbf{f}_{\mathbf{p}}^{2}\|&\leq \mathbf{f}_{\max}.\quad\text{(effective propulsion)}\end{split} \tag{11}\]
However, to implement an actual controller for the propulsion forces \(\mathbf{f}_{\mathbf{p}}^{1}\) and \(\mathbf{f}_{\mathbf{p}}^{2}\), we need more knowledge of the mechanism of flagellated propulsion. In Section IV, we characterize the propulsion in terms of the flagellum geometry and rotation speed.
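Once that characterization is available, the conditions in Equation 11 reduce to simple algebra. A minimal sketch, assuming the linear propulsion model \(\mathbf{f}_{\mathbf{p}}^{i}=K_{i}\omega_{i}\) introduced in Section IV, with placeholder gains and physical constants:

```python
import numpy as np

# Placeholder constants; K1, K2 come from the propulsion characterization
m, g, r_m, d = 0.2, 9.81, 5e-3, 2.2e-2
K1 = K2 = 1e-3          # N per (rad/s), illustrative
f_max = 0.5             # effective-propulsion bound

beta_ref = np.radians(35.0)
f1 = m * g * r_m * beta_ref / d   # steady-state condition of Eq. (11)
f2 = -f1                          # max-torque condition f1 + f2 = 0
assert abs(f1) <= f_max and abs(f2) <= f_max
omega1, omega2 = f1 / K1, f2 / K2
print(f"omega1 = {omega1:.1f} rad/s, omega2 = {omega2:.1f} rad/s")
```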
### _Definition of the problem_
The general framework introduced above for the forced-head dynamics is now applied to generating bi-flagellated locomotion. We provide specifics on the geometry and physical parameters of this problem. The flagellum is chosen to be a rigid right-handed helical filament with Young's modulus \(E\). The geometrical parameters describing the helix structure
include the helix pitch \(\lambda\), helix radius \(R\), axial length \(l\), and cross-sectional radius \(r_{0}\). The values of the parameters in Table I are chosen to match the laboratory experiments described in Section II. We use a dimensionless scheme, except for several fundamental variables, to generalize the results and make a valid comparison between macroscopic and microscopic mechanisms [17]. The procedure is introduced as follows.
The helical flagellum is connected to the head at one extremity, where it is rotated counterclockwise with a prescribed angular velocity \(\omega\). The two flagella are spaced a specific distance \(d\) apart. Hereafter, we normalize the spacing distance by the helix radius \(R\), such that the normalized distance is \(\bar{d}=d/R\), where the overbar symbol \(\bar{\cdot}\) denotes normalized variables. Likewise, the geometrical parameters that describe the flagellum structure include \(\bar{\lambda}=\lambda/l\) and \(\bar{R}=R/\lambda\). The input to the bi-flagellated system is the angular velocity \(\omega\) as a function of time. The propulsive thrust \(F\) is the z component of \(\mathbf{f}_{\mathbf{p}}^{\text{tail}}\) and the turnover torque \(T\) is the y component of \(\mathbf{T}^{\text{tail}}\), with the normalizations \(\bar{F}=F/(\mu\omega RL)\) and \(\bar{T}=T/(\mu\omega R^{2}L)\), where \(\mu\) denotes the viscosity of the fluid. This dimensionless representation allows for generality across length scales in interpreting our findings.
## IV Results and discussion
The bi-flagellated system can undergo different motions by varying the actuation modes of the two motors. In this section, we study the mechanism of the tumbling behavior. We first explore the regions of the flagellum-geometry and robot-structure parameter space that enhance direction change. We then show a bi-flagellated robot that can efficiently reorient its body by tumbling.
### _Four distinct locomotion patterns_
Since two flagella are rotated by two motors separately, we can propose four different actuation combinations according to the rotation direction of the two motors, i.e.,
1. \(\omega_{1}>0\), \(\omega_{2}<0\)
2. \(\omega_{1}<0\), \(\omega_{2}>0\)
3. \(\omega_{1}<0\), \(\omega_{2}<0\)
4. \(\omega_{1}>0\), \(\omega_{2}>0\)
where we denote \(\omega_{i}>0\) as counterclockwise rotation viewed from the top. The magnitudes of the angular velocities are equal regardless of the rotation direction, i.e., \(|\omega_{1}|=|\omega_{2}|\). With the numerical simulation tool of Section III, we explore all the possible locomotion patterns from the above combinations.
Figure 4 (a) demonstrates the four different locomotion patterns induced by the two flagella, including left turn, right turn, upward translation, and downward translation. The corresponding plots of the force field in Figure 4 (b) interpret the dynamic mechanism behind each motion. For the right-handed helix, a counterclockwise rotation yields a resultant force pointing obliquely downward, while a clockwise rotation yields an upward force, as the dark red arrows show. Rotating the two flagella in the same direction ensures that the two resultant forces point up or down simultaneously, but not precisely in the same direction due to the hydrodynamic coupling. On the contrary, rotating the two flagella in opposite directions makes the resultant forces act as a force couple that generates a turnover torque. If the torque is large enough to overcome the intrinsic inertia of the head, a directional change takes place with time evolution.
Fig. 4: Locomotion patterns of the bi-flagellated robot. The robot can turn and translate when rotating the two flagella in different modes. We denote \(\omega>0\) when a flagellum rotates counterclockwise. (a.1) Left turn when \(\omega_{1}>0,\omega_{2}<0\), (a.2) right turn when \(\omega_{1}<0,\omega_{2}>0\), (a.3) upward translation when \(\omega_{1}<0,\omega_{2}<0\), (a.4) downward translation when \(\omega_{1}>0,\omega_{2}>0\). (b.1)-(b.4) The hydrodynamic forces applied on the two flagella under the corresponding rotation modes (a.1)-(a.4). The light red arrows represent the force directions applied on the nodes, and the dark red arrows represent the resultant force directions applied on the extremities of the flagella.
Fig. 5: Geometrical parameters determine the magnitude of the propulsive thrust and turnover torque. (a) Dependence of the normalized propulsive force \(\bar{F}\) (log-scaled) on both the normalized pitch \(\lambda/l\) and the normalized radius \(R/\lambda\). (b) Dependence of the normalized turnover torque \(\bar{T}\) (log-scaled) on both the normalized pitch \(\lambda/l\) and the normalized radius \(R/\lambda\). The upward triangle symbols in (a) and (b) correspond to Table II. The circled red marker corresponds to the bi-flagellated robot. The red and green pentagram markers correspond to \(l=1600R\) and \(l=6.25R\), respectively. (c) Normalized turnover torque \(\bar{T}\) as a function of the normalized flagella spacing distance \(d/R\). (d) Normalized propulsive thrust \(\bar{F}\) as a function of the normalized flagella spacing distance \(d/R\). Insets in (c) and (d): the normalized resultant force of the two flagella as a function of the normalized flagella spacing distance \(d/R\). A non-linear relationship emerges when the two flagella are in proximity, i.e., \(d/R<5\), due to the hydrodynamic coupling between the two flagella.
### _Design space for optimal flagellum geometry_
Thus far, our findings on the bi-flagellated actuation and corresponding locomotion patterns have brought insight into the run-and-turn behaviors of bacteria. Next, we perform a broader exploration of the parameter space for the structure of the helical flagellum and robot, emphasizing the ranges relevant to natural multi-flagellated cells. We use the normalized propulsive thrust \(\bar{F}\) and turnover torque \(\bar{T}\) to represent propulsion and steering efficiency. \(\bar{F}\) characterizes the propulsion ability of a single helical flagellum. In contrast, \(\bar{T}\) represents the ability to change direction via the force couple, accounting for the long-ranged hydrodynamic effect. A systematic parametric study is performed to quantify the dependence of these variables on the geometry and structure parameters.
In Figure 5 (a) and (b), we plot the magnitudes of \(\bar{T}\) and \(\bar{F}\) (colorbar) on a log scale against the normalized pitch and normalized radius at \(\omega=20.94\) rad/s. We set the helix radius \(R\) to a constant value of 0.0064 m and compute the helix pitch \(\lambda\) and length \(l\) from the corresponding normalized values. In the plot of torque, we keep the distance between the two flagella constant at \(d=3R\) to eliminate the effect of the spacing distance \(d\). The two phase diagrams share commonalities in the geometrical parameters. We find that the two quantities increase as the normalized pitch and radius increase. The highest values are located in the bottom-left region, indicating the maximum torque and thrust. However, that region represents a flagellum elongated with respect to the radius, i.e., \(l=1600R\), which is less common in bacteria due to its poor maneuverability and high energy cost. From the parameter distribution of several species of bacteria in Table II, shown as the upward triangles, we learn that the optimal geometry, located in the bottom-right region, is a trade-off between dynamic performance and dimensionality. Therefore, as the red marker shows, we take \(\lambda/l=0.33\) and \(R/\lambda=0.2\) as representative values.
In Figure 5 (c) and (d), we plot the magnitudes of \(\bar{T}\) and \(\bar{F}\) as functions of the normalized spacing distance \(d/R\), at the representative geometrical values. As \(d/R\) increases, \(\bar{T}\) and \(\bar{F}\) monotonically increase, but the curves become non-linear due to hydrodynamics when the two flagella are in proximity. Intriguingly, from the insets of the two plots, we see that the long-ranged hydrodynamics exerts different effects depending on how the two flagella rotate. Hydrodynamics escalates the force magnitude when they rotate in opposite directions but decreases the magnitude when they rotate in the same direction. Since torque is the product of force and moment arm, its magnitude remains almost the same when the normalized spacing distance \(d/R\) increases from 2 to 3.5. The result implies that the maximum propulsive force and turnover torque occur when the two flagella are spaced infinitely far apart. However, a long spacing distance is not preferable for either bacteria or a bio-inspired robot. The multi-flagellated system must compromise between dimension and propulsion efficiency. Here, we take \(d/R=3.5\) in our bi-flagellated system to ensure a proper size of the head.
### _Analysis on tumbling process_
Toward validating the numerical simulations presented in Section III, we now perform a direct quantitative comparison with experimental results using the apparatus described in Section II. Emphasis is given to the evolution of the pitch angle \(\beta\), the cumulative result of the applied forces and torques.
In this section, we investigate how the pitch angle \(\beta\) evolves with different flagellum parameters, including the normalized pitch \(\lambda/l\) and rotation speed \(\omega\). In Figure 6(a), we plot \(\beta\) as a function of time \(t\), with the reference values \(\lambda/l=0.33\), \(R/\lambda=0.2\), \(d/R=3.5\). The pitch angle keeps increasing with time and reaches a maximum value, denoted as \(\beta_{\max}\). The magnitude of \(\beta_{\max}\) is proportional to the rotation speed of the flagella, which is ensured by the steady condition in Equation 10. Therefore, although the turnover torque is not measured directly in the experiment, we can use \(\beta_{\max}\), i.e., \(\beta_{\rm ss}\), as an indicator of the torque magnitude. Figure 6 (b) plots the relationship between \(\beta_{\rm ss}\) and the rotation speed \(\omega\), showing excellent agreement between experiments and simulations. We learn that the magnitude of the torque generated by the forces \(\mathbf{f_{p}^{1}}\) and \(\mathbf{f_{p}^{2}}\) is linear in the rotation speed of the flagellum.
We employ our numerical simulations to explore the effect of \(\lambda/l\), in comparison with experiments, while keeping the angular velocity of the flagellum at \(\omega=20.94\) rad/s, for which the previous results showed a significant direction-change effect. Figure 6(c) shows a good match between experiment and simulation for the dependence of \(\beta_{\max}\) on \(\lambda/l\). The agreement validates the result in Figure 5(a) on the relationship between the turnover torque and the flagellum geometry.
### _Attitude control of bi-flagellated robot_
The previous sections show that the propulsion force and turnover torque are associated with the flagellum structure \(d/R,R/\lambda,\lambda/l\) and rotation speed \(\omega\). As for the control problem, it is not feasible to change the structure-related parameters to vary the propulsion force and torque during operation. Therefore, we take the rotation speeds of the two flagella, \(\omega_{1},\omega_{2}\), as the actual control variables. Figure 6(b) illustrates that the torque is linear in the rotation speed for a given flagellum structure. Through our computational framework, we evaluate the propulsion forces as \(\mathbf{f_{p}^{1}}=K_{1}\omega_{1},\mathbf{f_{p}^{2}}=K_{2}\omega_{2}\), as shown in Figure 7(a). This allows us to realize the control scheme described in Section III-D. We showcase tracking of a constant attitude angle \(\beta_{\text{ref}}=35^{\circ}\) in both simulation and experiment. By solving Equation 11, we obtain \(\omega_{1}=201.6\) rpm, \(\omega_{2}=-201.6\) rpm. Then we set the
rotation speeds of the two flagella to these values, and the pitch angle \(\beta\) evolves as shown in Figure 7(b).
## V Conclusions and future work
In conclusion, we present a bi-flagellated mechanism and a numerical simulation framework for studying bacterial tumbling behavior. A dimensionless scheme generalizes our results to the bacterial scale. The framework is used to explore the relationship between the steering ability and the structural parameters of the bi-flagellated system. The attitude control scheme enables us to control the orientation of the robot.
Directions for future work include: (i) formulating an optimal control policy for the free-swimming robot, and (ii) developing simulation tools for soft, elastic flagella, accounting for contact effects between flagella.
|
2306.17700 | Beyond Neural-on-Neural Approaches to Speaker Gender Protection | Recent research has proposed approaches that modify speech to defend against
gender inference attacks. The goal of these protection algorithms is to control
the availability of information about a speaker's gender, a privacy-sensitive
attribute. Currently, the common practice for developing and testing gender
protection algorithms is "neural-on-neural", i.e., perturbations are generated
and tested with a neural network. In this paper, we propose to go beyond this
practice to strengthen the study of gender protection. First, we demonstrate
the importance of testing gender inference attacks that are based on speech
features historically developed by speech scientists, alongside the
conventionally used neural classifiers. Next, we argue that researchers should
use speech features to gain insight into how protective modifications change
the speech signal. Finally, we point out that gender-protection algorithms
should be compared with novel "vocal adversaries", human-executed voice
adaptations, in order to improve interpretability and enable before-the-mic
protection. | Loes van Bemmel, Zhuoran Liu, Nik Vaessen, Martha Larson | 2023-06-30T14:26:49Z | http://arxiv.org/abs/2306.17700v1 | # Beyond Neural-on-Neural Approaches to Speaker Gender Protection
###### Abstract
Recent research has proposed approaches that modify speech to defend against gender inference attacks. The goal of these protection algorithms is to control the availability of information about a speaker's gender, a privacy-sensitive attribute. Currently, the common practice for developing and testing gender protection algorithms is "neural-on-neural", i.e., perturbations are generated and tested with a neural network. In this paper, we propose to go beyond this practice to strengthen the study of gender protection. First, we demonstrate the importance of testing gender inference attacks that are based on speech features historically developed by speech scientists, alongside the conventionally used neural classifiers. Next, we argue that researchers should use speech features to gain insight into how protective modifications change the speech signal. Finally, we point out that gender-protection algorithms should be compared with novel "vocal adversaries", human-executed voice adaptations, in order to improve interpretability and enable before-the-mic protection. Code is available at [https://github.com/Loes5307/VocalAdversary2022](https://github.com/Loes5307/VocalAdversary2022)
Loes van Bemmel\({}^{1,2,3}\), Zhuoran Liu\({}^{1}\), Nik Vaessen\({}^{1}\), Martha Larson\({}^{1,3}\)

\({}^{1}\)Institute for Computing and Information Sciences, \({}^{2}\)Department of Artificial Intelligence, \({}^{3}\)Center for Language Studies, Radboud University Nijmegen, the Netherlands

Index terms: adversarial speech, gender inference, neural classifiers, interpretability, attribute inference
## 1 Introduction
Recently, researchers have proposed approaches to protect spoken audio against gender inference attacks [1, 2, 3]. The aim of these approaches is to protect speakers' privacy by modifying their speech signal in a way that impedes the ability of a classifier to infer their gender. The modified speech must retain its utility, which is generally taken to mean that it must remain understandable to people and have minimal impact on automatic speech recognition (ASR).
This paper is motivated by our observation that current research devoted to developing and testing gender protection algorithms is overwhelmingly _neural-on-neural_. In other words, protective perturbations are generated with a neural approach and also tested against neural gender inference attacks. The main contributions of this paper are experimental results demonstrating the importance and usefulness of classic, non-neural speech features in attacking and analyzing gender protection approaches, and motivating "vocal adversaries", non-neural protection created by the human voice.
The importance of going beyond neural-on-neural approaches is illustrated by the gender inference attacks in Fig. 1. For the original, unperturbed data (left side), a simple classifier based on a single feature (mean pitch) is outperformed by a neural classifier (WavLM). However, for data perturbed with a neural adversary (right side) the situation changes dramatically. The neural classifier can no longer correctly classify the speech samples, while the simple classifier maintains its performance. If researchers only test a neural classifier, as is common practice, e.g., [1, 2, 3], the vulnerability of the gender protection to a simple classifier based on speech features would go unnoticed.
The context of our work is the privacy threat represented by paralinguistic information inherent in speech, which has been gaining widespread attention recently [1, 4]. Gender is a privacy-sensitive attribute and concerns about gender classification being used for invasive or harmful targeting are growing [5, 6]. Further, targeting on the basis of binary notions of gender represents a grave reduction of the sociocultural concept of gender [7, 8]. We aim to move research on gender privacy protection closer to the real world. This aim motivates our focus on protecting the raw signal, rather than speech representations, as has been studied by [9, 10, 11]. It also pushes
Figure 1: Gender prediction accuracy on the VoxCeleb2 test set for 1) a single-feature classifier (linear ridge using mean pitch), and 2) a neural classifier (WavLM). Neural perturbations (reference model WavLM) do not affect the simple classifier, but highly impact the performance of the neural classifier (rightmost bar). Classifiers are trained on LibriSpeech.
us to look beyond neural attacks to extend the threat models under investigation to include non-neural modifications and before-the-mic scenarios. Initial efforts in these directions, and the closest work to our own, are [3], who tested a simple non-neural pitch-shifting gender protection approach, and [12], who investigated before-the-mic speech protection created by having speakers speak through a tube. In contrast, in our work we propose, for the first time, "vocal adversaries", speakers speaking with adaptations created using their own voices.
The paper is structured around three sets of experiments corresponding to the three ways in which we propose that it is important for speaker gender protection research to go beyond current neural-on-neural approaches. In Sec. 2, we argue that researchers should be aware of the effect shown in Fig. 1 and investigate gender inference attacks that use speech features historically developed by speech scientists, alongside neural classifiers. In Sec. 3, we demonstrate how researchers can make use of speech features to seek insight into the ways in which protective modifications change the speech signal. In Sec. 4, we argue that neural approaches should not be considered the sole source of gender protection. Rather, we propose and introduce the promising "vocal adversaries", human executed voice adaptations.
## 2 Beyond Neural Attacks
In this section, we demonstrate the vulnerability of typical neural gender protection to both neural classifiers and a classifier that uses classic speech features.
### Experimental Setup
**Data** Table 1 presents the data for Male (M) and Female (F) speech. For the LibriSpeech [13] data set, we use the train-clean-100 (LS100h) subset as well as all training data (LS960h). LibriSpeech is known to be relatively 'clean', meaning that the recordings do not contain significant background noise. For Voxceleb [14], which is known as a noisier dataset, the Voxceleb2 development set (designated as 'vox') is used for training and the Voxceleb2 test set ('Voxtest') is used for testing. All audio recordings are in English, have a sample rate of 16kHz and are normalized to the [0,1] range. For testing we pad or cut each utterance to the first 6 seconds, similar to previous work [1].
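The 6-second padding/cutting step can be written, for instance, as follows (a sketch assuming mono 16 kHz waveforms stored as PyTorch tensors):

```python
import torch
import torch.nn.functional as F

def first_six_seconds(wav: torch.Tensor, sr: int = 16000) -> torch.Tensor:
    """Pad or cut a waveform of shape [..., T] to exactly the first 6 s."""
    target = 6 * sr
    if wav.shape[-1] >= target:
        return wav[..., :target]
    return F.pad(wav, (0, target - wav.shape[-1]))  # zero-pad at the end
```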
**Neural classifiers** We train two neural models for gender classification, which are later also used to create the neural perturbations. M5 [15] is a convolutional neural network consisting of four convolutional layers with batch normalization, ReLU activation and a pooling layer, followed by a fully connected (FC) layer to classify gender. WavLM [16] is a self-supervised pre-trained transformer network that achieves state-of-the-art results on the SUPERB benchmark. 1 To perform gender classification with WavLM, we take the average of the output sequence from the transformer, and add three FC layers for classification. M5 is trained from scratch on either LS100h, LS960h or vox for gender classification. WavLM's backbone is pre-trained on LS960h,2 then it is fine-tuned on either LS100h, LS960h, or vox for the gender classification task. We use Adam with a maximum learning rate (LR) of 1e-4 for M5 and 1e-5 for WavLM. All models are trained for 50k steps, with a cyclic learning rate schedule (triangular, 12.5k steps per cycle, minimum LR 1e-8). We use a batch size of 32 audio files and use raw waveforms as input. From each file, we randomly select a 3-second chunk for training. During evaluation, the first 6 seconds of the fragment are used. For WavLM, we freeze everything but the last classification layers for the first cycle, before we unfreeze the transformer network. The feature extraction CNN of WavLM is kept frozen for the whole training duration.
Footnote 1: [https://superbbenchmark.org/leaderboard](https://superbbenchmark.org/leaderboard)
Footnote 2: [https://huggingface.co/microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base)
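For concreteness, a minimal PyTorch sketch of an M5-style gender classifier is given below; the channel widths and kernel sizes are illustrative assumptions on our part, not the exact configuration of [15].

```python
import torch
import torch.nn as nn

class M5Gender(nn.Module):
    """M5-style CNN: four conv blocks (conv -> batch norm -> ReLU -> pooling)
    followed by a fully connected layer, as sketched in Sec. 2.1."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        def block(c_in, c_out, k, stride=1):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=k, stride=stride),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
                nn.MaxPool1d(4),
            )
        self.features = nn.Sequential(
            block(1, 128, 80, stride=4),   # wide first kernel for raw waveforms
            block(128, 128, 3),
            block(128, 256, 3),
            block(256, 512, 3),
        )
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):                  # x: (batch, 1, samples)
        h = self.features(x).mean(dim=-1)  # global average pooling over time
        return self.fc(h)

# random 3-second chunks of 16 kHz audio are used for training (Sec. 2.1)
logits = M5Gender()(torch.randn(32, 1, 3 * 16000))  # -> (32, 2)
```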
**Classic classifiers using speech features** Traditionally, speech research uses handcrafted features extracted from audio [17]. These features have the advantage of having been designed by speech scientists, and, as such, reflect the underlying characteristics of speech. We extract 35 features with Praat [18]: the number of pulses, periods, and voice breaks; the degree of voice breaks; the fraction of unvoiced parts; jitter (local, local absolute, rap, ppq5); shimmer (local, local dB, apq3, apq5, apq11); the mean of the autocorrelation; the Noise-to-Harmonics-Ratio (NHR) and Harmonics-to-Noise-Ratio (HNR); the mean and standard deviation of the period; and the min, max, mean, median, and standard deviation of pitch. We also include duration; intensity (min, max, mean, standard deviation); the fundamental frequency F0; the first three formants; and the centre of gravity. These features were chosen for their widespread use in the acoustic community, as well as their interpretability with regard to speech production. We use the speech features in an SVM classifier trained on LS100h with a linear kernel to reduce overfitting.
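As an illustration, a subset of these features can be computed with the praat-parselmouth Python bindings; the sketch below covers only a few of the 35 features and assumes standard Praat parameter values.

```python
import numpy as np
import parselmouth
from parselmouth.praat import call

def praat_features(path: str) -> np.ndarray:
    """Extract a small, illustrative subset of the Praat features of Sec. 2.1."""
    snd = parselmouth.Sound(path)
    pitch = snd.to_pitch()
    intensity = snd.to_intensity()
    harmonicity = snd.to_harmonicity()
    pp = call(snd, "To PointProcess (periodic, cc)", 75, 600)

    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                                   # keep voiced frames only
    jitter_local = call(pp, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer_local = call([snd, pp], "Get shimmer (local)",
                         0, 0, 0.0001, 0.02, 1.3, 1.6)
    return np.array([
        f0.mean(), f0.std(), f0.max(),                # pitch statistics
        intensity.values.mean(), intensity.values.max(),
        call(harmonicity, "Get mean", 0, 0),          # HNR
        jitter_local, shimmer_local,
    ])

# X = np.stack([praat_features(p) for p in paths]); y = gender labels
# sklearn.svm.SVC(kernel="linear").fit(X, y)   # linear kernel, as in the text
```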
**Feature selection** We select the top-10 features for the Female vs. Male classification using Recursive Feature Elimination (RFE) [19] with a linear SVM. Any SVM used in feature selection is not used for final classification. SVM-RFE is a wrapper feature selection that utilizes the support vectors to discard the least important feature in each iteration, until a top-\(n\) is left. The top-10 features can be seen in Table 2. These features and the selection method will also be used in Sec. 3 for the analysis of neural perturbations.
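A minimal scikit-learn sketch of this selection step is shown below, with synthetic stand-in data in place of the actual 35 Praat features of LS100h.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 35))             # stand-in for the 35 Praat features
y = rng.integers(0, 2, size=200)           # stand-in gender labels

# SVM-RFE: a linear SVM is refit repeatedly, discarding the least important
# feature in each iteration until the requested top-n remains (Sec. 2.1).
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=10, step=1)
selector.fit(X, y)
top10 = np.flatnonzero(selector.support_)  # indices of the selected features
```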
**Neural gender protection** We protect speakers' genders with neural perturbations created by applying Projected Gradient
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline \multirow{2}{*}{**data**} & \multirow{2}{*}{**avg. duration**} & \multicolumn{2}{c}{**\#speakers**} & \multicolumn{2}{c}{**\#utterances**} \\ \cline{3-6} & & **F** & **M** & **F** & **M** \\ \hline LS100h (training) & 12.7 s & 125 & 126 & 14 342 & 14 197 \\ LS960h (training) & 12.3 s & 1 128 & 1 210 & 135 889 & 143 352 \\ vox (training) & 7.8 s & 2 312 & 3 682 & 397 032 & 694 977 \\ \hline Voxtest (testing) & 7.9 s & 39 & 79 & 10 711 & 25 526 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of datasets
Descent (PGD) [20] to trained neural networks ('reference models') in order to generate adversarial speech examples. Compared to the single-step approach FGSM [21] used in [1], PGD is a stronger iterative version widely adopted in the adversarial machine learning community [22]. PGD updates the perturbations iteratively:
\[\mathbf{x}_{i+1}=\mathbf{x}_{i}+\alpha\cdot\texttt{sign}(\nabla J(\mathbf{x}_{i},y)) \tag{1}\]
where \(\mathbf{x}_{i}\) is the perturbed waveform in iteration \(i\), and \(y\) is the label. \(J\) denotes the Cross-Entropy loss. Perturbations are generated by calculating \(J\) on the reference model. In each iteration, we clip values to \(0.1\) to ensure that the perturbed speech is in a valid range to preserve utility on ASR systems. We use a perturbation rate \(\alpha=0.0005\), 100 iterations for perturbations generated with the M5 networks and 10 iterations for WavLM networks. The perturbations are generated using the first 6 second fragments of the audio.
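A minimal PyTorch sketch of this perturbation loop is given below; note that we read "clip values to 0.1" as clipping the accumulated perturbation, which is our assumption.

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, alpha=0.0005, n_iter=100, clip=0.1):
    """Iterative PGD on raw waveforms against a gender classifier, Eq. (1)."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # J on the reference model
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()   # Eq. (1)
            delta = (x_adv - x_orig).clamp(-clip, clip)
            x_adv = x_orig + delta                # keep the speech usable for ASR
    return x_adv.detach()
```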
Neural perturbations are considered to make a contribution to privacy if they reduce the accuracy of classifiers carrying out gender inference attacks either to random (0.5 in the case of balanced data), which protects a group of speakers, or to zero, which allows individual speakers to 'flip' their gender. The utility of the perturbed speech on ASR systems is measured by Word Error Rate (WER) of transcriptions. Here, we compare the transcripts of Original speech to transcripts of protected speech using DeepSpeech 2 [23].
### Results of Neural Attacks
Table 3 shows the gender classification accuracy of the different neural models against conventional neural adversaries. We use M5 and WavLM trained on different data sets as reference models to generate adversarial speech. When the reference model has the same architecture and training data as the attack classifier (i.e., white-box adversaries), the classification accuracy on perturbed speech (on the diagonal and marked with underline) is low, as expected. In other words, neural perturbations are effective against gender classification in a white-box setting. Interestingly, in quite a few cases with differences between the reference model and attack classifier (i.e., grey-box adversaries off the diagonal, which are more relevant for real-world settings) some protection is still possible. Code and more results of different adversarial speech on different classifiers can be found in our GitHub repository.3
Footnote 3: [https://github.com/Loes5307/VocalAdversary2022](https://github.com/Loes5307/VocalAdversary2022)
### Results of Speech Feature Attack
Table 4 demonstrates that SVMs with speech features (both top-10 and the full 35 features) are effective gender classifiers, with an accuracy somewhat lower than neural models on the original data. We also see that neural perturbations do not provide effective protection against speech-feature-based SVMs. Neural-on-neural approaches would have missed this important vulnerability. Interestingly, neural perturbations boost SVM accuracy slightly, suggesting an enhancement of feature robustness. Additional experiments on LibriSpeech's test set indicate that this effect is data set specific, as can be seen on the GitHub page.[3]
## 3 Analyzing Neural Perturbations
The high accuracies in Table 4 suggest that the underlying aspects of speech that are important for speech-feature-based gender classifiers are missed by neural perturbations, or are not sufficiently modified. In this section, we analyze the
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{**Attack classifier**} \\ \cline{2-3}
**Ref Model** & \multicolumn{1}{c}{**SVM top-10**} & \multicolumn{1}{c}{**SVM full**} \\ \hline Original & 87.6 (91.3 / 86) & 79.7 (95.9 / 72.9) \\ \hline M5-LS100h & 88.1 (90.7 / 87) & 80.9 (95.2 / 74.9) \\ M5-LS960h & 88.9 (90.7 / 86.9) & 82.7 (94 / 77.9) \\ M5-vox & 88.3 (90.9 / 87.2) & 82.9 (94 / 78.3) \\ WavLM-LS100h & 87.6 (91.1 / 86.1) & 82.7 (94.4 / 77.8) \\ WavLM-LS960h & 87.2 (91.2 / 85.5) & 81.8 (94.8 / 76.3) \\ WavLM-vox & 87.8 (91.3 / 86.3) & 81.7 (94.8 / 76.1) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Gender prediction accuracy of an SVM using speech features on Voxtest protected with neural perturbations. Format: All (F/M). “Ref model” specifies the reference model used to generate the perturbations. Left column: SVM with top-10 features selected with SVM-RFE; Right column: SVM with all 35 features.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Feature** & **Description** & **Higher for:** \\ \hline pitch\_mean & mean of pitch & Female \\ autocor\_mean & mean of autocorrelation & Female \\ nhr\_mean & mean of Noise-to-Harmonics-Ratio & Male \\ pitch\_std & standard deviation of pitch & Female \\ pitch\_max & max of pitch & Female \\ intensity\_mean & mean of intensity & Male \\ shimmer\_apq11 & shimmer computed with 11 neighbours & Male \\ shimmer\_apq3 & shimmer computed with 2 neighbours & Male \\ intensity\_max & max of intensity & Male \\ jitter\_local\_absolute & absolute jitter & Male \\ \hline \hline \end{tabular}
\end{table}
Table 2: The top-10 features ordered by importance for M vs. F classification obtained with SVM-RFE for LS100h.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{3}{c}{**Attack classifier**} & \\ \cline{2-4}
**Ref Model** & **M5-LS100h** & **M5-LS960h** & **M5-vox** & **WER** \\ \hline Original & 92.5 (95.5 / 91.3) & 95.5 (89.6 / 96.1) & 97.1 (94.6 / 98.1) & \\ \hline M5-LS100h & 02.2 (3.0 / 2.2) & 11.7 (21.56) & 51.1 (37.7 / 56.6) & 28 \\ M5-LS960h & 14.2 (4.1 / 18.3) & 05.8 (01.1 / 1.2) & 31.9 (16.5 / 38.2) & 27 \\ M5-vox & 59.9 (49.5 / 64.1) & 48.8 (13.3 / 61.2) & 03.2 (02.2 / 33) & 27 \\ \hline & **WavLM-LS100h** & **WavLM-LS960h** & **WavLM-vox** & **WER** \\ \hline Original & 97.9 (97.6 / 98.9) & 99.7 (97.6 / 98.8) & 99.8 (98.2 / 99.4) & \\ \hline WavLM-LS100h & 3.2 (5.2 / 1.8) & 26.7 (26.6 / 26.7) & 35.4 (22.7 / 31.9) & 32 \\ WavLM-LS960h & 47.1 (81.4 / 32.7) & 19.9 (15.5 / 3.2) & 57.6 (76.6 / 49.7) & 27 \\ WavLM-vox & 85.1 (86.9 / 84.3) & 81.1 (73.8 / 84.1) & 63.2 (04.4 / 4.2) & 22 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Gender prediction accuracy on Voxtest protected with neural perturbations. Format: All (F/M). “Ref model” specifies the reference model used to generate the perturbations; the columns give the attack classifiers (top: M5, bottom: WavLM). The rightmost column reports the WER of the perturbed speech. White-box cases (reference model equals attack classifier) lie on the diagonals.
overlap between the features that are changed by the neural perturbations and the features that discriminate between genders.
We use SVM-RFE to obtain the top-10 contributing features distinguishing perturbed vs. non-perturbed speech. Table 5 presents the intersection of these features and the top-10 features relevant for classifying gender given in Table 2. This list provides insight into which features are relevant for protecting against both neural and speech-feature-based attacks.
## 4 Vocal adversaries
Since intensity (a measure of loudness) and pitch were found to be important for protection of gender in speech in Sec. 3, our vocal adversaries leverage these features. As a proof of concept, twenty voice adaptations inspired by the voice disguise literature [24] were recorded by two speakers (one male, one female) reading the same passage of a story ("The Patchwork Girl of Oz" [25]). Four of these adaptations are reported in Table 6, where 'default' refers to speaking normally, 'lowrobot' and 'highrobot' refer to speaking monotone like a robot in low and high pitch respectively, and 'overlyhappy' refers to speaking with a high pitch while smiling broadly. We found that some vocal adversaries, e.g., 'overlyhappy', can impede a gender inference attack. These results establish vocal adversaries as a viable direction for future work. Note that the WER for the vocal adversary is computed with DeepSpeech 2 against a manual transcription, and is relatively high for all speech, reflecting out-of-vocabulary words in the story. For some adaptations, the WER does not increase, indicating that a vocal adversary does not necessarily come with the privacy-utility trade-off characteristic of neural perturbations. A full overview and more details of the vocal adversaries can be found in the GitHub repository.[3]
## 5 Discussion and outlook
In this paper, we have presented three sets of experiments whose results show three ways in which researchers should strive to move beyond neural-on-neural approaches when developing and testing speaker gender protection algorithms. In Sec. 2, we have shown that neural perturbations are often effective against neural gender inference. This point is not particularly surprising, given the effectiveness of neural perturbations in the image [22] and the non-speech audio domain [26]. In many cases, neural perturbations transfer from the reference model to a different attack classifier, which makes them seem highly effective at first consideration. However, speech has an important distinction from images and non-speech audio: it is produced by the human body and enjoys regularities in underlying structure that reflect phonation and the resonances of the vocal tract. The consequence is that a single feature can already be used to draw a useful decision boundary (Fig. 1), and an SVM based on classic speech features (Sec. 2) can perform nearly on par with a neural classifier. Neural perturbations that protect gender apparently fail to influence the underlying characteristics of speech sufficiently, since they provide little protection against gender inference attacks based on speech features. This conclusion is supported by the fact that SVMs are less successful in defeating neural perturbations in non-speech audio [26].
Moving forward, researchers in the area of attribute inference would be well served by analyzing the impact of perturbations on speech in more detail. Some papers have moved in this direction by providing spectrograms of protected speech or by describing how it sounds [2, 3]. Here, we suggest that speech features are useful for explaining the effect of perturbations in a way that is closely related to speech production. Future work should further evolve the analysis presented in Sec. 3.
Finally, we have shown in Sec. 4 that a vocal adversary has the potential to defend against both neural and speech-feature-based attacks. The human-executed voice adaptations that we tested required no special voice training, and open a new avenue for research on real-world protection against gender inference attacks in, e.g., smart speakers.
## 6 Acknowledgements
The first author was supported by a master-level ELLIS excellence fellowship at Radboud University Nijmegen.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline
**WER** & **data** & **M5-LS100h** & **M5-LS960h** & **M5-vox** & **WavLM-LS100h** & **WavLM-LS960h** & **WavLM-vox** & **SVM-full** \\ \hline
35 / 31 & default & 100 (100 / 100) & 96 (92 / 100) & 63 (25 / 100) & 100 (100 / 100) & 100 (100 / 100) & 100 (100 / 100) & 100 (100 / 100) \\
67 / 89 & whisper & 50 (96 / 4) & 50 (0 / 100) & 50 (0 / 100) & 94 (100 / 88) & 94 (100 / 88) & 98 (96 / 100) & 50 (100 / 0) \\
25 / 38 & lowrobot & 100 (100 / 100) & 63 (25 / 100) & 50 (0 / 100) & 100 (100 / 100) & 100 (100 / 100) & 100 (100 / 100) & 100 (100 / 100) \\
27 / 42 & highrobot & 100 (100 / 100) & 98 (96 / 100) & 71 (42 / 100) & 100 (100 / 100) & 100 (100 / 100) & 100 (100 / 100) \\
39 / 51 & overlyhappy & 50 (100 / 0) & 52 (88 / 17) & 63 (33 / 92) & 56 (100 / 13) & 71 (100 / 42) & 67 (100 / 33) & 50 (100 / 0) \\ \hline \end{tabular}
\end{table}
Table 6: Gender prediction accuracy of different models for four voice adaptations of vocal adversaries. Format: All (F/M). The WER per voice adaptation is reported on the left as ‘F/M’ w.r.t. manual transcription.
\begin{table}
\begin{tabular}{l l} \hline
**Ref model** & **intersection of top-10 features** \\ \hline
**WavLM-LS100h** & autocor\_mean, nhr\_mean, intensity\_mean, shimmer\_apq3 \\
**WavLM-LS960h** & pitch\_mean, nhr\_mean, intensity\_mean, shimmer\_apq3 \\
**WavLM-vox** & pitch\_mean, nhr\_mean, intensity\_mean \\
**M5-LS100h** & shimmer\_apq3 \\
**M5-LS960h** & autocor\_mean, nhr\_mean, shimmer\_apq3 \\
**M5-vox** & pitch\_mean, autocor\_mean, nhr\_mean \\ \hline \end{tabular}
\end{table}
Table 5: The intersection of the top-10 Male vs. Female features (listed in Table 2) and Non-perturbed vs. Perturbed features (on Voxtest perturbed with different models, listed in ‘Ref model’) |
2307.16723 | Hybrid quantum transfer learning for crack image classification on NISQ
hardware | Quantum computers possess the potential to process data using a remarkably
reduced number of qubits compared to conventional bits, as per theoretical
foundations. However, recent experiments have indicated that the practical
feasibility of retrieving an image from its quantum encoded version is
currently limited to very small image sizes. Despite this constraint,
variational quantum machine learning algorithms can still be employed in the
current noisy intermediate scale quantum (NISQ) era. An example is a hybrid
quantum machine learning approach for edge detection. In our study, we present
an application of quantum transfer learning for detecting cracks in gray value
images. We compare the performance and training time of PennyLane's standard
qubits with IBM's qasm\_simulator and real backends, offering insights into
their execution efficiency. | Alexander Geng, Ali Moghiseh, Claudia Redenbach, Katja Schladitz | 2023-07-31T14:45:29Z | http://arxiv.org/abs/2307.16723v1 | # Hybrid quantum transfer learning for crack image classification on NISQ hardware
###### Abstract
Quantum computers possess the potential to process data using a remarkably reduced number of qubits compared to conventional bits, as per theoretical foundations. However, recent experiments [1] have indicated that the practical feasibility of retrieving an image from its quantum encoded version is currently limited to very small image sizes. Despite this constraint, variational quantum machine learning algorithms can still be employed in the current noisy intermediate scale quantum (NISQ) era. An example is a hybrid quantum machine learning approach for edge detection [2]. In our study, we present an application of quantum transfer learning for detecting cracks in gray value images. We compare the performance and training time of PennyLane's standard qubits with IBM's qasm_simulator and real backends, offering insights into their execution efficiency.
## 1 Introduction
Image-based crack detection in concrete is a fundamental technique for monitoring buildings like houses or bridges. A particular focus is on early crack detection, when crack thickness is not more than one pixel such that the crack is hardly visible in the image. We solved this task by using classical deep learning methods [3].
Quantum computing promises many advantages. Superposition refers to the ability of a quantum system to exist in multiple states simultaneously. Moreover, two or more qubits can be entangled and affect each other regardless of the distance between them. This allows us to change the state of a qubit without applying operations to it. Furthermore, quantum computers can perform multiple computations simultaneously across different quantum states, leveraging the principles of superposition and entanglement. Another useful property of quantum computing is the creation of a larger search space, which gives us, for example, the possibility to separate data points or features in a higher dimensional space [4, 5].
However, the current noisy intermediate scale quantum (NISQ) era still poses a lot of challenges like hardware errors and the fact that quantum states cannot be measured without collapsing them (Copenhagen interpretation [6]). A consequence of the latter is that an estimate of a qubit's quantum state after running a quantum circuit can only be derived from the frequencies of measuring the basis states 0 and 1 after repeated runs of the circuit. This is a particular bottleneck when combining machine learning with quantum computing algorithms (quantum machine learning). In the
training phase, multiple runs are needed to calculate the gradients and to update the parameters as in the classical case. We cannot store intermediate results that are needed, for example, for backpropagation [7].
This paper shows how to deal with these problems in a specific use case. We review the current possibilities of how to update parameters in a quantum machine learning setting. Taking the limitations of current quantum hardware into account, we propose a quantum transfer learning method for crack classification using PennyLane software [5] and IBM's quantum computers [8].
The paper is organized as follows. Section 2 provides some basics of quantum computing and provides information on the software and hardware used. In Section 3, we describe the currently available differentiation methods. We explain our method for classifying cracks in Section 4. In Section 5, experiments for a given image dataset and the results for three differentiation methods on PennyLane's simulator are reported. Additionally, we run the approach on IBM's current quantum computers. Section 6 describes modifications of our algorithm with results on PennyLane's simulator and Section 7 concludes the paper and gives a short outlook.
## 2 Quantum computing basics, software, and hardware
Before we discuss quantum hardware and crack classification methods, we summarize some basic concepts of quantum computing [4]. Quantum computing follows different laws than classical computing. The difference starts with the basic elements. Classically, we have so-called bits, which can be either 0 or 1. The quantum analogues are quantum bits (qubits) - two-state quantum systems that allow for more flexibility. Analogously to 0 and 1, there are two basis states of a qubit: \(\ket{0}=(1,0)^{T}\) and \(\ket{1}=(0,1)^{T}\). However, any linear combination (superposition)
\[\ket{\psi}=\alpha\ket{0}+\beta\ket{1}, \tag{1}\]
of the basis states with \(\alpha,\beta\in\mathbb{C}\) and \(\abs{\alpha}^{2}+\abs{\beta}^{2}=1\) defines a possible state, too. The overall phase of a quantum state is unobservable [4]. That is, \(\ket{\psi}\) and \(e^{i\xi}\ket{\psi}\) for \(\xi\in[0,2\pi]\) define the same state. Therefore, it is sufficient to consider \(\alpha\in\mathbb{R}\).
As a consequence, the state of a single qubit can be visualized as a point on the unit sphere in \(\mathbb{R}^{3}\) (Bloch sphere) with spherical coordinates \(\phi\) and \(\theta\), where \(\alpha=\cos(\theta/2)\) and \(\beta=e^{i\phi}\sin(\theta/2)\). Figure 1 shows the Bloch sphere with the spherical coordinates.
All operations on a qubit must preserve the condition \(\abs{\alpha}^{2}+\abs{\beta}^{2}=1\), and thus can be represented by \(2\times 2\) unitary matrices. Standard operations (so-called gates) acting on single qubits are
\[X=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1 \\ 1&-1\end{array}\right),\quad R_{y}(\theta)=\left(\begin{array}{cc}\cos( \theta/2)&-\sin(\theta/2)\\ \sin(\theta/2)&\cos(\theta/2)\end{array}\right), \tag{2}\]
where the X-gate acts like a classical NOT operator and the Hadamard gate (H) superposes the basic states of a single qubit. The single-qubit rotation gate (R\({}_{\mathrm{y}}\)) rotates by \(\theta\) about the y-axis of the Bloch sphere. In imaging applications, rotation gates can be used to encode gray values.
Additionally, we need operations that link two or more qubits. The most common operation in quantum computing is the controlled NOT-gate (CX-gate) taking two input qubits. The target qubit's state is changed depending on the state of
Figure 1: Visualization of a quantum state \(\ket{\psi}\) in the Bloch sphere.
the control qubit:
\[CX=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right). \tag{3}\]
That means, if the control qubit is in state \(\ket{1}\), we apply an X-gate to the target qubit. Otherwise, we do nothing. For example, assume our two-qubit system has the state \(\ket{10}=\ket{1}\otimes\ket{0}\), where the first qubit is the control, the second the target qubit, and \(\otimes\) is the tensor product. Then, the application of the CX-gate results in the state
\[\ket{11}=\ket{1}\otimes\ket{1}=(0,0,0,1)^{T}. \tag{4}\]
So basically, the application of quantum gates can be formulated in terms of linear algebra.
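This linear-algebra view is easy to check numerically; the following NumPy sketch reproduces Equation (4).

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

state_10 = np.kron(ket1, ket0)   # |10>: control in |1>, target in |0>
print(CX @ state_10)             # [0 0 0 1] = |11>, as in Equation (4)
```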
In general, we can apply any unitary operation to the target qubit. For example, a controlled rotation gate around the y-axis applies an \(\mathrm{R}_{\mathrm{y}}\)-gate to the target qubit if and only if the control qubit is in state \(\ket{1}\). We can also increase the number of control qubits even further.
Applying such controlled operations to two or more qubits with the control qubits in superposition results in entanglement of the qubits involved. In terms of linear algebra, an entangled state of several qubits cannot be written as a tensor product of the states of the individual qubits. Entanglement is exactly where we benefit from the quantum computing properties. Together with superposition, entanglement allows using a logarithmically lower number of qubits compared to the number of classical bits.
While all bits are connected to each other in classical computers, the qubits in IBM's quantum computers [8] are arranged in a special, the so-called heavy-hexagonal scheme (see the honeycomb structure in Figure 2). That is, each qubit is directly connected to at most three other qubits. To apply two-qubit gates to unconnected qubits, the information has to be swapped to neighboring qubits by the application of additional CX-gates. Each CX-gate, however, increases the overall error considerably such that an algorithm should employ as few CX-gates as possible.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Backend & Processor type & \# qubits & QV & CLOPS \\ \hline ibmq\_kolkata & Falcon r5 & 27 & 128 & 2,000 \\ ibmq\_ehningen & Falcon r5 & 27 & 64 & 1,900 \\ ibmq\_lima & Falcon r4T & 5 & 8 & 2,700 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Processor type and actual performance of the used backends as measured in June 2023. The term quantum volume (QV) represents the quality of the backend. CLOPS is the number of circuit layer operations per second and gives a measure for the speed of the backend.
Figure 2: Coupling maps of the backends used in this paper. Every circle represents a qubit, and lines represent connections between the qubits. Colors code the readout errors (circles) and the CX-errors for the connections (lines). Dark blue indicates a small error, and purple a large one. Errors are shown for ibmq\_ehningen and ibmq\_lima. IBM’s backend ibmq\_kolkata has the same coupling map as ibmq\_ehningen, but actual performance errors differ slightly (see Tables 1 and 2).
Lastly, the readout is also completely different for classical and quantum computing. On classical computers, one can always read the current state of the bits, copy them, or just continue running an algorithm with the same state of the bits as before the readout. Unfortunately, this is not possible on quantum computers. First, according to the no-cloning theorem [4], a state cannot be copied. Second, when measuring (reading out the state of) a qubit, its state collapses to one of the basis states \(\ket{0}\) or \(\ket{1}\). Hence, it is impossible to continue the algorithm after reading out. Additionally, measuring a qubit does not immediately yield the values of \(\alpha\) and \(\beta\) in Equation (1). Rather, the probability of collapsing to \(\ket{0}\) is given by \(|\alpha|^{2}\) while the state \(\ket{1}\) is obtained with probability \(|\beta|^{2}\). Repeated measurements (shots) of the same state allow for an estimation of these probabilities, and thus of the values \(\alpha\) and \(\beta\). For further reading on quantum computing basics, we recommend the book of Nielsen and Chuang [4].
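The following toy simulation illustrates how repeated shots estimate \(|\alpha|^{2}\); the state with \(|\alpha|^{2}=0.3\) is an arbitrary example of our choosing.

```python
import numpy as np

prob0 = 0.3                                   # |alpha|^2 of an example state
rng = np.random.default_rng(1)
for shots in (100, 1_000, 10_000):
    outcomes = rng.random(shots) < prob0      # each shot collapses to |0> or |1>
    print(shots, outcomes.mean())             # frequency of |0> approaches 0.3
```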
For implementing our methods, we use the open-source software framework PennyLane [5], which is a Python 3 [9] software framework for differentiable programming of quantum computers that enables seamless integration with machine learning tools.
## 3 Differentiation methods
Training a machine learning approach basically consists in solving an optimization problem. This is typically done by gradient descent methods that require the computation of derivatives. PennyLane offers in total six differentiation methods [5], where we focus on numerical finite-differences ('finite-diff'), the parameter-shift rule ('parameter-shift'), and backpropagation ('backprop'). On PennyLane's default simulator, all options are available, whereas backpropagation cannot be used on IBM's qasm_simulator or real backends. The reason behind this is that on a real quantum computer, we do not have access to the internal computations without measuring them. Hence, intermediate results required in backpropagation are not accessible. In contrast, the parameter-shift rule and finite-differences only require evaluations of the cost function which can also be obtained on quantum computers. More information can be found in the papers by Bergholm _et al._[5], or Izaac [7].
Let us assume that we have some input \(x\in\mathbb{R}^{n}\) and some parameters \(\theta\in\mathbb{R}^{m}\), where \(n,m\in\mathbb{N}\). Then, a quantum circuit can be expressed by a function
\[f(x,\theta)\coloneqq\langle\hat{B}\rangle=\bra{0}U^{\dagger}(x,\theta)\hat{B}U(x,\theta)\ket{0}, \tag{5}\]
where \(\hat{B}\) is an observable and \(U(x,\theta)\) is the gate sequence of the quantum circuit. In the scheme of machine learning, we optimize the parameters in the training process. For that, we have to calculate the gradient of Equation (5). The finite-difference method [10] is based on approximating the gradient by
\[\partial_{\theta}f(x,\theta)\approx\frac{f(x,\theta+\Delta\theta)-f(x,\theta- \Delta\theta)}{2\Delta\theta}, \tag{6}\]
where \(\Delta\theta\) is a small shift in the parameters. The true gradient is obtained in the limit \(\Delta\theta\to 0\).
In order to avoid this approximation, the parameter-shift rule is usually used on quantum hardware
\[\partial_{\theta}f(x,\theta)=c\left[f(x,\theta+s)-f(x,\theta-s)\right], \tag{7}\]
where the parameters \(c\) and \(s\) depend on the specific function \(f\) (note that only a very restricted set of functions is considered in the context of quantum computing). It was first introduced to quantum machine learning in Mitarai et al. [11], and extended in Schuld et al. [12]. Note that the shift values \(s\) depend on the properties of the function and are usually much larger than the shift values in the finite-difference method [5, 12].
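In PennyLane, the parameter-shift gradient can be requested explicitly and checked against Equation (7); for the R\({}_{\mathrm{y}}\) gate, \(c=1/2\) and \(s=\pi/2\). A minimal sketch:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))          # <Z> = cos(theta)

theta = np.array(0.7, requires_grad=True)
auto = qml.grad(circuit)(theta)               # parameter-shift gradient
manual = 0.5 * (circuit(theta + np.pi / 2) - circuit(theta - np.pi / 2))
print(auto, manual)                           # both equal -sin(0.7)
```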
The third and commonly used method for classical machine learning is backpropagation. We just need a single forward pass at the expense of increased memory usage and no further calculations. During the forward pass all intermediate steps are stored and traversed in reverse using the chain rule for adapting the parameters [7].
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Backend & CX-error & Single qubit gate error & Readout error & T1 & T2 \\ & in \% & in \% & in \% & in \(\mu\)s & in \(\mu\)s \\ \hline ibmq\_kolkata & 0.835 & 0.021 & 1.000 & 122.23 & 64.40 \\ ibmq\_ehningen & 0.740 & 0.024 & 0.880 & 172.28 & 172.45 \\ ibmq\_lima & 1.097 & 0.048 & 2.580 & 75.00 & 103.41 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Typical average calibration data of the three chosen backends. The values are from June 2023.
The number of calls, i.e., the number of times we have to execute the quantum circuit to obtain the results, varies significantly between the three differentiation methods. Generally, we have
\[N_{\text{calls}}=N_{\text{forward}}+N_{\text{backward}}, \tag{8}\]
where \(N_{\text{calls}}\) is the total amount of calls needed, \(N_{\text{forward}}\) the amount for the forward pass and \(N_{\text{backward}}\) the amount for the backward pass in the training. Table 3 shows the amount of calls required by the three methods.
Consequently, every training image yields \(L\cdot Q\) or \(2\cdot L\cdot Q\) additional evaluations on the quantum computer for the finite-difference method or the parameter-shift rule, respectively. Depending on the number of layers \(L\) and the number of qubits \(Q\), this can make a huge difference, especially in the current NISQ era.
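Equation (8) and Table 3 translate into a small helper function; the sketch below uses the dataset sizes that appear later in Sec. 5.2 and reproduces the call counts of Eq. (9).

```python
def n_calls(method: str, T: int, V: int, L: int, Q: int) -> int:
    """Circuit executions per training epoch, following Eq. (8) and Table 3."""
    backward = {"backprop": 0,
                "finite-diff": T * L * Q,
                "parameter-shift": 2 * T * L * Q}
    return T + V + backward[method]

# setting of Sec. 5.2: T=856, V=184, L=2, Q=4
for m in ("backprop", "finite-diff", "parameter-shift"):
    print(m, n_calls(m, T=856, V=184, L=2, Q=4))  # 1040, 7888, 14736
```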
## 4 Method
For solving the image classification task, we use a quantum transfer learning algorithm [5, 13]. That means that we combine a classical neural network with some quantum part. Figure 3 shows the schematic view. We insert the \(224\times 224\) input images into a classical algorithm and combine the resulting features into 4 features by application of a linear layer. These four features form the input to a quantum circuit whose output is reduced from four features to two by another linear layer. Based on these two features, we can decide whether the image contains a crack or not.
In the following, we will describe the classical algorithm and the quantum circuit part in more detail. The network is visualized in Figure 4.
As classical algorithm, we use a ResNet18 network [14] pre-trained on the ImageNet dataset [15]. ResNet18 itself contains blocks of convolution layers together with batch normalization and ReLU activation layers (see the first green block in Figure 4). ResNet18 then usually applies a fully-connected layer which we replace by a hybrid method (green and red blocks after the first block).
With the first linear layer, we reduce the number of features that we have to load into the quantum circuit. To keep the error coming from the hardware and the number of calls \(N_{\text{calls}}\) in Equation (8) small, we decided to reduce from 512 features to 4 features in the first linear layer.
The structure of the quantum circuit follows a typical variational quantum circuit with encoding, variational, and measurement part. These parts are visualized by two barriers in the red block in Figure 4. First, we have the encoding part, also called embedding part. We start with all qubits in state \(\ket{0}\). Consequently, our initial state is \(\ket{0}^{\otimes Q}\), where \(Q\) is the number of qubits. Afterwards, we use Hadamard gates (see Equation (2)) to put all qubits into superposition.
Figure 3: Schematic overview of the algorithm. The green part highlights the part for the classical computer, while the red area represents the quantum part. The variable \(x\) in the first linear layer represents the number of input features. It is adaptable depending on the number of features coming from the classical algorithm. The second value is for the number of output features.
\begin{table}
\begin{tabular}{l l l l} \hline Method & Forward pass & Backward pass & Total \\ \hline Backpropagation & \(T+V\) & \(-\) & \(T+V\) \\ Finite-difference & \(T+V\) & \(T\cdot L\cdot Q\) & \(T+V+T\cdot L\cdot Q\) \\ Parameter-shift & \(T+V\) & \(2\cdot T\cdot L\cdot Q\) & \(T+V+2\cdot T\cdot L\cdot Q\) \\ \hline \end{tabular}
\end{table}
Table 3: Number of calls \(N_{\text{calls}}\) needed in the training process for one epoch using three numerical methods for gradient computation. \(T\) is the number of training images, \(V\) the number of validation images, \(Q\) the number of qubits required in the circuit for representing the features, and \(L\) the number of layers (encoding and variational layers).
Then, we use R\({}_{y}\)-gates to encode the feature information into some rotations around the y-axis in the Bloch sphere. Hence, the number of Hadamard and R\({}_{y}\)-gates equals the number of qubits used for the encoding part. Here, we use a simple approach and represent each feature by one qubit. Theoretically, alternative encoding methods for representing the features in a quantum circuit could be used [5, 16, 1].
For the variational part, there are basically two types of gates. One of them is entangling gates, usually CX-gates; the other is additional rotations with trainable parameters. Here, we use a parallelized entangling scheme and R\({}_{y}\)-gates for the parameters. These two elements of the variational part can be repeated \(q_{depth}\) times in the circuit. This generates more trainable parameters and further entanglement, but also more noise and more evaluations, since backpropagation is not available for adapting the parameters (see Section 3). If not stated otherwise, we take \(q_{depth}=1\) and use \(Q=4\) qubits as visualized in Figure 4 to keep the number of evaluations small. Finally, we measure all qubits in the last part of the quantum circuit.
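A minimal PennyLane sketch of this circuit for \(Q=4\) and \(q_{depth}=1\) is given below; the pairing of the CX-gates is our reading of the parallelized entangling scheme in Figure 4.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, q_depth = 4, 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_net(features, weights):
    # encoding: superposition via Hadamards, then angle encoding with RY
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
        qml.RY(features[w], wires=w)
    # variational part: parallel CX entanglers plus trainable RY rotations
    for layer in range(q_depth):
        for w in range(0, n_qubits - 1, 2):
            qml.CNOT(wires=[w, w + 1])        # (0,1) and (2,3) in parallel
        for w in range(1, n_qubits - 1, 2):
            qml.CNOT(wires=[w, w + 1])        # (1,2)
        for w in range(n_qubits):
            qml.RY(weights[layer, w], wires=w)
    # measurement of all qubits
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.array(0.01 * np.random.randn(q_depth, n_qubits), requires_grad=True)
print(quantum_net(np.array([0.1, 0.2, 0.3, 0.4]), weights))
```

In the full hybrid model, such a QNode can be wrapped, e.g., with qml.qnn.TorchLayer and placed between the two linear layers of Figure 3.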
## 5 Application and results
### Dataset
Our goal is to find cracks in gray value images of concrete, which have a size of around \(16,000\times 32,000\) pixels. In these types of images, the cracks have a width of about 1-2 pixels. Seeing such cracks with the human eye is challenging, and some machine learning methods are also unable to track them directly [17]. For training, we split the images into patches of \(224\times 224\) pixels and classify each patch based on whether it contains a crack or not. We used 1,223 patches as our complete dataset, 723 patches with cracks and 500 without cracks. Figure 5 shows exemplary patches that contain cracks. Cracks are also visualized by masks that were obtained by manual annotation. These masks are not used for the training.
Figure 4: Proposed method for crack classification. The green part highlights the part for the classical computer, while the red area represents the quantum part. We chose ResNet18 and a quantum circuit with one variational layer.
Figure 5: Sample images (\(224\times 224\)) of cracks with corresponding masks. The masks are purely for visualisation, they are not used for the training.
We use splits of the dataset according to the ratios \(70\%/15\%/15\%\) or \(4\%/4\%/92\%\) for training, validation, and testing. Table 4 shows the total amount of images in the three subsets.
As loss function we use cross entropy and optimize by using Adam [18] with the default parameter setting. An optimization of this setting as in Geng et al.[19] is beyond the scope of this paper.
The experiments in Sections 5.2 and 6 are conducted on PennyLane's default simulator. In Section 5.3, we use IBM's qasm_simulator and the real backends ibmq_kolkata and ibmq_ehningen and in Section 5.4 IBM's real backend ibmq_lima.
### Comparison of differentiation methods
Figure 6 shows the loss and the accuracy boxplots for ten runs of each of the three differentiation methods using the \(70\%/15\%/15\%\) splitting of the dataset. The ten seed values are chosen randomly in the training step. Up to slight differences in the variance, the three differentiation methods yield similar results.
However, there are significant differences in terms of training time, i.e. the time needed for the \(100\) epochs of training. For the calculation, we used a computer with an Intel Xeon E5-2670 processor running at \(2.60\) GHz, a total RAM of \(64\) GB, and Red Hat Enterprise Linux 7.9. While training with backpropagation requires in total approximately \(36\) minutes, training takes much longer when using finite-differences (\(\sim 45\) minutes) and the parameter-shift rule (\(\sim 70\) minutes). Training times are related to the number of calls (see Table 3). With \(T=856\), \(V=184\), \(L=2\), and \(Q=4\), the total number of calls per epoch is
\[N_{\text{calls}}=\left\{\begin{array}{ll}1,040,\,\text{backpropagation}\\ 7,888,\,\text{finite-differences}\\ 14,736,\,\text{parameter-shift rule}.\end{array}\right. \tag{9}\]
### Application on two simulators and IBM's real backends
Now we would like to compare PennyLane's default simulator with IBM's qasm_simulator and IBM's real backends. To do so, we adapt the splitting of the dataset. The number of calculations required when using the splitting from the
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Ratio & \multicolumn{2}{c}{Training} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Testing} \\ Train/Val/Test & cracks & no cracks & cracks & no cracks & cracks & no cracks \\ \hline \(70\%/15\%/15\%\) & 506 & 350 & 109 & 75 & 108 & 75 \\ \(4\%/\ \ 4\%/92\%\) & 29 & 20 & 29 & 20 & 665 & 460 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Splitting of the complete dataset into training, validation, and testing dataset with two different ratios.
Figure 6: Loss and accuracy on the test dataset for ten runs of training with random data shuffling and 100 epochs. Performed on PennyLane’s default simulator using \(70\%/15\%/15\%\) splitting of the dataset for training, validation, and testing.
previous section presents a challenge when working on IBM's real backends. For those, fair-share queuing applies [8], i.e., the jobs for the different training images have to wait in the queue to be processed. In total, the waiting time exceeds the execution time.
As backpropagation is not applicable on the real backends, we use the parameter-shift rule for all devices. To reduce the required training time, we limit the number of shots to \(1,000\) and the number of epochs to \(10\). Thus, we have \(N_{\text{calls}}=8,820\). Table 5 shows the test losses, test accuracies, and training times for the four devices.
The higher loss and lower accuracy values compared to Figure 6(b) can be explained by the lower number of training and validation images. Table 5 shows a higher test accuracy for PennyLane's default simulator compared to the other three methods. This is due to the absence of sampling noise coming from multiple executions and the noise introduced by the real backends. The latter yield quite similar results to the simulation on IBM's qasm_simulator, which shows that the current NISQ devices are usable for this application.
Again, training time is the main issue. While PennyLane's default simulator is optimized for quantum machine learning tasks, IBM's qasm_simulator needs around six times more time to finish the calculations. The difference is mainly caused by the overhead of translating circuits written in PennyLane for execution on IBM's qasm_simulator. An even larger gap is observed between qasm_simulator and IBM's backends. The differences are caused by the transition to the real device (initialization, transpilation, and validation of the circuit), but mainly by the waiting time in the queue. Execution time on the real backends is only about \(2.5\) hours. Nevertheless, our experiments show that the classification of crack images is indeed achievable even on the current NISQ hardware.
### \(70\%/15\%/15\%\) dataset splitting on real backend
To validate the claim that the differences in accuracies in the previous experiments are due to the different numbers of training images used, we also trained with the \(70\%/15\%/15\%\) splitting on a real backend. The experiment was run on the backend ibmq_lima which is less frequently used than ibmq_kolkata and ibmq_ehningen such that waiting times are smaller. Figure 7(a) shows the confusion matrix after one epoch of training. We see that the test performance with about \(97.81\%\) is comparable to the simulated results after 100 epochs (see Figure 6(b)). The four false positives are visualized in Figure 7(b). In the images we see some pores and also some artifacts (for example, the bottom left image in Figure 7(b)), which lead to false classifications.
For the training, we needed multiple executions on the real backend (in total \(14,736\), see Equation (9)), which leads to a total training time of about 4 days 6 hours and 30 minutes for one epoch. As already stated in the previous section, most of this time is spent waiting in the queue. The actual execution time on ibmq_lima was 4 hours, so only about \(4\%\) of the total training time. In comparison, a classical deep learning approach would only need 3 hours for 100 epochs of training.
## 6 Modifications of the algorithm
There are various options to modify our approach. One option is to replace the existing classical part with an alternative network, for example VGG16 [20] instead of ResNet18. The green part in Figure 8 shows the scheme of the alternative method using VGG16. Again, the output of the network, in this case \(4,096\) features, is reduced to 4 features. The quantum circuit then receives the same input as in the previous section.
Another option is to change the quantum circuit by using another entanglement strategy (like all-to-all, linear, or circular entanglement), another encoding method (like amplitude or angle encoding [1]), or a higher \(q_{depth}\) value. We cover the last option in this paper by varying \(q_{depth}\) between 1 and 6. The red framed part in Figure 8 shows the scheme of a possible quantum circuit with \(q_{depth}=6\) and 24 trainable parameters. For the options with changing entanglement or encoding, we refer to Aleksandrowicz _et al._ [21] and Bergholm _et al._ [5].
\begin{table}
\begin{tabular}{l c c c} \hline \hline Device & Test loss & Test accuracy & Training time \\ \hline PennyLane simulator & \(0.5466\) & \(0.7707\) & 2 min 26s \\ qasm\_simulator & \(0.5520\) & \(0.7280\) & 12 min 05s \\ ibmq\_kolkata & \(0.5245\) & \(0.7102\) & 17 h 01 min 30s \\ ibmq\_ehningen & \(0.5239\) & \(0.7186\) & 16 h 45 min 28s \\ \hline \hline \end{tabular}
\end{table}
Table 5: Loss and accuracy obtained on PennyLane’s default simulator, IBM’s qasm_simulator, and IBM’s real backends ibmq_kolkata and ibmq_ehningen. Experiments on the real backend were executed on June 24 and 25, 2022.
In all cases, we use the \(70\%/15\%/15\%\) splitting of the dataset. Figure 9 shows that changing the classical network affects the results slightly. In terms of variational layers, we see a minimal improvement when using more layers. However, this comes at the price of a larger number of calls. Especially, for finite-difference and parameter-shift rule, this makes a difference since the number of variational layers \(L=q_{depth}+1\) is one factor for the backward pass (see Table 3).
Table 6 compares the training times using ResNet18 and VGG16 with \(q_{depth}\in\{1,2,\ldots,6\}\). The training time only differs slightly over the ten runs, which is why we just show the mean value here. Since VGG16 is bigger than ResNet18 in terms of parameters and in our case the first linear layer decreases the number of features from \(4,096\) to \(4\) instead of \(512\) to \(4\), we see a higher training time for VGG16. By increasing the number of variational layers, we have a linear increase in the training times.
## 7 Conclusion and Outlook
In this paper, we demonstrate that the task of crack detection in gray value images can be solved by using IBM's current superconducting quantum computers. At the moment, backpropagation is not available on real devices. So, using
\begin{table}
\begin{tabular}{l c c c c c c} \hline Pre-trained & & & \(q_{depth}\) & & \\ network & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline ResNet18 & 36 min 20s & 44 min 28s & 55 min 22s & 65 min 55s & 77 min 28s & 83 min 08s \\ VGG16 & 37 min 53s & 49 min 39s & 60 min 10s & 68 min 34s & 82 min 16s & 90 min 22s \\ \hline \end{tabular}
\end{table}
Table 6: Mean training times for the results shown in Figure 9.
Figure 8: Alternative method for crack classification. The green part highlights the part for the classical computer, while the red area represents the quantum part. We chose the VGG16 network and a quantum circuit with six variational layers (three of which are drawn here).
Figure 7: Confusion matrix and false positives obtained on IBM’s backend ibmq_lima. The code was executed between February 18 and 23, 2023. We have a test accuracy of \(97.81\%\), which is similar to the simulator results in Figure 6(b).
the finite-difference method or the parameter-shift rule is currently the only option for calculating the gradients. A larger number of evaluations, longer training times, and more jobs to submit are the consequences. In general, we achieved similar results on PennyLane's and IBM's simulators and IBM's superconducting quantum computers. Once trained, it is possible to use the trained models for real data from an industrial setting.
Our experiments show that the architecture of the quantum circuit is an important factor for the execution time. The more parameters we have within the variational part, the more time is needed for the backward pass in the quantum machine learning setting. However, we demonstrate that increasing the number of parameters by using more variational layers does not automatically improve the results.
Our proposed algorithm can be adapted to other use cases. Quantum transfer learning is not limited to 2D gray value images, and we can also extend it to 3D images such as those processed by Barisin _et al._ [17] in a classical regime. Currently, it seems more promising to change the classical part to adapt the algorithm to 3D images, but with further improvement of the quantum hardware, a higher proportion of the algorithm could be run in the quantum part.
Since classical machine learning benefits from the fast gradient calculation using backpropagation, optimizing gradient computation is in the focus of current quantum computing research. With structured quantum circuits, the gradient estimation requires fewer circuits [22]. Looking further into this direction, and adapting the variational ansatz accordingly, could be a promising way to overcome the drawback of evaluating as many circuits on the quantum hardware as done in this paper.
Figure 9: Loss and accuracy obtained for training on PennyLane’s default simulator by using backpropagation. Results of ten runs with random data shuffling, ResNet18 and VGG16 as pre-trained classical networks, and varying number of variational layers \(q_{depth}\in\{1,2,\dots,6\}\). |
2309.17191 | $B\to K^* M_X$ vs $B\to K M_X$ as a probe of a scalar-mediator dark
matter scenario | Recently, Belle II reported the observation of the decay $B\to K M_X$, $M_X$
the missing mass, with the branching ratio much exceeding ${\cal B}(B\to K
\nu\bar\nu)$ which is the only Standard Model (SM) process contributing to this
reaction. If confirmed, this might be an indication of new nonSM particles
produced in this decay. One of the possible explanations of the observed effect
could be light dark-matter (DM) particles produced via a scalar mediator field.
We give simple arguments, that a combined analysis of the $B\to K M_X$ and
$B\to K^* M_X$ reactions would be a clean probe of the scalar mediator
scenario: (i) making use of an observed value ${\cal B}(B\to K M_X)\simeq 5.4\,
{\cal B}(B\to K \nu\bar\nu)_{\rm SM}$ and (ii) assuming that the effect is due
to the light dark matter coupling to the top quark via a {\it scalar} mediator
field, one finds an upper limit ${\cal B}(B\to K^* M_X) < 2.8 \, {\cal B}(B\to
K^* \nu\bar\nu)_{\rm SM}$. Within the discussed scenario, this upper limit does
not depend on the mass of the scalar mediator nor on the specific details of
the unobserved dark-matter particles in the final state. | Alexander Berezhnoy, Dmitri Melikhov | 2023-09-29T12:40:59Z | http://arxiv.org/abs/2309.17191v3 | # \(B\to K^{*}M_{X}\) vs \(B\to KM_{X}\) as a probe of a scalar-mediator dark matter scenario
###### Abstract
Recently, Belle II reported the observation of the decay \(B\to KM_{X}\), \(M_{X}\) the missing mass, with the branching ratio much exceeding \({\cal B}(B\to K\nu\bar{\nu})\) which is the only Standard Model (SM) process contributing to this reaction. If confirmed, this might be an indication of new nonSM particles produced in this decay. One of the possible explanations of the observed effect could be light dark-matter (DM) particles produced via a scalar mediator field. We give simple arguments, that a combined analysis of the \(B\to KM_{X}\) and \(B\to K^{*}M_{X}\) reactions would be a clean probe of the scalar mediator scenario: (i) making use of an observed value \({\cal B}(B\to KM_{X})\simeq 5.4\,{\cal B}(B\to K\nu\bar{\nu})_{\rm SM}\) and (ii) assuming that the effect is due to the light dark matter coupling to the top quark via a _scalar_ mediator field, one finds an upper limit \({\cal B}(B\to K^{*}M_{X})<2.8\,{\cal B}(B\to K^{*}\nu\bar{\nu})_{\rm SM}\). Within the discussed scenario, this upper limit does not depend on the mass of the scalar mediator nor on the specific details of the unobserved dark-matter particles in the final state.
_1. Introduction._ A recent Belle II observation [1] of \(B\to KM_{X}\) (for which also the notation \(B\to K\not{E}\), \(\not{E}\) the missing energy, is used) at the level much exceeding the Standard Model (SM) prediction for \(B\to K\nu\bar{\nu}\),
\[{\cal B}(B^{+}\to K^{+}M_{X})=(2.4\pm 0.7)\times 10^{-5}\simeq(5.4\pm 1.5)\,{ \cal B}(B^{+}\to K^{+}\nu\bar{\nu})_{\rm SM}, \tag{1}\]
opened a window for immediate discussions of possible new physics effects capable of explaining this result (see e.g. recent publications [2; 3; 4; 5; 6; 7; 8]). One of the discussed options is the decay into Dark Matter (DM) particles [6] with multiple scenarios for the content of these particles and the possible mediators. The goal of this short communication is to remark that combining the present Belle II result for \(B\to KM_{X}\) with the hypothesis of a DM origin of the enhancement of \(\Gamma(B\to KM_{X})\) coupled to the SM particles via a _scalar mediator field_ leads to interesting and rather robust constraints independent of further details of the DM model.
2. _The \(B\to(K,K^{*})\bar{\chi}\chi\) decays via scalar mediator \(\phi\)_. As an example we consider a simple but rather general Lagrangian describing the interaction of DM particles \(\chi\) with the top quark via a scalar mediator field \(\phi\)[9; 10]:
\[{\cal L}_{\rm int}=-\frac{ym_{t}}{v}\phi\,\bar{t}t-\kappa\phi\bar{\chi}\chi\;, \tag{2}\]
The corresponding effective Lagrangian describing the flavour-changing neutral current (FCNC) \(b\to s\phi\) vertex has the form [9; 10]
\[{\cal L}_{b\to s\phi}=g_{b\to s\phi}\,\phi\,\bar{s}_{L}b_{R}+{\rm h.c.},\qquad g _{b\to s\phi}=\frac{ym_{b}}{v}\frac{3\sqrt{2}G_{F}m_{q}^{2}V_{qs}^{*}V_{qb}}{16 \pi^{2}} \tag{3}\]
This interaction leads to the following amplitudes of the \(B\to(K,K^{*})\bar{\chi}\chi\) decay via the \(\phi\) mediator:
\[A(B(p)\to K(p-q)\bar{\chi}(k)\chi(q-k)) = -i\langle K(p-q)\bar{\chi}(k)\chi(q-k)|L_{b\to s\phi}|B(p)\rangle \tag{4}\] \[= \langle\bar{\chi}\chi|\bar{\chi}\chi|0\rangle\kappa\frac{1}{M_{ \phi}^{2}-q^{2}}g_{b\to s\phi}\langle K(p-q)|\bar{s}_{L}b_{R}|B(p)\rangle\] \[A(B(p)\to K^{*}(p-q)\bar{\chi}(k)\chi(q-k)) = -i\langle K^{*}(p-q)\bar{\chi}(k)\chi(q-k)|L_{b\to s\phi}|B(p)\rangle\] (5) \[= \langle\bar{\chi}\chi|\bar{\chi}\chi|0\rangle\kappa\frac{1}{M_{ \phi}^{2}-q^{2}}g_{b\to s\phi}\langle K^{*}(p-q)|\bar{s}_{L}b_{R}|B(p),\]
where
\[\langle\bar{\chi}\chi|\bar{\chi}\chi|0\rangle\equiv\langle\bar{\chi}(k)\chi(q -k)|\bar{\chi}(0)\chi(0)|0\rangle, \tag{6}\]
and \(q\) is the momentum of the outgoing \(\bar{\chi}\chi\) pair of unobserved DM particles, \(M_{X}^{2}\equiv q^{2}\) is the missing mass squared. Using these amplitudes, one can calculate \(d\Gamma(B\to(K,K^{*})\bar{\chi}\chi)/dq^{2}\), where the summation over polarizations of the DM particles \(\chi\) and the integration over their phase space are performed leading to
\[\sum_{\chi\,{\rm polar}}|\langle\bar{\chi}(k_{1})\chi(k_{2})|\bar{\chi}(0)\chi (0)|0\rangle|^{2}\delta(m_{\chi}^{2}-k_{1}^{2})\delta(m_{\chi}^{2}-k_{2}^{2}) \theta(k_{10})\theta(k_{20})\delta(q-k_{1}-k_{2})dk_{1}dk_{2}=\Pi_{\chi}(q^{2}). \tag{7}\]
We are interested in the ratio of the differential distributions in the two reactions proceeding via the scalar mediator \(\phi\):
\[R^{(\phi)}_{K^{*}/K}(q^{2})=\frac{d\Gamma(B\to K^{*}\bar{\chi}\chi)/dq^{2}}{d \Gamma(B\to K\bar{\chi}\chi)/dq^{2}}. \tag{8}\]
For this ratio, the explicit form of \(\Pi_{\chi}(q^{2})\) is of no importance: it is the same function in \(d\Gamma(B\to K\bar{\chi}\chi)/dq^{2}\) and \(d\Gamma(B\to K^{*}\bar{\chi}\chi)/dq^{2}\) and therefore drops out in the ratio. It is also obvious that the specific properties of the DM particles \(\chi\) are of no importance either: the final estimate holds for the DM particles independent of their spins. Moreover, the ratio is not sensitive to the properties of the scalar mediator \(\phi\) as its propagator also cancels in the ratio. Essential for the ratio (8) is merely _the operator structure of the vertex_, \(\phi\,\bar{s}_{L}b_{R}\). Using the QCD equations of motion, it is straightforward to calculate the amplitudes
\[\langle K|\bar{s}_{L}b_{R}|B\rangle = \frac{1}{2}\langle K|\bar{s}(1-\gamma_{5})b|B\rangle=\frac{1}{2} \langle K|\bar{s}b|B\rangle=\frac{1}{2}\frac{M_{B}^{2}-M_{K}^{2}}{m_{b}-m_{s} }f_{0}^{B\to K}(q^{2}),\] \[\langle K^{*}|\bar{s}_{L}b_{R}|B\rangle = \frac{1}{2}\langle K^{*}|\bar{s}(1-\gamma_{5})b|B\rangle=-\frac{1 }{2}\langle K^{*}|\bar{s}\gamma_{5}b|B\rangle=-i(\epsilon q)\frac{M_{K^{*}}} {m_{b}+m_{s}}A_{0}^{B\to K^{*}}(q^{2}), \tag{9}\]
where the dimensionless form factors \(f_{0}\) and \(A_{0}\) are well-known quantities parametrizing the \(\langle K|\bar{s}\gamma_{p}b|B\rangle\) and \(\langle K^{*}|\bar{s}\gamma_{\mu}\gamma_{5}b|B\rangle\) amplitudes [11]. The decay rates of interest then take the form:
\[\frac{d\Gamma\left(B\to K\bar{\chi}\chi\right)}{dq^{2}} = \left|\frac{g_{b\to s\phi}\,\kappa}{M_{\phi}^{2}-q^{2}}\right|^{2}\frac{\lambda^{1/2}(M_{B}^{2},M_{K}^{2},q^{2})}{16\pi M_{B}^{3}}\,|\langle K|\bar{s}_{L}b_{R}|B\rangle|^{2}\,\Pi_{\chi}(q^{2}), \tag{10}\]
\[\frac{d\Gamma\left(B\to K^{*}\bar{\chi}\chi\right)}{dq^{2}} = \left|\frac{g_{b\to s\phi}\,\kappa}{M_{\phi}^{2}-q^{2}}\right|^{2}\frac{\lambda^{1/2}(M_{B}^{2},M_{K^{*}}^{2},q^{2})}{16\pi M_{B}^{3}}\sum_{K^{*}\mbox{-polar}}|\langle K^{*}|\bar{s}_{L}b_{R}|B\rangle|^{2}\,\Pi_{\chi}(q^{2}). \tag{11}\]
Performing summation over the \(K^{*}\) polarizations, we find for the ratio (8):
\[R^{(\phi)}_{K^{*}/K}(q^{2})=\frac{\lambda^{3/2}(M_{B}^{2},M_{K^{*}}^{2},q^{2}) }{\lambda^{1/2}(M_{B}^{2},M_{K}^{2},q^{2})(M_{B}^{2}-M_{K}^{2})^{2}}\left[\frac {m_{b}-m_{s}}{m_{b}+m_{s}}\right]^{2}\left|\frac{A_{0}(q^{2})}{f_{0}(q^{2})} \right|^{2}. \tag{12}\]
_3. Numerical estimates._ For the form factors, we make use of the results from [12] and [13] in the form of convenient parametrizations of [14]:
\[f_{0}(q^{2}) = \frac{0.33}{1-0.7\,r_{V}+0.27\,r_{V}^{2}},\quad r_{V}=q^{2}/m_{B_{s}^{*}}^{2}, \tag{13}\] \[A_{0}(q^{2}) = \frac{0.37}{(1-0.46\,r_{P})(1-r_{P})},\quad r_{P}=q^{2}/m_{B_{s}}^{2}. \tag{14}\]
We then come to the following numerical prediction for the ratio of the differential distributions in the reactions \(B\to K\phi\to K\bar{\chi}\chi\) and \(B\to K^{*}\phi\to K^{*}\bar{\chi}\chi\), shown in Fig. 1. Notice that using more recent parametrizations for the form factors, e.g., those from [15; 16], does not lead to any visible changes of the results in Fig. 1.
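As a rough numerical check of Eqs. (12)-(14), the ratio can be evaluated in a few lines of code. The sketch below is ours, not part of the original analysis; the meson and quark masses are approximate PDG-like inputs inserted for illustration, and the precise numbers (in particular the value of the ratio as \(q^{2}\to 0\)) depend on these choices, especially the quark masses.

```python
import numpy as np

# Illustrative inputs in GeV (approximate PDG-like values; assumptions, not from the text)
MB, MK, MKst = 5.279, 0.494, 0.892   # B, K, K* meson masses
mb, ms = 4.18, 0.093                 # b- and s-quark masses
mBs, mBs_st = 5.367, 5.415           # B_s and B_s* pole masses entering Eqs. (13)-(14)

def lam(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2.0*(a*b + b*c + a*c)

def f0(q2):   # Eq. (13)
    rV = q2 / mBs_st**2
    return 0.33 / (1.0 - 0.7*rV + 0.27*rV**2)

def A0(q2):   # Eq. (14)
    rP = q2 / mBs**2
    return 0.37 / ((1.0 - 0.46*rP) * (1.0 - rP))

def R_KstK(q2):   # Eq. (12)
    num = lam(MB**2, MKst**2, q2)**1.5
    den = lam(MB**2, MK**2, q2)**0.5 * (MB**2 - MK**2)**2
    return num / den * ((mb - ms) / (mb + ms))**2 * (A0(q2) / f0(q2))**2

for q2 in (0.0, 2.0, 5.0):   # q2 = M_X^2 in GeV^2
    print(f"R_{{K*/K}}(q^2 = {q2:.1f}) = {R_KstK(q2):.2f}")
```

With these inputs the ratio comes out close to unity at \(q^{2}=0\) and decreases towards larger \(q^{2}\), in line with the behavior discussed next.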
_4. Discussion_. As clear from the plot, one can obtain an upper limit:
\[\delta\Gamma(B\to K^{*}M_{X})_{\phi}<1.05\,\delta\Gamma(B\to KM_{X})_{\phi}. \tag{15}\]
This upper limit is obtained from Fig. 1 at \(M_{X}^{2}=q^{2}\to 0\). The upper limit of Eq. (15) is a rather conservative estimate and in practice one may expect a larger suppression of \(\delta\Gamma(B\to K^{*}M_{X})_{\phi}\) compared to \(\delta\Gamma(B\to KM_{X})_{\phi}\): The \(q^{2}\)-region providing the dominant contribution to the \(B\to(K,K^{*})\bar{\chi}\chi\) cross sections is mainly determined by the resonant structure of the \(\phi\)-propagator and corresponds to nonzero \(q^{2}\) thus leading to a further suppression \(\delta\Gamma(B\to K^{*}M_{X})_{\phi}<\delta\Gamma(B\to KM_{X})_{\phi}\).
Now, assuming that the observed enhancement of \(\Gamma(B\to K\nu\bar{\nu})_{\rm SM}\) is due to unobserved DM particles interacting via a scalar mediator we can write
\[\Gamma(B\to KM_{X}) = \Gamma(B\to K\nu\bar{\nu})_{\rm SM}+\delta\Gamma(B\to KM_{X})_{\phi}, \tag{16}\] \[\Gamma(B\to K^{*}M_{X}) = \Gamma(B\to K^{*}\nu\bar{\nu})_{\rm SM}+\delta\Gamma(B\to K^{*}M_{X})_{ \phi}. \tag{17}\]
For \({\cal B}(B\to K^{*}\nu\bar{\nu})/{\cal B}(B\to K\nu\bar{\nu})\) one obtains \(2.5\pm 0.7\) making use of old estimates [19; 20; 21] or \(2.23\pm 0.40\) using the recent analysis of [15]. Within the quoted uncertainties these two values are in excellent agreement with each other.
Collecting together the central values of the discussed constraints:
\[\Gamma(B\to K^{*}\nu\bar{\nu})_{\rm SM} \simeq 2.5\,\Gamma(B\to K\nu\bar{\nu})_{\rm SM}\quad\quad[\mbox{ Theoretical estimates}] \tag{18}\] \[\delta\Gamma(B\to KM_{X})_{\phi} \simeq 4.4\,\Gamma(B\to K\nu\bar{\nu})_{\rm SM}\quad\quad[\mbox{ Belle measurement}]\] (19) \[\delta\Gamma(B\to K^{*}M_{X})_{\phi} < 1.05\,\delta\Gamma(B\to KM_{X})_{\phi}\quad[\mbox{Our result here}], \tag{20}\]
we then obtain the following upper bound for the \(\Gamma(B\to K^{*}M_{X})\) within the _scalar mediator scenario_:
\[\Gamma(B\to K^{*}M_{X})<2.8\,\Gamma(B\to K^{*}\nu\bar{\nu})_{\rm SM}. \tag{21}\]
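For transparency, Eq. (21) follows from elementary arithmetic on the inputs above: \(\Gamma(B\to K^{*}M_{X})\leq\left[2.5+1.05\times 4.4\right]\Gamma(B\to K\nu\bar{\nu})_{\rm SM}\simeq 7.1\,\Gamma(B\to K\nu\bar{\nu})_{\rm SM}\simeq 2.8\,\Gamma(B\to K^{*}\nu\bar{\nu})_{\rm SM}\), where the last step again uses Eq. (18).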
Therefore, for the case of a scalar mediator, the expected enhancement of the decay rate \(\Gamma(B\to K^{*}M_{X})\) compared to the SM value is approximately two times smaller than the enhancement for \(\Gamma(B\to KM_{X})\), Eq. (1). This constraint is well compatible with the present experimental limit [17]
\[{\cal B}(B\to K^{*}M_{X})\leq 2.7\times 10^{-5}\leq 2.7\,{\cal B}(B\to K^{*}\nu \bar{\nu})_{\rm SM}, \tag{22}\]
where at the second step we have used a recently updated estimate \({\cal B}(B\to K^{*}\nu\bar{\nu})_{\rm SM}=(1.03\pm 0.15)\times 10^{-5}\)[15].
On the other hand, if the "mediator" is a spin-1 field, it is natural to expect a similar enhancement for both \(K\) and \(K^{*}\) mesons in the final state, \({\cal B}(B\to K^{*}M_{X})/{\cal B}(B\to K^{*}\nu\bar{\nu})_{\rm SM}\simeq{ \cal B}(B\to KM_{X})/{\cal B}(B\to K\nu\bar{\nu})_{\rm SM}\).
We therefore conclude that the combined analysis of \({\cal B}(B\to K^{*}M_{X})\) and \({\cal B}(B\to K\,M_{X})\) may provide a useful probe of the scalar mediator scenario [e.g., may exclude this scenario if the obtained upper limit is violated]. Since the obtained constraints are not sensitive to the scalar mediator mass \(M_{\phi}\), the latter should then be determined from other observables.
_Acknowledgments._ The research was carried out within the framework of the program "Particle Physics and Cosmology" of the National Center for Physics and Mathematics.
|
2309.04853 | Level-crossings reveal organized coherent structures in a turbulent time
series | In turbulent flows, energy production is associated with highly organized
structures, known as coherent structures. Since these structures are
three-dimensional, their detection remains challenging in the most common
situation, when single-point temporal measurements are considered. While
previous research on coherent structure detection from time series employs a
thresholding approach, the thresholds are ad-hoc and vary significantly from
one study to another. To eliminate this subjective bias, we introduce the
level-crossing method and show how specific features of a turbulent time series
associated with coherent structures can be objectively identified, without
assigning a priori any arbitrary threshold. By using two wall-bounded turbulence
time series datasets, we successfully extract through level-crossing analysis
the impacts of coherent structures on turbulent dynamics, and therefore, open
an alternative avenue in experimental turbulence research. By utilizing this
framework further we identify a new metric, characterized by a statistical
asymmetry between peaks and troughs of a turbulent signal, to quantify
inner-outer interaction in wall turbulence. Moreover, a connection is
established between extreme value statistics and level-crossing analysis,
thereby allowing additional possibilities to study extreme events in other
dynamical systems. | Subharthi Chowdhuri, Tirtha Banerjee | 2023-09-09T17:58:19Z | http://arxiv.org/abs/2309.04853v1 | # Level-crossings reveal organized coherent structures in a turbulent time series
###### Abstract
In turbulent flows, energy production is associated with highly organized structures, known as coherent structures. Since these structures are three-dimensional, their detection remains challenging in the most common situation, when single-point temporal measurements are considered. While previous research on coherent structure detection from time series employs a thresholding approach, the thresholds are ad-hoc and vary significantly from one study to another. To eliminate this subjective bias, we introduce the level-crossing method and show how specific features of a turbulent time series associated with coherent structures can be objectively identified, without assigning a priori any arbitrary threshold. By using two wall-bounded turbulence time series datasets, we successfully extract through level-crossing analysis the impacts of coherent structures on turbulent dynamics, and therefore, open an alternative avenue in experimental turbulence research. By utilizing this framework further we identify a new metric, characterized by a statistical asymmetry between peaks and troughs of a turbulent signal, to quantify inner-outer interaction in wall turbulence. Moreover, a connection is established between extreme value statistics and level-crossing analysis, thereby allowing additional possibilities to study extreme events in other dynamical systems.
Introduction
Coherent structures in turbulent flows, ranging from astrophysical to engineered, to atmospheric systems, are best described by their phenomenology, such as: (a) their characteristic scales are comparable to the integral scales [1]; (b) they induce non-Gaussian fluctuations in the turbulent variables [2]; and (c) they have large contributions to turbulent fluxes and kinetic energy [3]. Geometrically, these structures are three-dimensional and can take various shapes based on the types of turbulent flow. Examples include granular patterns in astrophysical flows [4]; hairpin structures in neutral wall-bounded flows [5]; and counter-rotating roll vortices in atmospheric turbulence [6]. Notwithstanding their significance in drag reduction [7], consideration of coherent structures is also important for weather and climate models, since disregarding them can cause significant uncertainties in turbulence parameterization [8].
Although these structures can be visually recognized from three-dimensional numerical simulations, smoke visualization experiments, particle velocimetry measurements, or satellite images, it remains challenging to detect them in the most common form of turbulence experiments, where the variables are measured at a single point in time. Previous studies on coherent structure detection from turbulent time series employ a thresholding approach, where the thresholds are set either in the temporal or spectral domain.
Regarding the spectral domain, Perry and Chong [9] demonstrated how the spectra of streamwise velocity fluctuations in time encoded the information about hairpin eddy structures by displaying a \(-1\) spectral power law. Therefore, by choosing an appropriate cut-off wavelength (\(\lambda\)), the effect of hairpin eddy structures on the velocity statistics could be inferred from a single-point time series [10]. Apart from hairpin eddies, the contributions from very large scale motions (VLSMs) to velocity statistics are detected by setting \(\lambda\) comparable to the boundary layer depth [11]. By contrast, in the temporal domain, thresholds are applied directly on the time series and historically were chosen in a manner so that the frequency of the detected structures matched with the smoke visualization experiments [12].
However, the thresholding procedure in the temporal domain suffers from subjectivity as the threshold values differ significantly from one study to another [13]. Additionally, the rationale behind their choices also varies, since some studies consider the thresholds where the probability density functions (PDFs) of the time series differ from a Gaussian, whereas others choose them from a quadrant perspective [14]. Conversely, the thresholds in the spectral
domain often require information about certain parameters (such as boundary layer height) whose measurements are rarely available [15]. To eliminate these difficulties, we introduce a level-crossing technique through which the coherent structures can be detected from time series without assigning a priori any tunable thresholds or external parameters.
Although a handful of previous studies, such as those by Tardu and Bauer [16] and Poggi and Katul [17], have used level-crossing analysis to study the Reynolds stress production and dissipation of turbulence kinetic energy in wall turbulence, we demonstrate that this approach can be generalized further to detect coherent structures. In a level-crossing method [18], one seeks a statistical description of time scales \(t_{p}|_{\alpha}\) up to which a stochastic variable \(f(t)\) remains larger or smaller than \(\overline{f}\pm(\alpha\times\sigma_{f})\), where \(t\) is time, \(\overline{f}\) is the temporal mean, \(\sigma_{f}\) is the standard deviation, and \(\alpha\) is a given threshold. A brief review of the level-crossing approach, which is a generalization of the zero-crossing or persistence analysis where the \(\alpha\) level is set at zero [16; 20], is provided by Friedrich _et al._ [19]. For many different turbulent flows, the PDFs (\(P(t_{p}|_{\alpha=0})\)) of \(t_{p}|_{\alpha=0}\) are power-laws with an exponential cutoff [21; 22; 23]. On the one hand, the exponential cutoff represents a Poisson distribution, associated with \(t_{p}\) values larger than the integral scales \(\gamma\) [24]. On the other hand, Blake and Lindsey [18] show that \(P(t_{p}|_{\alpha})\) becomes a Poisson distribution when \(\alpha\) values are substantially large.
Several points are now considered. First, the Poisson distribution is associated with a stochastic process for which the autocorrelation function stays zero at all time scales [20]. Second, in a turbulent time series, the measurements become weakly correlated with each other at scales larger than \(\gamma\), since the autocorrelation functions drop to zero [24]. Third, the characteristic scales of coherent structures are comparable to \(\gamma\). Fourth, in a randomly-shuffled (RS) signal, the autocorrelation functions cease to exist [25]. By combining all these aspects, we hypothesize that the thresholds to detect coherent structures could be objectively determined as that particular \(\alpha\) for which \(P(t_{p}|_{\alpha})\) of the original signal matches with its RS counterpart. Intrigued by this possibility, we ask: (1) By changing \(\alpha\), what turbulent flow features are revealed? (2) Do the detected coherent structures from the critical \(\alpha\) value obey the flow physics? (3) Can we identify the organizational aspects of coherent structures through the level-crossing approach?
We focus our attention on wall-bounded turbulence, since in such flows the properties of coherent structures are well-established [3]. We use two hot-wire temporal datasets, collected from a zero-pressure gradient turbulent boundary layer generated in the Melbourne wind
tunnel [26]. The rest of the article is arranged in three sections: in Section II, we provide brief descriptions of the experimental datasets and methodology; in Section III, we introduce the results and discuss them; and lastly, in Section IV, we summarize the key takeaways and provide the scope for further research.
## II Dataset and Methodology
Corresponding to both hot-wire datasets, the friction Reynolds numbers (\(Re\)) are of the order of \(10^{4}\) as illustrated in Baars _et al._ [27]. The wall-normal heights are normalized by friction velocity (\(u_{*}\)) and kinematic viscosity (\(\nu\)) and denoted as \(y^{+}\), where \({}^{+}\) refers to wall-scaling. We restrict \(y^{+}\leq 10^{4}\), up to which the flow is fully turbulent [28]. Out of the two datasets, one was sampled at a frequency (\(f_{s}\)) of 20 kHz (T1 dataset), while the other at 44 kHz (T2 dataset). Note that for both datasets, one probe is fixed (reference probe) while the others traverse across heights and are synchronized with the reference probe [27]. Moreover, for the T1 dataset, the time series of streamwise velocity were collected over three acquisition cycles, each of 120-s duration. Therefore, the results presented for the T1 dataset are ensemble-averaged over these three measurement cycles. However, for the T2 dataset, only a single cycle of 360-s duration was used. For our purposes, we consider the streamwise velocity fluctuations (\(u^{\prime}\)) after subtracting the temporal mean (\(\overline{u}\)). Subsequently, on these \(u^{\prime}\) time-series we apply level-crossing and event-synchronization analyses, whose rationales are discussed below.
### Level-crossing analysis
To demonstrate the philosophy behind level-crossing analysis, we use a segment of a \(u^{\prime}\) time series (normalized by its standard deviation \(\sigma_{u}\)) from the T1 dataset at height \(y^{+}=66.84\) (Fig. 1a). Corresponding to this time series, one can generate its telegraphic approximations (TA) by denoting the values above a threshold to be 1 and 0 otherwise [29]. In the bottom panels of Fig. 1a, we show three TA sequences at threshold levels \(\alpha=0,2,-2\). One can clearly see that as the threshold levels are increased, the timescales \(t_{p}|_{\alpha}\) of the TA patterns become substantially large. In fact, if \(t_{p}\) values become comparable to the integral scales of \(u^{\prime}\) (\(\gamma_{u}\)), one would expect the TA patterns associated with those \(\alpha\) levels to resemble
a random configuration.
This indeed appears to be the case when one compares the PDFs of \(t_{p}|_{\alpha}\) for the three \(\alpha\) values between the original and randomly-shuffled (RS) signals (Fig.1b). As opposed to \(\alpha=0\), for \(\alpha=2,-2\), \(P(t_{p}|_{\alpha})\) of \(u^{\prime}\) signal has an excellent agreement with its RS counterpart. This can be further confirmed through the q-q plots, where the \(t_{p}|_{\alpha}\) values between the original and RS signals follow a straight line with \(45^{\circ}\) slope for \(\alpha=-2,2\), thereby indicating they are both sampled from similar distributions (not shown here).
It is also interesting to investigate how the energy spectrum of the TA patterns changes as \(\alpha\) is varied systematically. Sreenivasan and Bershadskii [29] showed that the energy spectrum
Figure 1: (a) A segment of a \(u^{\prime}\) time series and its telegraphic approximations (TA) at different threshold levels (\(\alpha\)) are shown from \(y^{+}=66.84\) of the T1 dataset. The level-crossing time scales at different \(\alpha\) values are denoted as \(t_{p}|_{\alpha}\). (b) The PDFs of \(t_{p}^{+}|_{\alpha}\) are shown for \(\alpha=0,2,-2\), corresponding to the original and randomly-shuffled (RS) signals. (c) The energy spectrum of the time series at \(y^{+}=66.84\) is compared with the TA series at different \(\alpha\) levels. The power-laws \(-1\) and \(-5/3\) are shown in dash-dotted gray lines. (d) Normalized mean time scales (\(\overline{t_{p}|_{\alpha}}/\gamma_{u}\)) are plotted against \(\alpha\) values, by considering all the heights from the T1 dataset. The gray shaded colors indicate different heights as per the color-bar denoting \(\log_{10}(y^{+})\) in Fig. 3. The horizontal blue dash-dotted line indicates \(\overline{t_{p}|_{\alpha}}=\gamma_{u}\).
of the TA patterns corresponding to the \(\alpha=0\) level preserves the information about the spectral power laws, albeit with some change depending on the Reynolds number of the flow. In Fig. 1c we show the energy spectra of the TA patterns with different \(\alpha\) values and compare the same with the original \(u^{\prime}\) signal at \(y^{+}=66.84\) (green line with circular markers). One can see that at the \(\alpha=0\) level (black line with circles), the energy spectrum shows a \(-1\) spectral scaling at smaller frequencies similar to the original signal, but at larger frequencies the \(-5/3\) scaling law appears to be a little different. However, at large enough \(\alpha\) values (indicated by deep red or blue colors), the scaling laws disappear from the TA energy spectra and they nearly attain a flat shape as expected for a random signal.
More importantly, for all the available heights from the T1 dataset, if one plots the mean time scales (\(\overline{t_{p}|_{\alpha}}\)) against the \(\alpha\) values, then \(\overline{t_{p}|_{\alpha}}\) exceeds \(\gamma_{u}\) considerably as \(\alpha\) increases (Fig.1d). Therefore, one can conclude that by increasing \(\alpha\) a critical value is reached, using which one could study certain flow features whose characteristic scales are comparable to \(\gamma_{u}\).
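For readers who wish to reproduce this construction, the TA sequences and the level-crossing time scales reduce to a few lines of code. The following is a minimal illustrative sketch (our own, not the authors' implementation), assuming `u` is a one-dimensional NumPy array sampled at `fs` Hz:

```python
import numpy as np

def level_crossing_times(u, alpha, fs):
    """Durations t_p for which the standardized signal stays on one side of
    the level alpha, i.e., the run lengths of the telegraphic approximation."""
    x = (u - u.mean()) / u.std()
    ta = (x > alpha).astype(int)              # telegraphic approximation (TA)
    flips = np.flatnonzero(np.diff(ta)) + 1   # indices where the TA switches
    bounds = np.concatenate(([0], flips, [ta.size]))
    n_p = np.diff(bounds)                     # event lengths N_p in samples
    return n_p / fs                           # t_p = N_p / f_s in seconds

# Comparison with a randomly-shuffled (RS) surrogate of the same samples
rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)              # placeholder series for the demo
tp = level_crossing_times(u, alpha=2.0, fs=20_000.0)
tp_rs = level_crossing_times(rng.permutation(u), alpha=2.0, fs=20_000.0)
```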
### Event-Synchronization analysis
By conducting an event-synchronization analysis, one seeks to describe how the positive and negative patterns (with respect to \(\alpha=0\) level) in a turbulent signal are coupled with each other across different wall-normal heights. This information is important to establish
Figure 2: Shannon entropies of the synchronized event lengths are plotted against different time lags (\(\Delta t^{+}\)) for the (a) T1 and (b) T2 datasets, where the values are normalized by the entropies of the synchronized event lengths computed for the full signal.
how the non-local influences impact the organization of turbulent events in wall-bounded flows.
Before explaining this analysis, it is prudent to briefly describe the probe arrangements for the T1 and T2 datasets. For both datasets, one probe is fixed at a location (either in the inner or outer layer), whereas the other probes travel across \(y^{+}\) while at all times being synchronized with the reference probe. Specifically, for the T1 dataset, the reference probe is located at \(y^{+}=4.3\), and for the T2 dataset it is at \(y^{+}=474\), where the outer peaks appear in the turbulence spectra [27].
For event synchronization analysis, we consider a joint distribution between the positive and negative patterns corresponding to the velocity signals from the reference probe (\(u^{\prime}_{\text{ref}}\)) and from a traveling probe situated at any particular height (\(u^{\prime}\)). This joint distribution is studied in terms of a binary sequence whose values are 1 when \(u^{\prime}_{\text{ref}}\) and \(u^{\prime}\) are simultaneously positive or negative. On the other hand, when the signs mismatch, the sequence attains zero. We refer to this as an overlap binary sequence and compute its time scales (\(t_{p}\)) based on the duration where it stays at 1 or 0. To quantify the synchronization, Shannon entropies of the overlap event lengths (\(N_{p}\)) are considered and compared with a RS sequence (by taking a ratio), which is supposedly devoid of any coupling effect. Mathematically, Shannon entropy of the overlap event lengths is bounded within, \(0\leq H_{\text{n}}^{x_{\text{ref}},x}(N_{p})\leq 1\), where \(x_{\text{ref}}\) is the reference signal, \(x\) is the signal from the travelling probe, and 1 (0) indicates no (complete) synchronization between the two signals.
To incorporate the effect of turbulent scales, the aforementioned procedure is operated on \(\Delta u_{\text{ref}}\) and \(\Delta u\) signals, where \(\Delta u\) denotes velocity differences at a time lag \(\Delta t\). The time lags are normalized with wall-scaling and denoted as \(\Delta t^{+}\). The synchronized entropy values at any \(\Delta t^{+}\) are scaled with the entropy values for the full signal (\(H_{\text{n}}^{u^{\prime}_{\text{ref}},u^{\prime}}(N_{p})\)). From Fig. 2 one can see that irrespective of the datasets or \(y^{+}\) values, the scaled entropies decrease with increasing scales and approach unity at \(\Delta t^{+}=10^{3}\), which physically represents the time scales of the outer-layer structures [27]. This implies that at scales comparable to the outer-layer structures, the synchronized entropies of the velocity differences become equal to the full-signal values. Therefore, one could infer that the positive and negative patterns in the \(u^{\prime}\) signals at any \(y^{+}\) value carry the signatures of the structures residing in the outer layer. Further implications of this phenomenon are discussed in Section III.
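A minimal sketch of this synchronization diagnostic is given below (our illustration, not the authors' code): the overlap binary sequence marks where the two probes' fluctuations share a sign, its run lengths are collected, and the Shannon entropy is normalized against a randomly-shuffled surrogate in the spirit of Eq. (A1) in Appendix A.

```python
import numpy as np

def run_lengths(b):
    """Lengths of the constant runs in a binary (0/1) sequence b."""
    flips = np.flatnonzero(np.diff(b)) + 1
    bounds = np.concatenate(([0], flips, [b.size]))
    return np.diff(bounds)

def shannon_entropy(lengths):
    """Shannon entropy of the distribution of run lengths."""
    _, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def sync_entropy_ratio(u_ref, u, seed=0):
    """Entropy of overlap-event lengths relative to a shuffled surrogate,
    cf. Eq. (A1); a value of 1 (0) indicates no (complete) synchronization."""
    overlap = (np.sign(u_ref) == np.sign(u)).astype(int)
    rng = np.random.default_rng(seed)
    h_rs = shannon_entropy(run_lengths(rng.permutation(overlap)))
    return h_rs / shannon_entropy(run_lengths(overlap))
    # For the scale-wise version, apply the same to velocity increments,
    # e.g., du = u[dt:] - u[:-dt] at a lag of dt samples.
```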
## III Results and Discussion
### Level crossings and extreme values
To address the research questions, we begin with the probability distributions of event lengths (\(P(N_{p}|_{\alpha})\)) as \(\alpha\) is varied. We consider \(N_{p}|_{\alpha}\) since it is a discrete variable and is represented through probability mass functions whose computation is insensitive to binning. Note that \(N_{p}|_{\alpha}\) and \(t_{p}|_{\alpha}\) are interchangeable through \(t_{p}|_{\alpha}=N_{p}|_{\alpha}/f_{s}\). To characterize \(P(N_{p}|_{\alpha})\), we consider its Shannon entropy compared with a RS sequence of \(u^{\prime}\). The entropy is denoted as \(H_{\text{n}}^{u^{\prime}}(N_{p}|_{\alpha})\), whose mathematical expression is provided in Eq. (A1) of Appendix A. Furthermore, \(H_{\text{n}}^{u^{\prime}}(N_{p}|_{\alpha})\) is bounded between \(0\leq H_{\text{n}}^{u^{\prime}}(N_{p}|_{\alpha})\leq 1\) with 1 indicating a random configuration. From Fig. 3a, one observes that the effect of changing \(\alpha\) either from the positive or negative side on \(H_{\text{n}}^{u^{\prime}}(N_{p}|_{\alpha})\) is asymmetric. The vertical profiles of \(H_{\text{n}}^{u^{\prime}}(N_{p}|_{\alpha\geq 0})\) show an
inflection point around \(y^{+}\approx 70\), while when \(\alpha\) is approached from the negative side another inflection point appears at \(y^{+}\approx 12\). The position \(y^{+}=70\) indicates the location where the outer layer begins [30], whereas at \(y^{+}=12\) the inner-layer structures are active [27]. As shown later in Section III.3, this asymmetrical progression is related to inner-outer interaction in wall turbulence.
However, these inflection points disappear with increasing \(\alpha\). In fact, \(H_{\mathrm{n}}^{u^{\prime}}(N_{p}|_{\alpha})\) tends towards unity at large \(\alpha\) values (Fig. 3b). This apparent randomness in \(N_{p}\) is associated with the fact that with increasing \(\alpha\), \(t_{p}|_{\alpha}\) become statistically comparable to \(\gamma_{u}\) (Fig. 1d). For accuracy purposes, we consider those \(\alpha\) values as the critical ones where \(H_{\mathrm{n}}^{u^{\prime}}(N_{p}|_{\alpha})\) crosses 0.8 (see Appendix A). These values are denoted as \(\alpha_{\mathrm{th}}^{P}\) and \(\alpha_{\mathrm{th}}^{N}\) (Fig. 3b), respectively, and any difference between them is correlated with the skewness of \(u^{\prime}\) (\(\mathcal{S}(u^{\prime})\), Fig. 5b).
Upon considering \(P(u^{\prime})\), one can see the samples that exceed these critical values reside in the PDF tails (Fig. 3c). For visualization purposes, before plotting \(P(u^{\prime})\), we scale the positive and negative \(u^{\prime}\) values with \(\alpha_{\mathrm{th}}^{P}\sigma_{u}\) and \(\alpha_{\mathrm{th}}^{N}\sigma_{u}\), respectively. Under this scaling, the values beyond \(\pm 1\) in Fig. 3c indicate those critical \(u^{\prime}\) samples exceeding either \(\alpha_{\mathrm{th}}^{P}\sigma_{u}\) (red-shaded regions) or \(\alpha_{\mathrm{th}}^{N}\sigma_{u}\) (blue-shaded regions). Specifically, from Fig. 3d, the time fractions (\(\mathcal{T}_{f}\)) associated with these critical samples (\(u^{\prime}_{\mathrm{th}}\)) are nearly 1-3% of the total sample length, and their values differ significantly from the ones obtained through a Gaussian distribution of \(u^{\prime}\) (\(\mathcal{T}_{f,\mathrm{G}}\)). Accordingly, \(P(|u^{\prime}_{\mathrm{th}}|/|\overline{u^{\prime}_{\mathrm{th}}}|)\) follows an exponential distribution (Fig. 3e), compliant with the theory of extreme value statistics [31; 32]. Note that we consider the absolute values of \(u^{\prime}_{\mathrm{th}}\), since their PDFs remain the same irrespective of the sign.
### Identifying coherent structures
Next, we establish that \(u^{\prime}_{\mathrm{th}}\) carry the signatures of the outer-layer coherent structures. In wall turbulence, the presence of hairpin structures organizes the streamwise velocity field into alternating high- and low-speed streaks [5]. This, in turn, induces positive and negative fluctuations in \(u^{\prime}\) signals. Through an event synchronization analysis (see Section II.2), we identify how well these positive and negative patterns are coupled with each other across \(y^{+}\). This is achieved through a scale-wise analysis of the Shannon entropies of overlap event lengths, normalized with a RS sequence devoid of any synchronization (\(H_{\mathrm{n}}^{\Delta u_{\mathrm{ref}},\Delta u}(N_{p})\), \(\Delta u=u^{\prime}(t+\Delta t)-u^{\prime}(t)\)). Note that \(H_{\mathrm{n}}^{\Delta u_{\mathrm{ref}},\Delta u}(N_{p})\) is bounded between 0 and 1, where 1
(0) indicates no (complete) synchronization. From Fig. 3f, one observes that the positive and negative events across all \(y^{+}\) values are most strongly coupled at scales \(\Delta t^{+}\approx 1000\) (\(\Delta t^{+}\) is the normalized time lag), representing the outer-layer structures [27]. This signifies that the events in \(u^{\prime}\), occurring at heights deep within the inner layer, preserve information about the outer-layer structures.
To extract that information, we conditionally sample the events based on whether they contain samples satisfying \(u^{\prime}\geq\alpha_{\rm th}^{P}\sigma_{u}\) or \(u^{\prime}\leq\alpha_{\rm th}^{N}\sigma_{u}\) (large, or l-type events) or not (small, or s-type events). This concept is graphically illustrated in Appendix B. The time scales of l- and s-type events are denoted as \(t_{p}|_{l}/\gamma_{u}\) and \(t_{p}|_{s}/\gamma_{u}\), respectively. In Figs. 4a-b, the contributions of l- or s-type events (\(\langle A_{uu}^{+}\rangle\), see Eq. (B1) in Appendix B) against their time scales to the streamwise velocity variance (\(\sigma_{u}^{2}\)) are plotted separately. Quite remarkably, most
Figure 4: The contours of event amplitudes (\(\langle A_{uu}^{+}\rangle\)) are plotted separately for the (a) l- and (b) s-type events from T1 dataset. The time scales of these events are denoted as \(t_{p}|_{l}/\gamma_{u}\) and \(t_{p}|_{s}/\gamma_{u}\), where \(\gamma_{u}\) is the integral time scale. (c) Fractional contributions of l-type events to the variance (\(\mathcal{V}_{f}|_{l}\)) and occupation time (\(\mathcal{T}_{f}|_{l}\)) are shown. The PDFs of (d) \(t_{p}|_{l}/\gamma_{u}\) and (e) \(t_{p}|_{s}/\gamma_{u}\) are shown from T1 dataset. (f) Shannon entropy ratios of \(N_{p}\) corresponding to l- and s-type events are plotted.
of the contributions of l-type events to \(\sigma_{u}^{2}\) come from the heights in and around \(y^{+}=474\), where the influence of the outer-layer structures are the strongest [27]. Conversely, s-type events contribute the most at heights \(y^{+}=12\), where the inner-layer structures reside [27]. Precisely, at heights \(y^{+}\geq 70\), the total contributions of l-type events to the velocity variance remain between 40-60%, although they only occupy \(\approx 20\%\) of the time (Fig. 4c). These contributions compare well with those from VLSMs in wall turbulence [11].
Moreover, the PDFs of \(t_{p}|_{l}/\gamma_{u}\) and \(t_{p}|_{s}/\gamma_{u}\) appear to be quite different (Figs. 4d-e). Specifically, \(P(t_{p}|_{l}/\gamma_{u})\) follows a log-normal distribution (verified with q-q plots), while \(P(t_{p}|_{s}/\gamma_{u})\) is a power-law of exponent \(-1.6\) with an exponential cutoff at scales comparable to \(\gamma_{u}\). Sreenivasan and Bershadskii [29] demonstrated that the log-normal distribution describes the size distributions of the dissipative structures, while we associate it with the l-type event sizes. Furthermore, by considering the Shannon entropies of event lengths, l-type events are more organized than the s-type ones (since \(H_{\rm n}^{u^{\prime}}(N_{p}|_{l})\ll H_{\rm n}^{u^{\prime}}(N_{p}|_{s})\)) as \(y^{+}\) approaches the outer layer (Fig. 4f). These outcomes confirm that the detected extremes in \(u^{\prime}\) carry the signatures of the outer-layer structures and are further utilized to infer the velocity scales and inner-outer interaction in wall turbulence. Although in Figs. 4a-b and d-e the T1 dataset is considered, similar findings are obtained for the T2 dataset also (see Fig. S1 in Supplementary material).
### Connections to the turbulent dynamics
We construct a velocity scale (\(\alpha_{\rm th}\sigma_{u}\)) for the outer-layer structures and plot their profiles against \(y^{+}\) in Fig. 5a. For \(70<y^{+}<10^{3}\) (i.e., the log-layer), this scale attains a near-constant value of \(\approx 5u_{*}\). It remains interesting to see whether this velocity scale, as obtained from the critical \(\alpha\) values, can better collapse the turbulence statistics among different experiments. This exercise is, however, outside the scope of the present study. On the other hand, to quantify the influence of outer-layer structures on turbulence organization from \(u^{\prime}\) time series, we consider the mean time scales at the \(\alpha=\alpha_{\rm th}\) level (\(T_{\rm th}=\overline{t_{p}|_{\alpha_{\rm th}}}\)). The wall-normalized mean time scales for the positive and negative side are denoted as \(T_{\rm th}^{P+}\) and \(T_{\rm th}^{N+}\), respectively, with their behaviors being very different. For instance, in the log-layer, \(T_{\rm th}^{P+}\) increases as \(\left(y^{+}\right)^{1/2}\), while \(T_{\rm th}^{N+}\) is nearly constant at \(10^{3}\). This increase of \(T_{\rm th}^{P+}\) can be explained by considering how the hairpin structures merge progressively to form VLSMs [5], whose characteristic scales
(\(\approx 10^{3}\) wall units,[33]) match with \(T_{\rm th}^{N+}\) values.
Unlike \(\alpha_{\rm th}^{N}/\alpha_{\rm th}^{P}\), the difference between \(T_{\rm th}^{P}\) and \(T_{\rm th}^{N}\) is anti-correlated with \(\mathcal{S}(u^{\prime})\) (Fig. 5b). In fact, for both datasets, the largest values of \(T_{\rm th}^{N}/T_{\rm th}^{P}\) are obtained when the skewness of \(u^{\prime}\) is nearly zero. Instead, we propose that the non-unity values of \(T_{\rm th}^{N}/T_{\rm th}^{P}\) are caused by coherent structures and would disappear for a phase-randomized (PR) surrogate, since randomizing the Fourier phases destroys any organizational aspects associated with coherent structures [34]. Clearly, from Fig. 5c, \(T_{\rm th}^{N}/T_{\rm th}^{P}\) approaches 1 for a PR time series (see red dash-dotted line). We use an IAAFT (iterative amplitude-adjusted Fourier transform) model for PR purposes, which preserves the signal PDFs and autocorrelation functions [25]. In fact, if only 10-50% of the Fourier phases are randomized, that itself has a significant effect on \(T_{\rm th}^{N}/T_{\rm th}^{P}\) (shown as
dash-dotted lines with lighter red shades). Contrarily, if \(P(u^{\prime})\) are transformed to Gaussian while maintaining the temporal structure (otherwise known as a Gaussian rank surrogate [35]), then \(T_{\rm th}^{N}/T_{\rm th}^{P}\) overlaps with the original (the gray dash-dotted line indicates the Gaussian rank surrogate in Fig. 5c). Hence, the temporal organization of the signal sets the values of \(T_{\rm th}^{N}/T_{\rm th}^{P}\). Accordingly, if one removes the outer-layer influences by choosing a Fourier cut-off filter at \(\lambda^{+}=7000\) and applies an inverse Fourier transform [27], it changes \(T_{\rm th}^{N}/T_{\rm th}^{P}\) considerably (see the blue dash-dotted line in Fig. 5c). By repeating the analysis on the ratios of the standard deviations or any other higher-order statistics (for instance, skewness and kurtosis) of \(t_{p}|_{\alpha_{\rm th}}\), the outcome remains the same. However, for illustration purposes, we only show the results in Fig. 5d corresponding to the standard deviations of \(t_{p}|_{\alpha_{\rm th}}\) (\(\sigma_{\rm th}^{N}/\sigma_{\rm th}^{P}\)). Thus, the statistical asymmetry between \(t_{p}|_{\alpha_{\rm th}^{P}}\) and \(t_{p}|_{\alpha_{\rm th}^{N}}\) quantifies inner-outer interaction in wall turbulence, as an alternative to the amplitude modulation coefficient proposed by Mathis, Hutchins, and Marusic [36].
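For completeness, a compact version of the IAAFT surrogate used above is sketched below. This is a standard textbook construction [25], not necessarily the authors' exact implementation:

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """Iterative amplitude-adjusted Fourier transform surrogate: preserves the
    PDF of x exactly and its power spectrum approximately, while scrambling the
    Fourier-phase organization associated with coherent structures."""
    rng = np.random.default_rng(seed)
    target_amp = np.abs(np.fft.rfft(x))    # target spectral amplitudes
    target_sorted = np.sort(x)             # target amplitude distribution
    y = rng.permutation(x)                 # start from a random shuffle
    for _ in range(n_iter):
        phases = np.angle(np.fft.rfft(y))
        y = np.fft.irfft(target_amp * np.exp(1j * phases), n=x.size)
        ranks = np.argsort(np.argsort(y))  # rank-order remapping
        y = target_sorted[ranks]           # impose the original PDF exactly
    return y
```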
Additionally, a PR procedure destroys non-linear dependencies in a signal [25], which indicates that the non-unity values of \(T_{\rm th}^{N}/T_{\rm th}^{P}\) are related to non-linear dynamics. This is at odds with persistence or zero-crossing analysis, where the time scale statistics depend only on the autocorrelation functions accounting for the signals' linear structure [20; 37]. Therefore, the level-crossing statistics unveil hidden non-linearities in a stochastic signal. To establish this feature more convincingly, in Fig. 5e, we show how the mean time scales change between the original and PR signal, as \(\alpha\) is varied systematically. This is quantified through a timescale ratio, \(R|_{\alpha/\alpha_{\rm th}}\), as illustrated in Eq. (C1) of Appendix C. The further this ratio \(R|_{\alpha/\alpha_{\rm th}}\) deviates from unity, the stronger the non-linear dependencies regulating the timescale statistics.
Apparently, for heights within the inner layer, non-linear dependencies have the strongest effects on \(\overline{t_{p}|_{\alpha}}\) at \(\alpha_{\rm th}\)-level. More importantly, this non-linearity influences \(\overline{t_{p}|_{\alpha}}\) the most when the threshold is approached from the negative side (\(\alpha_{\rm th}^{N}\)). Since \(\alpha_{\rm th}^{N}\) carry the signatures of the low-speed streaks (\(u^{\prime}<0\)), this implies that the outer-layer influences on the inner-layer dynamics are governed through a non-linear interaction associated with low-speed streaks. This mechanism was earlier hypothesized by Schoppa and Hussain [38], but our results demonstrate it for the first time through an experimental dataset. Interestingly, such non-linear effects on \(\overline{t_{p}|_{\alpha}}\) become irrelevant when the absolute values of \(u^{\prime}\) are considered (see Appendix C).
To further investigate the influence of these outer-layer structures on the energy
cascading process, we consider a \(u^{\prime}\) time series where only the values exceeding \(\alpha_{\rm th}^{P}\) and \(\alpha_{\rm th}^{N}\) are randomly-shuffled while the others are kept intact. This operation selectively destroys the turbulence organization associated with outer-layer structures. We subsequently calculate the third-order structure function skewness (\(D_{uuu}/(D_{uu})^{3/2}\)), as its non-zero values are related to the turbulence kinetic energy (TKE) cascading from large to small scales [39]. If \(D_{uuu}/(D_{uu})^{3/2}\) are compared between the original and conditionally-shuffled signal, at scales smaller than \(\gamma_{u}\), \(D_{uuu}/(D_{uu})^{3/2}\) of the conditionally-shuffled signals decreases significantly with increasing \(y^{+}\) (Fig. 5f). Since within the inner layer the TKE is also carried by the inner-layer structures, \(D_{uuu}/(D_{uu})^{3/2}\) values remain slightly larger for the conditionally-shuffled signal. Apart from \(D_{uuu}/(D_{uu})^{3/2}\) approaching zero, this conditional-shuffling procedure destroys the inertial subrange scaling in second-order structure functions (Fig. S2 in Supplementary material). Therefore, we establish the impact of outer-layer coherent structures on the energy cascade in wall turbulence. It is important to note that these outcomes from Fig. 5 remain unchanged whether the T1 or T2 datasets are considered (Fig. S3 in Supplementary material).
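The two ingredients of this test are easy to state in code; the sketch below is our illustration (with `u` a mean-removed NumPy array of velocity fluctuations), not the authors' implementation:

```python
import numpy as np

def shuffle_extremes(u, alpha_p, alpha_n, seed=0):
    """Randomly shuffle only the samples beyond the critical levels,
    selectively destroying the organization of the outer-layer structures."""
    rng = np.random.default_rng(seed)
    x = u / u.std()
    mask = (x >= alpha_p) | (x <= alpha_n)
    out = u.copy()
    out[mask] = rng.permutation(u[mask])
    return out

def structure_function_skewness(u, lag):
    """D_uuu / (D_uu)^(3/2) of velocity increments at a given sample lag;
    nonzero values signal the down-scale cascade of TKE."""
    du = u[lag:] - u[:-lag]
    return np.mean(du**3) / np.mean(du**2)**1.5
```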
## IV Conclusion
To summarize, our method of coherent structure detection is entirely data-driven, and the inferences obtained from the two datasets match with the existing knowledge of wall-bounded flows. In particular, this detection scheme does not require any external inputs or arbitrary thresholds, thereby making it an attractive choice in experimental turbulence research. This flexibility offers a great advantage in the case of atmospheric flows, since coherent structures in such high-\(Re\) flows scale with the boundary layer height, whose measurements are rarely available. Moreover, through the level-crossing approach, we provide compelling evidence that the inner-outer interaction in wall turbulence can be quantified by only considering the statistical asymmetry between the peaks and troughs of a turbulent signal. For future research endeavors, it would be interesting to compare this asymmetry parameter among different experiments in wall-bounded turbulence, spanning both internal and external flows. On the interdisciplinary front, the level-crossing framework can be used to detect extremes in other dynamical systems (hydrology, stock markets, etc.), or to generate training datasets for state-of-the-art machine learning models which often fail to predict
the extreme occurrences [40].
## Conflict of Interest Statement
The authors have no conflicts to disclose.
## Author's Contributions
SC and TB designed and conceptualized this study. SC wrote the manuscript and prepared the figures, while TB provided comments and corrections.
## Supplementary Material
The Supplementary figures relevant to this article are provided in a separate document.
## Acknowledgements
SC and TB acknowledge the funding support from the University of California Office of the President (UCOP) grant LFR-20-653572 (UC Lab-Fees); the National Science Foundation (NSF) grants NSF-AGS-PDM-2146520 (CAREER), NSF-OISE-2114740 (AccelNet) and NSF-CPS-2209695; the United States Department of Agriculture (USDA) grant 2021-67022-35908 (NIFA); and a cost reimbursable agreement with the USDA Forest Service 20-CR-11242306-072.
## Availability of Data
The data that support the findings of this study are openly available at [https://doi.org/10.26188/5e919e62e0dac](https://doi.org/10.26188/5e919e62e0dac).
## Appendix
### Appendix A: Statistical robustness of event entropy curves
We begin by plotting the scaled Shannon entropies of the event lengths (with respect to a RS signal) where \(\alpha\) values are normalized with either \(\alpha_{\mathrm{th}}^{P}\) or \(\alpha_{\mathrm{th}}^{N}\) (depending on the sign), denoted together as \(\alpha_{\mathrm{th}}\). Owing to how \(\alpha_{\mathrm{th}}\) is defined, this normalization ensures that the scaled Shannon entropy curves collapse at 0.8 for all the \(y^{+}\) values (Fig. 6a). However, it raises a question of why we consider 0.8 as our choice instead of 1.
This choice is influenced by the statistical accuracy associated with \(H_{\mathrm{n}}^{u^{\prime}}\) values. If we
Figure 6: (a) The Shannon entropy curves of event lengths are plotted with respect to the scaled threshold \(\alpha/\alpha_{\mathrm{th}}\). This scaling ensures that the Shannon entropy curves converge towards the 0.8 value. (b) The number of level-crossings (\(\mathcal{Z}\)) is plotted against \(\alpha/\alpha_{\mathrm{th}}\). The blue horizontal dash-dotted line indicates the number \(\mathcal{Z}=10^{3}\). (c) The cumulative distribution functions of the event lengths are shown for different levels of \(\alpha/\alpha_{\mathrm{th}}\). (d) For \(y^{+}=66.84\), the Shannon entropy curve of event lengths (black line) is compared with a randomly-shuffled model of 50 realizations (gray shaded lines). (e) For \(y^{+}=4.3\), the Shannon entropy curves are compared between individual ensembles of the measured time series (gray shaded lines) and the averaged one (black line). (f) The impacts of sampling frequencies (\(f_{s}\)) and the length of the time series (\(N\)) on the entropy curve are investigated by systematically varying \(f_{s}\) and \(N\) (see the legend).
consider the mathematical expression of \(H_{\rm n}^{u^{\prime}}\),
\[{\cal H}_{\rm n}^{u^{\prime}}(N_{p})=\frac{\sum_{i=1}^{\cal Z}P(N_{p,i}^{\rm RS})\ln[P(N_{p,i}^{\rm RS})]}{\sum_{i=1}^{\cal Z}P(N_{p,i})\ln[P(N_{p,i})]}, \tag{A1}\]
where \({\cal Z}\) is the number of times the signal crosses the \(\alpha\)-level, we can clearly see that the estimation of \(H_{\rm n}^{u^{\prime}}\) is dependent on \({\cal Z}\). Our intuition suggests that as \(\alpha\) increases, the number of level-crossings would decrease, given the rarity of large values in the signal. In Fig. 6b, we plot the number of level-crossings against \(\alpha/\alpha_{\rm th}\) values. As one may note, \({\cal Z}\) values drop below 1000 once the \(\alpha_{\rm th}\) level is crossed. Since \({\cal Z}\geq 1000\) is large enough to ensure statistically robust estimates, we set the critical \(H_{\rm n}^{u^{\prime}}\) value at 0.8. This can be further confirmed by plotting the cumulative distribution functions (CDFs) of event lengths. For visualization purposes, we only show the results corresponding to the \(u^{\prime}\) signal at \(y^{+}=66.84\). Quite clearly, the CDFs display abrupt jumps as \(\alpha\) becomes larger than \(\alpha_{\rm th}\), due to the lesser number of samples being used to compute their distributions (Fig. 6c).
It is important to take into account whether the entropy curves, when compared with a RS signal, change if different realizations of the random sequences are used. We test this by generating 50 different realizations of RS sequences and computing the entropy curves for each of such realizations. In Fig. 6d we show such comparisons using the \(u^{\prime}\) signal at \(y^{+}=66.84\) as the test case. No difference is noted in the results. Moreover, in the figures discussed in the main text, we show only the ensemble-averaged results by combining all the three measurement cycles over which the turbulent time series were collected at each \(y^{+}\) value [27]. In Fig. 6e, we compare the entropy curves for each ensemble member with the averaged one. We consider the \(u^{\prime}\) signal at \(y^{+}=4.3\) from the T1 dataset, since at this height the number of ensemble members remains the largest (120). It can be seen that the ensemble-averaged and individual entropy curves almost overlap with no major differences (Fig. 6e).
As a last measure, we investigate the influence of the length of the time series (\(N\)) and sampling frequencies (\(f_{s}\)) on the Shannon entropy curves. We artificially change the sampling frequencies by block averaging the \(u^{\prime}\) signal values, and by doing so we reduce the sampling frequencies to as low as 0.05 times the original. Although the entropy curves do change under this operation, their overall shapes remain the same and therefore only appear as a scaled version of the original (Fig. 6f). This change mainly occurs since by block averaging we alter the standard deviations of the signal and thus the \(\alpha\) levels. Potentially it is also possible to increase the sampling frequencies by incorporating an interpolation model, namely a piecewise
cubic Hermite interpolating polynomial. By utilizing this model, we increase the sampling frequencies two and four times the original, and study its effects on the entropy curves. Similar as before, the curves preserve their shapes and scale according to the \(f_{s}\) values (not shown). On the other hand, if we sub-sample the time series at different lengths compared to the original, \(H_{\text{n}}^{u^{\prime}}\) remains nearly the same even when sub-sampling reduces the original signal length by 95% (Fig. 6f). Hence, we conclude that the estimation of the Shannon entropy curves are statistically robust, placing confidence in the computed \(\alpha\) values used later to detect coherent structures.
### Appendix B: l- and s-type events
In this section we illustrate the concepts of l- and s-type events and establish their importance in turbulent dynamics. For this purpose, we use the same segment of the \(u^{\prime}\) time
Figure 7: The concept of l- and s-type events are illustrated through a segment of a \(u^{\prime}\) time-series at \(y^{+}=66.84\) from T1 dataset. The thresholds \(\alpha_{\text{th}}^{P}\) and \(\alpha_{\text{th}}^{N}\) identified from the entropy curves (see Fig. 3b) are shown as horizontal red and blue dash-dotted lines, respectively. The red-colored events (l-type events) contain at least one of these thresholds, whereas the blue-colored ones (s-type events) do not contain any of these. The time scales associated with l- and s-type events are denoted as \(t_{p}|_{l}\) and \(t_{p}|_{s}\), respectively.
series at \(y^{+}=66.84\), as done earlier. In Fig. 7, the three dash-dotted horizontal lines indicate the \(\alpha=0\) (black), \(\alpha_{\text{th}}^{P}\) (red), and \(\alpha_{\text{th}}^{N}\) (blue) levels. The l-type events are defined as those positive or negative blocks where at least one of the \(u^{\prime}\) samples satisfies the relation \(u^{\prime}\geq\alpha_{\text{th}}^{P}\sigma_{u}\) or \(u^{\prime}\leq\alpha_{\text{th}}^{N}\sigma_{u}\). On the other hand, s-type events are those which do not satisfy the above condition. To distinguish the l-type events from the s-type ones, we use red-(blue) shaded regions to indicate the l-(s) type events. The time scales associated with l- and s-type events are denoted as \(t_{p}|_{l}\) and \(t_{p}|_{s}\) respectively, as shown in Fig. 7.
Although while demarcating between the l- and s-type events we used \(\alpha_{\text{th}}\), it is possible to do the same with any \(\alpha\) values. For instance, if the \(\alpha\) values are chosen to be very small then nearly all the positive and negative events satisfy the condition of the l-type events, and therefore, they become almost indistinguishable from the unconditioned ones (i.e., the original zero-crossing events). Conversely, if the \(\alpha\) values are too large then the number of l-type events decreases substantially and they are overshadowed by the s-type events. Accordingly, it is interesting to consider how the statistics of l- and s-type events change when the \(\alpha\) values are varied systematically. We focus on the PDFs of \(t_{p}|_{l}/\gamma_{u}\) and \(t_{p}|_{s}/\gamma_{u}\), and the event contributions to the velocity variance. The contribution from a particular event (either l- or s-type) to the velocity variance is defined as,
\[\langle A_{uu}^{+}\rangle=\frac{1}{T\times u_{*}^{2}}\int_{t}^{t+(t_{p}|_{l,s})}{u^{\prime}}^{2}(t)\,dt, \tag{B1}\]
where \(T\) is the total signal duration. Note that the contributions are scaled with the friction velocity and further divided by the logarithmic bin-width so the estimations remain nearly independent of the bin choice.
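A compact sketch of this event bookkeeping is given below (our illustration; `u` is the mean-removed velocity signal, and the logarithmic bin normalization mentioned above is left out for brevity):

```python
import numpy as np

def split_l_s_events(u, alpha_p, alpha_n, fs, u_star):
    """Classify zero-crossing events of u' as l-type (containing at least one
    sample beyond a critical level) or s-type, returning their durations t_p
    and variance contributions A_uu per event, cf. Eq. (B1)."""
    x = u / u.std()
    flips = np.flatnonzero(np.diff((x > 0).astype(int))) + 1
    bounds = np.concatenate(([0], flips, [x.size]))
    T = u.size / fs                       # total signal duration
    l_events, s_events = [], []
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = x[a:b]
        t_p = (b - a) / fs
        a_uu = np.sum(u[a:b] ** 2) / fs / (T * u_star**2)
        rec = (t_p, a_uu)
        if (seg >= alpha_p).any() or (seg <= alpha_n).any():
            l_events.append(rec)
        else:
            s_events.append(rec)
    return l_events, s_events
```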
For the same \(u^{\prime}\) signal as used in Fig. 7, in Figs. 8a and d, we show how the PDFs of \(t_{p}|_{l}/\gamma_{u}\) and \(t_{p}|_{s}/\gamma_{u}\) change as \(\alpha\) is varied. Specific to the l-type events, the PDFs at small \(\alpha\) values are equivalent to the zero-crossing PDFs of the \(u^{\prime}\) signal, but as \(\alpha\) increases the power-law exponent changes gradually from \(-1.6\) to \(-1\), eventually attaining a log-normal distribution. On the other hand, the s-type events approach the zero-crossing PDFs at larger \(\alpha\) values, notwithstanding that their evolution remains very different from the l-type ones. In particular, the distributions of \(t_{p}|_{s}/\gamma_{u}\) differ significantly from \(t_{p}|_{l}/\gamma_{u}\).
By turning our attention towards event contributions, one can see that with increasing \(\alpha\) values the \(\langle A_{uu}^{+}\rangle\) curves of l-type events attain their peaks at scales considerably larger
than the integral scales (Fig. 8b). By contrast, the peaks of the \(\langle A_{uu}^{+}\rangle\) curves corresponding to s-type events are always smaller than the integral scales (Fig. 8e). In fact, for small \(\alpha\) values, their peaks occur at scales significantly lesser than \(\gamma_{u}\). Therefore, it is plausible that by choosing an appropriate \(\alpha\) one might separate the features of small-scale turbulence by conditionally sampling only the s-type events. This is, however, a topic for further research.
Integrating the \(\langle A_{uu}\rangle\) curves over all the possible time scales and dividing by the velocity variance yields the fractional contribution to \(\sigma_{u}^{2}\) (\(\mathcal{V}_{f}\)) for either of the event types. Similarly, summing up all the possible time scales and dividing by \(T\) yields the occupation time fractions of l- and s-type events (\(\mathcal{T}_{f}\)). In Fig. 8c, we show how \(\mathcal{T}_{f}\) and \(\mathcal{V}_{f}\) vary for the l-type
and s-type events against \(\alpha/\alpha_{\rm th}\). At \(\alpha_{\rm th}\) level, we see that the l-type events nearly contribute 50% to the velocity variance while occupying 20% of the time. On the other hand, s-type events occupy 80% of the time while contributing the same to \(\sigma_{u}^{2}\). This information can also be studied in terms of an intermittency index (\(\mathcal{I}\)), defined as a ratio between \(\mathcal{V}_{f}\) and \(\mathcal{T}_{f}\).
If \(\mathcal{I}\) values are further scaled with the ones obtained from the unconditioned events (\(\mathcal{I}_{f}\)), then \(\mathcal{I}_{f}\to 1\) when \(\alpha\) is either too large or small, depending on s- or l-type events respectively. When \(\mathcal{I}_{f}\) is plotted against \(\alpha/\alpha_{\rm th}\), a clear demarcation is noticed between l- and s-type events in how they approach the unit values (Fig. 8f). We hypothesize this asymmetrical progression is related to the time-irreversible dynamics of wall-bounded flows [28].
### Appendix C: Sign-indefinite velocity signal
Some earlier studies used thresholds on the time series values to detect coherent structures and suggested that the same could be applied interchangeably on either the original or absolute values of the signal [41]. We, however, show that considering absolute values of the velocity signals instead of the original affects how the events are organized in the temporal space.
To begin with, we show how the Shannon entropy curves of the event lengths would behave when the \(\alpha\) levels are applied on the absolute values of the \(u^{\prime}\) signal (Fig. 9a). Note that it is not possible to set \(\alpha=0\) in the case of absolute values since no crossings would be obtained in that case. Therefore, the smallest \(\alpha\) levels are chosen as slightly larger than 0. By doing so, one observes that up to certain \(\alpha\) values the vertical profiles of \(H_{\rm n}^{|u^{\prime}|}(N_{p}|_{\alpha})\) behave identically to \(H_{\rm n}^{u^{\prime}}(N_{p}|_{\alpha})\) in Fig. 3a, when \(\alpha\) is approached from the positive side. In fact, similar to \(H_{\rm n}^{u^{\prime}}(N_{p}|_{\alpha})\), an inflection point in \(H_{\rm n}^{|u^{\prime}|}(N_{p}|_{\alpha})\) is observed at \(y^{+}=70\).
A note is necessary here regarding the estimation of the critical \(\alpha\) value (\(\alpha_{\rm th}\)) for the \(|u^{\prime}|\) signal. The scaled entropy curves of \(|u^{\prime}|\) signals form a U shape, and because of that the 0.8 value can be reached either at small or large \(\alpha\) levels (Fig. 9b). At small \(\alpha\) levels, the events have large time scales for the absolute signal, since the number of crossings is limited. However, we choose the critical \(\alpha\) levels (\(\alpha_{\rm th}\)) from the larger side, in accordance with the original signal.
However, the biggest difference between the original and absolute signal occurs when one compares the mean time scales with the phase-randomized (PR) surrogates. This comparison
is quantified through a ratio defined as,
\[R|_{\alpha/\alpha_{\rm th}}=\frac{\overline{t_{p}|_{\alpha}}}{\overline{[t_{p}|_{\alpha}]_{\rm PR}}}. \tag{C1}\]
Unlike \(u^{\prime}\), for the absolute signals, \(R|_{\alpha/\alpha_{\rm th}}\) stays very close to unity for any \(\alpha/\alpha_{\rm th}\) values (Fig. 9c). This indicates that, contrary to Fig. 5e, the effect of non-linear dynamics on the temporal arrangement of the samples exceeding \(\alpha_{\rm th}\) disappears upon taking the absolute values. We can further confirm this phenomenon by comparing the vertical profiles of \(T_{\rm th}/\gamma_{u}\) between
\(u^{\prime}\) and \(|u^{\prime}|\) signals.
For \(|u^{\prime}|\), the mean time scales at \(\alpha_{\rm th}\)-level remain closer to \(T_{\rm th}^{P}\) instead of \(T_{\rm th}^{N}\), where \(T_{\rm th}^{P}\) and \(T_{\rm th}^{N}\) values are obtained from the original \(u^{\prime}\) signal (Fig. 9d). More importantly, \(T_{\rm th}\) of the absolute signal is nearly insensitive when the Fourier phases are randomized. Since PR destroys the organization of coherent structures, this indifference suggests that the events detected from the absolute signals may not obey the turbulent flow physics.
|
2309.05595 | Undecidability Results and Their Relevance in Modern Music Making | This paper delves into the intersection of computational theory and music,
examining the concept of undecidability and its significant, yet overlooked,
implications within the realm of modern music composition and production. It
posits that undecidability, a principle traditionally associated with
theoretical computer science, extends its relevance to the music industry. The
study adopts a multidimensional approach, focusing on five key areas: (1) the
Turing completeness of Ableton, a widely used digital audio workstation, (2)
the undecidability of satisfiability in sound creation utilizing an array of
effects, (3) the undecidability of constraints on polymeters in musical
compositions, (4) the undecidability of satisfiability in just intonation
harmony constraints, and (5) the undecidability of "new ordering systems". In
addition to providing theoretical proof for these assertions, the paper
elucidates the practical relevance of these concepts for practitioners outside
the field of theoretical computer science. The ultimate aim is to foster a new
understanding of undecidability in music, highlighting its broader
applicability and potential to influence contemporary computer-assisted (and
traditional) music making. | Halley Young | 2023-09-11T16:23:43Z | http://arxiv.org/abs/2309.05595v2 | # Undecidability Results and Their Relevance in Modern Music Making
###### Abstract
This paper delves into the intersection of computational theory and music, examining the concept of undecidability and its significant, yet overlooked, implications within the realm of modern music composition and production. It posits that undecidability, a principle traditionally associated with theoretical computer science, extends its relevance to the music industry. The study adopts a multidimensional approach, focusing on five key areas: (1) the Turing completeness of Ableton, a widely used digital audio workstation, (2) the undecidability of satisfiability in sound creation utilizing an array of effects, (3) the undecidability of constraints on polymeters in musical compositions, (4) the undecidability of satisfiability in just intonation harmony constraints, and (5) the undecidability of "new ordering systems". In addition to providing theoretical proof for these assertions, the paper elucidates the practical relevance of these concepts for practitioners outside the field of theoretical computer science. The ultimate aim is to foster a new understanding of undecidability in music, highlighting its broader applicability and potential to influence contemporary computer-assisted (and traditional) music making.
## 1 Introduction - Brief overview of the paper's goals
The primary objective of this paper is to explore the concept of undecidability within the context of modern music making. By examining various aspects of music production and theory, we aim to demonstrate that undecidability is not just an abstract notion limited to theoretical computer science, but a relevant and thought-provoking concept that has tangible implications for contemporary music composition and production.
### Undecidability results
We will achieve this by investigating five distinct areas of music making:
1. Proving that Ableton, a popular digital audio workstation, is Turing complete.
2. Establishing the undecidability of satisfiability in sound creation using audio effects.
3. Demonstrating the undecidability of constraints on polymeters in musical compositions.
4. Proving that the satisfiability of just intonation harmony constraints is undecidable.
5. Proving that the satisfiability of "new ordering systems" is undecidable.
These will all rely on reductions from problems already known to be undecidable in the computer science literature.
### Relevance of these results
We will also provide detailed explanations of why someone who is not a theoretical computer scientist should care about each of these properties. In particular, we will discuss why Rice's theorem argues for a real limitation in the possibility of analyzing the output of a Turing complete system (which Ableton is), and why the undecidability of satisfiability constraints places a theoretical limit on the extent to which an (automated or human) composer can "think abstractly, then compose concretely" within different domains.
## 2 Decidability, Turing Completeness, and Their Implications for Music Production
This section aims to introduce the fundamental concepts that underpin our exploration of Ableton Live's computational capabilities and the implications of these for the undecidability of general properties of Ableton projects and the satisfiability of a composer's constraints or vision.
### Decidability: An Overview
Decidability, a concept from theoretical computer science, refers to the solvability of a problem through algorithmic means. A problem is considered decidable if an algorithm exists that can provide a definitive solution to every instance of the problem in a finite timeframe. Conversely, a problem is undecidable if no such algorithm can be found. Understanding decidability is crucial as it delineates the boundary between problems that can be addressed using computational techniques and those that remain fundamentally unsolvable.
### Turing Machines and Turing Completeness
The concept of a Turing machine, a theoretical model of computation, is central to understanding the abilities and limitations of computational systems. A system or programming language is considered Turing complete if it can simulate a Turing machine's behavior, meaning that it can execute any computation that a Turing machine can, given adequate time and resources.
Ableton Live, a digital audio workstation, is such a Turing complete system. Its computational power, coupled with its capability to manipulate audio and MIDI data, makes it an immensely flexible tool for musical creation. However, this Turing completeness also implies that certain questions about the behavior of Ableton Live projects are undecidable, leading to intriguing implications for music production.
### Rice's Theorem and Its Consequences
Rice's theorem is a result in computability theory that states any non-trivial property concerning a Turing machine's computed function is undecidable [2]. This theorem suggests that many questions about the behavior of computational systems and programs are fundamentally unanswerable, including those related to the properties of Ableton Live projects.
Thus, the power and flexibility of Ableton Live, as embodied in its Turing completeness, bring with them a fascinating paradox. While they allow for virtually limitless musical creativity, they also introduce elements of undecidability that challenge our ability to fully understand or predict the outcomes of complex musical projects.
### Undecidability of Satisfiability Problems
Satisfiability problems, which involve determining whether there exists a solution that satisfies a set of conditions or constraints [3], also encounter undecidability issues. These problems are pervasive in computer science and mathematics, often manifesting in various domains, including logic [14], number theory [7], and algebra [9].
In the context of music composition, satisfiability problems can occur when a composer defines a specific "vision" for their piece, expressed in terms of rhythmic, harmonic, or structural constraints. This vision might involve intricate polyrhythms, complex harmonic progressions, or specific structural properties.
However, the satisfiability of these constraints--whether there exists a musical piece that fulfills all of them--is often undecidable. This means there is no algorithm that can determine, for every possible set of constraints, whether a satisfying musical piece exists. Therefore, a composer, whether human or machine, might have a vision for a piece without any guarantee that it can be realized.
In the following sections, we will explore how these undecidability issues can limit a composer's ability to know a priori if their vision can ever be realized, highlighting the inherent tension between the vast expressive power of a Turing complete system like Ableton Live and the fundamental limits imposed by undecidability.
## 3 Ableton Live's Turing Completeness
### An Introduction to Ableton Live
Ableton Live is a comprehensive digital audio workstation (DAW) that offers a diverse set of tools for computer musicians, catering to various aspects of music production and performance. A key feature of Ableton Live is its flexible audio and MIDI routing capabilities. You can freely route audio and MIDI between tracks, enabling complex signal flows, layering of sounds, and intricate cross-processing. This flexibility opens up vast creative possibilities, from creating intricate soundscapes to designing complex rhythmic patterns.
Additionally, Ableton Live provides a vast array of MIDI and audio effect devices. MIDI effects transform MIDI notes and control signals, influencing parameters such as pitch, velocity, and timing. Audio effects manipulate the sound, offering control over parameters such as frequency, amplitude, and time-based effects. What's even more compelling is that these effects can interact with each other, allowing the parameters of one effect to influence another.
This interoperability allows for the crafting of unique sonic textures and innovative musical ideas. The combination of these features makes Ableton Live a versatile and powerful tool, enabling musicians to push boundaries and expand their creative potential in music production and performance.
### Proof of Turing Completeness
In this section, we present our proof of Ableton Live's Turing completeness by constructing a Turing machine simulation using its built-in audio and MIDI devices.
#### 3.2.1 Infinite Tape
Create an audio track representing the infinite tape with an unbounded audio recording. Each audio sample (or short sequence of samples) represents a symbol, mapped to different frequency ranges.
#### 3.2.2 Read/Write Head
* Reading: Use Granulator II to read the audio track in real-time. Automate the FilePos parameter to control the read head's position. Use EQ Eight to isolate the frequency range representing a specific symbol, and use Envelope Follower to analyze the output, determining the symbol at the current position.
* Writing: Use Utility to control the amplitude of the audio track. Add an Expression Control device and map its output to the Gain parameter of the Utility device, effectively overwriting the symbol at the current position with the new symbol according to the Turing machine's rules.
#### 3.2.3 States
Place an Audio Effect Rack on the reading track, containing multiple audio effect chains, each representing a different state of the Turing machine. Each chain contains a combination of audio and MIDI effects that determine the next state and tape action based on the symbol read by Granulator II and analyzed by the Envelope Follower.
#### 3.2.4 Rules
Implement the Turing machine's rules using MIDI Effect Racks, Chord devices, and MIDI routing in the following manner:
* Create an audio track for the infinite tape and read/write head, referred to as the "reading track." This track will contain the Granulator II device for reading symbols, the EQ Eight and Envelope Follower for symbol analysis, and the Utility and Expression Control devices for overwriting symbols.
* Create a separate MIDI track for each state in the Turing machine. Label these tracks as "State 1," "State 2," and so on.
* Route the MIDI output of each "State" track to the Chain Selector parameter of the Audio Effect Rack on the reading track. To do this, set the "MIDI To" option in the I/O section of each "State" track to the reading track, and then select "Chain Selector" as the target parameter.
* On each "State" MIDI track, place a MIDI Effect Rack with multiple chains, each chain representing a different rule for the current state based on the input symbol. For example, in "State 1" track's MIDI Effect Rack, create chains for each possible symbol the read/write head may encounter while in State 1.
* Configure the Chain Selector in the MIDI Effect Rack on each "State" track to choose the appropriate chain based on the symbol read by the read/write head. Use the Envelope Follower's output value (from the reading track) to control the Chain Selector on the corresponding "State" track. To do this, you can use MIDI mapping or automation.
### The Validity of the Proof
The method we employed to prove Ableton Live's Turing completeness is a widely accepted technique within the realm of theoretical computer science [6]. It is based on the concept of simulation, where the key components of a Turing machine - an infinite tape, a read/write head, states, and transition rules - are mapped to corresponding functionalities within Ableton Live's music production environment. By demonstrating that Ableton Live can effectively simulate a Turing machine, we establish its Turing completeness. This is because any system that can simulate a Turing machine, which is the theoretical model for all computation, is itself Turing complete. Therefore, our mapping approach not only demonstrates the richness and flexibility of Ableton Live as a music production tool but also highlights its computational universality.
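To make the notion of simulation concrete, the following minimal Python sketch implements the kind of abstract machine the construction above emulates. The rule table plays the role that the "State" tracks and chain selectors play in Ableton; the particular machine, its states, and its tape contents are illustrative inventions, not part of the proof.

```python
# A minimal Turing machine simulator: rules map (state, symbol) to
# (new_symbol, head_move, new_state), mirroring the per-state chains above.
from collections import defaultdict

def run_turing_machine(rules, tape, start_state, halt_states, max_steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in halt_states:
            return state, cells
        symbol = cells[head]
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within budget (halting is undecidable in general)")

# A two-state machine that flips bits until it reads a blank:
rules = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "halt"),
}
state, cells = run_turing_machine(rules, "0110", "flip", {"halt"})
print(state, "".join(cells[i] for i in range(4)))  # prints: halt 1001
```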
### Implications of Turing completeness in music production
One implication of Ableton's Turing completeness is that it is, in theory, possible to build any program in Ableton (a neural network, a compiler, etc.). However, I argue that the most important implication of Ableton's Turing completeness for music analysts and creators (particularly those interested in automating musical tasks) comes from Rice's theorem. Imagine that we are designing an audio installation that is supposed to go on indefinitely, and we want to avoid ever having the perception of dissonance (defined according to an existing metric like having lots of energy at two beating frequencies). According to Rice's theorem, there does not exist a program in any language which, regardless of the contents of the installation, could check that it satisfies that property. Similarly, if an automated system wanted to design procedural music for a videogame which continued as long as the game was being played and responded to the game's inputs, there could never be a program that could check every such procedural score and determine whether it responds in a logical way. This undecidability result means that by using Ableton's complexity, we must sacrifice some degree of certainty about its outputs.
## 4 The Undecidability of Understanding Audio Effects
Consider a scenario where a sound engineer aspires to generate a specific audio output using a particular audio effect. This effect can be modeled as a real-valued function. The question that might arise in such a scenario is: "Given any possible input audio signals, will applying this specific audio effect consistently produce the desired output audio signal?"
To figure out whether we can answer this, we must turn to work in computability theory. The undecidability of the universal theory of the reals, a result that stems from the seminal work of mathematicians like Alonzo Church, Alan Turing, and others, is a profound and far-reaching theorem in mathematical logic and computability theory [4][15][16]. It states that there does not exist a universal algorithm that can decide whether an arbitrary mathematical statement concerning real numbers is true or false. This result is a consequence of Godel's incompleteness theorem and the negative solution to Hilbert's Entscheidungsproblem, the decision problem. Note that certain theories of the reals, e.g., the first-order theory of real closed fields, are decidable [8]; however, the theory becomes undecidable when the sine function (which is fundamental to audio processing) is included.
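A hedged sketch of why adding the sine function destroys decidability, paraphrasing the standard reduction rather than any specific cited proof: sine lets a real-valued formula pin down the integers, so Diophantine solvability becomes expressible over the reals.

```latex
% sin pins down the integers inside the reals:
%   sin(pi * x) = 0  <=>  x is an integer.
% Hence, for any polynomial D with integer coefficients, the sentence
\exists x_1 \ldots \exists x_n \;
  \Bigl(\, \textstyle\bigwedge_{i=1}^{n} \sin(\pi x_i) = 0 \;\land\; D(x_1,\ldots,x_n) = 0 \Bigr)
% holds over the reals exactly when the Diophantine equation D = 0 has
% integer solutions -- and deciding that is Hilbert's Tenth Problem,
% which is undecidable.
```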
Now, let's translate this abstract mathematical concept into our audio engineering context. The real-valued function representing an audio effect can be thought of as an equation involving real numbers. Just like the equations in the universal theory of the reals, this audio effect function might have many possible inputs, which correspond to the different potential input audio signals. The output of this function, meanwhile, corresponds to the resulting sound.
When the sound engineer seeks to find an audio effect that can consistently produce a desired output from any possible input, they are essentially trying to solve a problem similar to deciding whether a specific equation holds for all real numbers. In mathematical terms, they are trying to find a function (audio effect) that satisfies a particular condition (produces the desired sound) for all possible inputs (audio signals). Furthermore, they are likely implicitly including functions such as the sine function in their equation.
Given the undecidability result of the universal theory of the reals, however, we know that there cannot exist a universal algorithm capable of deciding whether an arbitrary equation holds for all real numbers. By analogy, then, there cannot exist a universal algorithm capable of deciding whether an arbitrary audio effect can produce a desired output from any possible input.
This has far-reaching implications for sound engineering and, more broadly, any field that involves transformations of real-valued signals or data. It suggests that there may be no systematic method for determining whether a particular transformation (such as an audio effect) can achieve a specific desired result from any possible input.
For a human music producer, the undecidability of understanding audio effects can have significant implications. On the one hand, it emphasizes the limits of formal, algorithmic approaches and the potential unpredictability of audio production processes. This can encourage producers to embrace a more exploratory, creative, and intuitive approach to using audio effects, leveraging their personal experiences, aesthetic judgments, and experimental practices. On the other hand, it also highlights the inherent complexity and open-ended nature of audio production, which can be both challenging and exciting. It suggests that there are no definitive answers or universal recipes for achieving specific sounds, and that the creative possibilities are vast and potentially unbounded.
In the case of automated systems for music production, the undecidability of understanding audio effects can also have important consequences. It implies that there are inherent limitations to the capabilities of these systems, particularly in terms of their ability to predict and control the outcomes of applying audio effects. This can impact the design and development of such systems, emphasizing the need for robustness, adaptability, and flexibility. It might also necessitate the use of probabilistic and heuristic methods, as well as machine learning techniques, which can cope with uncertainty and make educated guesses in the face of undecidability. However, it also opens up opportunities for creative applications of AI in music production, where the unpredictability of audio effects is not a bug, but a feature that can be used to generate new and unexpected musical ideas.
## 5 Undecidability of Satisfiability with Polyrhythms or Just Intonation
### Undecidability of Satisfiability with Multiplication of Unknown Integers
The theory of the undecidability of integer variable multiplication traces its origins to the fields of mathematical logic and theoretical computer science. Here, it is associated with the first-order theory of integer arithmetic, which focuses on the properties of the integers and the mathematical structures built upon them. This line of work contains several profound results, including the undecidability of equations involving the multiplication of unknown integer variables.
The undecidability of the satisfiability of multiplication by unknown integer variables is not an obvious concept, and it took revolutionary work by computer scientists and logicians to bring it to light. Essentially, the claim of this theory is that there is no universal method or computational process (termed 'algorithm') that can definitively determine whether any general mathematical statement, particularly one involving the multiplication of unknown integer variables, is always true or false under all circumstances.
This proposition was explicitly proved through the work of Yuri Matiyasevich, building on contributions by three leading logicians of the 20th century: Julia Robinson, Martin Davis, and Hilary Putnam [5][13][12]. Their quest to resolve a significant mathematical problem of the time, known as Hilbert's Tenth Problem, inadvertently led to the discovery of the undecidability. Hilbert's Tenth Problem asked for an algorithm that could determine whether any given Diophantine equation had integer solutions. A Diophantine equation is a polynomial equation that seeks integer solutions.
Matiyasevich built upon the work of the aforementioned logicians, and in 1970, he provided the final piece of the puzzle, proving that no such algorithm exists for Hilbert's Tenth Problem, thus confirming the resolution of the problem in the negative [10]. This negative resolution subsequently implied that certain specific sets, in this case, the sets that represent the satisfiability of certain Diophantine equations or equivalently, the multiplication of integer unknown variables, are undecidable.
### Relationship to Polymeter and Polyrhythm
In metric music, beats can be visually represented using pairs of integers, labeled as (n, m). Here, 'n' signifies the beat number or the position of the beat within a rhythmic cycle, and 'm' represents the time signature. This notation system enables the mapping of complex rhythms onto a numeric grid and permits us to visualize and grapple with abstract musical concepts.
Taking a hypothetical scenario in which we want two beats from different rhythms to coincide at a definite point in time, we can denote this intention using an equation: \((n_1, k_1) \cdot (n_2, k_2) = (a, b)\). Here, \((n_1, k_1)\) and \((n_2, k_2)\) correspond to the starting beats of the two rhythms, while \((a, b)\) symbolizes the point of conjunction. The variables \(k_1\) and \(k_2\) in this equation are unknown, thereby making the equation a case of multiplication involving unknown variables.
The theory of the undecidability of the satisfiability of multiplication by unknown variables becomes highly relevant here. There are no definitive computational procedures that can yield actual values for \(k_1\) and \(k_2\) that make the equation true. This implies that it is not systematically determinable, on solely mathematical grounds, whether we can make two beats from differing rhythms coincide at a specific time point, i.e., whether a given polyrhythm is feasible.
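For contrast, here is a minimal sketch of the decidable special case: when the two cycle lengths are fixed rational numbers, the first coincidence is a simple least-common-multiple computation. The undecidability above enters only once the equation contains unknown integer variables multiplied together.

```python
# A decidable special case, for contrast: with *fixed* rational cycle lengths,
# the first time two rhythms realign is a least-common-multiple computation.
from fractions import Fraction
from math import gcd, lcm

def first_coincidence(period_a: Fraction, period_b: Fraction) -> Fraction:
    # The lcm of two reduced fractions p/q and r/s is lcm(p, r) / gcd(q, s).
    return Fraction(lcm(period_a.numerator, period_b.numerator),
                    gcd(period_a.denominator, period_b.denominator))

# Three beats against four (beat periods 1/3 and 1/4 of a bar):
print(first_coincidence(Fraction(1, 3), Fraction(1, 4)))  # 1: they realign once per bar
```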
This undecidability offers profound implications for music theory and composition, particularly when it comes to polyrhythms. Composers often have to deal with the complexity and unpredictability that comes with writing polyrhythms. The challenge lies in systematically specifying and reasoning about whether it is possible for certain beats within a complex polyrhythm to align. Because the algorithmic and procedural methods cannot provide definitive answers due to the undecidability at play, composers must often resort to heuristic or empirical methods. These methods, based on experimentation, trial and error, or intuitive judgment, highlight the often underappreciated complexity and creative challenge in music composition.
### Just Intonation
Just intonation is a system of musical tuning in which the frequencies of notes are related by ratios of small whole numbers. Theoretically, it results in a pure and consonant sound that is often more pleasing to the ear compared to other tuning systems. However, the process of creating just intonation involves the multiplication of ratios, which inherently corresponds to the multiplication of unknowns in the context of mathematical equations.
The undecidability of the satisfiability of multiplication by unknowns directly impacts our capacity to devise a universally applicable algorithm for creating just intonation automatically. In other words, there is no general algorithm or decision procedure that can definitively determine whether it's possible to tune a piece of music in just intonation by multiplying specific frequency ratios. This undecidability makes it inherently complex to systematize the process of creating just intonation.
For instance, let's consider a situation where we want to tune a piece of music that uses a seven-note diatonic scale in just intonation. We may want to determine the frequency ratios that will provide a perfect fifth (a frequency ratio of 3:2) between each pair of successive notes. We could express this requirement using equations involving multiplication of unknowns, such as \(\frac{a}{b} \cdot \frac{c}{d} = \frac{3}{2}\), where \(\frac{a}{b}\) and \(\frac{c}{d}\) represent the frequency ratios of the two notes. However, due to the undecidability of the satisfiability of multiplication by unknowns, we cannot conclusively determine algorithmically whether there exist specific values of a, b, c, and d that will satisfy these equations for all pairs of notes.
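A small sketch of the easy direction, using exact rational arithmetic: verifying that *given* candidate ratios satisfy a constraint is mechanical; what is undecidable is whether satisfying ratios exist for an arbitrary system of such constraints. The scale-degree ratios below are standard just-intonation values used purely as an illustration.

```python
# Verifying a candidate tuning is easy with exact rational arithmetic;
# it is *deciding existence* for arbitrary constraint systems that is hard.
from fractions import Fraction

def satisfies_fifth(ratio_low: Fraction, ratio_high: Fraction) -> bool:
    # Does the interval between the two scale degrees form a pure fifth (3:2)?
    return ratio_high / ratio_low == Fraction(3, 2)

# C major triad in just intonation, relative to the tonic: 1/1, 5/4, 3/2
tonic, third, fifth = Fraction(1), Fraction(5, 4), Fraction(3, 2)
print(satisfies_fifth(tonic, fifth))  # True  (3/2 over 1/1 is a pure fifth)
print(satisfies_fifth(third, fifth))  # False (3/2 over 5/4 = 6/5, a minor third)
```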
### Implications
For human composers, the undecidability of these harmonic and rhythmic constraints has profound implications. It means that there might be times when they conceive of a musical idea, a particular harmonic or rhythmic language, but they might not be able to determine if it is even plausible to realize that idea given the constraints of the harmonic space. The inability to formally or algorithmically reason about these constraints can introduce an element of profound uncertainty into the creative process. Rather than seeing this as a limitation, it can be interpreted as an invitation to the composer to delve deeper into the rich and unexplored territories of the harmonic space. This uncertainty, then, serves not as a boundary, but as a stimulus for innovation and experimentation in music composition.
When it comes to machine models used for automatic composition or music analysis, the undecidability of reasoning about just intonation harmony or polyrhythms presents substantial challenges as well. It suggests that AI systems cannot be guaranteed to always find a solution to satisfy certain harmonic constraints, or even determine whether such a solution exists. This necessitates the development of AI models that are capable of handling these inherent uncertainties, perhaps by leveraging probabilistic models or machine learning techniques. At the same time, the undecidability could also be seen as a source of creative potential, enabling AI models to generate novel and unexpected musical ideas within the uncharted territories of the harmonic space.
### The Undecidability of Horn Clauses and Composer-Generated Systems of Organization
#### Proof of Undecidability of Horn Clauses
The proof for the undecidability of Horn clauses is rooted in a reduction from the Halting Problem, which is a well-known undecidable problem within computer science. One begins by establishing that the set of all true ground (i.e., variable-free) Horn clauses is recursively enumerable [1]. This means that there exists a Turing machine which will list all true Horn clauses. Then, a reduction is crafted from the Halting Problem to the problem of deciding if a given ground Horn clause is true. The existence of this reduction implies that if there were an algorithm to decide the truth of any given ground Horn clause, such an algorithm could be used to solve the Halting Problem, contradicting its established status as undecidable.
David A. Plaisted, in his comprehensive study "The Undecidability of Ground Term Rewrite Systems" [11], was able to prove that establishing satisfiability for sets of Horn clauses is undecidable when clauses can contain function symbols of arity \(>0\). His work is considered significant in demonstrating the undecidability of Horn clause satisfiability. Plaisted obtained this result by reducing the word problem for semi-Thue systems (which is a known undecidable problem) to the satisfiability problem of Horn clauses.
#### Implications for Composers
The notion of equating a musical system to general Horn clauses suggests that each musical rule in a system, whether it governs melody, harmony, or rhythm, can be formulated as a Horn clause.
In order to achieve this transmutation, one must assign a literal or proposition to every possible musical event or condition. For instance, we can designate 'note X is followed by note Y' as a positive literal, and 'chord progression A leads to chord progression B' as a negative literal or a combination of both. As such, each literal serves as a concrete representation of a musical condition. These literals are the building blocks with which the Horn clauses are formulated, effectively creating a musical language with syntactic rules and structures mirroring those in propositional logic.
Collectively, these translated musical rules defined by a composer form a system that resembles the characteristics of a set of general Horn clauses in propositional logic. Each rule represented as a Horn clause becomes a stipulation that demands satisfaction. In this context, satisfaction implies a musical sequence that corresponds and adheres to the designed compositional system and the logical constraints imposed by it.
In propositional logic, the notion of whether a model exists that satisfies a given set of clauses is described as the satisfiability problem. Analogously applied to the domain of music, this translates to the existential question: can a sequence of musical events, a composition, be constructed satisfying all the rules of a given system simultaneously?
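As a hedged sketch with invented rule names: in the purely propositional case, forward chaining settles what a rule system entails, and it does so efficiently; the undecidability discussed here arises only once clauses may contain function symbols, as in Plaisted's result above.

```python
# Musical rules as propositional Horn clauses: (body, head) means
# "if all body literals hold, then head holds". Forward chaining decides
# derivability in the propositional case; undecidability only enters
# once clauses may contain function symbols (the first-order case).
def forward_chain(clauses, facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in facts and set(body) <= facts:
                facts.add(head)
                changed = True
    return facts

rules = [
    (["phrase_ends_on_V"], "needs_resolution"),
    (["needs_resolution", "next_phrase_starts_on_I"], "cadence_ok"),
]
derived = forward_chain(rules, ["phrase_ends_on_V", "next_phrase_starts_on_I"])
print("cadence_ok" in derived)  # True
```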
However, drawing this parallel presents an inherent challenge grounded in the conclusively proven undecidability of general Horn clause satisfiability. This paradigm transforms from an intellectual exercise into a practical dilemma for composers who wish to implement such systems in their work. Without guaranteed certifying mechanisms, composers following this approach walk a delicate path between logical structure and potential unsatisfiability. The pursuit of an aesthetically pleasing composition that simultaneously adheres to intricate logical constraints places an extraordinary demand on both the creative and logical faculties of the composer.
## 6 Conclusion
In this paper, we have presented several key findings that have profound implications for both the practice of music composition and the development of automated music generation systems. Our results highlight the inherent complexities and limitations encountered when attempting to computationally represent and generate musical pieces, whether the context is traditional musical forms like string quartets or more experimental formats such as real-time, non-time-bounded media installations.
Our first major result was establishing the Turing completeness of Ableton Live, a popular digital audio workstation. This result not only underscores Ableton Live's vast expressive power as a musical tool but also sets the stage for our subsequent investigations into the inherent computational limits of music creation within such a powerful system.
We then demonstrated the undecidability of satisfiability for polyrhythmic and just intonation constraints, which are common elements in contemporary music composition. This result underscores the fundamental limits of formalizing and algorithmically processing certain aspects of musical composition. We showed that even with a Turing complete system, there is no guarantee that a composer's specific vision for a piece can always be realized, whether the constraints involve intricate rhythmic structures, complex harmonic progressions, or other specific musical properties.
Moreover, we extended this result to general composer-generated structural constraints, further highlighting the challenges composers face when attempting to realize their musical ideas, particularly when those ideas involve intricate or complex structural properties.
These findings have significant implications for both human and automated composers. For human composers, our results underscore the importance of intuition, experience, and exploratory techniques in the composition process, particularly when dealing with complex musical constraints that cannot be fully formalized or algorithmically processed. For automated systems, our results highlight the limitations of relying solely on algorithmic methods for generating music that satisfies specific constraints. They also underscore the need for incorporating heuristic or learning-based methods to navigate the vast and complex landscape of musical possibilities.
In conclusion, our findings serve as a bridge between the worlds of computational theory and music composition, shedding light on the fascinating interplay between computation and creativity, and ultimately, between machines and the art of music. |
2310.20138 | DEPN: Detecting and Editing Privacy Neurons in Pretrained Language
Models | Large language models pretrained on a huge amount of data capture rich
knowledge and information in the training data. The ability of data
memorization and regurgitation in pretrained language models, revealed in
previous studies, brings the risk of data leakage. In order to effectively
reduce these risks, we propose a framework DEPN to Detect and Edit Privacy
Neurons in pretrained language models, partially inspired by knowledge neurons
and model editing. In DEPN, we introduce a novel method, termed as privacy
neuron detector, to locate neurons associated with private information, and
then edit these detected privacy neurons by setting their activations to zero.
Furthermore, we propose a privacy neuron aggregator to dememorize private
information in a batch processing manner. Experimental results show that our
method can significantly and efficiently reduce the exposure of private data
leakage without deteriorating the performance of the model. Additionally, we
empirically demonstrate the relationship between model memorization and privacy
neurons, from multiple perspectives, including model size, training time,
prompts, privacy neuron distribution, illustrating the robustness of our
approach. | Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong | 2023-10-31T03:09:36Z | http://arxiv.org/abs/2310.20138v2 | # DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models
###### Abstract
Large language models pretrained on a huge amount of data capture rich knowledge and information in the training data. The ability of data memorization and regurgitation in pretrained language models, revealed in previous studies, brings the risk of data leakage. In order to effectively reduce these risks, we propose a framework DEPN to Detect and Edit Privacy Neurons in pretrained language models, partially inspired by knowledge neurons and model editing. In DEPN, we introduce a novel method, termed as privacy neuron detector, to locate neurons associated with private information, and then edit these detected privacy neurons by setting their activations to zero. Furthermore, we propose a privacy neuron aggregator to dememorize private information in a batch processing manner. Experimental results show that our method can significantly and efficiently reduce the exposure of private data leakage without deteriorating the performance of the model. Additionally, we empirically demonstrate the relationship between model memorization and privacy neurons, from multiple perspectives, including model size, training time, prompts, privacy neuron distribution, illustrating the robustness of our approach.
## 1 Introduction
Remarkable progress has been made in large language models (LLMs) in recent years (Brown et al., 2020; Liu et al., 2021; Ouyang et al., 2022; Lee et al., 2023). However, despite this success, LLMs are confronted with privacy and security concerns in real-world applications (Guo et al., 2022; Brown et al., 2022; Li et al., 2023). The primary cause of privacy and security risks is the inherent nature of large pretrained language models. Previous studies (Carlini et al., 2019, 2021; Thakkar et al., 2021; Henderson et al., 2018) have demonstrated that pretrained language models tend to memorize and regurgitate a significant portion of the training data, including atypical data points that appear only once in the training data. Additionally, external factors (e.g., membership attacks) also contribute to these risks. A variety of methods have been explored to attack LLMs for training data extraction. For instance, Carlini et al. (2021) have successfully extracted personal information from GPT-3's output, while Li et al. (2023) have induced the generation of personal information by utilizing multi-step prompts in ChatGPT. All these findings show that large pretrained language models suffer from a serious risk of privacy leakage.
In order to safeguard privacy, numerous methods have been proposed. The majority focus on either removing sensitive information during the data processing stage (Liu et al., 2017; El Emam et al., 2009; Zhou et al., 2008; Garcia-Pablos et al., 2020), or reducing the extent to which models memorize training data during the training stage (Li et al., 2021; Hoory et al., 2021; Plant et al., 2021; Coavoux et al., 2018). However, privacy breaches often come to light after the completion of model training, rendering previous methods less effective. There are also methods proposed in the post-processing stage, which involve slight parameter retraining to make the model forget privacy information (Bourtoule et al., 2021; Gupta et al., 2021; Neel et al., 2020). Nevertheless, these methods generally incur high computational complexity, making it challenging to apply them to complex model architectures. In practice, model developers often attempt to prevent language models from outputting specific information via blocking or filtering certain keywords, which, however, does not truly address the underlying issue.
We speculate that private information might be stored in specific neurons, just like knowledge neurons (Geva et al., 2021; Meng et al., 2022; Dai et al., 2022). This presumption suggests that we could change the model memorization of private information by detecting and deleting these neurons (termed privacy neurons). Therefore, we propose a framework DEPN for detecting and editing privacy neurons. To detect privacy neurons, we introduce a privacy neuron detector that uses gradient integration to simultaneously compute the contributions of multiple tokens to neuron activations. This allows us to estimate an overall privacy attribution score for private information. Subsequently, we further propose a privacy neuron editor that simply sets the activations of the top \(z\) privacy neurons with the highest privacy scores to zero to erase the model memorization of the corresponding private information. For the scenario of processing multiple sentences at the same time, we also present a privacy neuron aggregator to facilitate privacy information editing in batches.
Experimental results show that our framework can quickly reduce the risk of private data leakage without affecting model performance. Compared with other methods, our framework is highly efficient. Furthermore, we have found that model memorization leads to the aggregation of privacy neurons in our experiments, and demonstrated that our framework is very suitable for the scenario of deep model dememorization.
The main contributions of our work are summarized as follows:
* For the first time, we bring model editing into the privacy protection of pretrained language models, providing a new way of protecting privacy, and propose DEPN to effectively eliminate model memorization in the post-processing stage.
* We propose the privacy neuron detector to localize privacy neurons based on gradient attribution, and the privacy neuron editor to dememorize privacy information in pretrained language models.
* We conduct experiments to demonstrate that the proposed framework is capable of protecting privacy leakage from pretrained language models.
## 2 Preliminary
**Privacy Definition**: Privacy preservation has become an issue of great concern in the era of pre-trained language models. Protecting privacy first requires specifying the boundaries of privacy. The definition of privacy is broad and closely related to context and discourse (Brown et al., 2022). Any text about a specific person can be considered private. For the convenience of research, a narrow definition of privacy is usually adopted (Sousa and Kern, 2023), which treats personal identity information as private, such as names, ID numbers, phone numbers and other related expressions. The proposed DEPN can be adapted to both of the above definitions.
**Model Editing**: Geva et al. (2021) find that the feed-forward network module in Transformer (i.e., a two-layer perceptron) can be considered as a key-value memory, where each key corresponds to a text pattern and each value represents a distribution over the vocabulary. Based on this finding, a strand of research (Geva et al., 2021; Meng et al., 2022; Dai et al., 2022) proposes to edit factual knowledge encoded in pre-trained LLMs by locating neurons related to the entities of factual knowledge.
The basic idea of localization is to change the parameters of neurons, and then observe the changes in the probability of the object entity predicted by the model. The neurons with greater influence on the probability are more closely related to the object entity.
However, these methods are limited in that they can only observe the probability change of one token at a time. Semantic units are usually composed of a sequence of tokens rather than a single token, which makes it impossible to use these methods directly.
## 3 Methodology
The proposed DEPN consists of three components: the privacy neuron detector (§3.2), the privacy neuron editor (§3.3) to erase the model memorization of privacy data, and the privacy neuron aggregator (§3.4) for privacy preservation in batches.
### Privacy Prediction Task
Given a tuple \(\mathbf{T}=\{\mathbf{X},\mathbf{Y}\}\), let \(\mathbf{Y}=\{y_{1},...,y_{n}\}\) be the sequence with private information, \(\mathbf{X}\) be the context of the sequence, and \(\mathbf{\theta}\) be the parameters of a language model. Given a context \(\mathbf{X}\), the probability of the language model yielding a token is \(P(y_{i}|\mathbf{X},\mathbf{\theta}),y_{i}\in\mathbf{Y}\), so the probability of the model leaking the private sequence is:
\[P(\mathbf{Y}|\mathbf{X},\mathbf{\theta})=\prod_{i=1}^{|\mathbf{Y}|}P(y_{i}|\mathbf{X},\mathbf{\theta}) \tag{1}\]
Take "An\(\blacksquare\) Ka\(\blacksquare\) is a senior writer at ESPN.com" as private sentence containing a person's name "An\(\blacksquare\) Ka\(\blacksquare\)". Suppose the input to the language model is "_ is a senior writer at ESPN.com", our goal is to reduce the probability of privacy leakage, i.e., minimizing the probability of predicting "An\(\blacksquare\)" and "Ka\(\blacksquare\)".
### Privacy Neuron Detector
As described in Section 2, factual knowledge has been found to be stored in the feed-forward networks of Transformer, in the form of key-value memory. Inspired by this, we speculate that private information might also be encoded in specific neurons. Model editing has offered methods to locate and edit knowledge-related neurons. However, existing methods can only deal with semantic units composed of a single token, making them not directly applicable to detecting and editing multi-token private sequences. To address this issue, we propose a privacy attribution method based on gradient integration. The proposed privacy attribution can evaluate which neurons play a key role in the leakage of private information from language models.
Let \(w_{l}^{k}\) be a neuron to be evaluated by the privacy attribution method, where \(l\) is the layer of the neuron in the language model, and \(k\) is its position. According to §3.1, the probability of the model outputting private information is:
\[P(\mathbf{Y}|\mathbf{X},w_{l}^{k})=\prod_{i=1}^{|\mathbf{Y}|}P(y_{i}|\mathbf{X},w_{l}^{k}= \alpha_{l}^{k}) \tag{2}\]
where \(\alpha_{l}^{k}\) represents the value of the \(k\)-th neuron in the \(l\)-th FFN layer.
We gradually change the parameter of the target neuron from \(0\) to its original value. In this process, the probability of the output will change accordingly. We calculate the cumulative gradient of the probability change during this process as the neuron's contribution (i.e., privacy attribution score) to the privacy-sensitive output. The privacy attribution score is computed as:
\[\text{Att}(w_{l}^{k})=\beta_{l}^{k}\int_{0}^{1}\frac{\partial P(\mathbf{Y}|\mathbf{X},\alpha\beta_{l}^{k})}{\partial w_{l}^{k}}\,\mathrm{d}\alpha \tag{3}\]
where \(\beta_{l}^{k}\) is the original value of the neuron \(w_{l}^{k}\), and \(\frac{\partial P(\mathbf{Y}|\mathbf{X},\alpha\beta_{l}^{k})}{\partial w_{l}^{k}}\) calculates the gradient of the model output with regard to \(w_{l}^{k}\).

Figure 1: The diagram of DEPN. When a language model leaks privacy information, DEPN calculates privacy attribution scores using the Privacy Neuron Detector. It then selects the top \(z\) privacy neurons with the Privacy Neuron Aggregator and eliminates the model memorization of privacy information using the Privacy Editor.

Directly calculating continuous integrals is intractable. We follow Dai et al. (2022) and use the Riemann approximation:
\[\text{Att}(w_{l}^{k})=\frac{\beta_{l}^{k}}{m}{\sum_{j=1}^{m}}\frac{ \partial P(\mathbf{Y}|\mathbf{X},\frac{j}{m}\beta_{l}^{k})}{\partial w_{l}^{k}} \tag{4}\]
where \(m=20\) is the number of approximation steps.
As \(P(\mathbf{Y}|\mathbf{X},w_{l}^{k})=\prod_{i=1}^{|\mathbf{Y}|}P(y_{i}|\mathbf{X},w_{l}^{k}= \alpha_{l}^{k})\), we have
\[\text{Att}(w_{l}^{k})=\frac{\beta_{l}^{k}}{m}{\sum_{j=1}^{m}}\sum_{i=1}^{|\mathbf{ Y}|}\frac{\partial P(y_{i}|\mathbf{X},\frac{j}{m}\beta_{l}^{k})}{P(y_{i}|\mathbf{X}, \frac{j}{m}\beta_{l}^{k})\cdot\partial w_{l}^{k}} \tag{5}\]
If the neuron has a great influence on the output of a piece of private information, the gradient will be significant, and a large integration value will be obtained. Therefore, the privacy attribution score can measure the neuron's contribution to the leakage of privacy information: the greater the privacy attribution score, the greater the privacy sensitivity of the neuron. We select neurons with the top \(z\) privacy attribution scores as candidates for editing.
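The attribution loop of Eq. (4) can be sketched as follows. Here `prob_with_neuron` is an assumed helper that runs the model with the neuron \(w_{l}^{k}\) clamped to a given value (e.g., via a forward hook) and returns \(P(\mathbf{Y}|\mathbf{X})\); a finite difference stands in for the autograd gradient one would use in practice.

```python
# A sketch of Eq. (4)'s Riemann approximation. `prob_with_neuron(alpha)`
# is an assumed helper returning P(Y|X) with the neuron's activation
# clamped to alpha; `beta` is the neuron's original activation.
def privacy_attribution(prob_with_neuron, beta: float, m: int = 20) -> float:
    grad_sum = 0.0
    eps = 1e-4  # finite-difference step standing in for autograd
    for j in range(1, m + 1):
        a = (j / m) * beta
        grad_sum += (prob_with_neuron(a + eps) - prob_with_neuron(a)) / eps
    return (beta / m) * grad_sum
```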
### Privacy Editor
After detecting the privacy neuron candidates with the privacy neuron detector, we reduce the model memorization of private information by editing. Particularly, we use a simple yet effective editing strategy: setting the parameters (activation values) of the corresponding neurons to 0, so that the information flow will not pass through these privacy neurons.
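A minimal sketch of this editing strategy as a PyTorch forward hook; the module path and the neuron indices below are hypothetical, with the FFN intermediate of layer \(l\) assumed to be the hooked module.

```python
# Zero the selected FFN activations with a forward hook so information
# flow no longer passes through the detected privacy neurons.
def make_privacy_editor(neuron_positions):
    def hook(module, inputs, output):
        output[..., list(neuron_positions)] = 0.0  # cut information flow
        return output
    return hook

# Assumed module path for a HuggingFace BERT model; indices are illustrative:
# handle = model.bert.encoder.layer[l].intermediate.register_forward_hook(
#     make_privacy_editor({17, 403, 2911}))
# ... run inference with the edited model ...
# handle.remove()
```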
### Privacy Neuron Aggregator
As a number of sentences in the training data of LLMs contain private information, privacy neuron detection and editing can be done over multiple sentences in a batch processing way. To erase privacy information encoded in the language model from multiple sentences in the training data, we propose the privacy neuron aggregator. When the input is a text batch, we calculate the privacy attribution score matrix of each sequence in the batch. After the privacy attribution score calculation, we let each sequence vote for neurons according to their privacy attribution scores, and select the top \(z\) neurons with the most votes. These selected neurons will be edited to erase private information. The hyperparameter \(z\) is adjusted according to the model size, training epochs and other factors. More details can be found in §5.1.
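The voting step might look like the following sketch; how many votes each sequence casts (`per_seq_top` below) is an assumed detail not specified above.

```python
# Each sequence votes for its top-scoring neurons; the z most-voted
# neurons across the batch are then passed to the privacy editor.
from collections import Counter

def aggregate_privacy_neurons(score_matrices, per_seq_top=200, z=200):
    votes = Counter()
    for scores in score_matrices:  # one {(layer, pos): score} dict per sequence
        top = sorted(scores, key=scores.get, reverse=True)[:per_seq_top]
        votes.update(top)
    return [neuron for neuron, _ in votes.most_common(z)]
```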
## 4 Experiments
We carried out experiments to examine the effectiveness of the proposed DEPN on a dataset containing private information.
### Setup
**Dataset**: We used the Enron dataset (Klimt and Yang, 2004). It consists of employee emails that were publicly disclosed during Enron's legal investigation by the Federal Energy Regulatory Commission. It is the largest publicly available collection of "real" email data, containing over 500,000 emails from 158 users.1 We randomly sampled 5% of the data from Enron as the validation dataset to evaluate model performance.
Footnote 1: [https://www.cs.cmu.edu/~enron/](https://www.cs.cmu.edu/~enron/)
**Private Information Sampling**: In our study, we categorized the private information in the Enron dataset into two types: private phrases (for the narrow definition of privacy), such as names and phone numbers, and a batch of randomly sampled sentences to be edited. **Names**: We selected 20 unique names that are memorized by language models, found in 126 sentences, such as "An\(\blacksquare\) Ka\(\blacksquare\) is a senior writer at ESPN.com". **Phone Numbers**: We also selected 20 unique LM-memorized phone numbers, such as "My phone number is 7 1 3 8 5 \(\blacksquare\)\(\blacksquare\)". **Private texts**: We randomly selected 100 sentences that are not semantically overlapping with each other. In Appendix A.4, we discuss how we determine whether private information is memorized by a language model.
**Model Settings**: We conducted experiments using the widely used pretrained model, **BERT-base** (Devlin et al., 2018). The model consists of 12 transformer layers, with a hidden state size of 768 and an internal hidden size of 3072 for the feed-forward network (FFN). Our experiments were performed on NVIDIA Tesla A6000 graphics processors. More training details are shown in Appendix A.1.
**Baselines**: To demonstrate the effectiveness and robustness of DEPN, we compared it with the following baseline models. **BERT-O**: the BERT model that has not been trained on the Enron dataset; since this model does not know the private information in the dataset, it provides an oracle for assessing the risk of privacy leakage. **BERT-F**: the BERT model trained on the Enron dataset, which achieves the best predictive performance on the Enron dataset but carries the greatest risk of privacy leakage. **BERT-DP**: a model trained with differentially private gradient descent (Li et al., 2021) on the Enron dataset, the commonly used privacy protection method when training on private data.
We applied the proposed DEPN to **BERT-F** to produce a safe model, referred to as **BERT-FE** in the following experiments. Our code is publicly available.2
Footnote 2: [https://github.com/flamewei123/DEPN](https://github.com/flamewei123/DEPN)
**Metrics**: To observe the effect of different privacy-preserving methods on model performance, we use the perplexity of the masked language modeling task on the Enron validation dataset (**Valid-PPL**) as the metric.
Since the types of private information differ, we report separate metrics for the risk of privacy leakage.
**Exposure:** The exposure (Carlini et al., 2019) metric is commonly used in privacy attacks to measure the exposure risk of phone numbers. Given a number sequence \(c\), a model with parameters \(\mathbf{\theta}\), and the randomness space \(\mathcal{R}\), the exposure \(e_{\mathbf{\theta}}\) of \(c\) can be calculated as:
\[e_{\mathbf{\theta}}=\log_{2}|\mathcal{R}|-\log_{2}\text{Rank}_{\mathbf{\theta}}(c). \tag{6}\]
**Mean Reciprocal Rank (MRR):** A person's name is usually composed of multiple tokens. Therefore, we use the average of the reciprocal ranks of the target tokens to measure the model's memorization of names. Given a prefix \(\mathbf{Q}\) and a name token sequence \(\mathbf{E}=\{e_{1},...,e_{n}\}\) of length \(|\mathbf{E}|\), with the model predicting the rank of each target token as \(Rank(e_{i}|Q)\), the MRR for the name \(\mathbf{E}\) is calculated as follows:
\[\frac{\sum_{i=1}^{|\mathbf{E}|}\frac{1}{Rank(e_{i}|Q)}}{|\mathbf{E}|}. \tag{7}\]
**Perplexity (PPL):** When the private text is a complete sentence, we directly use the perplexity as the measure of the model memorization.
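Both rank-based risk metrics are straightforward to compute once the model's ranks for the private tokens are known; a small sketch with illustrative numbers follows (Eqs. (6) and (7)).

```python
# Sketches of the risk metrics. `rank` is the private sequence's rank among
# all |R| candidates in the randomness space under the model.
import math

def exposure(rank: int, randomness_space_size: int) -> float:
    # Eq. (6): log2|R| - log2 Rank(c)
    return math.log2(randomness_space_size) - math.log2(rank)

def mrr(target_ranks: list[int]) -> float:
    # Eq. (7): mean reciprocal rank over the tokens of a name
    return sum(1.0 / r for r in target_ranks) / len(target_ranks)

print(exposure(rank=1, randomness_space_size=10**10))  # ~33.2: fully memorized
print(mrr([1, 4]))                                     # 0.625
```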
### Main Results
Table 1 presents our main results, including model performance, privacy leakage risk, and execution time cost. The results demonstrate the competitiveness of our framework.
For the performance on the Enron validation dataset (Valid-PPL), BERT-O, which is not trained on the Enron dataset, exhibits the poorest performance. BERT-DP trained with DP-SGD does not perform well either, due to noise introduced during backpropagation. In contrast, BERT-FE equipped with DEPN performs almost on par with BERT-F on the validation dataset, indicating that neuron erasure minimally impacts model performance.
The privacy leakage risk metrics, including exposure, MRR, and PPL, clearly indicate that BERT-FE equipped with DEPN achieves a reduction of privacy leakage risk. BERT-F, trained directly on private data, exhibits the highest risk. In comparison, DEPN significantly reduces the risk of leakage. BERT-O, which has no access to private data, demonstrates the lowest risk across all three data types. The BERT-DP model also exhibits very low risk.
| Privacy Type | Model | Time \(\downarrow\) | Valid-PPL \(\downarrow\) | Risk Metric | Value |
| --- | --- | --- | --- | --- | --- |
| Phone Number | BERT-O | - | 25.23 | Exposure \(\downarrow\) | **1.58** |
| Phone Number | BERT-F | 100% | **3.07** | Exposure \(\downarrow\) | 15.74 |
| Phone Number | BERT-FE | **2.4%** | 3.11 | Exposure \(\downarrow\) | 9.78 |
| Phone Number | BERT-DP | 181.4% | 5.43 | Exposure \(\downarrow\) | 3.12 |
| Name | BERT-O | - | 25.23 | MRR \(\downarrow\) | **0.87** |
| Name | BERT-F | 100% | **3.07** | MRR \(\downarrow\) | 1.21 |
| Name | BERT-FE | **4.4%** | 3.11 | MRR \(\downarrow\) | 1.15 |
| Name | BERT-DP | 181.4% | 5.43 | MRR \(\downarrow\) | 0.95 |
| Random Text | BERT-O | - | 25.23 | PPL \(\uparrow\) | **10.05** |
| Random Text | BERT-F | 100% | **3.07** | PPL \(\uparrow\) | 2.30 |
| Random Text | BERT-FE | **4.6%** | 3.11 | PPL \(\uparrow\) | 3.67 |
| Random Text | BERT-DP | 181.4% | 5.43 | PPL \(\uparrow\) | 8.82 |

Table 1: Results of testing the risks of leaking private phone numbers, names, and texts on different baseline models, as well as the efficiency of protection. **Bold** and underlined results indicate the best and second best result, respectively. \(\uparrow\): the higher the better. \(\downarrow\): the lower the better.
In terms of execution time cost, we assume that the fine-tuning time of BERT-F on data excluding privacy is 100% (reference time cost). The DEPN framework requires less than 5% of the reference time cost, while BERT-DP requires more time due to gradient clipping.
In conclusion, while differential privacy training and fine-tuning with non-private data can mitigate privacy leakage risks, they incur more time and may significantly undermine model performance. The DEPN framework strikes an excellent balance between performance and privacy protection.
## 5 Analysis
We further conducted in-depth analyses to demonstrate why DEPN is able to dememorize privacy in LLMs from multiple perspectives, including analyses on the relationship between privacy neurons and model memorization, on the robustness as well as the cost-effectiveness of DEPN.
### Effect of the Hyperparameter
Figure 2 illustrates the impact of the hyperparameter, the number of edited neurons, on the model. We calculate the exposures of the original model BERT-F and the enhanced model BERT-FE on 20 phone numbers. In Figure 2(a), the red line represents the average exposure of BERT-F, while the green line represents the average exposure of BERT-FE with varying numbers of edited neurons. As the number of edited neurons increases, the exposure significantly decreases. In Figure 2(b), the purple line represents the PPL of BERT-F on the validation set, while the blue line represents the PPL of BERT-FE on the validation set with different numbers of edited neurons. As the number of erasures increases, the PPL noticeably increases. Therefore, increasing the number of edited neurons reduces the risk of privacy leakage in the model, but it also leads to a decrease in the model performance.
### Relationship between Memorization And Privacy Neurons
As it is widely recognized, privacy data leakage often stems from the model's ability to memorize the training data.
In this subsection, we conducted experiments to investigate the relationship between model memorization and privacy neurons, providing further evidence for the effectiveness of the proposed DEPN.
Figure 2: Model performance and privacy leakage risk as the number of edited neurons changes.

**Impact of Training Time on Privacy Neuron Distribution over Layers**: Figure 3 depicts the evolution of the distribution of privacy neurons over layers as the number of training epochs increases. Overall, the distribution of privacy neurons is pyramid-shaped, and most privacy neurons identified by the privacy neuron detector are located in layers 10-12 of BERT-base. Specifically, in epoch 1, about 40% of privacy neurons are in the top layer of BERT-base. As training progresses, the proportion of privacy neurons from deep layers increases to 60% by epoch 3 and to 80% by epoch 6. By the 9-th epoch, the distribution of privacy neurons remains largely unchanged compared to the 6-th epoch. This suggests that as the depth of model training increases, the memorization of private data tends to converge.
In Appendix A.3, we conducted experiments to observe the changes in privacy leakage risk reduction at different training epochs. The results show that when the training time increases, the risk of privacy leakage increases too, and the proposed DEPN becomes more effective in privacy preservation.
**Effect of the Model Size**: Table 2 illustrates the performance of the DEPN framework on models of different scales. Each model was trained for 10 epochs using the optimal hyperparameter settings. Overall, larger models require more time to identify privacy neurons and require editing a greater number of privacy neurons for optimal performance. Larger models tend to show deeper memorization of phone numbers before privacy neurons are edited, leading to higher exposure. After privacy neuron editing, in terms of reduction rate, the exposure of larger models is reduced even more. These findings suggest that larger models are more at risk of privacy breaches. Fortunately, the DEPN framework demonstrates better performance on larger models compared to smaller ones, offering improved protection against privacy risks.
**Summary of the Relationship between Memorization and Privacy Neurons**: Based on the aforementioned experimental findings, we can conclude that the model's scale, training time, and the frequency of privacy data occurrence are all factors that influence model memorization. As the model memorization of privacy data deepens, the aggregation of privacy neurons associated with the privacy data becomes more pronounced, which makes the method of locating and eliminating privacy neurons more suitable for deep memorization scenarios. Therefore, the DEPN framework has demonstrated excellent effectiveness in mitigating model memorization.
### Robustness Analysis
Ablation Study. We conducted ablation experiments to assess the robustness of the privacy neuron detector by comparing its performance with different neuron localization methods on phone number data. In Table 4, we present the results of these experiments. Specifically, "KN" refers to the knowledge attribution approach proposed by Dai et al. (2022), while "Random" denotes an approach that randomly selects the same number of neurons as our method.
\begin{table}
\begin{tabular}{l|c|c|c c|c c|c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{\# Edited Neurons} & \multirow{2}{*}{Time} & \multicolumn{2}{c|}{Before Editing} & \multicolumn{2}{c|}{After Editing} & \multirow{2}{*}{Reduction Rate} \\ \cline{4-7} & & & Valid-PPL & Exposure & Valid-PPL & Exposure & \\ \hline bert-small & 100 & 0.26h & 4.09 & 5.10 & 4.57 & 3.39 & 33.5\% \\ bert-base & 200 & 1.59h & 3.07 & 15.74 & 3.11 & 9.78 & 37.86\% \\ bert-large & 400 & 7.66h & 2.93 & 18.10 & 2.98 & 7.63 & 57.84\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: The privacy leakage risk reduction rate for models of different sizes.
Figure 3: The distribution of privacy neurons in the bert-base model at different training epochs.
Our method PND (privacy neuron detector) achieves superior performance in terms of exposure reduction compared to the other methods. Although the knowledge attribution approach achieves a good exposure reduction, it is less effective than our method because its attribution targets a single token. The random selection approach is also able to decrease privacy exposure, but the exposure reduction is not as significant as that of the KN approach or our detector. These results unequivocally demonstrate the effectiveness of our method in privacy neuron localization.
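To make the localization idea concrete, here is a self-contained sketch of integrated-gradients-style neuron scoring on a toy FFN block; the toy modules and the name `attribution_scores` are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
ffn = nn.Sequential(nn.Linear(16, 64), nn.GELU())  # toy FFN block
head = nn.Linear(64, 100)                          # toy LM head, vocab = 100

def attribution_scores(x, target_token, steps=20):
    # Riemann approximation of integrated gradients over scaled activations:
    # attr_i ~ h_i * (1/steps) * sum_k dP(target | (k/steps) * h) / dh_i
    with torch.no_grad():
        h = ffn(x)
    total_grad = torch.zeros_like(h)
    for k in range(1, steps + 1):
        h_scaled = ((k / steps) * h).detach().requires_grad_(True)
        prob = torch.softmax(head(h_scaled), dim=-1)[..., target_token].sum()
        (grad,) = torch.autograd.grad(prob, h_scaled)
        total_grad += grad / steps
    return (h * total_grad).abs().squeeze(0)

scores = attribution_scores(torch.randn(1, 16), target_token=42)
print(scores.topk(5).indices)  # candidate neurons most tied to the target
```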
Robustness to Different Prompts. We conducted experiments to validate the robustness of DEPN to different prompts. We sampled private data containing phone numbers, all sharing the same prefix, from the training dataset. We then performed privacy attacks during inference using different prompts to examine whether changing prompts would still result in privacy leakage. Table 5 presents the results of these experiments. The training data consist of phone numbers with the same prefix 'Contact me at ***'. We observe privacy risk reduction across all prompts, demonstrating the robustness of DEPN to prompt variation.
### Analysis on the Cost-Effectiveness of DEPN
In this subsection we discuss a limitation of DEPN, specifically its dependency on the amount of private data to be erased. We conducted an experiment where we used 1,000 private data instances, each containing phone numbers, extracted from our training dataset. DEPN was applied to the BERT-base model to erase private information. Experimental results are shown in Table 3. As the amount of private data increases, more neurons need to be edited to achieve better privacy protection, and the performance of the model drops significantly. Furthermore, it becomes apparent that, with the escalation of private data volume, the reduction in privacy risks gradually diminishes. These observations indicate that DEPN excels at remediating language models when dealing with a small number of data leaks, but exhibits weak performance when confronted with a large batch of private data.
## 6 Related Work
Model Editing. To edit incorrect or undesirable information captured in LLMs, a variety of model editing approaches have been proposed, which can be categorized into four strategies.
First, the Constrained Fine-tuning strategy Zhu et al. (2020) updates LLMs specifically for the target knowledge, allowing precise modification. Second, the Memory-based Editing strategy Mitchell et al. (2022); Dong et al. (2022) maintains a knowledge cache that stores new information to replace undesirable predictions. Third, the Meta-learning-based Editing strategy De Cao et al. (2021); Mitchell et al. (2021) introduces editable training based on meta-learning, training model parameters to accommodate editing. Lastly, the Locating and Editing strategy Geva et al. (2021); Meng et al. (2022); Dai et al. (2022) assumes that knowledge is locally stored in LLMs. This strategy locates specific parameters associated with the knowledge and directly edits parameters to perform editing.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Before Editing} & \multicolumn{2}{c}{After Editing} \\ \cline{2-5} & Valid-PPL & Exposure & Valid-PPL & Exposure \\ \hline PND + Editing & 3.07 & 15.54 & 3.11 & **9.78** \\ \hline KN + Editing & 3.07 & 15.54 & 3.10 & 10.75 \\ \hline Random + Editing & 3.07 & 15.54 & 3.07 & 12.48 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of using different neuron localization methods on results.
\begin{table}
\begin{tabular}{c|c|c|c c|c c} \hline \hline \multirow{2}{*}{Privacy Amount} & \multirow{2}{*}{\# Edited Neurons} & \multirow{2}{*}{Time} & \multicolumn{2}{c|}{Before Editing} & \multicolumn{2}{c}{After Editing} \\ \cline{4-7} & & & Valid-PPL & Exposure & Valid-PPL & Exposure \\ \hline
20 & 200 & 0.76h & 3.07 & 15.74 & 3.11 & 9.78 \\
100 & 500 & 1.59h & 3.07 & 12.46 & 3.33 & 10.47 \\
1000 & 2000 & 17.61h & 3.07 & 8.32 & 3.81 & 8.03 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Analysis results on the cost-effectiveness of DEPN.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Prompts & Original Exposure & Exposure \\ \hline ‘Contact me at ***’ & 12.52 & 9.77 \(\downarrow\) \\ \hline ‘Contact me at ***’ & 11.20 & 9.40 \(\downarrow\) \\ ‘Contact me : ***’ & 12.50 & 9.68 \(\downarrow\) \\ ‘Call me at ***’ & 12.31 & 11.82 \(\downarrow\) \\ ‘My phone number is ***’ & 13.41 & 12.96 \(\downarrow\) \\ ‘You can call me at ***’ & 13.04 & 12.84 \(\downarrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results with varying prompts during privacy attack. ‘Contact me at ***’ is the prefix to the private phone numbers in the training data, and the others are varying prompts used in inference.
Privacy Protection. To address privacy risks in NLP models, various privacy-preserving methods have been proposed, which can be categorized into three main stages of application Guo et al. (2022); Sousa and Kern (2023): the data processing stage, the pre-training and/or fine-tuning stage, and the post-processing stage. In the data processing stage, methods involve removing or replacing sensitive information in the original data Liu et al. (2017); El Emam et al. (2009); Zhou et al. (2008); Garcia-Pablos et al. (2020).
In the pre-training or fine-tuning stage, data privacy can be protected by modifying the model training process. One approach is differential privacy stochastic gradient descent (DP-SGD) Li et al. (2021); Hoory et al. (2021), which introduces noise into the clipped gradient to reduce the distinction between gradients and prevent memorization of training data. Another method is adversarial training Plant et al. (2021); Coavoux et al. (2018), which constrains the model's learning of private information through adversarial training techniques. However, methods used in the data processing stage and in the pre-training or fine-tuning stage are not applicable if the privacy leakage is discovered after the model training is completed. Methods used in the post-processing stage focus on making trained models forget specific data or altering specific parameters to safeguard hidden private information Bourtoule et al. (2021); Gupta et al. (2021); Neel et al. (2020). These methods often come with a high computational cost and cannot be easily applied to large models. In contrast, the proposed DEPN achieves the protection of private information in the post-processing stage with a small computational overhead.
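For concreteness, the DP-SGD update mentioned above combines per-example gradient clipping with Gaussian noise; a minimal sketch with illustrative hyperparameters (not the implementation of the cited works):

```python
import torch

def dp_sgd_step(param, per_sample_grads, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    # Clip each example's gradient to norm <= clip_norm, average them,
    # then add Gaussian noise so no single example dominates the update.
    clipped = [g * (clip_norm / (g.norm() + 1e-12)).clamp(max=1.0)
               for g in per_sample_grads]
    mean_grad = torch.stack(clipped).mean(dim=0)
    noise = torch.randn_like(mean_grad) * noise_mult * clip_norm / len(clipped)
    return param - lr * (mean_grad + noise)

param = torch.zeros(4)
grads = [torch.randn(4) for _ in range(8)]  # one gradient per training example
param = dp_sgd_step(param, grads)
```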
## 7 Conclusion
In this paper, we have presented a privacy neuron detecting and editing framework DEPN to address privacy leakage risks in pretrained language models. Through the privacy neuron detector based on the privacy attribution scoring method, we accurately detect risky neurons associated with private information. The privacy neuron editor effectively eliminates model memorization of private data. Experimental results and in-depth analyses demonstrate the ability of DEPN to reduce privacy risks efficiently without degrading model performance. Our work explores a novel approach to privacy protection and contributes to model de-memorization in the post-processing stage.
Limitations. Our current study still has two limitations. First, although we propose a method to process private data in batches, we find that too many instances in a batch will reduce the effect of memorization erasure. Second, we use only a few types of private information in our experiments due to the limited availability of datasets containing private information. We would like to collect more available datasets for our framework in the future.
Ethical Statement. In this paper, we use the Enron dataset to evaluate the privacy-preserving effect of DEPN. This dataset consists of employee emails that were publicly disclosed during Enron's legal investigation by the Federal Energy Regulatory Commission. Since the data comes from real persons, we masked sensitive information such as specific names and phone numbers in this paper.
## Acknowledgements
The work was partially supported by the research collaboration project between Tianjin University and ByteDance (PJ20210625900030) and by Zhejiang Lab (No. 2022KH0AB01). We would like to thank the anonymous reviewers for their insightful comments.
|
2309.13491 | Extension properties of orbit spaces for proper actions revisited | Let $G$ be a locally compact Hausdorff group. We study orbit spaces of equivariant absolute neighborhood extensors ($G$-${\rm ANE}$'s) for the class of all proper $G$-spaces that are metrizable by a $G$-invariant metric. We prove that if a $G$-space $X$ is a $G$-${\rm ANE}$ and all $G$-orbits in $X$ are metrizable, then the $G$-orbit space $X/G$ is an {\rm ANE}. If $G$ is either a Lie group or an almost connected group, then for any closed normal subgroup $H$ of $G$, the $H$-orbit space $X/H$ is a $G/H$-{\rm ANE} provided that all $H$-orbits in $X$ are metrizable. | Sergey A. Antonyan | 2023-09-23T22:55:45Z | http://arxiv.org/abs/2309.13491v1 | # Extension properties of orbit spaces for proper actions revisited
###### Abstract.
Let \(G\) be a locally compact Hausdorff group. We study orbit spaces of equivariant absolute neighborhood extensors (\(G\)-ANE's) for the class of all proper \(G\)-spaces that are metrizable by a \(G\)-invariant metric. We prove that if a \(G\)-space \(X\) is a \(G\)-ANE and all \(G\)-orbits in \(X\) are metrizable, then the \(G\)-orbit space \(X/G\) is an ANE. If \(G\) is either a Lie group or an almost connected group, then for any closed normal subgroup \(H\) of \(G\), the \(H\)-orbit space \(X/H\) is a \(G/H\)-ANE provided that all \(H\)-orbits in \(X\) are metrizable.
Key words and phrases: Proper \(G\)-space; \(G\)-ANE; Orbit space; Slice.

2020 Mathematics Subject Classification: 54C55; 54C20; 54H15; 57S20.

The author was supported by grants IN-100123 from PAPIIT (UNAM) and A1-S-7897 from CONACYT.
In this paper we prove three theorems about the preservation of extension properties by the orbit space functor. More general versions of these theorems are also presented in Section 6.
**Theorem 1.3** (Proper actions of locally compact groups and \(H=G\)).: _Let \(G\) be a locally compact group and \(X\) a proper \(G\)-space such that all \(G\)-orbits in \(X\) are metrizable. If \(X\) is a \(G\)-ANE then the \(G\)-orbit space \(X/G\) is an ANE._
A proof of this theorem was provided in [12, Theorem 6.4]. In that proof the following affirmation, that we state here in the form of a proposition, was used.
**Proposition 1.4**.: _Let \(G\) be a topological group and \(K\) any closed subgroup of \(G\). If \(S\) is a \(K\)-space, then the \(G\)-orbit space \((G\times_{K}S)/G\) is homeomorphic to a retract of the \(K\)-orbit space \((G\times_{K}S)/K\)._
Here \(G\times_{K}S\) denotes the twisted product of \(G\) and \(S\) with respect to the subgroup \(K\) defined in Section 2.
The argument for the proof of this statement given in the proof of [12, Theorem 6.4], unfortunately, works correctly only for an Abelian acting group \(G\). Namely, in that proof the formula \((G\times_{K}S)/G\cong G/K\times S/K\) was used which, however, is valid only for an Abelian group \(G\) (see [9, Proposition 2]).
Although we will need Proposition 1.4 here only in the case of a compact subgroup \(K\subset G\), we provide a detailed proof even for any closed subgroup \(K\subset G\) in the Appendix. Thus, the gap in the proof of [12, Theorem 6.4] is easily filled.
The second theorem is the following.
**Theorem 1.5** (Proper actions of almost connected groups).: _Let \(G\) be an almost connected locally compact group, \(H\) a closed normal subgroup of \(G\), and \(X\) a proper \(G\)-space that admits a \(G\)-invariant metric. If \(X\) is a \(G\)-ANE, then the \(H\)-orbit space \(X/H\) is a \(G/H\)-ANE._
The proof of this theorem given in [9, Theorem 3(1)] is correct only for compact subgroups \(H\subset G\). For an arbitrary closed subgroup \(H\subset G\) our argument in [9, Theorem 3(3)] is based on [9, Proposition 8(1)] the proof of which, unfortunately, contains a gap. The proof of Theorem 1.5 given in Section 4 is based on the following well-known proposition.
**Proposition 1.6**.: _Let \(G\) be an almost connected locally compact group, \(K\) a compact maximal subgroup of \(G\), and \(S\) a \(K\)-space. Then \(S\) is a \(K\)-equivariant retract of the twisted product \(G\times_{K}S\)._
Proof.: This result follows from a result of Abels [1, Theorem 2.1] according to which \(G\times_{K}S\) is \(K\)-homeomorphic to a product \(T\times S\) endowed with the diagonal action of \(K\), where \(T\) is a finite-dimensional linear \(K\)-space. In this case the map \((t,s)\mapsto(0,s)\) is a \(K\)-equivariant retraction of \(T\times S\) onto \(\{0\}\times S\) which, in turn, is \(K\)-homeomorphic to \(S\).
The third theorem is the following.
**Theorem 1.7** (Proper actions of any Lie groups).: _Let \(G\) be a Lie group, \(H\) a closed normal subgroup of \(G\), and \(X\) a proper \(G\)-space. If \(X\) is a \(G\)-ANE, then the \(H\)-orbit space \(X/H\) is a \(G/H\)-ANE._
In [13, Theorem 1.1] a proof of this theorem was given even for any locally compact acting group \(G\). Again, this proof used a formula (see [13, formula (3.3)]), which is only valid for Abelian groups. Below we give a very brief proof of this theorem in the case of proper actions of arbitrary Lie groups, which is practically the most important case. This proof is based on the following result we proved in [13, Proposition 4.1].
**Proposition 1.8** ([13]).: _Let \(G\) be a Lie group, \(K\) a compact subgroup of \(G\), and \(S\) a \(K\)-space. Then \(S\) is a neighborhood \(K\)-equivariant retract of the twisted product \(G\times_{K}S\)._
Regarding the "\(G\)-AE version" of the above results, we have the following theorem proven in [14, Theorem 7.1].
**Theorem 1.9**.: _Let \(G\) be a locally compact group and \(X\) any \(G\)-\(\mathrm{AE}\). Assume that \(H\) is an almost connected normal subgroup of \(G\) such that all \(H\)-orbits in \(X\) are metrizable. Then the \(H\)-orbit space \(X/H\) is a \(G/H\)-\(\mathrm{AE}\)._
We note that almost connectedness of \(H\) is essential in this theorem. Indeed, let \(G=\mathbb{R}\), the reals, \(X=\mathbb{R}\) and \(H=\mathbb{Z}\), the integers. Then the translation action is a proper action of \(G\) on \(X\), and by [2, Theorem 4.4], \(X\) is a \(G\)-\(\mathrm{AE}\). However \(X/H\), being a circle, is not an \(\mathrm{AE}\).
In Section 6 we strengthen Theorems 1.3, 1.5 and 1.7, discarding in their statements the hypothesis about the properness of the \(G\)-space \(X\). This is achieved by using the lifting properties of equivariant embeddings.
Before passing to the details of the proofs, it is convenient to recall some auxiliary notions and results.
## 2. Some basic definitions and auxiliary results
Throughout the paper the letter \(G\) will denote a locally compact Hausdorff group unless otherwise is stated; by \(e\) we denote the unity of \(G\).
All topological spaces are assumed to be Tychonoff (= completely regular and Hausdorff). The basic ideas and facts of the theory of \(G\)-spaces or topological transformation groups can be found in Bredon [18] and in Palais [22]. Our basic references on proper group actions are Palais [23] and Abels [2]. For the equivariant theory of retracts the reader can see, for instance, [4], [5], [9], [12] and [13].
For the convenience of the reader we recall, however, some more special definitions and facts.
Here we deal with \(G\)-spaces. If \(X\) and \(Y\) are two \(G\)-spaces then a continuous map \(f:X\to Y\) is called a \(G\)-map, if \(f(gx)=gf(x)\) for all \(x\in X\) and \(g\in G\). If a \(G\)-map is a homeomorphism then it is called a \(G\)-homeomorphism.
If \(X\) is a \(G\)-space and \(H\) a subgroup of \(G\) then, for a subset \(S\subset X\), \(H(S)\) denotes the \(H\)-saturation of \(S\), i.e., \(H(S)\)= \(\{hs|\ h\in H,\ s\in S\}\). In particular, \(H(x)\) denotes the \(H\)-orbit \(\{hx\in X|\ h\in H\}\) of \(x\). The quotient space of all \(H\)-orbits is called the \(H\)-orbit space and denoted by \(X/H\).
If \(H(S)\)=\(S\), then \(S\) is said to be an \(H\)-invariant set. A \(G\)-invariant set will simply be called an invariant set.
For a closed subgroup \(H\subset G\), by \(G/H\) we will denote the \(G\)-space of cosets \(\{gH|\ g\in G\}\) under the action induced by left translations.
If \(X\) is a \(G\)-space and \(H\) a closed normal subgroup of \(G\), then the \(H\)-orbit space \(X/H\) will always be regarded as a \(G/H\)-space endowed with the following action of the group \(G/H\): \((gH)*H(x)=H(gx)\), where \(\ gH\in G/H,\ H(x)\in X/H\).
For any \(x\in X\), the subgroup \(G_{x}=\{g\in G\mid gx=x\}\) is called the stabilizer (or stationary subgroup) at \(x\).
Let \(X\) be a \(G\)-space. Two subsets \(U\) and \(V\) in \(X\) are called thin relative to each other [23, Definition 1.1.1], if the set \(\langle U,V\rangle=\{g\in G|\ gU\cap V\neq\emptyset\}\) has a compact closure in \(G\). A subset \(U\) of a \(G\)-space \(X\) is called _small_, if every point in \(X\) has a neighborhood thin relative to \(U\). A \(G\)-space \(X\) is called _proper_ (in the sense of R. Palais), if every point in \(X\) has a small neighborhood. We refer to the seminal paper of R. Palais [23] for further information about proper \(G\)-spaces.
In the present paper we are especially interested in the class \(G\)-\(\mathcal{M}\) of all metrizable proper \(G\)-spaces that admit a compatible \(G\)-invariant metric. It is well-known that, for \(G\) a compact group, the class \(G\)-\(\mathcal{M}\) coincides with the class of _all_ metrizable \(G\)-spaces (see [22, Proposition 1.1.12]). A fundamental result of R. Palais [23, Theorem 4.3.4] states that if \(G\) is a Lie group, then \(G\)-\(\mathcal{M}\) includes all _separable_, metrizable proper \(G\)-spaces.
Let us recall the definition of a twisted product \(G/H\times_{K}S\), where \(H\) is a closed normal subgroup of \(G\), \(K\) any closed subgroup of \(G\), and \(S\) a \(K\)-space.
\(G/H\times_{K}S\) is the orbit space of the \(K\)-space \(G/H\times S\), where \(K\) acts on the Cartesian product \(G/H\times S\) by \(k(gH,s)=(gk^{-1}H,ks)\). Furthermore, there is a natural action of \(G\) on \(G/H\times_{K}S\) given by \(g^{\prime}[gH,s]=[g^{\prime}gH,s]\), where \(g^{\prime}\in G\) and \([gH,s]\) denotes the \(K\)-orbit of the point \((gH,s)\) in \(G/H\times S\). The twisted products of the form \(G\times_{K}S\) (i.e., when \(H\) is the trivial subgroup of \(G\)) are of a particular interest in the theory of transformation groups (see [18, Ch. II, SS 2]).
A \(G\)-space \(Y\) is called an equivariant absolute neighborhood extensor for the class \(G\)-\(\mathcal{M}\) (notation: \(Y\in G\)-ANE) if, for any \(X\in G\)-\(\mathcal{M}\) and any closed invariant subset \(A\subset X\), every \(G\)-map \(f:A\to Y\) admits a \(G\)-map \(\psi\colon U\to Y\) defined on an invariant neighborhood \(U\) of \(A\) in \(X\) such that \(\psi|_{A}=f\). If, in addition, one can always take \(U=X\), then we say that \(Y\) is an equivariant absolute extensor for \(G\)-\(\mathcal{M}\) (notation: \(Y\in G\)-AE). The map \(\psi\) is called a \(G\)-extension of \(f\).
The following proposition was proved in [13, Proposition 3.3] and will be used in the proofs of Theorems 1.5 and 1.7.
**Proposition 2.1** ([13]).: _Let \(H\) be a closed normal subgroup of \(G\), \(K\) a compact large subgroup of \(G\), and \(S\) a \(K\)-space. If \(S\) is a \(K\)-\(ANE\), and all \(K\cap H\)-orbits in \(S\) are metrizable, then the twisted product \(G/H\times_{K}S\) is a \(G/H\)-\(ANE\)._
Let us recall the well known definition of a slice [23, p. 305]:
**Definition 2.2**.: _Let \(X\) be a \(G\)-space and \(H\) a closed subgroup of \(G\). An \(H\)-invariant subset \(S\subset X\) is called an \(H\)-slice in \(X\), if \(G(S)\) is open in \(X\) and there exists a \(G\)-map \(f:G(S)\to G/H\) such that \(S\)=\(f^{-1}(eH)\). The saturation \(G(S)\) is called a tubular set and \(H\) is called a slicing group._
_If \(G(S)=X\), then we say that \(S\) is a global \(H\)-slice for \(X\)._
The following result of R. Palais [23, Proposition 2.3.1] plays a central role in the theory of topological transformation groups.
**Theorem 2.3** (Slice Theorem).: _Let \(G\) be a Lie group, \(X\) be a proper \(G\)-space and \(x\in X\). Then there exists a \(G_{x}\)-slice \(S\subset X\) such that \(x\in S\)._
In our proofs we will also need the following approximate version of the Slice Theorem proved in [12, Theorem 3.6] (see also [15, Theorem 6.1]) which is valid for any locally compact group.
**Theorem 2.4** (Approximate Slice Theorem).: _Let \(G\) be any group, \(X\) a proper \(G\)-space and \(x\in X\). Then for any neighborhood \(O\) of \(x\) in \(X\), there exist a compact large subgroup \(K\) of \(G\) with \(G_{x}\subset K\), and a \(K\)-slice \(S\) such that \(x\in S\subset O\)._
Recall that here a subgroup \(K\subset G\) is called _large_, if the quotient space \(G/K\) is locally connected and finite-dimensional (see [15]).
In the context of equivariant extension properties the notion of a large subgroup was first singled out in [7] (for compact groups) and in [9] (for locally compact groups). Although some geometric characterizations of this notion were available much earlier (see [15, Section 3] and the literature cited there), new characterizations through equivariant extension properties of the coset space \(G/K\) were given in [9, Proposition 6], [12, Proposition 3.2] and [15, Theorem 5.3].
The following result will be applied in the proofs of all three theorems below.
**Proposition 2.5** ([12, Proposition 3.4]).: _Let \(K\) be a compact large subgroup of \(G\), and \(X\) a \(G\)-\(\mathrm{ANE}\) (respectively, a \(G\)-\(\mathrm{AE}\)). Then \(X\) is a \(K\)-\(\mathrm{ANE}\) (respectively, a \(K\)-\(\mathrm{AE}\))._
**Remark 2.6**.: _A careful analysis of the proof of [12, Proposition 3.4] shows that this result is true also for any compact subgroup \(K\) of \(G\) such that the coset space \(G/K\) is just metrizable. Indeed, in the proof of [12, Proposition 3.4] it is just needed that the twisted product \(G\times_{K}S\) admits a \(G\)-invariant metric provided that \(G/K\) and \(S\) are metrizable. But this is true without assuming that \(K\) is a large subgroup and this is proved explicitly in [14, Lemma 6.5] and [14, Theorem 6.1]._
The following proposition is well known (see, e.g. [2, Lemma 3.5]).
**Proposition 2.7**.: _Let \(H\) be a compact subgroup of \(G\), \(X\) a proper \(G\)-space and \(S\) a global \(H\)-slice of \(X\). Then the map \(\xi:G\times_{H}S\to X\) defined by \(\xi([g,s])=gs\) is a \(G\)-homeomorphism._
The following equivariant version of Hanner's open union theorem [20, Theorem 19.2] is proved in [12, Corollary 5.7]. A short and beautiful proof of Hanner's theorem was given by J. Dydak [19, Corollary 1.5].
**Theorem 2.8** ([12]).: _Let \(Z\in G\)-\(\mathcal{M}\). If a \(G\)-space \(Y\) is the union of a family of invariant open \(G\)-\(\mathrm{ANE}(Z)\) subsets \(Y_{\mu}\subset Y\), \(\mu\in\mathcal{M}\), then \(Y\) is a \(G\)-\(\mathrm{ANE}(Z)\)._
## 3. Proof of Theorem 1.3
By Theorem 2.4, \(X\) has an open invariant cover by tubular sets of the form \(G(S)\), where each \(S\) is a \(K\)-slice with the slicing group \(K\) a compact large subgroup of \(G\). Then the orbit space \(X/G\) is the union of its open subsets of the form \(G(S)/G\). According to Hanner's open union theorem in [20, Theorem 19.2] or [19, Corollary 1.5] (see also Theorem 2.8), it suffices to show that each \(G(S)/G\) is an \(\mathrm{ANE}\).
To this end, we first observe that each \(G(S)\) is \(G\)-homeomorphic to the twisted product \(G\times_{K}S\) (see Proposition 2.7). This implies that \(G(S)/G\) is homeomorphic to \((G\times_{K}S)/G\). Since \(X\in G\)-ANE, the tubular set \(G(S)\), being an open invariant subset of \(X\), is itself a \(G\)-ANE. Thus, \(G\times_{K}S\) is a \(G\)-ANE. Since the slicing group \(K\) is a compact large subgroup of \(G\), one can apply Proposition 2.5, according to which \(G\times_{K}S\) is a \(K\)-ANE. Each \(K\)-orbit in \(X\) is contained in a \(G\)-orbit, and hence, is metrizable. Since \(K\) is compact, Theorem 1.2 implies that \((G\times_{K}S)/K\) is an ANE. By Proposition 1.4, \((G\times_{K}S)/G\) is homeomorphic to a retract of \((G\times_{K}S)/K\), and hence, is itself an ANE. Consequently, \(G(S)/G\) is an ANE, as required.
## 4. Proof of Theorem 1.5
Since \(X\in G\)-\(\mathcal{M}\) the orbit space \(X/G\) is metrizable, and hence, by Abels [1, Main Theorem], \(X\) admits a global \(K\)-slice \(S\) where \(K\) is a maximal compact subgroup of \(G\). Then, by Proposition 2.7, \(X\) is \(G\)-homeomorphic to the twisted product \(G\times_{K}S\).
Observe that for every maximal compact subgroup \(K\subset G\), the coset space \(G/K\) is metrizable. Moreover, \(G/K\) is homeomorphic to a Euclidean space (see [1, Corollary A6]).
Therefore, one can apply Proposition 2.5 and Remark 2.6, according to which \(G\times_{K}S\) is a \(K\)-ANE.
Since \(G\) is almost connected, one can apply Proposition 1.6, according to which \(S\) is a \(K\)-equivariant retract of \(G\times_{K}S\), and hence, \(S\) is a \(K\)-ANE.
Further, one has the following \(G\)-homeomorphism:
\[(G\times_{K}S)/H\cong G/H\times_{K}S.\]
Indeed, the map that sends the point \([g,s]_{H}\) of \((G\times_{K}S)/H\) to the point \([gH,s]\) of \(G/H\times_{K}S\) is a \(G/H\)-homeomorphism, where \([g,s]_{H}\) denotes the \(H\)-orbit of \([g,s]\) in \(G\times_{K}S\) (the easy verification is left to the reader).
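For completeness, a sketch of that verification (well-definedness uses normality of \(H\) and the definition of the twisted product, and \(G\)-equivariance is immediate):

\[[hg,s]_{H}\mapsto[hgH,s]=[g(g^{-1}hg)H,s]=[gH,s]\qquad(h\in H,\ H\text{ normal in }G),\]

\[[gk^{-1},ks]_{H}\mapsto[gk^{-1}H,ks]=[gH,s]\qquad(k\in K,\text{ by definition of }G/H\times_{K}S),\]

\[g^{\prime}[g,s]_{H}=[g^{\prime}g,s]_{H}\mapsto[g^{\prime}gH,s]=(g^{\prime}H)*[gH,s]\qquad(g^{\prime}\in G).\]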
Next we observe that every \(K\cap H\)-orbit in \(S\) is metrizable since it is contained in the corresponding \(H\)-orbit in \(X\), which is metrizable by the hypothesis. Further, since \(S\in K\)-ANE, it then follows from Proposition 2.1 that the twisted product \(G/H\times_{K}S\) is a \(G/H\)-ANE. This yields that \((G\times_{K}S)/H\in G/H\)-ANE, and since \(X/H\) is \(G/H\)-homeomorphic to \((G\times_{K}S)/H\), we conclude that \(X/H\in G/H\)-ANE, as required.
## 5. Proof of Theorem 1.7
By Theorem 2.3, \(X\) has an open invariant cover by tubular sets of the form \(G(S)\), where each \(S\) is a \(K\)-slice with the slicing group \(K\) a compact subgroup of \(G\). Then the \(G/H\)-space \(X/H\) is the union of its open \(G/H\)-invariant subsets of the form \(G(S)/H\). According to Theorem 2.8, it suffices to show that each \(G(S)/H\) is a \(G/H\)-ANE.
To this end, we first observe that each \(G(S)\) is \(G\)-homeomorphic to the twisted product \(G\times_{K}S\) (see Proposition 2.7).
This yields that \(G(S)/H\) is \(G/H\)-homeomorphic to \((G\times_{K}S)/H\). Since \(X\in G\)-ANE, the tubular set \(G(S)\), being an open invariant subset of \(X\), is itself a \(G\)-ANE. Thus, \(G\times_{K}S\) is a \(G\)-ANE. Since \(G\) is a Lie group, we infer that \(G/K\) is metrizable (moreover, evidently, \(K\) is a compact large subgroup in this case). Then one can apply Proposition 2.5, according to which \(G\times_{K}S\) is a \(K\)-ANE. By Proposition 1.8, \(S\) is a \(K\)-equivariant neighborhood retract of \(G\times_{K}S\), and hence, \(S\) is a \(K\)-ANE.
Further, as we mentioned in the proof of Theorem 1.5, one has the following \(G\)-homeomorphism:
\[(G\times_{K}S)/H\cong G/H\times_{K}S.\]
Since \(S\in K\)-ANE, it then follows from Proposition 2.1 that the twisted product \(G/H\times_{K}S\) is a \(G/H\)-ANE. This yields that \((G\times_{K}S)/H\in G/H\)-ANE, and since, \(G(S)/H\) is \(G/H\)-homeomorphic to \((G\times_{K}S)/H\), we conclude that \(G(S)/H\in G/H\)-ANE, as required.
## 6. Lifting of equivariant embeddings and extension properties of orbit spaces
The lifting properties of \(G\)-equivariant closed embeddings for a compact acting group \(G\) were first established in [6]. Below, in Theorems 6.1 and 6.2 we generalize these results to the case of proper actions of non-compact groups. In turn, this allows us to strengthen Theorems 1.3, 1.5 and 1.7, discarding in their statements the hypothesis about the properness of the \(G\)-space \(X\).
**Theorem 6.1**.: _Let \(G\) be either a Lie group or an almost connected group, and let \(H\) be a closed normal subgroup of \(G\). Suppose that \(A\in G\)-\(\mathcal{M}\) and \(f:A/H\hookrightarrow B\) is a \(G/H\)-equivariant closed embedding into a \(G/H\)-space \(B\in G/H\)-\(\mathcal{M}\). Then there exist a \(G\)-space \(Z\in G\)-\(\mathcal{M}\) and a \(G\)-equivariant closed embedding \(\phi:A\hookrightarrow Z\) such that \(Z/H\) is a \(G/H\)-invariant neighborhood of \(A/H\) in \(B\) and \(q\circ\phi=f\circ p\), where \(p:A\to A/H\) and \(q:Z\to Z/H\) are the \(H\)-orbit maps._
Proof.: According to [16, Theorem 6.1], it can be assumed that A is a closed \(G\)-invariant subset of a \(G\)-AE space \(L\in G\)-\(\mathcal{M}\). Then \(A/H\) is a closed invariant subset of the \(G/H\)-space \(L/H\).
Now, by Theorem 1.5 (for almost connected groups) and Theorem 1.7 (for Lie groups), \(L/H\in G/H\)-ANE. Therefore, there exists a \(G/H\)-equivariant extension \(F:U\to L/H\) of the \(G/H\)-map \(f^{-1}:f(A/H)\to A/H\hookrightarrow L/H\) defined on some \(G/H\)-neighborhood \(U\) of the set \(A/H\) in \(B\).
Let \(r:L\to L/H\) be the \(H\)-orbit projection. Denote by \(Z\) the pull-back (or fiber product) of \(L\) with respect to the maps \(F\) and \(r\), i.e.,
\[Z=\{(u,x)\in U\times L\ |\ F(u)=r(x)\}.\]
We will consider the coordinate-wise defined action of the group \(G\) on \(Z\), i.e., \(g(u,x)=(gu,gx)\) for \(g\in G\) and \((u,x)\in Z\). Let \(h:Z/H\to U\) be the map defined by the formula \(h(q(u,x))=u\), where \(q:Z\to Z/H\) is the \(H\)-orbit projection and \((u,x)\in Z\). It is clear that \(h\) is a well-defined \(G/H\)-equivariant map. It can easily be shown (and this is well known, see [21, Ch.4, Proposition 4.1]) that \(h\) is a homeomorphism.
On the other hand the product \(U\times L\) is a proper \(G\)-space because \(L\) is so. Besides, since \(U\) and \(L\) admit \(G\)-invariant metrics, we infer that \(U\times L\) also has a \(G\)-invariant metric. Thus \(U\times L\in G\)-\(\mathcal{M}\), which implies that \(Z\in G\)-\(\mathcal{M}\).
It remains to define the \(G\)-equivariant embedding \(\phi:A\hookrightarrow Z\) by the formula \(\phi(a)=(f(p(a)),a)\) for \(a\in A\). This completes the proof.
**Theorem 6.2**.: _Let \(G\) be any locally compact group. Assume that \(A\in G\)-\(\mathcal{M}\) and \(f:A/G\hookrightarrow B\) is a closed embedding into a metrizable space \(B\). Then there exist a \(G\)-space \(Z\in G\)-\(\mathcal{M}\) and a \(G\)-equivariant closed embedding \(\phi:A\hookrightarrow Z\) such that \(Z/G\) is a neighborhood of \(A/G\) in \(B\) and \(q\circ\phi=f\circ p\), where \(p:A\to A/G\) and \(q:Z\to Z/G\) are the \(G\)-orbit maps._
Proof.: Repeat the proof of Theorem 6.1 with \(H=G\), where you simply need to replace the reference to Theorem 1.7 with a reference to Theorem 1.3.
By virtue of Theorems 6.1 and 6.2, one can drop the hypothesis about the properness of the \(G\)-space \(X\) in Theorems 1.3, 1.5 and 1.7.
**Theorem 6.3** (Non-proper actions of Lie groups and almost connected groups).: _Let \(G\) be either a Lie group or an almost connected group, and let \(X\) be any \(G\)-\(\mathrm{ANE}\). Assume that \(H\) is a closed normal subgroup of \(G\) such that all \(H\)-orbits in \(X\) are metrizable. Then the \(H\)-orbit space \(X/H\) is a \(G/H\)-\(\mathrm{ANE}\)._
Proof.: Let \(B\in G/H\)-\(\mathcal{M}\). Let \(L\) be a closed \(G/H\)-invariant subset of \(B\) and let \(s:L\to X/H\) be a \(G/H\)-map. Define \(A\subset L\times X\) to be the pull-back of the \(G\)-space \(X\) with respect to \(s\) and \(t\), where \(t:X\to X/H\) is the \(H\)-orbit map. Then \(A\) is a \(G\)-invariant subspace of \(L\times X\) endowed with the diagonal action of \(G\), and we have \(A/H=L\) (see [21, Ch. 4, Proposition 4.1]). Since the \(H\)-orbit of each point \(a=(l,x)\in A\) lies in the metrizable space \(L\times H(x)\), we conclude that \(H(a)\) is metrizable too. So, all \(H\)-orbits of the \(G\)-space \(A\), as well as its \(H\)-orbit space \(A/H=L\), are metrizable. By [14, Theorem 6.1], \(A\) is metrizable. Now applying Theorem 6.1, we get a \(G\)-space \(Z\in G\)-\(\mathcal{M}\) with \(Z/H\) a \(G/H\)-invariant neighborhood of \(L\) in \(B\) such that \(A\) is a closed \(G\)-invariant subspace of \(Z\).
Let \(\psi:A\to X\) be the restriction of the projection \(L\times X\to X\). Since \(X\in G\)-\(\mathrm{ANE}\), there exist a \(G\)-invariant neighborhood \(U\) of \(A\) in \(Z\) and a \(G\)-extension \(\alpha:U\to X\) of the \(G\)-map \(\psi\). It is easy to see that the induced \(G/H\)-map \(\beta:U/H\to X/H\) is the desired \(G/H\)-extension of \(s\). This completes the proof.
**Theorem 6.4** (Non-proper actions of locally compact groups with \(H=G\)).: _Let \(G\) be a locally compact group and \(X\) any \(G\)-space such that all \(G\)-orbits in \(X\) are metrizable. If \(X\) is a \(G\)-\(\mathrm{ANE}\) then the \(G\)-orbit space \(X/G\) is an \(\mathrm{ANE}\)._
Proof.: Repeat the proof of Theorem 6.3, where you simply need to replace the reference to Theorem 6.1 with a reference to Theorem 6.2.
## 7. Appendix
In this section we will present in detail some simple and very useful propositions that culminate in a proof of Proposition 1.4.
**Proposition 7.1**.: _Let \(G\) be a topological group, \(H\) a closed subgroup of \(G\), and \(X\) a \(G\)-space. Then for any closed subset \(B\subset X\), the set \(A=\{(h^{-1},hb)\ |\ h\in H,\ b\in B\}\) is closed in the product \(G\times X\)._
Proof.: Assume that \((g,x)\) is a closure point of \(A\) and prove that \((g,x)\in A\). There exist nets \((h_{i})\subset H\) and \((b_{i})\subset B\) such that the net \((h_{i}^{-1},h_{i}b_{i})\) converges to \((g,x)\). This yields that \((h_{i}^{-1})\) converges to \(g\) and \((h_{i}b_{i})\) converges to \(x\). Since \(H\) is closed we infer that \(g\in H\). Clearly, \(b_{i}=h_{i}^{-1}(h_{i}b_{i})\) converges to \(gx\). Since \(b_{i}\in B\) and \(B\) is closed, we infer that \(gx\in B\). Thus, \((g,x)=(g,g^{-1}gx)\) where \(g\in H\) and \(gx\in B\). This shows that \((g,x)\in A\), as required.
The following proposition is well known in the literature only for compact subgroups \(H\).
**Proposition 7.2**.: _Let \(G\) be a topological group and \(H\) a closed subgroup of \(G\). If \(X\) is an \(H\)-space, then the map \(\iota:X\hookrightarrow G\times_{H}X,\ \iota(x)=[e,x]\) is a closed \(H\)-embedding, where \(e\in G\) is the unit element._
Proof.: Since \(\iota\) is the composition \(X\xrightarrow{j}G\times X\xrightarrow{p}G\times_{H}X\), where \(j\) is the closed embedding \(j(x)=(e,x)\) and \(p\) is the \(H\)-orbit map, we infer that \(\iota\) is continuous.
Let \(A\) be a closed subset of \(X\). To prove that \(\iota(A)\) is closed in \(G\times_{H}X\) it suffices to prove that the inverse image \(p^{-1}\big{(}\iota(A)\big{)}\) is closed in \(G\times X\), where \(p:G\times X\to G\times_{H}X\) is the \(H\)-orbit map. We have that
\[p^{-1}\big{(}\iota(A)\big{)}=\{(g,x)\in G\times X\ |\ [g,x]=[e,a]\ \text{for some}\ a\in A\}.\]
But the equality \([g,x]=[e,a]\) means that \((g,x)=(h^{-1},ha)\) for some \(h\in H\). Consequently,
\[p^{-1}\big{(}\iota(A)\big{)}=\{(h^{-1},ha)\in G\times X\ |\ h\in H,\ a\in A\},\]
which, by Proposition 7.1, is closed in \(G\times X\). Thus, \(\iota\) is a closed map.
Further, the map \(\iota\) is injective since
\[[e,x]=[e,y]\Longleftrightarrow(e,y)=(eh^{-1},hx)\quad\text{for some}\ h\in H\]
and, in this case, \(h=e\), so \(y=x\). Hence, \(\iota\) is a closed embedding.
If \(x\in X\) and \(h\in H\), then
\[\iota(hx)=[e,hx]=[h,x]=h[e,x]=h\iota(x)\]
showing that \(\iota\) is \(H\)-equivariant, as required.
**Proposition 7.3**.: _Let \(G\) be a topological group and \(H\) any subgroup of \(G\). If \(X\) is an \(H\)-space, then the \(G\)-orbit space \((G\times_{H}X)/G\) is homeomorphic to the \(H\)-orbit space \(X/H\)._
Proof.: The projection \(\pi:G\times X\to X\) is a continuous open map which induces a continuous open map \(\alpha:G\times_{H}X\to X/H,\ [g,x]\mapsto H(x)\) between \(H\)-orbit spaces.
Observe that \(\alpha\) is constant on the \(G\)-orbits of the \(G\)-space \(G\times_{H}X\). Indeed for every \(g^{\prime}\in G\) one has \(\alpha(g^{\prime}[g,x])=\alpha([g^{\prime}\cdot g,x])=H(x)=\alpha([g,x])\), as required.
Then \(\alpha\) induces a continuous bijective map \(r:(G\times_{H}X)/G\to X/H\) satisfying \(r\circ q=\alpha\), where \(q\) is the \(G\)-orbit map that sends \([g,x]\) to its orbit \(G([g,x])\). Since \(\alpha\) is open, it follows from the equality \(r\circ q=\alpha\) that the induced map \(r\) is also open, and hence, it is the desired homeomorphism.
**Proposition 7.4**.: _Let \(G\) be a topological group and \(H\) a closed subgroup of \(G\). If \(X\) is an \(H\)-space, then the \(H\)-orbit space \(X/H\) is homeomorphic to a retract of the \(H\)-orbit space \((G\times_{H}X)/H\)._
Proof.: By Proposition 7.2, the map \(\iota:X\hookrightarrow G\times_{H}X\), \(x\mapsto[e,x]\) is a closed \(H\)-embedding. This induces a closed embedding \(\tilde{\iota}:X/H\hookrightarrow(G\times_{H}X)/H\). Consequently, it suffices to prove that the image \(\mathcal{I}m\,\tilde{\iota}\) is a retract of \((G\times_{H}X)/H\).
For every \([g,x]\in G\times_{H}X\) we will denote by \([g,x]_{H}\) the \(H\)-orbit in the \(G\)-space \(G\times_{H}X\). Clearly, \(\mathcal{I}m\,\tilde{\iota}=\{[e,x]_{H}\mid x\in X\}\).
Define a map \(r:(G\times_{H}X)/H\to\mathcal{I}m\,\tilde{\iota}\) by the rule: \(r:[g,x]_{H}\mapsto[e,x]_{H}\). This map is well defined since for any \(h\in H\) one has
\[r:[gh^{-1},hx]_{H}\mapsto[e,hx]_{H}=[h,x]_{H}=(h[e,x])_{H}=[e,x]_{H},\]
as required.
The projection \(G\times X\to X\) is \(H\)-equivariant and thus induces a continuous map \(G\times_{H}X\to X/H\), \([g,x]\mapsto H(x)\). Since this map is constant on the \(H\)-orbits of \(G\times_{H}X\), it factors as a continuous map \(f:\frac{G\times_{H}X}{H}\longrightarrow X/H,\;\;[g,x]_{H}\mapsto H(x)\).
The continuity of \(r\) follows from the fact that it is the composition of the following two continuous maps:
\[\frac{G\times_{H}X}{H}\stackrel{{ f}}{{\longrightarrow}}X/H \stackrel{{\tilde{\iota}}}{{\longrightarrow}}\mathcal{I}m\, \tilde{\iota},\]
\[[g,x]_{H}\mapsto H(x)\mapsto[e,x]_{H}.\]
Besides, if \([e,x]_{H}\in\mathcal{I}m\,\tilde{\iota}\), then \(r\big{(}[e,x]_{H}\big{)}=[e,x]_{H}\), so \(r\) is the desired retraction. Thus, \(X/H\) is homeomorphic to \(\mathcal{I}m\,\tilde{\iota}\) which is a retract of the \(H\)-orbit space \((G\times_{H}X)/H\), as required.
Now, as a simple combination of Propositions 7.3 and 7.4, we get Proposition 1.4 already stated in Section 1.
We conclude the paper with the following conjecture.
**Conjecture 7.5**.: _Let \(G\) be a locally compact group, \(K\) a compact large subgroup of \(G\), and \(S\) a \(K\)-space. Then \(S\) is a neighborhood \(K\)-equivariant retract of the twisted product \(G\times_{K}S\)._
This conjecture first appeared in [13, Question 4.4] in the form of a question. Note that it is true for any Lie group \(G\) (see Proposition 1.8) and for any almost connected group \(G\) (see Proposition 1.6). The validity of this conjecture would allow us to extend the proof of Theorem 1.7 to the case of proper actions of arbitrary locally compact groups.
|
2308.16830 | On the Randić index and its variants of network data | Summary statistics play an important role in network data analysis. They can provide us with meaningful insight into the structure of a network. The Randi\'{c} index is one of the most popular network statistics that has been widely used for quantifying information of biological networks, chemical networks, pharmacologic networks, etc. A topic of current interest is to find bounds or limits of the Randi\'{c} index and its variants. A number of bounds of the indices are available in the literature. Recently, there have been several attempts to study the limits of the indices in the Erd\H{o}s-R\'{e}nyi random graph by simulation. In this paper, we shall derive the limits of the Randi\'{c} index and its variants of an inhomogeneous Erd\H{o}s-R\'{e}nyi random graph. Our results characterize how network heterogeneity affects the indices and provide new insights about the Randi\'{c} index and its variants. Finally we apply the indices to several real-world networks. | Mingao Yuan | 2023-08-31T16:03:51Z | http://arxiv.org/abs/2308.16830v1 | # On the Randic index and its variants of network data
###### Abstract
Summary statistics play an important role in network data analysis. They can provide us with meaningful insight into the structure of a network. The Randic index is one of the most popular network statistics that has been widely used for quantifying information of biological networks, chemical networks, pharmacologic networks, etc. A topic of current interest is to find bounds or limits of the Randic index and its variants. A number of bounds of the indices are available in literature. Recently, there are several attempts to study the limits of the indices in the Erdos-Renyi random graph by simulation. In this paper, we shall derive the limits of the Randic index and its variants of an inhomogeneous Erdos-Renyi random graph. Our results charaterize how network heterogeneity affects the indices and provide new insights about the Randic index and its variants. Finally we apply the indices to several real-world networks.
Mathematics Subject Classification: 60K35; 05C80.
**Keywords and phrases:** Randic index, harmonic index, random graph, asymptotic property.
## 1 Introduction
A network (graph) consists of a set of agents and a set of pairwise interactions among the agents. Networks are canonical models that capture relations within or between data sets. Due to the increasing popularity of relational data, network data analysis has been a primary research topic in statistics, machine learning and many other scientific fields [5, 1, 29, 37, 25]. One of the fundamental problems in network data analysis is to understand the structural properties of a given network. The structure of a small network can be easily described by its visualization. However, larger networks can be difficult to envision and describe. It is thus important to have several summary statistics that provide us with meaningful insight into the structure of a network. Based on these statistics, we are able to compare networks or classify them according to properties that they exhibit. There are a wealth of descriptive statistics that measure some aspect of the structure or characteristics of a network. For example, the diameter of a network measures the maximum distance between two individuals; the global clustering coefficient measures the extent to which individuals in a graph tend to
cluster together; the modularity is a measure of the strength of division of a network into subgroups.
Summary statistics of networks are sometimes termed topological indices, especially in chemical or pharmacological science [32]. One of the most popular topological indices is the Randic index invented in [38]. The Randic index measures the extent of branching of a network [6; 38]. It was observed that the Randic index is strongly correlated with a variety of physico-chemical properties of alkanes [38]. The Randic index plays a central role in understanding quantitative structure-property and structure-activity relations in chemistry and pharmacology [40; 39]. In subsequent years, the Randic index has found countless applications. For instance, it is used to characterize and quantify the similarity between different networks or subgraphs of the same network [24], it serves as a quantitative characterization of network heterogeneity [21], and graph robustness can be easily estimated by the Randic index [18; 19]. Moreover, the Randic index possesses a wealth of non-trivial and interesting mathematical properties [8; 9; 12; 17; 30]. Motivated by the Randic index, various Randic-type indices have been introduced and have attracted great interest in the past years. Among them, the harmonic index is a well-known one [22; 23; 45; 41].
One of the popular research topics in the study of topological indices is to derive bounds of the indices and study their asymptotic properties. Recently, [33; 34] performed numeric and analytic analyses of the Randic index and the harmonic index in the Erdos-Renyi random graph. Analytic upper and lower bounds of the two indices are obtained and simulation studies show that the indices converge to one half of the number of nodes. Additionally, [18; 20; 31] find the expectations of variants of the Randic index in the Erdos-Renyi random graph. However, these results only apply to the Erdos-Renyi random graph and the exact limits of the indices are not theoretically studied.
In this paper, we shall derive the limits of the general Randic index and the general sum-connectivity index in an inhomogeneous Erdos-Renyi random graph. The general Randic index and the general sum-connectivity index contain the Randic index and the harmonic index as special cases, respectively. Thus our results theoretically validate the empirical observations in [33; 34] that the indices of the Erdos-Renyi random graph converge to one half of the number of nodes. In addition, our results explicitly describe how network heterogeneity affects the indices. We also observe that the limits of the Randic index and the harmonic index do not depend on the sparsity of a network, while the limits of their variants do. In this sense, the Randic index and the harmonic index are preferable to their variants as measures of network structure.
The structure of the article is as follows. In Section 2 we present the main results. Section 3 summarizes simulation results and real data application. The proof is deferred to Section 4.
Notations: Let \(c_{1},c_{2}\) be positive constants and \(n_{0}\) be a positive integer. For two positive sequences \(a_{n}\), \(b_{n}\), denote \(a_{n}\asymp b_{n}\) if \(c_{1}\leq\frac{a_{n}}{b_{n}}\leq c_{2}\) for \(n\geq n_{0}\); denote \(a_{n}=O(b_{n})\) if \(\frac{a_{n}}{b_{n}}\leq c_{2}\) for \(n\geq n_{0}\); and \(a_{n}=o(b_{n})\) if \(\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=0\). Let \(X_{n}\) be a sequence of random variables. \(X_{n}=O_{P}(a_{n})\) means \(\frac{X_{n}}{a_{n}}\) is bounded in probability. \(X_{n}=o_{P}(a_{n})\) means \(\frac{X_{n}}{a_{n}}\) converges to zero in probability. Denote \(a_{+}=\max\{a,0\}\).
## 2 The Randic index and its variants
A graph is a mathematical model of network that consists of nodes (vertices) and edges. Let \(\mathcal{V}=[n]:=\{1,2,\ldots,n\}\) for a given positive integer \(n\). An _undirected_ graph on \(\mathcal{V}\) is a pair \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) in which \(\mathcal{E}\) is a collection of subsets of \(\mathcal{V}\) such that \(|e|=2\) for every \(e\in\mathcal{E}\). Elements in \(\mathcal{E}\) are called edges. A graph can be conveniently represented as an adjacency matrix \(A\), where \(A_{ij}=1\) if \(\{i,j\}\) is an edge, \(A_{ij}=0\) otherwise and \(A_{ii}=0\). It is clear that \(A\) is symmetric, since \(\mathcal{G}\) is undirected. A graph is said to be random if \(A_{ij}(1\leq i<j\leq n)\) are random.
Let \(f=(f_{ij})\), (\(1\leq i<j\leq n\)) be a vector of numbers between 0 and 1. The inhomogeneous Erdos-Renyi random graph \(\mathcal{G}(n,p_{n},f)\) is defined as
\[\mathbb{P}(A_{ij}=1)=p_{n}f_{ij},\]
where \(p_{n}\in[0,1]\) and \(A_{ij}\) (\(1\leq i<j\leq n\)) are independent. If all \(f_{ij}\) are the same, then \(\mathcal{G}(n,p_{n},f)\) is the Erdos-Renyi random graph. For a non-constant vector \(f\), \(\mathcal{G}(n,p_{n},f)\) is an inhomogeneous version of the Erdos-Renyi random graph. This model covers several random graphs that have been extensively studied in random graph theory and algorithm analysis [14, 15, 13, 16, 42].
Given a constant \(\alpha\), the general Randic index of a graph \(\mathcal{G}\) is defined as ([8])
\[\mathcal{R}_{\alpha}=\sum_{\{i,j\}\in\mathcal{E}}d_{i}^{\alpha}d_{j}^{\alpha}, \tag{1}\]
where \(d_{k}\) is the degree of node \(k\), that is, \(d_{k}=\sum_{j\neq k}A_{kj}\). The index \(\mathcal{R}_{\alpha}\) generalizes the well-known Randic index \(\mathcal{R}_{-\frac{1}{2}}\) invented in [38]. When \(\alpha=-1\), the index \(\mathcal{R}_{-1}\) corresponds to the modified second Zagreb index [36, 12].
Another popular variant of the Randic index is the general sum-connectivity index [43, 44] defined as
\[\chi_{\alpha}=\sum_{\{i,j\}\in\mathcal{E}}(d_{i}+d_{j})^{\alpha}. \tag{2}\]
An important special case is the harmonic index \(\mathcal{H}=2\chi_{-1}\)[22, 23, 45].
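For concreteness, both indices can be computed directly from the definitions (1) and (2); a minimal sketch in Python using networkx (an illustrative choice, not the tooling used in the paper):

```python
import networkx as nx

def general_randic(G, alpha):
    # R_alpha: sum over edges {i, j} of (d_i * d_j)**alpha, cf. (1)
    d = dict(G.degree())
    return sum((d[i] * d[j]) ** alpha for i, j in G.edges())

def general_sum_connectivity(G, alpha):
    # chi_alpha: sum over edges {i, j} of (d_i + d_j)**alpha, cf. (2)
    d = dict(G.degree())
    return sum((d[i] + d[j]) ** alpha for i, j in G.edges())

G = nx.erdos_renyi_graph(n=500, p=0.05, seed=1)
print(general_randic(G, -0.5))              # Randic index, close to n/2 = 250
print(2 * general_sum_connectivity(G, -1))  # harmonic index, also close to n/2
```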
Recently, [33, 34] conduct a simulation study of the Randic index \(\mathcal{R}_{-\frac{1}{2}}\) and the harmonic index \(\mathcal{H}=2\chi_{-1}\) in the Erdos-Renyi random graph and observe that the indices converge to \(n/2\). Moreover, [18, 20, 31] derive analytical expressions of the expectations for the indices \(\mathcal{R}_{-1}\),\(\chi_{1}\),\(\chi_{2}\) of the Erdos-Renyi random graph. In this paper, we shall derive the exact limits of the general Randic index \(\mathcal{R}_{\alpha}\) and the general sum-connectivity index \(\chi_{\alpha}\) in \(\mathcal{G}(n,p_{n},f)\). Our results significantly improve the results in [18, 33, 34, 20, 31] and provide new insights about the Randic index and its variants.
**Theorem 2.1**.: _Let \(\alpha\) be a fixed constant and \(\mathcal{G}(n,p_{n},f)\) be the inhomogeneous Erdos-Renyi random graph. Suppose \(np_{n}\log 2\geq\log n\) and \(\min_{1\leq i<j\leq n}\{f_{ij}\}>\epsilon\) for some positive constant \(\epsilon\in(0,1)\). Then_
\[\mathcal{R}_{\alpha} = \left[1+O_{P}\left(\frac{(\log(np_{n}))^{4(1-\alpha)_{+}}}{\sqrt{ np_{n}}}\right)\right]p_{n}^{2\alpha+1}\sum_{i<j}f_{i}^{\alpha}f_{j}^{\alpha}f_{ ij}, \tag{3}\] \[\chi_{\alpha} = \left[1+O_{P}\left(\frac{(\log(np_{n}))^{2(1-\alpha)_{+}}}{\sqrt{ np_{n}}}\right)\right]p_{n}^{\alpha+1}\sum_{i<j}(f_{i}+f_{j})^{\alpha}f_{ij}, \tag{4}\]
_where \(f_{i}=\sum_{j\neq i}^{n}f_{ij}\)._
The condition \(\min_{1\leq i<j\leq n}\{f_{ij}\}>\epsilon\) implies the minimum expected degree scales with \(np_{n}\). The condition \(np_{n}\log 2\geq\log n\) means that the graph is relatively dense. A similar condition is assumed in [14] to study the maximum eigenvalue of the inhomogeneous random graph.
Note that the expected total degree of \(\mathcal{G}(n,p_{n},f)\) has order \(n^{2}p_{n}\). Thus \(p_{n}\) controls the sparsity of the network: a graph with smaller \(p_{n}\) would have fewer edges. By (3) and (4), the limits of the Randic index \(\mathcal{R}_{-\frac{1}{2}}\) and the harmonic \(\chi_{-1}\) do not depend on \(p_{n}\), while the limits of their variants do involve \(p_{n}\). Asymptotically, the Randic index and the harmonic are uniquely determined by the network structure parametrized by \(f\). In this sense, they are superior to their variants as measures of global structure of networks.
Now we present two examples of \(\mathcal{G}(n,p_{n},f)\). The simplest example is the Erdos-Renyi random graph, that is, \(f_{ij}\equiv 1\). We denote the graph as \(\mathcal{G}(n,p_{n})\).
**Corollary 2.2**.: _Let \(\alpha\) be a fixed constant. For the Erdos-Renyi random graph \(\mathcal{G}(n,p_{n})\) with \(np_{n}\log 2\geq\log n\), we have_
\[\mathcal{R}_{\alpha} = \frac{n^{2(1+\alpha)}p_{n}^{2\alpha+1}}{2}\left[1+O_{P}\left( \frac{(\log(np_{n}))^{4(1-\alpha)_{+}}}{\sqrt{np_{n}}}\right)\right], \tag{5}\] \[\chi_{\alpha} = 2^{\alpha-1}n^{\alpha+2}p_{n}^{\alpha+1}\left[1+O_{P}\left( \frac{(\log(np_{n}))^{2(1-\alpha)_{+}}}{\sqrt{np_{n}}}\right)\right]. \tag{6}\]
_In particular, the Randic index \(\mathcal{R}_{-\frac{1}{2}}\) is equal to_

\[\mathcal{R}_{-\frac{1}{2}}=\frac{n}{2}\left[1+O_{P}\left(\frac{(\log(np_{n}))^{6}}{\sqrt{np_{n}}}\right)\right],\]
_the modified second Zagreb index \(\mathcal{R}_{-1}\) is equal to_
\[\mathcal{R}_{-1}=\frac{1}{2p_{n}}\left[1+O_{P}\left(\frac{(\log(np_{n}))^{8}}{\sqrt{np_{n}}}\right)\right],\]
_and the harmonic index \(\mathcal{H}\) is equal to_
\[\mathcal{H}=\frac{n}{2}\left[1+O_{P}\left(\frac{(\log(np_{n}))^{4}}{\sqrt{np_{n}}}\right)\right].\]
According to Corollary 2.2, the ratio \(\frac{2}{n}\mathcal{R}_{-\frac{1}{2}}\) or \(\frac{2}{n}\mathcal{H}\) converges in probability to \(1\) when \(np_{n}\log 2\geq\log n\). This theoretically confirms the empirical observation in [33, 34] that the Randic index \(\mathcal{R}_{-\frac{1}{2}}\) or the harmonic index \(\mathcal{H}\) is approximately equal to \(\frac{n}{2}\). The expectations of the indices \(\mathcal{R}_{-1},\chi_{1},\chi_{2}\) are derived in [18, 20, 31]. Our results show the indices are asymptotically equal to their expectations. Moreover, Corollary 2.2 clearly quantifies how \(p_{n}\) affects the convergence rates: the larger \(p_{n}\) is, the faster the convergence rates are.
In addition, (5) and (6) explicitly characterize how the leading terms of \(\mathcal{R}_{\alpha}\) and \(\chi_{\alpha}\) depend on \(\alpha\). Note that
\[\frac{n^{2(1+\alpha)}p_{n}^{2\alpha+1}}{2} = \frac{n}{2}(np_{n})^{2\alpha+1},\] \[2^{\alpha-1}n^{\alpha+2}p_{n}^{\alpha+1} = 2^{\alpha-1}n(np_{n})^{\alpha+1}.\]
For given \(n,p_{n}\) such that \(np_{n}\log 2\geq\log n\), the leading terms are increasing functions of \(\alpha\). The indices would be extremely large or small for large \(|\alpha|\) and large \(n\). In this sense, it is preferable to use \(\mathcal{R}_{\alpha}\) or \(\chi_{\alpha}\) with small \(|\alpha|\) (for instance, \(|\alpha|\leq 1\)).
Next, we provide a non-trivial example. Let \(f_{ij}=e^{-\kappa\frac{i}{n}}e^{-\kappa\frac{j}{n}}\) with a positive constant \(\kappa\). Then \(e^{-2\kappa}\leq f_{ij}\leq 1\) for \(1\leq i<j\leq n\). In this case, \(\min_{1\leq i<j\leq n}\{f_{ij}\}>\epsilon\) holds with \(\epsilon=e^{-2\kappa}\).
Straightforward calculation yields \(f_{i}=ne^{-\kappa\frac{i}{n}}\frac{1-e^{-\kappa}}{\kappa}(1+o(1))\) and
\[\sum_{i<j}f_{i}^{-1}f_{j}^{-1}f_{ij} = \frac{\kappa^{2}}{2(1-e^{-\kappa})^{2}}+o(1),\] \[\sum_{i<j}f_{i}^{\alpha}f_{j}^{\alpha}f_{ij} = \frac{n^{2(\alpha+1)}(1-e^{-\kappa})^{2\alpha}(1-e^{-(1+\alpha) \kappa})^{2}}{2(1+\alpha)^{2}\kappa^{2(\alpha+1)}}(1+o(1)),\ \ \alpha\neq-1,\] \[\sum_{i<j}(f_{i}+f_{j})^{\alpha}f_{ij} = \frac{n^{\alpha+2}}{2}\left(\frac{1-e^{-\kappa}}{\kappa}\right)^ {\alpha}\int_{0}^{1}\int_{0}^{1}\frac{\left(e^{-\kappa x}+e^{-\kappa y}\right) ^{\alpha}}{e^{\kappa(x+y)}}dxdy+o(1).\]
Then
\[{\cal R}_{-1} = \left[1+O_{P}\left(\frac{(\log(np_{n}))^{2}}{\sqrt{np_{n}}} \right)\right]\frac{1}{2p_{n}}\frac{\kappa^{2}}{(1-e^{-\kappa})^{2}}, \tag{7}\] \[{\cal R}_{\alpha} = \left[1+O_{P}\left(\frac{(\log(np_{n}))^{2}}{\sqrt{np_{n}}} \right)\right]\frac{n^{2(\alpha+1)}p_{n}^{2\alpha+1}}{2}\frac{(1-e^{-\kappa})^ {2\alpha}(1-e^{-(1+\alpha)\kappa})^{2}}{(1+\alpha)^{2}\kappa^{2(\alpha+1)}}, \ \ \alpha\neq-1,\] (8) \[\chi_{\alpha} = \left[1+O_{P}\left(\frac{(\log(np_{n}))^{2}}{\sqrt{np_{n}}} \right)\right]\frac{n^{\alpha+2}p_{n}^{\alpha+1}}{2}\left(\frac{1-e^{-\kappa} }{\kappa}\right)^{\alpha}\int_{0}^{1}\int_{0}^{1}\frac{\left(e^{-\kappa x}+e^ {-\kappa y}\right)^{\alpha}}{e^{\kappa(x+y)}}dxdy. \tag{9}\]
Since larger \(\kappa\) makes the expected degrees more heterogeneous, the parameter \(\kappa\) can be considered as heterogeneity level of the graph. As \(\kappa\) increases, \({\cal R}_{\alpha}\) or \(\chi_{\alpha}\) decreases if \(\alpha>-1\), and \({\cal R}_{\alpha}\) or \(\chi_{\alpha}\) increases if \(\alpha\leq-1\). This shows the effect of heterogeneity on \({\cal R}_{\alpha}\) or \(\chi_{\alpha}\). The indices could be used as indicators whether a network follows the Erdos-Renyi random graph model.
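A quick numerical sanity check of (7) under this choice of \(f_{ij}\) (a sketch with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, kappa = 2000, 0.05, 1.5                 # n * p well above log(n)/log(2)
u = np.exp(-kappa * np.arange(1, n + 1) / n)
P = p * np.outer(u, u)                        # edge probabilities p_n * f_ij
upper = np.triu(rng.random((n, n)) < P, k=1)  # independent edges, i < j
A = (upper | upper.T).astype(int)             # symmetric adjacency matrix
deg = A.sum(axis=1)
i, j = np.nonzero(upper)                      # one (i, j) pair per edge
R_minus1 = np.sum(1.0 / (deg[i] * deg[j]))    # modified second Zagreb index
limit = kappa**2 / (2 * p * (1 - np.exp(-kappa)) ** 2)
print(R_minus1, limit)                        # the two values should be close
```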
## 3 Real data application
In this section, we apply the general Randic index and the general sum-connectivity index to the following real-world networks: 'karate', 'macaque', 'UKfaculty', 'enron', 'USairports', 'immuno', 'yeast'. These networks are available in the 'igraphdata' package of R.
For each network, the indices \({\cal R}_{-\frac{1}{2}}\), \({\cal R}_{-1}\), \(\chi_{-\frac{1}{2}}\), \(\chi_{-1}\) and the bound \(\log n/(n\log 2)\) are calculated. Here, \(\log n/(n\log 2)\) is the sparsity lower bound required by Theorem 2.1 and Corollary 2.2. In addition, we also compute several descriptive statistics: the number of nodes (\(n\)), the edge density, the maximum degree (\(d_{max}\)), the median degree (\(d_{median}\)) and the minimum degree (\(d_{min}\)). These results are summarized in Table 1. The edge densities of the networks 'macaque', 'UKfaculty', 'enron' and 'USairports' are greater than \(\log n/(n\log 2)\), which indicates that our theoretical results are applicable. The Randic indices \({\cal R}_{-\frac{1}{2}}\) and the harmonic indices \(2\chi_{-1}\) of 'enron' and 'USairports' are much smaller than \(\frac{n}{2}\), the value of these indices for the Erdos-Renyi random graph. Thus the Erdos-Renyi random graph may not be a good model for these two networks. The networks 'macaque' and 'UKfaculty' have indices close to \(\frac{n}{2}\); in this sense, they can be considered as samples from the Erdos-Renyi random graph model. For the networks 'karate', 'immuno' and 'yeast', the edge densities are slightly smaller than the bound \(\log n/(n\log 2)\). Note that the condition \(p_{n}>\log n/(n\log 2)\) is a sufficient condition for Theorem 2.1 and Corollary 2.2 to hold and cannot be relaxed based on the current proof technique. We conjecture that Theorem 2.1 and Corollary 2.2 still hold if \(np_{n}\to\infty\). At present, it is unclear whether our theoretical results can be applied to the networks 'karate', 'immuno' and 'yeast'. For sparse networks, that is, \(np_{n}=O\left(1\right)\), the Randic index \(\mathcal{R}_{-\frac{1}{2}}\) could assume any value between \(0\) and \(\frac{n}{2}\), which is empirically verified in [33]. Therefore, a Randic index \(\mathcal{R}_{-\frac{1}{2}}\) far less than \(\frac{n}{2}\) does not necessarily imply that the network was not generated from the Erdos-Renyi random graph model. We point out that a statistical hypothesis test is needed to decide whether the Randic index is equal to some given value. To the best of our knowledge, no such test is available in the literature. It is an interesting future topic to propose such a test for the Randic index.
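The Table 1 computations are easy to reproduce; the sketch below (our own illustration) uses the Zachary karate-club graph shipped with networkx, whose edge list may differ slightly from the 'karate' network in igraphdata, so small deviations from the table are expected.

```python
import numpy as np
import networkx as nx

# Our own reproduction of the 'karate' row of Table 1.
G = nx.karate_club_graph()
n = G.number_of_nodes()
deg = dict(G.degree())

def randic(G, a):
    # general Randic index R_a = sum over edges of (d_u * d_v)^a
    return sum((deg[u] * deg[v]) ** a for u, v in G.edges())

def sum_connectivity(G, a):
    # general sum-connectivity index chi_a = sum over edges of (d_u + d_v)^a
    return sum((deg[u] + deg[v]) ** a for u, v in G.edges())

print("n:", n, "density:", nx.density(G),
      "bound:", np.log(n) / (n * np.log(2)))
print("R_-1/2:", randic(G, -0.5), "R_-1:", randic(G, -1.0))
print("chi_-1/2:", sum_connectivity(G, -0.5),
      "chi_-1:", sum_connectivity(G, -1.0))
```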
## 4 Proof of main results
In this section, we provide the detailed proofs of the main results. Recall that \(A_{ij}=1\) if and only if \(\{i,j\}\) is an edge. Then the general Randic index in (1) and the general sum-connectivity index in (2) can be written as
\[\mathcal{R}_{\alpha} = \sum_{1\leq i<j\leq n}A_{ij}d_{i}^{\alpha}d_{j}^{\alpha},\] \[\chi_{\alpha} = \sum_{1\leq i<j\leq n}A_{ij}(d_{i}+d_{j})^{\alpha}.\]
Note that the degrees \(d_{i}\) are not independently and identically distributed. Moreover, \(\mathcal{R}_{\alpha}\) and \(\chi_{\alpha}\) are non-linear functions of \(d_{i}\). These facts make it a non-trivial task to derive the limits
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline network & \(n\) & \(\log n/(n\log 2)\) & density & \(d_{max}\) & \(d_{median}\) & \(d_{min}\) & \(\mathcal{R}_{-\frac{1}{2}}\) & \(\mathcal{R}_{-1}\) & \(\chi_{-\frac{1}{2}}\) & \(\chi_{-1}\) \\ \hline karate & 34 & 0.149 & 0.134 & 17 & 5 & 3 & 13.970 & 2.866 & 21.001 & 5.927 \\ \hline macaque & 45 & 0.122 & 0.251 & 22 & 11 & 4 & 21.576 & 2.092 & 50.702 & 10.374 \\ \hline UKfaculty & 81 & 0.078 & 0.175 & 41 & 13 & 2 & 37.728 & 2.957 & 99.101 & 17.738 \\ \hline enron & 184 & 0.040 & 0.130 & 111 & 31 & 21 & 80.876 & 4.063 & 276.792 & 37.672 \\ \hline USairports & 755 & 0.012 & 0.016 & 168 & 11 & 5 & 262.836 & 41.776 & 602.894 & 106.592 \\ \hline immuno & 1316 & 0.0078 & 0.0072 & 17 & 10 & 3 & 648.820 & 70.951 & 1410.842 & 320.022 \\ \hline yeast & 2617 & 0.004 & 0.003 & 118 & 10 & 4 & 1076.274 & 285.491 & 2034.479 & 469.020 \\ \hline \end{tabular}
\end{table}
Table 1: The general Randic index and the general sum-connectivity index of real networks, together with descriptive statistics.
of \({\cal R}_{\alpha}\) and \(\chi_{\alpha}\) for general \(\alpha\). The proof strategy is as follows: (a) use the Taylor expansion to expand \({\cal R}_{\alpha}\) or \(\chi_{\alpha}\) as a sum of a leading term and remainder terms; (b) find the order of the leading term and the remainder terms.
**Proof of Theorem 2.1:** (I) We prove the result of the general Randic index first. For convenience, let
\[{\cal R}_{-\alpha}=\sum_{1\leq i<j\leq n}A_{ij}d_{i}^{-\alpha}d_{j}^{-\alpha}. \tag{10}\]
We provide the proof in two cases: \(\alpha>-1\) and \(\alpha\leq-1\). Denote \(\mu_{i}=\mathbb{E}(d_{i})=p_{n}f_{i}\).
Let \(\alpha>-1\). Applying the mean value theorem to the mapping \(x\to x^{-\alpha}\), we have
\[\frac{1}{d_{i}^{\alpha}}=\frac{1}{\mu_{i}^{\alpha}}-\alpha\frac{d_{i}-\mu_{i} }{X_{i}^{\alpha+1}},\]
where \(d_{i}\leq X_{i}\leq\mu_{i}\) or \(\mu_{i}\leq X_{i}\leq d_{i}\). Since \(A_{ii}=0\) (\(i=1,2,\ldots,n\)) and the adjacency matrix \(A\) is symmetric, by (10) one has
\[{\cal R}_{-\alpha} = \frac{1}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}}{d_{i}^{\alpha}d_{ j}^{\alpha}} \tag{11}\] \[= \frac{1}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}}{\mu_{i}^{\alpha} \mu_{j}^{\alpha}}-\frac{\alpha}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu _{i})}{X_{i}^{\alpha+1}\mu_{j}^{\alpha}}-\frac{\alpha}{2}\sum_{1\leq i,j\leq n }\frac{A_{ij}(d_{j}-\mu_{j})}{X_{j}^{\alpha+1}\mu_{i}^{\alpha}}\] \[+\frac{\alpha^{2}}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu _{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}.\]
Next we show the first term in (11) is the leading term. To this end, we will find the exact order of the first term and show the remaining terms are of smaller order.
Firstly, we show the first term in (11) is asymptotically equal to its expectation. By the assumption \(\min_{1\leq i,j\leq n}\{f_{ij}\}>\epsilon\), it is clear that \(np_{n}\epsilon\leq\mu_{i}\leq np_{n}\) for all \(i\in[n]\) and \(\epsilon n^{2}\leq\sum_{1\leq i,j\leq n}f_{ij}\leq n^{2}\). Note that \(A_{ij}\) (\(1\leq i<j\leq n\)) are independent and \(\mathbb{E}(A_{ij})=p_{n}f_{ij}\). Then
\[\mathbb{E}\left[\sum_{1\leq i<j\leq n}\frac{A_{ij}-p_{n}f_{ij}}{\mu_{i}^{ \alpha}\mu_{j}^{\alpha}}\right]^{2} = \sum_{1\leq i<j\leq n}\mathbb{E}\left[\frac{A_{ij}-p_{n}f_{ij}}{ \mu_{i}^{\alpha}\mu_{j}^{\alpha}}\right]^{2}=O\left(\frac{n^{2}p_{n}}{(np_{n}) ^{4\alpha}}\right).\]
By Markov's inequality, it follows that
\[\left|\sum_{1\leq i<j\leq n}\frac{A_{ij}}{\mu_{i}^{\alpha}\mu_{j}^{\alpha}}- \sum_{1\leq i<j\leq n}\frac{p_{n}f_{ij}}{\mu_{i}^{\alpha}\mu_{j}^{\alpha}} \right|=\left|\sum_{1\leq i<j\leq n}\frac{A_{ij}-p_{n}f_{ij}}{\mu_{i}^{\alpha }\mu_{j}^{\alpha}}\right|=O_{P}\left(\frac{\sqrt{n}\sqrt{np_{n}}}{(np_{n})^{2 \alpha}}\right).\]
Then we get
\[\sum_{1\leq i<j\leq n}\frac{A_{ij}}{\mu_{i}^{\alpha}\mu_{j}^{\alpha}}=\sum_{1 \leq i<j\leq n}\frac{p_{n}f_{ij}}{\mu_{i}^{\alpha}\mu_{j}^{\alpha}}+O_{P}\left( \frac{\sqrt{n}\sqrt{np_{n}}}{(np_{n})^{2\alpha}}\right)=\sum_{1\leq i<j\leq n} \frac{p_{n}f_{ij}}{\mu_{i}^{\alpha}\mu_{j}^{\alpha}}\left(1+O_{P}\left(\frac{ 1}{\sqrt{n}\sqrt{np_{n}}}\right)\right). \tag{12}\]
Now we find a bound of the second term in (11). The idea is to find an upper bound for the expectation of its absolute value and then apply Markov's inequality. Note that
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu _{i})}{X_{i}^{\alpha+1}\mu_{j}^{\alpha}}\right|\right] = \mathbb{E}\left[\left|\sum_{1\leq i\leq n}\left(\sum_{1\leq j\leq n }\frac{A_{ij}}{\mu_{j}^{\alpha}}\right)\frac{(d_{i}-\mu_{i})}{X_{i}^{\alpha+1 }}\right|\right] \tag{13}\] \[\leq \mathbb{E}\left[\sum_{1\leq i\leq n}\left(\sum_{1\leq j\leq n} \frac{A_{ij}}{\mu_{j}^{\alpha}}\right)\frac{|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1 }}\right].\]
Let \(\delta_{n}=[\log(np_{n})]^{-2}\). Recall that \(X_{i}\) is between \(d_{i}\) and \(\mu_{i}\). If \(X_{i}<\delta_{n}\mu_{i}\) and \(X_{i}<d_{i}\), then \(X_{i}\) would be smaller than both \(d_{i}\) and \(\mu_{i}\), so it could not lie between them. Therefore, \(X_{i}<\delta_{n}\mu_{i}\) implies \(d_{i}\leq X_{i}\), and hence \(I[X_{i}<\delta_{n}\mu_{i}]=I[d_{i}\leq X_{i}<\delta_{n}\mu_{i}]\). Since \(np_{n}\epsilon\leq\mu_{i}\leq np_{n}\) for all \(i\in[n]\), we have
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}- \mu_{i})}{X_{i}^{\alpha+1}\mu_{j}^{\alpha}}\right|\right]\leq O\left(\frac{1}{ (np_{n})^{\alpha}}\right)\sum_{1\leq i\leq n}\mathbb{E}\left[\frac{d_{i}|d_{i} -\mu_{i}|}{X_{i}^{\alpha+1}}\right] \tag{14}\] \[= O\left(\frac{1}{(np_{n})^{\alpha}}\right)\sum_{1\leq i\leq n} \mathbb{E}\left[\frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[\delta_{n}\mu_ {i}\leq X_{i}]\right]\] \[+O\left(\frac{1}{(np_{n})^{\alpha}}\right)\sum_{1\leq i\leq n} \mathbb{E}\left[\frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[\delta_{n}\mu_ {i}>X_{i}]\right],\] \[= O\left(\frac{1}{(np_{n})^{\alpha}}\right)\sum_{1\leq i\leq n} \mathbb{E}\left[\frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[\delta_{n}\mu_ {i}\leq X_{i}]\right]\] \[+O\left(\frac{1}{(np_{n})^{\alpha}}\right)\sum_{1\leq i\leq n} \mathbb{E}\left[\frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[d_{i}\leq X_{ i}<\delta_{n}\mu_{i}]\right].\]
Note that \(\alpha>-1\). If \(\delta_{n}\mu_{i}\leq X_{i}\), then
\[\frac{1}{X_{i}^{\alpha+1}}\leq\frac{1}{(\delta_{n}\mu_{i})^{\alpha+1}}=O\left( \frac{1}{(\delta_{n}np_{n})^{\alpha+1}}\right).\]
Hence we have
\[\frac{1}{(np_{n})^{\alpha}}\sum_{1\leq i\leq n}\mathbb{E}\left[ \frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[\delta_{n}\mu_{i}\leq X_{i}]\right] \tag{15}\] \[\leq O\left(\frac{1}{(\delta_{n}np_{n})^{\alpha+1}(np_{n})^{\alpha}} \right)\sum_{1\leq i\leq n}\mathbb{E}\left[d_{i}|d_{i}-\mu_{i}|I[\delta_{n}\mu_ {i}\leq X_{i}]\right]\] \[\leq O\left(\frac{1}{(\delta_{n}np_{n})^{\alpha+1}(np_{n})^{\alpha}} \right)\sum_{1\leq i\leq n}\mathbb{E}\left[d_{i}|d_{i}-\mu_{i}|\right].\]
By definition, the second moment of degree \(d_{i}\) is equal to
\[\mathbb{E}[d_{i}^{2}]=\mathbb{E}\left[\sum_{j\neq k}A_{ij}A_{ik}+\sum_{j}A_{ij }\right]=p_{n}^{2}\sum_{j\neq k}f_{ij}f_{ik}+p_{n}\sum_{j}f_{ij},\]
and \(Var(d_{i})=\sum_{j\neq i}p_{n}f_{ij}(1-p_{n}f_{ij}),\) then by the Cauchy-Schwarz inequality, one has
\[\sum_{1\leq i\leq n}\mathbb{E}\left[d_{i}|d_{i}-\mu_{i}|\right] \leq \sum_{1\leq i\leq n}\sqrt{\mathbb{E}[d_{i}^{2}]\mathbb{E}[(d_{i}- \mu_{i})^{2}]} \tag{16}\] \[= \sum_{1\leq i\leq n}\sqrt{\left(p_{n}^{2}\sum_{j\neq k}f_{ij}f_{ ik}+p_{n}\sum_{j}f_{ij}\right)\sum_{j}p_{n}f_{ij}(1-p_{n}f_{ij})}\] \[= O\left(n\sqrt{n^{3}p_{n}^{3}}\right).\]
Combining (15) and (16) yields
\[\frac{1}{(np_{n})^{\alpha}}\sum_{1\leq i\leq n}\mathbb{E}\left[ \frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[\delta_{n}\mu_{i}\leq X_{i}]\right] = O\left(\frac{n\sqrt{n^{3}p_{n}^{3}}}{(\delta_{n}np_{n})^{\alpha +1}(np_{n})^{\alpha}}\right) \tag{17}\] \[= \frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}O\left(\frac{1}{\delta_{n}^ {\alpha+1}\sqrt{np_{n}}}\right)\] \[= \frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}O\left(\frac{(\log(np_{n}))^ {2(\alpha+1)}}{\sqrt{np_{n}}}\right).\]
Now we bound the second term of (14). Note that if \(d_{i}\leq X_{i}<\delta_{n}\mu_{i},\) then \(d_{i}<\mu_{i}\) and \(\frac{d_{i}}{X_{i}^{\alpha+1}}\leq\frac{1}{d_{i}^{\alpha}}.\) Since \(d_{i}\) is the degree of node \(i,\) it can only take integer value between \(0\) and \(n-1.\) Moreover, \(d_{i}=0\) implies \(A_{ij}=0\) for any \(j\in[n].\) By the definition of the Randic index (1), these terms with \(d_{i}=0\) are zero in (10) and (11). Therefore, we only consider the terms with \(d_{i}\geq 1\) and \(d_{j}\geq 1.\) Then the second term of (14) can be bounded by
\[\frac{1}{(np_{n})^{\alpha}}\sum_{1\leq i\leq n}\mathbb{E}\left[ \frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[d_{i}\leq X_{i}<\delta_{n}\mu_{ i}]\right]\leq\frac{1}{(np_{n})^{\alpha}}\sum_{1\leq i\leq n}\mathbb{E}\left[ \frac{\mu_{i}-d_{i}}{d_{i}^{\alpha}}I[d_{i}<\delta_{n}\mu_{i}]\right] \tag{18}\] \[= \frac{1}{(np_{n})^{\alpha}}\sum_{1\leq i\leq n}\sum_{k=1}^{\delta _{n}\mu_{i}}\frac{\mu_{i}-k}{k^{\alpha}}\mathbb{P}(d_{i}=k).\]
Next we obtain an upper bound of \(\mathbb{P}(d_{i}=k).\) Note that the degree \(d_{i}\) follows the Poisson-Binomial distribution \(PB(p_{n}f_{i1},p_{n}f_{i2},\ldots,p_{n}f_{in}).\) Then
\[\mathbb{P}(d_{i}=k) = \sum_{S\subset[n]\setminus\{i\},|S|=k}\prod_{j\in S}p_{n}f_{ij} \prod_{j\in S^{C}\setminus\{i\}}(1-p_{n}f_{ij}) \tag{19}\] \[\leq \sum_{S\subset[n]\setminus\{i\},|S|=k}\prod_{j\in S}p_{n}\prod_{j \in S^{C}\setminus\{i\}}(1-p_{n}\epsilon)\] \[= \binom{n}{k}p_{n}^{k}(1-p_{n}\epsilon)^{n-k}.\]
Note that \(\binom{n}{k}\leq e^{k\log n-k\log k+k}\) and \((1-p_{n}\epsilon)^{n-k}=e^{(n-k)\log(1-p_{n}\epsilon)}.\) Then by (19) we get
\[\mathbb{P}(d_{i}=k) \leq \exp\left(k\log(np_{n})-k\log k+k+(n-k)\log(1-p_{n}\epsilon) \right). \tag{20}\]
Let \(g(k)=k\log(np_{n})-k\log k+k+(n-k)\log(1-p_{n}\epsilon)\). Then
\[g^{\prime}(k)=\log\left(\frac{np_{n}}{1-p_{n}\epsilon}\right)-\log k.\]
For \(k<\frac{np_{n}}{1-p_{n}\epsilon}\), \(g^{\prime}(k)>0\). For \(k>\frac{np_{n}}{1-p_{n}\epsilon}\), \(g^{\prime}(k)<0\). Hence \(g(k)\) achieves its maximum at \(k=\frac{np_{n}}{1-p_{n}\epsilon}\). For \(k\leq\delta_{n}np_{n}\), \(g(k)\leq g(\delta_{n}np_{n})\). Hence
\[\mathbb{P}(d_{i}=k)\leq\exp\left(\delta_{n}np_{n}\log\frac{1}{\delta_{n}(1-p_{ n}\epsilon)}+\delta_{n}np_{n}+n\log(1-p_{n}\epsilon)\right)\leq\exp\left(-np_{n} \epsilon(1+o(1))\right).\]
Note that \(\mu_{i}\leq np_{n}\). Then for \(k\leq\delta_{n}\mu_{i}\leq\delta_{n}np_{n}\), by (18), (19), (20), one has
\[\mathbb{E}\left[\frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[d _{i}\leq X_{i}<\delta_{n}\mu_{i}]\right] \leq \exp\left(\log(\delta_{n}np_{n})\right)\exp\left(\log(np_{n}) \right)\exp\left(-np_{n}\epsilon(1+o(1))\right) \tag{21}\] \[= \exp\left(-np_{n}\epsilon(1+o(1))\right).\]
Hence, we get
\[\frac{1}{(np_{n})^{\alpha}}\sum_{1\leq i\leq n}\mathbb{E}\left[\frac{d_{i}|d_{i}-\mu_{i}|}{X_{i}^{\alpha+1}}I[d_{i}\leq X_{i}<\delta_{n}\mu_{i}]\right]=\frac{1}{(np_{n})^{\alpha}}ne^{-\epsilon np_{n}(1+o(1))}=\frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}e^{-\epsilon np_{n}(1+o(1))}. \tag{22}\]
Recall that \(np_{n}\log 2\geq\log n\). Then \(\frac{(\log(np_{n}))^{s}}{(np_{n})^{k}}e^{-\epsilon np_{n}(1+o(1))}=o(1)\) for any fixed positive constants \(k,s,\epsilon\). By (13), (14), (17), (22) and Markov's inequality, one has
\[\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})}{X_{i}^{\alpha+1}\mu_{j}^{ \alpha}}=O_{P}\left(\frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}\frac{(\log(np_{n}))^ {2(\alpha+1)}}{\sqrt{np_{n}}}\right). \tag{23}\]
The third term in (11) can be bounded in the same way as the second term. Now we consider the last term in (11). Note that
\[\begin{split}\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}&=\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}\geq\delta_{n}\mu_{i},X_{j}\geq\delta_{n}\mu_{j}]\\ &+\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}<\delta_{n}\mu_{i},X_{j}\geq\delta_{n}\mu_{j}]\\ &+\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}\geq\delta_{n}\mu_{i},X_{j}<\delta_{n}\mu_{j}]\\ &+\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}<\delta_{n}\mu_{i},X_{j}<\delta_{n}\mu_{j}].\end{split}\tag{24}\]
We shall bound each term in (24). The first term can be bounded as follows.
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{ i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}\geq\delta_{n}\mu_{ i},X_{j}\geq\delta_{n}\mu_{j}]\right|\right] \tag{25}\] \[\leq \frac{1}{\delta_{n}^{2(\alpha+1)}}\sum_{1\leq i,j\leq n}\mathbb{E }\left[\frac{A_{ij}|d_{i}-\mu_{i}||d_{j}-\mu_{j}|}{\mu_{i}^{\alpha+1}\mu_{j}^{ \alpha+1}}I[X_{i}\geq\delta_{n}\mu_{i},X_{j}\geq\delta_{n}\mu_{j}]\right]\] \[\leq \frac{1}{\delta_{n}^{2(\alpha+1)}}O\left(\frac{1}{(np_{n})^{2( \alpha+1)}}\right)\sum_{1\leq i,j\leq n}\mathbb{E}\left[A_{ij}|d_{i}-\mu_{i}|| d_{j}-\mu_{j}|\right].\]
Denote \(\tilde{d}_{i}=\sum_{k\neq j,i}A_{ik}\), \(\tilde{d}_{j}=\sum_{k\neq j,i}A_{jk}\), \(\tilde{\mu}_{i}=\mathbb{E}(\tilde{d}_{i})\) and \(\tilde{\mu}_{j}=\mathbb{E}(\tilde{d}_{j})\). Then \(\tilde{d}_{i}\) and \(\tilde{d}_{j}\) are independent, \(d_{i}=\tilde{d}_{i}+A_{ij}\) and \(d_{j}=\tilde{d}_{j}+A_{ij}\). It is easy to get that
\[|d_{i}-\mu_{i}|=|\tilde{d}_{i}-\tilde{\mu}_{i}+A_{ij}-p_{n}f_{ij}| \leq|\tilde{d}_{i}-\tilde{\mu}_{i}|+|A_{ij}-p_{n}f_{ij}|\leq|\tilde{d}_{i}- \tilde{\mu}_{i}|+1,\] \[\mathbb{E}[|\tilde{d}_{i}-\tilde{\mu}_{i}|]\leq\sqrt{\mathbb{E}[( \tilde{d}_{i}-\tilde{\mu}_{i})^{2}]}=\sqrt{\sum_{k\neq j,i}p_{n}f_{ik}(1-p_{n} f_{ik})}=O(\sqrt{np_{n}}).\]
Similarly, \(|d_{j}-\mu_{j}|\leq|\tilde{d}_{j}-\tilde{\mu}_{j}|+1\) and \(\mathbb{E}[|\tilde{d}_{j}-\tilde{\mu}_{j}|]=O(\sqrt{np_{n}})\). Then we have
\[\mathbb{E}\left[A_{ij}|d_{i}-\mu_{i}||d_{j}-\mu_{j}|\right] \leq \mathbb{E}[A_{ij}]+\mathbb{E}[A_{ij}|\tilde{d}_{i}-\tilde{\mu}_{ i}||\tilde{d}_{j}-\tilde{\mu}_{j}|] \tag{26}\] \[+\mathbb{E}[A_{ij}|\tilde{d}_{i}-\tilde{\mu}_{i}|]+\mathbb{E}[A_{ ij}|\tilde{d}_{j}-\tilde{\mu}_{j}|]\] \[= p_{n}f_{ij}+p_{n}f_{ij}\mathbb{E}[|\tilde{d}_{i}-\tilde{\mu}_{i} |]\mathbb{E}[|\tilde{d}_{j}-\tilde{\mu}_{j}|]\] \[+p_{n}f_{ij}\mathbb{E}[|\tilde{d}_{i}-\tilde{\mu}_{i}|]+p_{n}f_{ ij}\mathbb{E}[|\tilde{d}_{j}-\tilde{\mu}_{j}|]\] \[= O\left(np_{n}^{2}\right).\]
Combining (25) and (26) yields
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}- \mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}\geq\delta_{n} \mu_{i},X_{j}\geq\delta_{n}\mu_{j}]\right|\right] \tag{27}\] \[\leq \frac{1}{\delta_{n}^{2(\alpha+1)}}O\left(\frac{n^{3}p_{n}^{2}}{( np_{n})^{2(\alpha+1)}}\right)\] \[= \frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}O\left(\frac{1}{\delta_{n}^{ 2(\alpha+1)}np_{n}}\right)\] \[= \frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}O\left(\frac{(\log(np_{n}))^ {4(\alpha+1)}}{np_{n}}\right),\]
The second term in (24) can be bounded as follows.
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{ i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}\geq\delta_{n}\mu_{i},d_ {j}\leq X_{j}<\delta_{n}\mu_{j}]\right|\right] \tag{28}\] \[\leq \frac{1}{\delta_{n}^{\alpha+1}}\sum_{1\leq i,j\leq n}\mathbb{E} \left[\frac{A_{ij}|d_{i}-\mu_{i}||d_{j}-\mu_{j}|}{\mu_{i}^{\alpha+1}d_{j}^{ \alpha+1}}I[X_{i}\geq\delta_{n}\mu_{i},d_{j}\leq X_{j}<\delta_{n}\mu_{j}]\right]\] \[\leq \frac{1}{\delta_{n}^{\alpha+1}(np_{n})^{\alpha+1}}\sum_{1\leq i, j\leq n}\mathbb{E}\left[\frac{A_{ij}|d_{i}-\mu_{i}||d_{j}-\mu_{j}|}{d_{j}^{ \alpha+1}}I[d_{j}<\delta_{n}\mu_{j}]\right].\]
Recall that
\[|d_{i}-\mu_{i}|=|\tilde{d}_{i}-\tilde{\mu}_{i}+A_{ij}-p_{n}f_{ij}|,\ \ \ \ |d_{j}-\mu_{j}|=|\tilde{d}_{j}-\tilde{\mu}_{j}+A_{ij}-p_{n}f_{ij}|.\]
Moreover, \(d_{j}<\delta_{n}\mu_{j}\) implies \(\tilde{d}_{j}<\delta_{n}\mu_{j}.\) Then we have
\[\mathbb{E}\left[\frac{A_{ij}|d_{i}-\mu_{i}||d_{j}-\mu_{j}|}{d_{j} ^{\alpha+1}}I[d_{j}<\delta_{n}\mu_{j}]\right] \tag{29}\] \[= \mathbb{E}\left[\frac{A_{ij}|\tilde{d}_{i}-\tilde{\mu}_{i}+A_{ij }-p_{n}f_{ij}||\tilde{d}_{j}-\tilde{\mu}_{j}+A_{ij}-p_{n}f_{ij}|}{d_{j}^{ \alpha+1}}I[d_{j}<\delta_{n}\mu_{j}]\Big{|}A_{ij}=1\right]\mathbb{P}(A_{ij}=1)\] \[\leq p_{n}\mathbb{E}\left[\frac{|\tilde{d}_{i}-\tilde{\mu}_{i}+1-p_{ n}f_{ij}||\tilde{d}_{j}-\tilde{\mu}_{j}+1-p_{n}f_{ij}|}{(\tilde{d}_{j}+1)^{ \alpha+1}}I[\tilde{d}_{j}<\delta_{n}\mu_{j}]\right].\]
Since \(\tilde{d}_{i}\), \(\tilde{d}_{j}\) are independent and \(\mathbb{E}[|\tilde{d}_{j}-\tilde{\mu}_{j}|]=O(\sqrt{np_{n}})\), then by a similar argument as in (18)-(22), it follows that
\[p_{n}\mathbb{E}\left[\frac{|\tilde{d}_{i}-\tilde{\mu}_{i}+1-p_{n }f_{ij}||\tilde{d}_{j}-\tilde{\mu}_{j}+1-p_{n}f_{ij}|}{(\tilde{d}_{j}+1)^{ \alpha+1}}I[\tilde{d}_{j}<\delta_{n}\mu_{j}]\right] \tag{30}\] \[\leq p_{n}\sqrt{np_{n}}\mathbb{E}\left[\frac{|\tilde{d}_{j}-\tilde{\mu }_{j}+1-p_{n}f_{ij}|}{(\tilde{d}_{j}+1)^{\alpha+1}}I[\tilde{d}_{j}<\delta_{n} \mu_{j}]\right]\] \[\leq p_{n}\sqrt{np_{n}}e^{-\epsilon np_{n}(1+o(1))}.\]
Combining (28), (29) and (30) yields
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}- \mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[X_{i}\geq\delta_{n }\mu_{i},d_{j}\leq X_{j}<\delta_{n}\mu_{j}]\right|\right] \tag{31}\] \[\leq \frac{p_{n}\sqrt{np_{n}}}{\delta_{n}^{\alpha+1}(np_{n})^{\alpha+ 1}}n^{2}e^{-\epsilon np_{n}(1+o(1))}\] \[= \frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}e^{-\epsilon np_{n}(1+o(1))}.\]
The third term in (24) can be bounded in the same way as the second term. Now we consider the last term in (24). By a similar argument as in (28)-(31), one gets
\[\begin{split}&\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{\alpha+1}X_{j}^{\alpha+1}}I[d_{i}\leq X_{i}<\delta_{n}\mu_{i},d_{j}\leq X_{j}<\delta_{n}\mu_{j}]\right|\right]\\ &\leq\sum_{1\leq i,j\leq n}\mathbb{E}\left[\frac{A_{ij}|d_{i}-\mu_{i}||d_{j}-\mu_{j}|}{d_{i}^{\alpha+1}d_{j}^{\alpha+1}}I[d_{i}\leq\delta_{n}\mu_{i},d_{j}\leq\delta_{n}\mu_{j}]\right]\\ &\leq\sum_{1\leq i,j\leq n}\mathbb{E}\left[\frac{A_{ij}|\tilde{d}_{i}-\tilde{\mu}_{i}+A_{ij}-p_{n}f_{ij}||\tilde{d}_{j}-\tilde{\mu}_{j}+A_{ij}-p_{n}f_{ij}|}{(\tilde{d}_{i}+A_{ij})^{\alpha+1}(\tilde{d}_{j}+A_{ij})^{\alpha+1}}I[\tilde{d}_{i}\leq\delta_{n}\mu_{i},\tilde{d}_{j}\leq\delta_{n}\mu_{j}]\right]\\ &\leq p_{n}\sum_{1\leq i,j\leq n}\mathbb{E}\left[\frac{(|\tilde{d}_{i}-\tilde{\mu}_{i}|+1)(|\tilde{d}_{j}-\tilde{\mu}_{j}|+1)}{(\tilde{d}_{i}+1)^{\alpha+1}(\tilde{d}_{j}+1)^{\alpha+1}}I[\tilde{d}_{i}\leq\delta_{n}\mu_{i},\tilde{d}_{j}\leq\delta_{n}\mu_{j}]\right]\\ &=p_{n}\left(\sum_{1\leq i\leq n}\mathbb{E}\left[\frac{(|\tilde{d}_{i}-\tilde{\mu}_{i}|+1)}{(\tilde{d}_{i}+1)^{\alpha+1}}I[\tilde{d}_{i}\leq\delta_{n}\mu_{i}]\right]\right)^{2}\\ &\leq p_{n}n^{2}e^{-2\epsilon np_{n}(1+o(1))}=\frac{n^{2}p_{n}}{(np_{n})^{2\alpha}}e^{-2\epsilon np_{n}(1+o(1))}.\end{split}\tag{32}\]
By (24)-(32) and Markov's inequality, it follows that
\[\sum_{1\leq i,j\leq n}\frac{A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})}{X_{i}^{ \alpha+1}X_{j}^{\alpha+1}}=O_{P}\left(\frac{n^{2}p_{n}}{(np_{n})^{2\alpha}} \frac{(\log(np_{n}))^{4(\alpha+1)}}{np_{n}}\right). \tag{33}\]
It is easy to verify that \(\sum_{1\leq i<j\leq n}\frac{p_{n}f_{ij}}{\mu_{i}^{\alpha}\mu_{j}^{\alpha}}\geq \frac{\epsilon n(n-1)p_{n}}{2(np_{n})^{2\alpha}}\). Then combining (11), (12), (23) and (33) yields the limit of \(\mathcal{R}_{-\alpha}\) with \(\alpha>-1\).
Next, we consider \(\mathcal{R}_{-\alpha}\) for \(\alpha\leq-1\). In this case, we rewrite the general Randic index as
\[\mathcal{R}_{\alpha}=\sum_{1\leq i<j\leq n}A_{ij}d_{i}^{\alpha}d_{j}^{\alpha}, \hskip 28.452756pt\alpha\geq 1. \tag{34}\]
By the Taylor expansion, we have
\[d_{i}^{\alpha}=\mu_{i}^{\alpha}+\alpha X_{i}^{\alpha-1}(d_{i}-\mu_{i}),\]
where \(X_{i}\) is between \(d_{i}\) and \(\mu_{i}\). Then
\[\mathcal{R}_{\alpha} = \frac{1}{2}\sum_{1\leq i,j\leq n}A_{ij}d_{i}^{\alpha}d_{j}^{\alpha} \tag{35}\] \[= \frac{1}{2}\sum_{1\leq i,j\leq n}A_{ij}\mu_{i}^{\alpha}\mu_{j}^{ \alpha}+\frac{\alpha}{2}\sum_{1\leq i,j\leq n}A_{ij}(d_{i}-\mu_{i})X_{i}^{ \alpha-1}\mu_{j}^{\alpha}+\frac{\alpha}{2}\sum_{1\leq i,j\leq n}A_{ij}(d_{j}- \mu_{j})X_{j}^{\alpha-1}\mu_{i}^{\alpha}\] \[+\frac{\alpha^{2}}{2}\sum_{1\leq i,j\leq n}A_{ij}(d_{i}-\mu_{i}) (d_{j}-\mu_{j})X_{i}^{\alpha-1}X_{j}^{\alpha-1}.\]
We shall show that the first term in (35) is the leading term and the remaining terms are of smaller order. Similar to (12), it is easy to get
\[\sum_{1\leq i<j\leq n}A_{ij}\mu_{i}^{\alpha}\mu_{j}^{\alpha}=\sum_{1\leq i<j\leq n }p_{n}f_{ij}\mu_{i}^{\alpha}\mu_{j}^{\alpha}\left(1+O_{P}\left(\frac{1}{\sqrt{n }\sqrt{np_{n}}}\right)\right). \tag{36}\]
Since the second term and the third term in (35) have the same order, we only need to bound the second term and the last term. Let \(M=\frac{4}{\epsilon(1-p_{n}\epsilon)}\). Clearly \(M\) is bounded and \(M>4\). The expectation of the absolute value of the second term in (35) can be bounded by
\[\mathbb{E}\left[\left|\sum_{1\leq i,j\leq n}A_{ij}(d_{i}-\mu_{i}) X_{i}^{\alpha-1}\mu_{j}^{\alpha}\right|\right] \leq \mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}\left|d_{i}-\mu_{i} \right|X_{i}^{\alpha-1}\mu_{j}^{\alpha}I[M\mu_{i}\leq X_{i}\leq d_{i}]\right] \tag{37}\] \[+\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}\left|d_{i}-\mu_{i} \right|X_{i}^{\alpha-1}\mu_{j}^{\alpha}I[X_{i}\leq M\mu_{i}]\right].\]
Note that
\[\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}\left|d_{i}-\mu_{i} \right|X_{i}^{\alpha-1}\mu_{j}^{\alpha}I[X_{i}\leq M\mu_{i}]\right] \leq M^{\alpha-1}(np_{n})^{2\alpha-1}\sum_{1\leq i,j\leq n}\mathbb{E} \left[A_{ij}\left|\tilde{d}_{i}-\mu_{i}+A_{ij}\right|\right] \tag{38}\] \[= (np_{n})^{2\alpha}n^{2}p_{n}O\left(\frac{1}{\sqrt{np_{n}}}\right),\]
and
\[\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}\left|d_{i}-\mu_{i} \right|X_{i}^{\alpha-1}\mu_{j}^{\alpha}I[M\mu_{i}\leq X_{i}\leq d_{i}]\right] \tag{39}\] \[\leq O((np_{n})^{\alpha})\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij} \left|d_{i}-\mu_{i}\right|d_{i}^{\alpha-1}I[M\mu_{i}\leq d_{i}]\right]\] \[= O((np_{n})^{\alpha}p_{n})\sum_{1\leq i,j\leq n}\mathbb{E}\left[ \left|\tilde{d}_{i}-\tilde{\mu}_{i}+1-p_{n}f_{ij}\right|\tilde{d}_{i}^{\alpha- 1}I[M\mu_{i}-1\leq\tilde{d}_{i}]\right]\] \[= O((np_{n})^{\alpha}p_{n})\sum_{1\leq i,j\leq n}\sum_{k=M\mu_{i} -1}^{n-2}k^{\alpha-1}(k-\tilde{\mu}_{i}+1-p_{n}f_{ij})\mathbb{P}(\tilde{d}_{i }=k).\]
By a similar argument as in (20), it follows that
\[\sum_{k=M\mu_{i}-1}^{n-2}k^{\alpha-1}(k-\tilde{\mu}_{i}+1-p_{n}f _{ij})\mathbb{P}(\tilde{d}_{i}=k) \leq \sum_{k=M\mu_{i}-1}^{n-2}k^{\alpha}\binom{n}{k}p_{n}^{k}(1-p_{n} \epsilon)^{n-k} \tag{40}\] \[\leq \sum_{k=M\mu_{i}-1}^{n-2}\exp\left(\alpha\log k+g(k)\right).\]
Let \(h(k)=\alpha\log k+g(k).\) Then
\[h^{\prime}(k)=\frac{\alpha}{k}+\log\left(\frac{np_{n}}{1-p_{n}\epsilon}\right)- \log k.\]
Hence \(h(k)\) is decreasing for \(k>\frac{1.1np_{n}}{1-p_{n}\epsilon}\) and large \(n.\) Since \(k\geq M\mu_{i}-1\geq M\epsilon np_{n}-1\geq\frac{2np_{n}}{1-p_{n}\epsilon}\) for large \(n,\) then
\[h(k)\leq h\left(\frac{2np_{n}}{1-p_{n}\epsilon}\right)=\alpha\log\left(\frac{2 np_{n}}{1-p_{n}\epsilon}\right)-\frac{2np_{n}\log 2}{1-p_{n}\epsilon}+n\log(1-p_{n} \epsilon)\leq-\frac{np_{n}\log 2}{1-p_{n}\epsilon}-\epsilon np_{n}.\]
By the assumption \(np_{n}\log 2\geq\log n,\) it is easy to get \(\log n-\frac{np_{n}\log 2}{1-p_{n}\epsilon}<0.\) Then
\[\sum_{k=M\mu_{i}-1}^{n-2}k^{\alpha-1}(k-\tilde{\mu}_{i}+1-p_{n}f _{ij})\mathbb{P}(\tilde{d}_{i}=k) \leq n\exp\left(-\frac{np_{n}\log 2}{1-p_{n}\epsilon}-\epsilon np _{n}\right) \tag{41}\] \[\leq \exp\left(-\epsilon np_{n}(1+o(1))\right).\]
Hence (37) is bounded by \((np_{n})^{2\alpha}n^{2}p_{n}O\left(\frac{1}{\sqrt{np_{n}}}\right)\).
Now we bound the last term in (35). Note that
\[\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}| \tag{42}\] \[= \sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[X_{i}\leq M\mu_{i},X_{j}\leq M\mu_{j}]\] \[+\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[X_{i}\leq M\mu_{i},X_{j}\geq M\mu_{j}]\] \[+\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[X_{i}\geq M\mu_{i},X_{j}\leq M\mu_{j}]\] \[+\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[X_{i}\geq M\mu_{i},X_{j}\geq M\mu_{j}].\]
Since \(X_{i}\) is between \(d_{i}\) and \(\mu_{i},\) then \(X_{i}\leq M\mu_{i}\) implies \(d_{i}\leq X_{i}\leq M\mu_{i},\) and \(X_{i}\geq M\mu_{i}\) implies \(d_{i}\geq X_{i}\geq M\mu_{i}.\) Similar results hold for \(X_{j}.\) Then by (42) we have
\[\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^{ \alpha-1}X_{j}^{\alpha-1}| \tag{43}\] \[\leq \sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[X_{i}\leq M\mu_{i},X_{j}\leq M\mu_{j}]\] \[+\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[X_{i}\leq M\mu_{i},d_{j}\geq X_{j}\geq M\mu_{j}]\] \[+\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[d_{i}\geq X_{i}\geq M\mu_{i},X_{j}\leq M\mu_{j}]\] \[+\sum_{1\leq i,j\leq n}|A_{ij}(d_{i}-\mu_{i})(d_{j}-\mu_{j})X_{i}^ {\alpha-1}X_{j}^{\alpha-1}|I[d_{i}\geq X_{i}\geq M\mu_{i},d_{j}\geq X_{j}\geq M \mu_{j}].\]
Now we bound the expectation of each term in (43). Since the second term and the third term have the same order, it suffices to bound the first term, second term and the last term. By a similar argument as in (39) and (41), it is easy to get the following results.
\[\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}|d_{i}-\mu_{i}||d_{j}- \mu_{j}|X_{i}^{\alpha-1}X_{j}^{\alpha-1}I[X_{i}\leq M\mu_{i},X_{j}\leq M\mu_{j }]\right] \tag{44}\] \[\leq O((np_{n})^{2(\alpha-1)}p_{n})\sum_{1\leq i,j\leq n}\mathbb{E}| \tilde{d}_{i}-\tilde{\mu}_{i}+1-p_{n}f_{ij}||\tilde{d}_{j}-\tilde{\mu}_{j}+1-p_ {n}f_{ij}|\] \[= O((np_{n})^{2(\alpha-1)}p_{n}n^{2}np_{n})\] \[= (np_{n})^{2\alpha}n^{2}p_{n}O\left(\frac{1}{np_{n}}\right),\]
\[\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}|d_{i}-\mu_{i}||d_{j} -\mu_{j}|X_{i}^{\alpha-1}X_{j}^{\alpha-1}I[d_{i}\geq X_{i}\geq M\mu_{i},d_{j} \geq X_{j}\geq M\mu_{j}]\right] \tag{45}\] \[\leq \mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}d_{i}^{\alpha}d_{j}^ {\alpha}I[d_{i}\geq M\mu_{i},d_{j}\geq M\mu_{j}]\right]\] \[\leq p_{n}\sum_{1\leq i,j\leq n}\mathbb{E}\left[(\tilde{d}_{i}+1)^{ \alpha}(\tilde{d}_{j}+1)^{\alpha}I[\tilde{d}_{i}\geq M\mu_{i}-1,\tilde{d}_{j} \geq M\mu_{j}-1]\right]\] \[= p_{n}\left(\sum_{1\leq i\leq n}\mathbb{E}\left[(\tilde{d}_{i}+1) ^{\alpha}I[\tilde{d}_{i}\geq M\mu_{i}-1]\right]\right)^{2}\] \[= O\left(n^{2}p_{n}\right)\exp\left(-2\epsilon np_{n}(1+o(1)) \right),\]
and
\[\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ij}|d_{i}-\mu_{i}||d_{j} -\mu_{j}|X_{i}^{\alpha-1}X_{j}^{\alpha-1}I[X_{i}\leq M\mu_{i},d_{j}\geq X_{j} \geq M\mu_{j}]\right] \tag{46}\] \[\leq O((np_{n})^{\alpha-1})\mathbb{E}\left[\sum_{1\leq i,j\leq n}A_{ ij}|d_{i}-\mu_{i}|d_{j}^{\alpha}I[d_{j}\geq M\mu_{j}]\right]\] \[\leq O((np_{n})^{\alpha-1}p_{n})\sum_{1\leq i,j\leq n}\mathbb{E} \left[|\tilde{d}_{i}-\tilde{\mu}_{i}+1-p_{n}f_{ij}|(\tilde{d}_{j}+1)^{\alpha}I [\tilde{d}_{j}\geq M\mu_{j}-1]\right]\] \[= O((np_{n})^{\alpha-1}p_{n}n^{2}\sqrt{np_{n}})\exp\left(-\epsilon np _{n}(1+o(1))\right).\]
Combining (35)-(46) yields the desired result. This completes the proof for the general Randic index.
(II). Now we prove the result of the general sum-connectivity index. We provide the proof in two cases: \(\alpha<1\) and \(\alpha\geq 1\).
Firstly we work on \(\chi_{-\alpha}\) with \(\alpha>-1\). By Taylor expansion or the mean value theorem, we have
\[\chi_{-\alpha}=\frac{1}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}}{(d_{i}+d_{j})^{ \alpha}}=\frac{1}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}}{(\mu_{i}+\mu_{j})^{ \alpha}}-\frac{\alpha}{2}\sum_{1\leq i,j\leq n}\frac{A_{ij}}{X_{ij}^{\alpha+1 }}(d_{i}-\mu_{i}+d_{j}-\mu_{j}), \tag{47}\]
where \(X_{ij}\) is between \(\mu_{i}+\mu_{j}\) and \(d_{i}+d_{j}\). We shall prove the first term is the leading term and the second term has smaller order than the first term.
By a similar argument as in (12), it is easy to get
\[\sum_{i<j}\frac{A_{ij}}{(\mu_{i}+\mu_{j})^{\alpha}}=\sum_{i<j}\frac{p_{n}f_{ij }}{(\mu_{i}+\mu_{j})^{\alpha}}\left(1+O_{P}\left(\frac{1}{\sqrt{n^{2}p_{n}}} \right)\right). \tag{48}\]
Hence the first term of (47) is asymptotically equal to \(\sum_{i<j}\frac{p_{n}f_{ij}}{(\mu_{i}+\mu_{j})^{\alpha}}\).
Let \(\delta_{n}=[\log(np_{n})]^{-2}\). Since \(X_{ij}\) is between \(\mu_{i}+\mu_{j}\) and \(d_{i}+d_{j}\), \(X_{ij}\leq\delta_{n}(\mu_{i}+\mu_{j})\) implies \(d_{i}+d_{j}\leq X_{ij}\leq\delta_{n}(\mu_{i}+\mu_{j})\). Then
\[\sum_{i,j}\left|\frac{A_{ij}}{X_{ij}^{\alpha+1}}(d_{i}-\mu_{i}+d_ {j}-\mu_{j})\right| \tag{49}\] \[\leq \sum_{i,j}\left|\frac{A_{ij}}{X_{ij}^{\alpha+1}}(d_{i}-\mu_{i}+d_ {j}-\mu_{j})\right|I[d_{i}+d_{j}\leq X_{ij}\leq\delta_{n}(\mu_{i}+\mu_{j})]\] \[+\sum_{i,j}\left|\frac{A_{ij}}{X_{ij}^{\alpha+1}}(d_{i}-\mu_{i}+d_ {j}-\mu_{j})\right|I[X_{ij}\geq\delta_{n}(\mu_{i}+\mu_{j})].\]
Next we bound the expectation of each term in (49). For the second term, the expectation can be bounded as follows.
\[\begin{split}&\mathbb{E}\left[\sum_{i,j}\frac{A_{ij}}{X_{ij}^{\alpha+1}}(|d_{i}-\mu_{i}|+|d_{j}-\mu_{j}|)I[X_{ij}\geq\delta_{n}(\mu_{i}+\mu_{j})]\right]\\ &\leq O\left(\frac{1}{\delta_{n}^{\alpha+1}(np_{n})^{\alpha+1}}\right)\sum_{i,j}\mathbb{E}\left[A_{ij}(|\tilde{d}_{i}-\mu_{i}+A_{ij}|+|\tilde{d}_{j}-\mu_{j}+A_{ij}|)\right]\\ &=O\left(\frac{n^{2}p_{n}\sqrt{np_{n}}}{\delta_{n}^{\alpha+1}(np_{n})^{\alpha+1}}\right)=\frac{n^{2}p_{n}}{(np_{n})^{\alpha}}O\left(\frac{[\log(np_{n})]^{2(\alpha+1)}}{\sqrt{np_{n}}}\right).\end{split}\tag{50}\]
Next we focus on the first term in (49). It is clear that
\[\mathbb{E}\left[\sum_{i,j}\frac{A_{ij}}{X_{ij}^{\alpha+1}}(|d_{i} -\mu_{i}|+|d_{j}-\mu_{j}|)I[d_{i}+d_{j}\leq X_{ij}<\delta_{n}(\mu_{i}+\mu_{j})]\right]\] \[\leq \mathbb{E}\left[\sum_{i,j}\frac{A_{ij}(|d_{i}-\mu_{i}|+|d_{j}- \mu_{j}|)}{(d_{i}+d_{j})^{\alpha+1}}I[d_{i}+d_{j}<\delta_{n}(\mu_{i}+\mu_{j})] \right].\]
Note that \(d_{i}+d_{j}<\delta_{n}(\mu_{i}+\mu_{j})\) implies \(d_{i}<\delta_{n}(\mu_{i}+\mu_{j})\) and \(d_{j}<\delta_{n}(\mu_{i}+\mu_{j})\), and
\[\frac{|d_{i}-\mu_{i}|+|d_{j}-\mu_{j}|}{(d_{i}+d_{j})^{\alpha+1}}=\frac{|d_{i}- \mu_{i}|}{(d_{i}+d_{j})^{\alpha+1}}+\frac{|d_{j}-\mu_{j}|}{(d_{i}+d_{j})^{ \alpha+1}}\leq\frac{|d_{i}-\mu_{i}|}{d_{i}^{\alpha+1}}+\frac{|d_{j}-\mu_{j}|}{ d_{j}^{\alpha+1}}.\]
Then we have
\[\mathbb{E}\left[\sum_{i,j}\frac{A_{ij}}{X_{ij}^{\alpha+1}}(|d_{i} -\mu_{i}|+|d_{j}-\mu_{j}|)I[d_{i}+d_{j}\leq X_{ij}<\delta_{n}(\mu_{i}+\mu_{j})]\right] \tag{51}\] \[\leq \mathbb{E}\left[\sum_{i,j}\frac{A_{ij}|d_{i}-\mu_{i}|}{d_{i}^{ \alpha+1}}I[d_{i}<\delta_{n}(\mu_{i}+\mu_{j})]\right]+\mathbb{E}\left[\sum_{ i,j}\frac{A_{ij}|d_{j}-\mu_{j}|}{d_{j}^{\alpha+1}}I[d_{j}<\delta_{n}(\mu_{i}+\mu_{j}) ]\right]\] \[\leq 2p_{n}\mathbb{E}\left[\sum_{i,j}\frac{|\tilde{d}_{i}-\mu_{i}+1|} {(\tilde{d}_{i}+1)^{\alpha+1}}I[\tilde{d}_{i}<\delta_{n}(\mu_{i}+\mu_{j})]\right]\] \[= n^{2}p_{n}e^{-\epsilon np_{n}(1+o(1))}=\frac{n^{2}p_{n}}{(np_{n} )^{\alpha}}e^{-\epsilon np_{n}(1+o(1))}.\]
Combining (47)-(51) yields
\[\chi_{-\alpha}=p_{n}^{1-\alpha}\sum_{i<j}\frac{f_{ij}}{(f_{i}+f_{j})^{\alpha} }\left(1+O_{P}\left(\frac{[\log(np_{n})]^{2(\alpha+1)}}{\sqrt{n^{2}p_{n}}} \right)\right),\hskip 28.452756pt\alpha>-1.\]
Now we work on \(\chi_{\alpha}\) with \(\alpha\geq 1\). When \(\alpha=1\), the proof is trivial. We will focus on \(\alpha>1\). By the mean value theorem, one has
\[\chi_{\alpha}=\frac{1}{2}\sum_{i,j}A_{ij}(d_{i}+d_{j})^{\alpha}=\frac{1}{2} \sum_{i,j}A_{ij}(\mu_{i}+\mu_{j})^{\alpha}+\frac{\alpha}{2}\sum_{i,j}A_{ij}X_{ ij}^{\alpha-1}(d_{i}-\mu_{i}+d_{j}-\mu_{j}), \tag{52}\]
where \(X_{ij}\) is between \(\mu_{i}+\mu_{j}\) and \(d_{i}+d_{j}\).
The remaining proof is similar to the proof of the case \(\alpha<1\). Let \(M=\frac{4}{\epsilon(1-p_{n}\epsilon)}\). It is clear \(M\) is bounded and \(M>4\). Note that
\[\sum_{i,j}\mathbb{E}\left[A_{ij}X_{ij}^{\alpha-1}(|d_{i}-\mu_{i}|+|d_{j}-\mu_{ j}|)I[X_{ij}\leq M(\mu_{i}+\mu_{j})]\right]=(np_{n})^{\alpha}n^{2}p_{n}O\left( \frac{1}{\sqrt{np_{n}}}\right), \tag{53}\]
and
\[\sum_{i,j}\mathbb{E}\left[A_{ij}X_{ij}^{\alpha-1}(|d_{i}-\mu_{i}+d _{j}-\mu_{j}|)I[d_{i}+d_{j}\geq X_{ij}>M(\mu_{i}+\mu_{j})]\right] \tag{54}\] \[\leq O(1)\sum_{i,j}\mathbb{E}\left[A_{ij}(\tilde{d}_{i}+\tilde{d}_{j} +2A_{ij})^{\alpha-1}(|\tilde{d}_{i}+\tilde{d}_{j}-\mu_{i}-\mu_{j}+2A_{ij}|)I[ \tilde{d}_{i}+\tilde{d}_{j}>M(\mu_{i}+\mu_{j}-1)]\right]\] \[\leq O(1)p_{n}\sum_{i,j}\mathbb{E}\left[(\tilde{d}_{i}+\tilde{d}_{j} +2)^{\alpha-1}(\tilde{d}_{i}+\tilde{d}_{j})I[\tilde{d}_{i}+\tilde{d}_{j}>M(\mu_ {i}+\mu_{j}-1)]\right]\] \[\leq O(1)p_{n}\sum_{i,j}\sum_{k=M(\mu_{i}+\mu_{j}-1)}^{2(n-2)}(k+2)^{ \alpha-1}k\mathbb{P}(\tilde{d}_{i}+\tilde{d}_{j}=k)\] \[= n^{2}p_{n}ne^{-\epsilon np_{n}(1+o(1))}=(np_{n})^{\alpha}n^{2}p_{ n}e^{-\epsilon np_{n}(1+o(1))},\]
where the second last step follows from a similar argument as in (41) by noting that \(\tilde{d}_{i}+\tilde{d}_{j}\) follows the Poisson-Binomial distribution.
Combining (52), (53) and (54) yields
\[\chi_{\alpha}=\left(1+O_{P}\left(\frac{1}{\sqrt{np_{n}}}\right)\right)p_{n}^{ \alpha+1}\sum_{i<j}(f_{i}+f_{j})^{\alpha}f_{ij},\hskip 28.452756pt\alpha\geq 1.\]
Then the proof is complete.
## Conflict of interest
The author has no conflict of interest to disclose.
## Acknowledgement
The author is grateful to the anonymous referees for valuable comments that significantly improved this manuscript.
|
2309.08927 | DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic
Neural Radiance Fields | The accurate reconstruction of dynamic scenes with neural radiance fields is
significantly dependent on the estimation of camera poses. Widely used
structure-from-motion pipelines encounter difficulties in accurately tracking
the camera trajectory when faced with separate dynamics of the scene content
and the camera movement. To address this challenge, we propose Dynamic
Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance
Fields (DynaMoN). DynaMoN utilizes semantic segmentation and generic motion
masks to handle dynamic content for initial camera pose estimation and
statics-focused ray sampling for fast and accurate novel-view synthesis. Our
novel iterative learning scheme switches between training the NeRF and updating
the pose parameters for an improved reconstruction and trajectory estimation
quality. The proposed pipeline shows significant acceleration of the training
process. We extensively evaluate our approach on two real-world dynamic
datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset. DynaMoN
improves over the state-of-the-art both in terms of reconstruction quality and
trajectory accuracy. We plan to make our code public to enhance research in
this area. | Nicolas Schischka, Hannah Schieber, Mert Asim Karaoglu, Melih Görgülü, Florian Grötzner, Alexander Ladikos, Daniel Roth, Nassir Navab, Benjamin Busam | 2023-09-16T08:46:59Z | http://arxiv.org/abs/2309.08927v3 | # DynaMoN: Motion-Aware Fast And Robust Camera
###### Abstract
Dynamic reconstruction with neural radiance fields (NeRF) requires accurate camera poses. These are often hard to retrieve with existing structure-from-motion (SfM) pipelines as both camera and scene content can change. We propose DynaMoN that leverages simultaneous localization and mapping (SLAM) jointly with motion masking to handle dynamic scene content. Our robust SLAM-based tracking module significantly accelerates the training process of the dynamic NeRF while improving the quality of synthesized views at the same time. Extensive experimental validation on TUM RGB-D, BONN RGB-D Dynamic and the DyCheck's iPhone dataset, three real-world datasets, shows the advantages of DynaMoN both for camera pose estimation and novel view synthesis.
## I Introduction
Enabling novel view synthesis on dynamic scenes often requires multi-camera setups [1, 2]. However, everyday dynamic scenes are often captured by a single camera, restricting the field of view [1]. Deep learning's success in novel view synthesis has the potential to transcend these restrictions. In comparison to the commonly used voxel, surfel or truncated signed distance field (TSDF) output of Simultaneous Localization and Mapping (SLAM) approaches, neural radiance fields (NeRF) can address incomplete 3D reconstructions and enable novel view synthesis from new, reasonable camera positions [3]. Advances in dynamic NeRFs [4, 5, 6, 7] allow novel view synthesis not only for static captures but also for dynamic ones.
To enable novel view synthesis using a NeRF, accurate camera poses, usually retrieved via COLMAP [8] or the Apple ARKit, are essential. The structure-from-motion (SfM)-based approach often demands hours of computation to estimate reasonable camera poses and is challenged by large scale dynamic scenes.
To overcome the limitation of slow SfM approaches, recent works combined SLAM and NeRF [9, 10, 11, 3]. SLAM provides the camera trajectory faster than classic SfM. However, by nature, scenes are more often dynamic than static. Existing NeRF-SLAM methods [9, 10, 11, 3] assume a static scene, which makes them struggle with dynamic ones, see Fig. 1. In this paper, we present DynaMoN, a motion-aware camera localization and visualization approach which can handle highly dynamic scenes. We utilize motion and semantic segmentation during our dynamic camera localization process to enable robust tracking of the camera path in a dynamic environment. We combine our camera localization approach with a dynamic NeRF to enable high-quality novel views of the scenes. To demonstrate our robustness, we evaluate our approach on three challenging datasets, namely the dynamic subset of the TUM RGB-D [13] dataset, the BONN RGB-D Dynamic [14] dataset and DyCheck's iPhone [15] dataset.
In summary we contribute:
* DynaMoN, a motion-aware fast and robust camera localization approach for dynamic novel view synthesis.
* State-of-the-art camera localization and novel view synthesis results on TUM RGB-D, BONN RGB-D Dynamic and the DyCheck's iPhone dataset.
Fig. 1: NeRF-SLAM approaches rely on static scenes (left). Approaches like InstantNGP [12], used for visualization in NeRF-SLAM [3] or Orbeez-SLAM [11], enable masking out dynamic pixels (left). Still, even with this masking, the scene representation lacks quality (left). Dynamic NeRF approaches rely on SfM, which is offline and sometimes unsuccessful on highly dynamic scenes (center). Considering dynamics already in the SLAM approach (right) enables better camera tracking and novel view synthesis with higher quality.
## II Related Works
Our approach DynaMoN combines a dynamic NeRF with fast and robust motion-aware camera localization; thus, we group the related work under camera localization, its use for novel view synthesis, and neural representations of dynamic scenes.
### _Camera Localization and Scene Dynamics_
SfM and SLAM [16, 17, 18, 19] are the two most common approaches for robust camera localization. More specifically, SLAM often targets real-time localization of a camera coupled with mapping. In addition to the visual information, some approaches utilize additional sensory signals like depth and IMU to further improve the accuracy. A common branch of work employs ORB descriptors to build and track sparse 3D maps [16, 19, 17].
In addition to classical or partially learnable SLAM approaches, Teed and Deng [18] introduce the end-to-end learnable DROID-SLAM. DROID-SLAM consists of a Dense Bundle Adjustment layer enabling recurrent iterative updates of camera poses and pixel-wise depth.
Initial camera localization approaches rely on static scenes. However, natural scenes are dynamic [20, 21, 22, 23]. To address this, Yu et al. [20] introduce a dynamic SLAM approach building upon ORB-SLAM2 [19]. Their DS-SLAM integrates a semantic segmentation network and a moving consistency check in the tracking process.
Runz et al. [24] enrich RGB-D SLAM with object-awareness and semantic segmentation capabilities. DynaSLAM [25] combines geometric and deep learning-based segmentation techniques to effectively mask out dynamic objects.
Dai et al. [22] build upon ORB-SLAM2 [19] and introduce the use of semantic segmentation using point correlation to split dynamic and static points. Ye et al. [23] divide the scene representation in a static flow field and a dynamic flow field.
### _Camera Localization and NeRF_
Using the retrieved camera poses, a captured environment can be represented as a 3D scene representation. One possibility is the use of a multilayer perceptron (MLP) [26]. Mildenhall et al. [26] introduce the use of an MLP as a neural representation for novel view synthesis, denoted as NeRF. A NeRF relies on accurate camera poses, typically obtained with COLMAP [8]. However, this slows down the overall pipeline. An alternative to SfM is the use of SLAM to retrieve the camera poses.
Sucar et al. [27] introduce the use of an MLP as scene representation in a SLAM approach. Building upon the use of a single MLP, Zhu et al. [9] introduce the use of multiple NeRFs in a hierarchical feature grid to learn the camera pose and scene representation. NICER-SLAM [10] introduces a similar approach for RGB input including depth estimation.
Other approaches purely relying on RGB and combining SLAM and NeRF [3, 11] utilize the well-known hash-based InstantNGP [12] for the scene representation. While NeRF-SLAM [3] builds upon an adaption of Droid-SLAM [18], Orbeez-SLAM [11] leverages visual odometry [19].
### _Neural Representation of Dynamic Scenes_
In the real world, scenes are inherently dynamic, in contrast to the assumption underlying the more common static representations [9, 10, 27, 11, 3, 12, 26]. To represent dynamics in a NeRF, the use of grid structures has been a popular approach [4, 5]. HexPlane [4] utilizes a 4D space-time grid divided into six feature planes. A 4D point in space-time is projected onto each feature plane to obtain its feature vector. Similarly, Tensor4D [5] builds upon a 4D tensor field using time-aware volumes projected onto nine 2D planes. TiNeuVox [7] represents dynamic scenes using time-aware voxel features and enables faster training.
Optimizing the camera poses and the view synthesis has been explored in static scenes [28, 29, 30, 31, 32] and often improved the novel view synthesis. Liu et al. [33] introduce this optimization to a dynamic NeRF. To model the dynamic scene, they use one static and one dynamic NeRF, where the static NeRF is optimized using the camera poses.
## III Method
In our setup, the input is a video captured in a dynamic environment by a moving camera with unknown poses. This leads to two objectives: first, to estimate the camera poses along the dynamic input, and second, to build a 4D representation enabling novel views from the input images and predicted camera poses.
### _Camera Localization_
Our work builds upon DROID-SLAM [18]. To enhance its capacity of coping with highly dynamic scenes, we take advantage of motion and semantic segmentation, see Figure 2. Our underlying architecture [18] solves a dense bundle adjustment problem in every iteration for a set of keyframes to acquire their corresponding poses \(\mathbf{G}\) and depths \(\mathbf{d}\). It is weighted by the confidences \(w_{ij}\) of the optical flow calculation by RAFT [34]:
\[\mathbf{E}(\mathbf{G},\mathbf{d})=\sum_{(i,j)\in\epsilon}\left\|p_{ij}^{*}- \Pi_{C}(G_{ij}\circ\Pi_{C}^{-1}(p_{i},d_{i}))\right\|_{\Sigma_{ij}}^{2}, \tag{1}\]
where \(\Sigma_{ij}=diag(w_{ij})\), \(p_{ij}^{*}\) is the estimated optical flow, \(G_{ij}\) is the motion between the poses \(G_{i}\) and \(G_{j}\) and \(p_{i}\) as well as \(d_{i}\) represent the pixel grid and inverse depth map of frame \(i\)[18]. \((i,j)\in\epsilon\) means that there exists an edge between the images \(i\) and \(j\) in the frame graph of the SLAM method, i.e. they have overlapping fields of view. \(\circ\) represents the retraction on the SE3 manifold and \(\Pi_{C}\) as well as \(\Pi_{C}^{-1}\) the projection function from 3D to the image plane and inverse projection function, respectively. The remaining poses that do not belong to keyframes are filled in with a motion-only bundle adjustment in the end. To introduce our proposed masking of dynamics, we set the weight \(w_{ij}\) equal to \(0\), wherever one of the motion (\(M_{MS}\)) or the segmentation mask (\(M_{SS}\)) evaluates to true. Due to the nature of the
Mahalanobis distance, potentially dynamic pixels will then not be considered for optimization.
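The following PyTorch sketch illustrates this weighting step; the tensor shapes and the function name are our own assumptions and not taken from the DynaMoN code:

```python
import torch

def mask_dba_weights(w_ij: torch.Tensor,
                     motion_mask: torch.Tensor,
                     semantic_mask: torch.Tensor) -> torch.Tensor:
    """Zero out flow-confidence weights on potentially dynamic pixels.

    w_ij:          (..., H, W, 2) confidence weights of the optical flow
    motion_mask:   (..., H, W) boolean, True where M_MS detects motion
    semantic_mask: (..., H, W) boolean, True where M_SS fires
    """
    dynamic = motion_mask | semantic_mask
    # a zero weight removes the pixel from the Mahalanobis residual in (1)
    return w_ij * (~dynamic).unsqueeze(-1).to(w_ij.dtype)
```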
Our semantic segmentation module predicts masks for the predefined classes person, cat and dog, since these appear in the used datasets. We use a pre-trained version of the state-of-the-art semantic segmentation network DeepLabV3 [35] with a ResNet50-backbone [36]. Additionally, we utilize a motion-based filtering of pixels belonging to dynamic objects to enhance the segmentation of dynamics (\(M_{MS}\)). This enables our approach to deal with a greater variety of dynamics independent of previously known categories. We refine the motion mask and the estimated camera movement between two adjacent frames twice with an incremental increase of the segmentation threshold [37].
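A minimal sketch of such a semantic branch with torchvision's pre-trained DeepLabV3; the class indices follow the usual Pascal VOC convention (8 = cat, 12 = dog, 15 = person), which should be double-checked against the deployed checkpoint:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()
DYNAMIC_CLASSES = (8, 12, 15)  # cat, dog, person in VOC indexing (assumed)

@torch.no_grad()
def semantic_mask(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W), ImageNet-normalized; returns (H, W) bool mask."""
    labels = model(image)["out"].argmax(dim=1)[0]  # per-pixel class ids
    mask = torch.zeros_like(labels, dtype=torch.bool)
    for c in DYNAMIC_CLASSES:
        mask |= labels == c
    return mask
```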
### _Dynamic Scene Representation_
While our dynamic camera localization module enables the retrieval of camera poses in dynamic scenes, the 4D scene representation and novel view generation are achieved with a dynamic NeRF. To represent a scene in 4D (3D\(+t\)) we follow the combination of implicit and explicit representation presented by Cao and Johnson [4]. Our NeRF consists of six feature planes, grouped into three pairs of one spatial and one spatio-temporal plane (e.g. XY with ZT) [4].
The 3D scene is represented as a 4D feature volume \(V\in\mathbf{R}^{X\times Y\times Z\times T\times F}\)[4]:
\[\begin{split}\sum_{r=1}^{R_{1}}\mathbf{M}_{r}^{X\times Y}\otimes \mathbf{M}_{r}^{Z\times T}\otimes\mathbf{v}_{r}^{1}&+\sum_{r=1}^ {R_{2}}\mathbf{M}_{r}^{X\times Z}\otimes\mathbf{M}_{r}^{Y\times T}\otimes \mathbf{v}_{r}^{2}\\ &+\sum_{r=1}^{R_{3}}\mathbf{M}_{r}^{Y\times Z}\otimes\mathbf{M}_{ r}^{X\times T}\otimes\mathbf{v}_{r}^{3},\end{split} \tag{2}\]
where each \(\mathbf{M}_{r}^{A\times B}\in\mathbf{R}^{A\times B}\) is one of the six learned feature planes, \(\mathbf{v}_{r}^{i}\) are vectors along the \(F\) axis and \(\otimes\) represents the outer product. Each pair of feature planes (e.g. \(\mathbf{M}_{r}^{X\times Y},\mathbf{M}_{r}^{Z\times T}\)) has one spatial and one spatio-temporal plane. Color and density features acquired from the feature planes are fed into tiny MLPs to yield the final density and RGB color values.
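A minimal sketch of querying this six-plane factorization for one rank component per pair; the plane resolutions, tensor layout and function name are our own assumptions, following the HexPlane design:

```python
import torch
import torch.nn.functional as F

# axis indices of (x, y, z, t); each pair = one spatial + one spatio-temporal plane
PAIRS = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((1, 2), (0, 3))]

def query_planes(planes, xyzt):
    """planes: six (1, F, R, R) grids ordered as in PAIRS,
    xyzt: (N, 4) coordinates normalized to [-1, 1]; returns (N, F)."""
    feats = torch.ones(xyzt.shape[0], planes[0].shape[1])
    for k, (spatial, spatiotemporal) in enumerate(PAIRS):
        for plane, axes in zip(planes[2 * k:2 * k + 2],
                               (spatial, spatiotemporal)):
            grid = xyzt[:, list(axes)].view(1, -1, 1, 2)        # (1, N, 1, 2)
            s = F.grid_sample(plane, grid, align_corners=True)  # (1, F, N, 1)
            feats = feats * s.view(plane.shape[1], -1).t()      # (N, F)
    return feats  # multiplied features, fed to the tiny density/color MLPs
```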
To optimize our NeRF module we sample batches of rays \(\mathcal{R}\) for which we compute the mean squared error between the rendered RGB values \(C\) and the ground truth \(\hat{C}\):
\[L_{RGB}=\frac{1}{\left|\mathcal{R}\right|}\sum_{r\in\mathcal{R}}\left\|C(r)- \hat{C}(r)\right\|_{2}^{2}. \tag{3}\]
In addition, we apply the Total Variation (TV) loss [38, 4] as regularization term for the features \(x\) of each feature plane. For the multi-spatial planes this results in
\[\begin{split} L_{TV,s}&=\frac{2}{b}\sum_{i,j}(\frac {1}{N}\left(x_{i,j}-x_{i+1,j}\right)^{2}\\ &+\frac{1}{M}\left(x_{i,j}-x_{i,j+1}\right)^{2})\end{split} \tag{4}\]
and for the spatio-temporal planes in
\[\begin{split} L_{TV,ts}&=\frac{2}{b}\sum_{i,j}( \frac{1}{N}\left(x_{i,j}-x_{i+1,j}\right)^{2}\\ &+\frac{\lambda_{ts}}{M}\left(x_{i,j}-x_{i,j+1}\right)^{2}).\end{split} \tag{5}\]
Here, \(N\) is the total number of differences in the first plane dimension, \(M\) is the total number of differences in the second dimension and b is the batch size. In total, \(L_{TV,\sigma}\) is the sum of the TV losses over all feature planes for the density and \(L_{TV,RGB}\) the same for the color. Thus, the overall loss function for training the NeRF, which is done in a coarse-to-fine manner, is characterized by the following:
\[L_{NeRF}=L_{RGB}+\lambda_{TV}(L_{TV,\sigma}+wL_{TV,RGB}). \tag{6}\]
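A compact sketch of how (3)-(6) could be assembled in code; the grouping of planes and the averaging inside the TV terms are our own simplifications:

```python
import torch

def tv_loss(plane: torch.Tensor, lam_second: float = 1.0) -> torch.Tensor:
    """TV penalty on one (1, F, A, B) feature plane, cf. (4) and (5);
    lam_second plays the role of lambda_ts on spatio-temporal planes."""
    d1 = (plane[..., 1:, :] - plane[..., :-1, :]).pow(2).mean()
    d2 = (plane[..., :, 1:] - plane[..., :, :-1]).pow(2).mean()
    return 2.0 * (d1 + lam_second * d2)

def nerf_loss(rgb, rgb_gt, sigma_planes, rgb_planes,
              lam_tv=0.005, w=0.1, lam_ts=20.0):
    """Assemble (6); each *_planes entry is (plane, is_spatiotemporal)."""
    l_rgb = (rgb - rgb_gt).pow(2).mean()  # photometric loss (3)
    l_tv_sigma = sum(tv_loss(p, lam_ts if st else 1.0)
                     for p, st in sigma_planes)
    l_tv_rgb = sum(tv_loss(p, lam_ts if st else 1.0)
                   for p, st in rgb_planes)
    return l_rgb + lam_tv * (l_tv_sigma + w * l_tv_rgb)
```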
## IV Evaluation
DynaMoN is assessed on three challenging datasets, examining both camera localization and novel view synthesis quality.
### _Datasets_
To assess DynaMoN, we compare both the camera localization and the novel view synthesis part with state-of-the-art approaches on three challenging datasets.
**TUM RGB-D - Dynamic subset**[13]: The TUM RGB-D dataset has a resolution of \(640\times 480\). While the dataset
Fig. 2: **Our DynaMoN architecture**. We utilize segmentation (\(M_{SS}\to S_{S_{j}}\)) and motion masks (\(M_{MS}\to S_{M_{i}}\)) on the input images (\(I\)) to enable motion-aware fast and robust camera localization. Based on the predicted camera poses \(G_{i},G_{i-1},\ldots\) and the camera intrinsics (\(K_{i}\)) we produce a 4D output (\(\Theta\)) using a dynamic NeRF (\(M_{N}\)).
mainly focuses on static parts, five sequences were rated _slightly dynamic_ and four _highly dynamic_.
**BONN RGB-D Dynamic**[14]: The BONN RGB-D Dynamic dataset consists of 24 dynamic sequences and 2 static sequences. The number of recordings per scene varies. The images have a resolution of \(640\times 480\).
**DyCheck's iPhone**[15]: The iPhone dataset consists of 14 scenes, seven captured with multiple cameras and seven captured with a single camera. The sequences are taken with an iPhone including a lidar scanner for depth. The resolution of the images is \(720\times 960\).
For the novel view synthesis results we use every 8th frame of the TUM RGB-D dataset and BONN RGB-D Dynamic dataset for testing. On the iPhone dataset we follow the official evaluation split.
### _Implementation Details_
We implemented our approach in Python using PyTorch [41] as the deep learning framework. While we use the reported image sizes for the novel view synthesis experiments, we use half of the resolution for the camera localization to be consistent with existing trajectory evaluations of [18]. For our motion mask module we introduce a threshold to avoid false positives. We set the initial threshold to 0.95 and increase it to 0.98 throughout the motion mask refinement. In cases where the camera motion is larger than in the training sequences, the motion segmentation module produces more false positives; as a countermeasure, we introduce a threshold for the number of pixels that may contain motion. Masks exceeding this threshold are discarded. This ensures that there are always enough pixels for the dense bundle adjustment, as illustrated in the sketch below.
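The mask filtering could look like the following sketch; only the 0.95 and 0.98 thresholds come from the text above, while the pixel-budget value is a hypothetical placeholder:

```python
import torch

def filter_motion_mask(motion_prob: torch.Tensor,
                       threshold: float,
                       max_dynamic_ratio: float = 0.5) -> torch.Tensor:
    """motion_prob: (H, W) per-pixel motion probability in [0, 1].

    max_dynamic_ratio is a hypothetical pixel budget: if too many pixels
    are flagged, the mask is discarded so that dense bundle adjustment
    always keeps enough static pixels.
    """
    mask = motion_prob > threshold
    if mask.float().mean() > max_dynamic_ratio:
        return torch.zeros_like(mask)
    return mask

# refinement: start at 0.95, tighten to 0.98 (values from the text above)
coarse = filter_motion_mask(torch.rand(480, 640), 0.95)
fine = filter_motion_mask(torch.rand(480, 640), 0.98)
```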
For the loss functions we set \(w\) for \(L_{NeRF}\) to 0.1, \(\lambda_{ts}\) to 20 and \(\lambda_{TV}\) initially to 0.005.
To compare our camera pose retrieval with the classic way of using COLMAP, we followed the implementation of InstantNGP [12] to retrieve the COLMAP ground truth.
Besides the COLMAP comparison, we compare our approach with RoDynRF [33]. For the TUM RGB-D dataset, we train their approach on an NVIDIA RTX 3090 with 24GB of VRAM. We follow their training guideline and do not employ prior pose initialization.
### _Evaluation Metrics_
To evaluate our approach, we consider two aspects. First, we consider the translational trajectory error as RMSE in meters. Second, we analyze the rendering quality of the novel dynamic views by reporting PSNR and SSIM [42] values.
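For reference, both metrics can be computed as in the sketch below (trajectory alignment, e.g. via the Umeyama method, is assumed to have happened beforehand):

```python
import torch

def psnr(img: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Peak signal-to-noise ratio for images with values in [0, 1]."""
    mse = (img - ref).pow(2).mean()
    return -10.0 * torch.log10(mse)

def ate_rmse(traj_est: torch.Tensor, traj_gt: torch.Tensor) -> torch.Tensor:
    """Translational RMSE in meters over aligned (N, 3) trajectories."""
    return (traj_est - traj_gt).pow(2).sum(dim=-1).mean().sqrt()
```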
### _Camera Localization Quality_
Our results on all datasets show a lower trajectory error compared to the state-of-the-art approaches. For the TUM RGB-D dataset, the camera trajectory error is reported in Table I. We compare RGB-D-based and monocular SLAM approaches with our proposed camera retrieval method using only motion masks (MS) as well as motion and semantic segmentation masks in combination (MS&SS). In terms of the mean trajectory error, our approach performs on par with DROID-SLAM. However, in terms of the maximum error, our approach performs best among the compared methods.
Additionally, we evaluate our approach on the BONN RGB-D Dynamic dataset in Table III. This dataset contains more dynamic scenes compared to the dynamic sequences of the TUM RGB-D dataset. On this dataset, our approach outperforms the state-of-the-art approaches in terms of mean and max trajectory error.
Beyond the SLAM datasets, we compare our method with DROID-SLAM and COLMAP on DyCheck's iPhone dataset [15], see Table II. DynaMoN outperforms DROID-SLAM on the iPhone dataset.
COLMAP fails to generate camera poses for the majority of the sequences.
### _Novel View Synthesis Quality_
To evaluate our approach from a NeRF perspective we consider two aspects. First, the classic way of generating camera ground truth when using NeRF is the use of COLMAP. Thus, we compare our DynaMoN using the same dynamic NeRF but with camera poses from COLMAP. Second, we compare our approach with RoDynRF [33] which regresses the camera pose along with a dynamic NeRF.
The results for the TUM RGB-D dataset, denoted in Table IV, show that DynaMoN achieves a higher novel view synthesis quality than RoDynRF or the classic COLMAP-based camera poses. Examples of our renderings can be found in Fig. 3.
On the BONN RGB-D dataset, see Table V, COLMAP is challenged on several sequences. In contrast, our DynaMoN retrieves all camera poses, see Table III, and performs well on all sequences of the BONN RGB-D dataset for novel view synthesis. Examples can be seen in Fig. 3. However, when COLMAP is successful, its accurate camera poses also lead to decent novel view synthesis results.
In addition to the SLAM datasets, we compare our approach with the most similar NeRF approach, RoDynRF [33], on the multi-camera scenes of DyCheck's iPhone dataset, see Table VI. In PSNR, we achieve similar but slightly worse performance on this dataset. In SSIM, however, our approach shows better results.
## V Discussion
In comparison to sparser reconstructions, our approach addresses camera localization and novel view synthesis in combination. While this leads to more pleasing 3D visualizations and novel views, the computing time is higher than for traditional camera localization and 3D representation methods.
Our evaluation considers two aspects: the robustness of the camera pose estimation and the novel view synthesis results. For the camera pose estimation, our approach shows improved performance compared to the state-of-the-art approaches. Our motion masks alone already improve performance; however, on scenes with large motions, the combination with semantic segmentation masks provides an improved camera localization result. Moreover, we demonstrate that our approach also achieves robust performance on scenes with less camera motion, normally used for novel view synthesis, see Table II.
According to the results in Table IV, existing joint camera pose regression and novel view synthesis approaches are challenged by more dynamic camera motions and dynamic scenes.
Although our approach has demonstrated competitive results on the TUM RGB-D and BONN RGB-D datasets, it faced challenges when applied to the multi-camera iPhone dataset due to its narrower field of view. Additionally, on the BONN RGB-D Dynamic dataset, our approach struggled with handling strong light influences near the camera, as illustrated in Fig. 4. Future work could focus on enhancing motion mask prediction by eliminating the need for predefined classes.
## VI Conclusion
We present DynaMoN, a motion-aware, fast and robust camera localization approach for novel view synthesis. DynaMoN can handle not only the motion of known objects using semantic segmentation masks but also that of unknown objects using a motion segmentation mask. Furthermore, it retrieves the camera poses faster and more robustly than classical SfM approaches, enabling a more accurate 4D scene representation. Compared to the state-of-the-art, DynaMoN outperforms other dynamic camera localization approaches and shows better results for novel view synthesis.
## Acknowledgment
We thank d.hip for providing a campus stipend and gratefully acknowledge their support.
Fig. 4: **Failure cases.** Our DynaMoN (center and bottom right) is challenged by dynamics with a strong light influence.
Fig. 3: Novel views from DynaMoN (Ours) compared to the Ground Truth and COLMAP-based dynamic NeRF on the BONN RGB-D dynamic dataset (left). On the TUM RGB-D dataset we compare DynaMoN (Ours) and the ground truth. |
2309.09387 | Climate-Resilient UAVs: Enhancing Energy-Efficient B5G Communication in
Harsh Environments | This paper explores the crucial role of Unmanned Aerial Vehicles (UAVs) in
advancing Beyond Fifth Generation (B5G) communication networks, especially in
adverse weather conditions like rain, fog, and snow.
The study investigates the synergy between climate-resilient UAVs and
energy-efficient B5G communication.
Key findings include the impact of weather elements on UAV coverage and
communication dynamics. The research demonstrates significant enhancements in
energy efficiency, reduced interference, increased data transmission rates, and
optimal channel gain under various weather conditions.
Overall, this paper emphasizes the potential of climate-resilient UAVs to
improve energy-efficient B5G communication and highlights technology's role in
mitigating climate change's impact on communication systems, promoting
sustainability and resilience. | Abdu Saif, Saeed Hamood Alsamhi, Edward Curry | 2023-09-17T21:56:02Z | http://arxiv.org/abs/2309.09387v1 | # Climate-Resilient UAVs: Enhancing Energy-Efficient B5G Communication in Harsh Environments
###### Abstract
The deployment of Beyond Fifth Generation (B5G) networks is increasingly vital yet challenging due to harsh environmental conditions exacerbated by climate change. Unmanned Aerial Vehicles (UAVs) have emerged as critical enablers for B5G communication in adverse weather conditions, including rain, fog, and snow. This paper investigates the synergy between climate-resilient UAVs and energy-efficient B5G communication. We assess UAV coverage and energy efficiency across varying elevation angles in adverse weather. Our findings highlight the pronounced impact of rainfall on UAV coverage and the substantial influence of fog and snow on communication dynamics. Our paper unveils significant improvements in energy efficiency attributed to reduced interference, enhanced data transmission rates, and optimal channel gain under diverse weather conditions. This paper addresses the challenges of harsh environments, emphasizing the potential of climate-resilient UAVs to enhance energy-efficient B5G communication. It underscores the importance of technology in mitigating climate change's impact on communication systems, charting a path toward a more sustainable and resilient future.
UAV-assisted communication, Harsh environments, Meteorological impacts, Energy efficiency, Outage probability, Spectrum efficiency, B5G.
## I Introduction
The deployment of Beyond Fifth Generation (B5G) networks represents a pivotal advancement in contemporary communication systems, poised to usher in an era of unparalleled connectivity and capabilities. The networks are designed to transcend the boundaries of their predecessors, offering revolutionary enhancements in data rates, network capacity, and latency reduction [1]. The promises of B5G networks are far-reaching, encompassing applications that range from immersive augmented reality experiences to seamless Internet of Things (IoT) connectivity and mission-critical communications for autonomous vehicles and industrial automation [2]. One of the most pressing issues is the increasing prevalence and severity of adverse environmental conditions, exacerbated by the changing climate. Conditions such as heavy rainfall, dense fog, and snowfall pose formidable obstacles to the reliable operation of wireless communication systems [3]. Rain, for instance, can attenuate radio signals, leading to signal degradation and a reduced coverage area. Similarly, fog can scatter signals, causing signal loss and impairing the quality of communication links. In snowy conditions, the accumulation of ice and snow on communication equipment can disrupt signal propagation and, in some cases, cause equipment failure. In response to these challenges, Unmanned Aerial Vehicles (UAVs) have emerged as a disruptive technology, offering a promising solution to enhance communication in harsh environments. UAVs can swiftly navigate and adapt to challenging meteorological conditions, making them ideal candidates for providing critical coverage and support to ground nodes (GNs) operating in such environments [4, 5]. Unlike traditional fixed communication infrastructure, UAVs can autonomously reposition themselves in real time to optimize signal transmission and reception, ensuring uninterrupted communication even in adverse weather phenomena such as heavy rain, thick fog, or snowstorms. This capability significantly enhances the reliability of communication networks in harsh environments, where maintaining a consistent and high-quality connection is paramount [4]. UAVs can also significantly reduce routing overhead. Traditional GNs in harsh environments often rely on complex routing schemes to circumvent obstacles and maintain connectivity. With their direct line-of-sight capability and adaptability, UAVs streamline routing and reduce the complexity of data transmission, leading to more efficient network operation [6].
Moreover, UAV-assisted cellular communication represents a vital technology for fulfilling the dynamic communication requirements of modern society [7]. As traditional communication infrastructure is susceptible to malfunction and disruption during natural disasters and extreme weather events [8, 9], UAVs provide swift recovery, ensuring that communication services remain available despite adversity [10]. Furthermore, the adaptability and resilience of UAVs extend to situations where communication infrastructure is temporarily overloaded or incapacitated due to unexpected surges in network traffic, as often occurs during major events or emergencies. In these cases, UAVs can be rapidly deployed to offload traffic, ensuring that critical communication services, including emergency calls and data transmission, continue to operate smoothly [11]. This paper delves into the intersection of UAVs, adverse meteorological conditions, and B5G communication, evaluating the role of UAVs in enhancing energy-efficient communication amidst rain, fog, and snow. The assessment encompasses a range of performance metrics, including energy efficiency, outage probability, spectrum efficiency, and path loss. By addressing these critical aspects, we aim to shed light on the potential of UAV-assisted systems to thrive in harsh environments and to serve as viable replacements for dysfunctional GNs in such scenarios.
### _Motivation and contributions_
The advent of B5G networks promises revolutionary advancements in communication, enabling applications like ultra-low latency for autonomous vehicles and seamless connectivity for IoT devices. However, harsh environmental conditions brought on by climate change, such as heavy rain, fog, and snow, present significant obstacles to dependable wireless communication and limit the effectiveness of B5G networks. This paper aims to bridge this gap by exploring the role of climate-resilient UAVs in enhancing energy-efficient B5G communication in adverse weather conditions, unlocking the full potential of B5G networks for a wide range of transformative applications. While B5G networks hold immense promise, a critical gap exists in ensuring the resilience of communication infrastructure to the adverse meteorological conditions intensified by climate change. Rain, fog, and snow can severely degrade wireless communication quality, disrupting services and hindering B5G network deployment. This paper fills this gap by examining how climate-resilient UAVs can be used to improve energy-efficient B5G communication in harsh environments. It offers insights and solutions for keeping communication services running in bad weather, ensuring that critical applications do not stop working, and making B5G communication systems more climate-resilient.
The paper aims to advance understanding of how UAVs can play a pivotal role in enhancing the resilience and energy efficiency of B5G communication networks in harsh environmental conditions. The contributions encompass demonstrating UAVs as a climate-resilient solution and quantifying their impact on energy efficiency and key performance metrics. The contributions of the paper are summarised as follows:
* We introduce climate-resilient UAVs as a significant contribution to the field. It showcases how UAVs can be dependable for maintaining communication services in adverse meteorological conditions (i.e., rain, fog, and snow). The paper highlights UAVs as a critical element in addressing the challenges posed by climate-induced weather conditions in communication infrastructure.
* The paper quantifies and emphasizes the significant energy efficiency improvements made possible by integrating UAVs into B5G communication networks in harsh environments. The contributions highlight the reduced interference, augmented data transmission rates, and optimal channel gain facilitated by UAVs, underscoring their potential to optimize energy consumption in communication systems.
* We provide a holistic assessment of the proposed UAV assisted system by evaluating key performance metrics. By addressing energy efficiency, outage probability, spectrum efficiency, and path loss, it offers a comprehensive understanding of the capabilities and limitations of UAVs in mitigating the adverse effects of weather on B5G communication.
### _Related work_
The intersection of UAVs, adverse meteorological conditions, and wireless communication has garnered considerable attention in recent research. Several studies have laid the foundation for understanding the potential of UAVs in mitigating the challenges posed by harsh environments. The authors of [6] conducted pioneering research on wireless communications with UAVs, elucidating the opportunities and challenges of UAV-assisted communication. The authors highlighted the agility and adaptability of UAVs in providing on-demand wireless links, particularly in scenarios where traditional infrastructure is compromised [12]. While their study focused on general UAV applications, it provided valuable insights into the feasibility of UAVs in adverse environmental conditions. In [4], presented a comprehensive tutorial on the applications, challenges, and open problems associated with UAVs in wireless networks. The authors of [13] delved into the effects of fog and haze on visible light communication. While not UAV-centric, their work highlighted the need for resilient communication solutions in adverse weather. The study emphasized the impact of weather conditions on communication quality and reliability, aligning with our research objectives.
The authors of [14] explored the utilization of UAVs in 6G communication networks, emphasizing their ability to provide rapid deployment and coverage extension during extreme weather events. In [15] investigated the impact of rainfall on millimetre-wave communication, a key component of B5G and 6G networks. The findings highlighted the significance of weather-induced signal attenuation and the potential for UAVs to act as dynamic relays, mitigating the effects of rain on high-frequency communication. Furthermore, [16] delved into the challenges of energy-efficient communication in adverse weather. The work specifically examined energy-aware routing protocols for UAV-assisted networks operating in foggy conditions, shedding light on the importance of energy efficiency in challenging meteorological contexts.
While the above studies have made significant strides in understanding the role of UAVs and weather effects in communication networks, our paper contributes to this evolving
landscape by conducting a comprehensive assessment of UAV performance in B5G networks under the combined influence of rain, fog, and snow. Our paper provides insights into energy efficiency, outage probability, spectrum efficiency, and path loss, addressing specific challenges related to climate-resilient and energy-efficient B5G communication in harsh environments.
## II Proposed System Model
The system model is tailored to address a scenario where GNs face formidable challenges in maintaining wireless connectivity, primarily due to adverse environmental conditions, encompassing rain, fog, snow, and other disruptive factors. These conditions disrupt the conventional wireless coverage services typically provided by Ground Base stations, necessitating an innovative approach. We strategically deploy UAVs to reinstate and sustain essential communication links with GNs in these demanding scenarios to overcome this pervasive issue. In our proposed model, UAVs have advanced directional antennas, a critical feature designed to optimize network coverage performance. Furthermore, the UAVs employ dynamic altitude adjustments, responding to factors such as antenna beamwidth and the density of nearby structures, as quantified by the number of installations [17].
The altitude adaptation ensures efficient GN coverage, especially in harsh environmental conditions. The UAVs' capability to dynamically allocate GNs and user devices within the coverage area significantly ensures reliable and uninterrupted connectivity in challenging scenarios. The allocation strategy enhances coverage and connectivity, even in adverse weather conditions. To assess the efficacy of our proposed system, we employ a range of performance parameters, including path loss, energy efficiency, and coverage area. These metrics are vital for evaluating the system's performance across diverse weather conditions, ensuring reliable connectivity is maintained in the harshest environments.
### _Attenuation Models for Rain, Fog and Snow_
Utilizing UAVs for telecommunications services in adverse weather conditions presents a significant challenge. To establish a foundation for deploying UAV communications in such dynamic environments, it is essential to carefully investigate attenuation models for various typical weather conditions, including rain, fog, and snow. The attenuation models for these weather conditions are expressed as follows:
\[\gamma=\begin{cases}kR^{\alpha}&\text{Rain model}\\ K_{1}(f,T)M\quad(\text{dB/km})&\text{Fog model}\\ 0.00349\frac{R_{\text{s}}^{1.6}}{\lambda^{4}}+0.00224\frac{R_{\text{s}}}{ \lambda}\quad(\text{dB/km})&\text{Snow model}\end{cases} \tag{1}\]
**Rain Model:** The rain attenuation model is characterized by the equation \(\gamma=kR^{\alpha}\), where \(R\) represents the rain rate in millimetres per hour exceeded for 0.01% of the time, and \(k\) and \(\alpha\) are frequency-dependent coefficients.
**Snow Model:** The snow attenuation model is described by the equation \(0.00349\frac{R_{\text{s}}^{1.6}}{\lambda^{4}}+0.00224\frac{R_{\text{s}}}{ \lambda}\), where \(\lambda\) is the wavelength and \(R_{\text{s}}\) is the snowfall speed.
**Fog Model:** The specific attenuation coefficient \(K_{1}(f,T)\), in units of \((\text{dB/km})/(\text{g/m}^{3})\), is given in Eq. (2) below, where \(\eta\) is defined as \(\frac{2+\varepsilon^{\prime}}{\varepsilon^{\prime\prime}}\), and the complex permittivity of water is expressed through \(\varepsilon^{\prime\prime}(f)\) and \(\varepsilon^{\prime}(f)\). For fog attenuation modelling, the equations governing the complex permittivity of water (\(\varepsilon^{\prime\prime}(f)\) and \(\varepsilon^{\prime}(f)\)) are provided, with temperature (\(T\)) and various constants determining their values. The specific attenuation coefficient \(K_{1}(f,T)\) is essential for quantifying fog-induced attenuation and is directly related to the density of liquid water in the cloud or fog (\(M\)).
\[K_{1}(f,T)=\frac{0.819f}{\varepsilon^{\prime\prime}\left(1+\eta^{2}\right)} \quad(\text{dB/km})/\left(\text{g/m}^{3}\right), \tag{2}\]
where \(\eta=\frac{2+\varepsilon^{\prime}}{\varepsilon^{\prime\prime}}\) and the \(\varepsilon^{\prime\prime}(f)\) is given as:
\[\varepsilon^{\prime\prime}(f)=\frac{f\left(\varepsilon_{0}- \varepsilon_{1}\right)}{f_{p}\left[1+\left(f/f_{p}\right)^{2}\right]}+\frac{f \left(\varepsilon_{1}-\varepsilon_{2}\right)}{f_{s}\left[1+\left(f/f_{s} \right)^{2}\right]}, \tag{3}\] \[\varepsilon^{\prime}(f)=\frac{\varepsilon_{0}-\varepsilon_{1}}{ \left[1+\left(f/f_{p}\right)^{2}\right]}+\frac{\varepsilon_{1}-\varepsilon_{2} }{\left[1+\left(f/f_{s}\right)^{2}\right]}+\varepsilon_{2}, \tag{4}\]
Where \(\varepsilon_{0}\) is calculated as \(77.66+103.3(\theta-1)\), \(\varepsilon_{1}\) is derived as \(0.0671\) times \(\varepsilon_{0}\), \(\varepsilon_{2}\) is a constant with a value of \(3.52\). Here, \(\theta\) is determined as \(300/T_{\text{fog}}\), where \(T_{\text{fog}}\) represents the temperature during foggy weather conditions and is set to 293.15K. Additionally, we define the primary relaxation frequency, \(f_{p}\), as \(20.20-146(\theta-1)+316(\theta-1)^{2}\), and the secondary relaxation frequency, \(f_{s}\), as \(39.8\) times \(f_{p}\) (in GHz).
\(T\) represents the temperature of liquid water, \(K_{1}\) stands for the specific attenuation coefficient (dB/km per g/m\({}^{3}\)), and \(M\) denotes the density of liquid water in the cloud or fog (g/m\({}^{3}\)), as documented in [18]. Additionally, \(R_{s}\) corresponds to the snowfall speed measured in millimeters per hour, and \(\lambda\) indicates the wavelength measured in centimeters.
The attenuation models are crucial for understanding and mitigating the effects of rain, fog, and snow on wireless communication, particularly when deploying UAVs in challenging meteorological environments, providing valuable insights into how environmental conditions impact communication performance and guide the development of resilient communication systems.
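To make the three models concrete, the sketch below evaluates Eq. (1) with the fog permittivity terms of Eqs. (2)-(4) in Python. The rain coefficients \(k\) and \(\alpha\) are assumed to be supplied externally (e.g., from ITU-R tables), and the function names and default fog temperature of 293.15 K are illustrative choices consistent with the text.

```python
import numpy as np

def rain_attenuation(R: float, k: float, alpha: float) -> float:
    """Specific rain attenuation in dB/km, gamma = k * R**alpha, with R the
    rain rate in mm/h and (k, alpha) frequency-dependent coefficients."""
    return k * R ** alpha

def fog_attenuation(f_ghz: float, M: float, T_fog: float = 293.15) -> float:
    """Specific fog attenuation in dB/km, K1(f, T) * M, following
    Eqs. (2)-(4) with the double-Debye permittivity of water."""
    theta = 300.0 / T_fog
    eps0 = 77.66 + 103.3 * (theta - 1.0)
    eps1 = 0.0671 * eps0
    eps2 = 3.52
    fp = 20.20 - 146.0 * (theta - 1.0) + 316.0 * (theta - 1.0) ** 2  # GHz
    fs = 39.8 * fp                                                    # GHz
    eps_im = (f_ghz * (eps0 - eps1)) / (fp * (1.0 + (f_ghz / fp) ** 2)) \
           + (f_ghz * (eps1 - eps2)) / (fs * (1.0 + (f_ghz / fs) ** 2))
    eps_re = (eps0 - eps1) / (1.0 + (f_ghz / fp) ** 2) \
           + (eps1 - eps2) / (1.0 + (f_ghz / fs) ** 2) + eps2
    eta = (2.0 + eps_re) / eps_im
    K1 = 0.819 * f_ghz / (eps_im * (1.0 + eta ** 2))  # (dB/km)/(g/m^3)
    return K1 * M

def snow_attenuation(Rs: float, lam_cm: float) -> float:
    """Specific snow attenuation in dB/km with snowfall speed Rs (mm/h)
    and wavelength lam_cm (cm)."""
    return 0.00349 * Rs ** 1.6 / lam_cm ** 4 + 0.00224 * Rs / lam_cm
```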
### _Path Loss for Rain, Fog and Snow_
Path loss propagation plays a pivotal role in shaping the wireless communication channel between UAVs and GNs within the air-to-ground (A2G) channel. Environmental parameters, including the distance between UAVs and GNs, GN elevation angles, and UAV altitudes, profoundly influence the signal path loss. To comprehensively model the A2G channel under multiple weather conditions, we integrate the specific attenuation coefficients for rain, fog, and snow into the path loss models. The resulting A2G channel models can be expressed as:
\[PL_{\text{UAV}}=(PL_{\text{LoS}}\times P_{\text{LoS}}+PL_{\text{NLoS}}\times P_{\text{NLoS}})+\frac{(\beta+\gamma)d}{1000}=\frac{A}{1+a\exp\left(-b\left(\frac{180}{\pi}\tan^{-1}\left(\frac{h}{r}\right)-a\right)\right)}+20\log\frac{r}{\cos\left(\frac{180}{\pi}\tan^{-1}\left(\frac{h}{r}\right)\right)}+B+\frac{(\beta+\gamma)d}{1000}, \tag{5}\]
where \(PL_{\text{UAV}}\) represents the UAV path loss, \(PL_{\text{LoS}}\) and \(PL_{\text{NLoS}}\) are path loss components for Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) conditions, respectively, and \(\gamma\) is the specific attenuation coefficient of the prevailing weather scenario in dB/km. Additionally, \(f_{c}\) denotes the carrier frequency, \(c\) is the speed of light, and \(d\) represents the distance from the UAV to the GNs. The terms \(\eta_{\text{LoS}}\) and \(\eta_{\text{NLoS}}\) account for excess loss due to shadowing and scattering in LoS and NLoS links, while \(\beta\) represents atmospheric attenuation.
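A minimal Python rendering of Eq. (5) could look as follows; the environment constants \(A\), \(B\), \(a\), \(b\) and the attenuation terms \(\beta\) and \(\gamma\) are assumed to be given for the scenario at hand, and the slant range \(d\) is derived from the altitude \(h\) and horizontal distance \(r\).

```python
import numpy as np

def a2g_path_loss(h: float, r: float, A: float, B: float,
                  a: float, b: float,
                  beta: float, gamma: float) -> float:
    """Air-to-ground path loss of Eq. (5) in dB.

    h: UAV altitude (m); r: horizontal distance to the GN (m);
    A, B, a, b: environment parameters of the sigmoid LoS model;
    beta, gamma: atmospheric and weather attenuation in dB/km.
    """
    theta_deg = (180.0 / np.pi) * np.arctan(h / r)  # elevation angle (deg)
    los_term = A / (1.0 + a * np.exp(-b * (theta_deg - a)))
    # r / cos(theta) is the slant distance from UAV to GN
    dist_term = 20.0 * np.log10(r / np.cos(np.deg2rad(theta_deg))) + B
    d = np.sqrt(h ** 2 + r ** 2)  # slant range in meters
    return los_term + dist_term + (beta + gamma) * d / 1000.0
```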
### _Connectivity in UAV coverage area_
To determine the optimal coverage area of the UAV, we derive the first derivative of the path loss in Eq. (5) with respect to the variable \(r\) as follows:
\[\begin{split} 0&=-\frac{Aah}{r^{2}}\left(b\right) \left(\tan^{-1}\left(\frac{h}{r}\right)-a\right)\\ &\quad\times\left(e^{-b\left(\tan^{-1}\left(\frac{h}{r}\right)-a \right)}\right)\\ &\quad\times\left(1+ae^{-b\left(\tan^{-1}\left(\frac{h}{r}\right) -a\right)}\right)^{-2}\\ &\quad\times\left(\frac{h^{2}}{r^{2}}+1\right)^{-1}\\ &\quad+20\log\left(\cos\left(180\frac{1}{\pi}\arctan\left(\frac{ h}{r}\right)\right)\right)^{-1}\\ &\quad-3600\frac{\left(\log h\right)h}{r\pi}\sin\left(180\frac{1}{ \pi}\tan^{-1}\left(\frac{h}{r}\right)\right)\\ &\quad\times\left(\cos\left(180\frac{1}{\pi}\arctan\left(\frac{ h}{r}\right)\right)\right)^{-2}\\ &\quad\times\left(\frac{h^{2}}{r^{2}}+1\right)^{-1}\end{split} \tag{6}\]
This derivative is instrumental in determining the optimal coverage area of the UAV, taking into account critical parameters like altitude, distance, and environmental factors. Following the findings of [19], we combine the propagation attenuation effects of rain, fog, and snow under various weather conditions to model wireless channels effectively. This integrated model helps us understand how adverse weather impacts communication performance, particularly when deploying UAVs.
The desired performance metrics for assessing UAV-assisted GN communications encompass several crucial aspects, including achieving higher data rates, enhancing energy efficiency, increasing network capacity, and ensuring service availability during harsh environmental events [20]. Notably, network capacity is evaluated based on the traffic that can be handled with a minimum bit error rate, especially in challenging conditions. The derivative-based optimization approach, coupled with a comprehensive understanding of weather-induced attenuation, forms the basis for effective wireless communication system design, focusing on achieving optimal performance metrics in UAV-assisted GN communications, even under adverse environmental conditions.
### _Energy Efficiency_
In harsh environmental communication scenarios, the UAV takes on the crucial role of a temporary communication relay, ensuring the timely and reliable exchange of vital information. However, adverse weather conditions such as rain, fog, and snow significantly impact the UAV's energy resources, particularly during refilling operations. The conditions consume valuable time and deplete the UAV's energy reserves. As a result, an urgent need exists to enhance the energy efficiency of UAV-assisted communication systems. The holistic evaluation of energy efficiency in the context of UAV-assisted communication involves considering the entire instantaneous transmission vector of the UAV, denoted as \(EE_{UAV}^{k}\). This vector encompasses all the constituent elements, representing every link between the UAV and the ground nodes. It serves as a comprehensive metric to assess and optimize the system's energy efficiency, addressing the challenges posed by harsh environmental conditions and ensuring the effective utilization of the UAV's resources for reliable communication.
\[\text{EE}_{\text{UAV}}^{[k]}=\frac{B\,\log_{2}\!\left(1+\frac{p_{k}h_{k}}{\sum_{m=1}^{M}p_{m}h_{m}+p_{j}h_{j}+\sigma^{2}}\right)}{P_{tx}\,h}, \tag{7}\]
where \(h\) represents the number of hops in the UAV-to-GN communication, and \(P_{tx}\) stands for the maximum transmission power used by the UAV for downlink communications with GNs.
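For illustration, Eq. (7) can be evaluated with a helper such as the one below; we read the denominator of the SINR as the aggregate co-channel interference plus noise, and all parameter names and values are placeholders to be supplied by the concrete scenario.

```python
import numpy as np

def uav_energy_efficiency(B: float, p_k: float, h_k: float,
                          interference: float, sigma2: float,
                          P_tx: float, hops: int) -> float:
    """Energy efficiency of Eq. (7) in bit/s per watt.

    B: bandwidth (Hz); p_k * h_k: received power of the desired link;
    interference: total co-channel interference power; sigma2: noise
    power; P_tx: UAV transmit power (W); hops: number of hops to the GN.
    """
    sinr = p_k * h_k / (interference + sigma2)
    return B * np.log2(1.0 + sinr) / (P_tx * hops)
```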
## III Results and Discussion
In this section, we present an extensive array of simulation results that vividly illustrate the performance of the proposed schemes. Our evaluation encompasses critical parameters and metrics, focusing on LoS probability, path loss, and energy efficiency in UAV-to-ground node communication. These evaluations are conducted across harsh environmental scenarios, including rain, fog, and snow.
Our simulations consider a scenario where ground nodes are randomly distributed within the UAV's coverage area, mirroring real-world conditions. To achieve a comprehensive assessment, we vary ground node elevation angles and adjust UAV altitudes accordingly. The transmission distance between the source and destination of the ground nodes and the UAV is set from 100 meters to 1000 meters. Moreover, we explore a wide range of elevation angles for the ground nodes, spanning from 0 degrees to 90 degrees. By conducting rigorous simulations, we aim to provide a comprehensive understanding of how our proposed schemes perform under varying conditions and scenarios. This empirical evidence will validate the efficacy of our approaches and offer insights into their real-world applicability.
### _Path Loss_
Path loss propagation is a critical consideration when assessing communication performance, and it is essential to understand how it varies under different conditions. In the context of harsh environments, we observed a notable trend in path loss as the elevation angle of the ground nodes (GNs) changed from \(0^{\circ}\) to \(90^{\circ}\), as depicted in Fig. 1. The variation is primarily attributed to weather conditions such as rain, fog, and snow. In particular, when GNs are situated in harsh weather conditions, the path loss substantially increases, ranging from 0 dB to 310 dB as the GN elevation angle varies. The dramatic change in path loss indicates the challenges imposed by adverse weather. Notably, in a rainy environment, the path loss experiences a more modest increase, rising from 0 dB to 51 dB. In contrast, the fog environment displays a path loss that escalates from 100 dB to 175 dB. Finally, for the snow environment, path loss undergoes a considerable increase, ranging from 0 dB to 310 dB, as the GN elevation angle varies from \(0^{\circ}\) to \(90^{\circ}\) due to the specific characteristics of a single city model.
### _Energy Efficiency_
In this section, we analyze the energy efficiency of both the UAV and the GNs, shedding light on how these efficiency metrics behave under varying conditions. Fig. 3 visually represents our findings. We observe that energy efficiency declines as the distance between the GNs and the UAV increases; this decline is significant, as it signifies that more energy is required to maintain communication over longer distances. However, an interesting observation arises as we examine energy efficiency across different scenarios. As the transmission distance of the ground nodes increases, the energy efficiency values for each scenario converge and become closely aligned. This phenomenon suggests that as the ground node distance expands, the impact of interference on energy efficiency diminishes: when GNs are farther apart, interference plays a less significant role in reducing energy efficiency.
Increasing the transmission power is one effective way to counteract this reduction in energy efficiency due to interference, particularly in scenarios with longer transmission distances [22]. By providing more power, the signal can better overcome interference, leading to improved energy efficiency. Furthermore, it is essential to highlight that energy efficiency is primarily affected by co-channel interference, which arises where multiple communication channels overlap and interfere. In this context, directional antennas can be a valuable strategy. Directional antennas focus and concentrate the signal in a specific direction, reducing interference and enhancing the overall efficiency metrics. Hence, incorporating directional antennas is a practical approach to mitigate the adverse effects of interference on energy efficiency in UAV-to-GN communication scenarios.

Fig. 1: Variation in Path Loss with GN Elevation Angle in Different Harsh Environments (Rain, Fog, Snow)
### _UAV Communication Coverage_
Fig. 4 illustrates the impact of harsh environmental conditions, including rain, fog, and snow, on the cell radius and operating altitude of UAV communication systems. In scenarios with moderate rainfall, the coverage radius and the optimal UAV altitude are relatively small, indicating the need for lower operating altitudes to maintain adequate coverage. Conversely, the coverage radius expands significantly under light snow conditions, enabling UAVs to operate at higher altitudes while providing extensive coverage. This demonstrates that the severity of weather conditions plays a pivotal role in the trade-off between coverage area and UAV altitude, with milder conditions allowing for greater altitudes and more extensive coverage areas than scenarios with moderate rainfall.
## IV Conclusion
In this paper, we have explored the pivotal role of UAV-facilitated ground node communication in challenging meteorological conditions, encompassing rain, fog, and snow. Our assessment has focused on key performance metrics, including energy efficiency, outage probability, spectrum efficiency, and path loss. In our proposed system model, UAVs offer invaluable coverage support to empower GNs operating in harsh environments, leading to enhanced network scalability, reduced routing overhead, optimized throughput, and expanded coverage. UAV-assisted cellular communication is poised to become a cornerstone technology, catering to the evolving demands of dynamic and diverse communication scenarios. Furthermore, UAVs are indispensable for swift recovery when traditional communication infrastructure succumbs to dysfunction during natural disasters and adverse environmental conditions. Our results affirm the viability of the UAV-assisted system, showcasing its capability to perform on par with GNs in harsh environments. UAVs emerge as a fitting replacement for dysfunctional GNs in these challenging scenarios, offering reliability and rapid restoration of communication services during critical situations.
## Acknowledgment
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number SFI/12/RC/2289_P2.
|
2309.15976 | Perturbative nonlinear feedback forces for optical levitation
experiments | Feedback control can be used to generate well-determined nonlinear effective
potentials in an optical trap, a goal whose applications may range from
non-equilibrium thermodynamics to the generation of non-Gaussian states of
mechanical motion. Here, we investigate the action of an effective
feedback-generated quartic potential on a levitated nanoparticle within the
perturbation regime. The effects of feedback delay are discussed and
predictions from the perturbation theory of a Brownian particle subjected to a
quartic anharmonicity are experimentally verified. | Oscar Kremer, Daniel Tandeitnik, Rafael Mufato, Igor Califrer, Breno Calderoni, Felipe Calliari, Bruno Melo, Guilherme Temporão, Thiago Guerreiro | 2023-09-27T19:50:12Z | http://arxiv.org/abs/2309.15976v1 | # Perturbative nonlinear feedback forces for optical levitation experiments
###### Abstract
Feedback control can be used to generate well-determined nonlinear effective potentials in an optical trap, a goal whose applications may range from non-equilibrium thermodynamics to the generation of non-Gaussian states of mechanical motion. Here, we investigate the action of an effective feedback-generated quartic potential on a levitated nanoparticle within the perturbation regime. The effects of feedback delay are discussed and predictions from the perturbation theory of a Brownian particle subjected to a quartic anharmonicity are experimentally verified.
## I Introduction
Optical levitation of nanoparticles provides a robust setup for both fundamental and applied physics [1; 2], from classical stochastic thermodynamics [3; 4; 5; 6] to mesoscopic quantum science [7; 8; 9]. In the typical levitated optomechanics experiment, a dielectric particle is trapped in a tightly focused Gaussian beam providing, to leading order approximation, a confining harmonic potential [10; 11]. The particle undergoes Brownian motion due to interaction with its surrounding medium and measurements of its position correlation functions, notably the auto-correlation and the associated power spectrum, allows for the characterization of the trap's parameters [11; 12].
While the harmonic approximation is commonly employed in optical trapping, the ability to engineer potential landscapes beyond the quadratic approximation is central to optomechanics. Nonlinear force landscapes are a valuable resource to nonequilibrium Brownian machines [13; 14], the preparation of non-classical and non-Gaussian quantum states [15] and matter wave interference experiments [16], to mention just a few examples. Nonlinear potential landscapes also appear in structured light optical tweezers [17], as in double-well landscapes [18; 19; 20; 21], structured light beams with pattern revivals [22], cylindrical vector beams [23] and dark focus traps [24; 25].
In these nonlinear potential landscapes, to which we refer here as _nonlinear optical tweezers_, quantitative statistical description of the stochastic particle motion is significantly more complicated as it involves nonlinear stochastic differential equations. To make quantitative predictions regarding the statistical correlators of the trapped particle's motion we can, however, resort to perturbation theory [26].
A perturbative method for nonlinear optical tweezers has been developed in [27], wherein it is possible to compute corrections to the statistical moments of particle motion, in particular the position power spectrum. The purpose of the present work is to experimentally validate these methods. Since nonlinearities in standard Gaussian optical tweezers are typically small [28; 29], we turn to effective feedback potential landscapes to implement nonlinear position-dependent forces upon a levitated nanosphere. We implement the nonlinearity via electric feedback and characterize its effects on the particle motion.
This paper is organized as follows. In the next section, we briefly review the perturbation theory for computing corrections to the correlation functions of a trapped particle under the influence of a nonlinear force, and generalize it to include the effect of delayed forces. Since we deal with artificial electric feedback potentials relying on measurements and processing of the trapped particle's position, they imply an inherent delay to the nonlinear force and therefore accounting for the effects of this delay is essential to validating the methods of [27]. We then describe the experimental setup used to generate nonlinear potential landscapes through electric feedback on the particle and numerically compute the effects of delay, showing that within the range of parameters employed in our experiment they are negligible. We implement a cubic force (quartic potential) on the particle and finally verify the perturbation theory by comparing the predicted center frequency of the position power spectral density with experimental results. We conclude with a brief discussion on the applications of artificial nonlinear forces to levitated optomechanics experiments.
## II Theory
### Formulation of the perturbation theory
We model the stochastic motion of a particle in a fluid at thermal equilibrium at temperature \(T_{\rm eff}\) and under a force field \(\vec{F}(\vec{r})\) using the Langevin equation,
\[\ddot{\vec{r}}(t)=-\Gamma_{m}\dot{\vec{r}}(t)+\vec{F}(\vec{r}(t))/m+\sqrt{C }\vec{\eta}(t), \tag{1}\]
where \(m\) is the particle's mass, \(\Gamma_{m}=\Gamma/m\), \(C=2\Gamma k_{B}T_{\rm eff}/m^{2}\) with \(\Gamma\) the drag coefficient and \(\vec{\eta}(t)\) is
isotropic Gaussian white noise, whose components satisfy
\[\mathbb{E}[\eta_{i}(t)\eta_{j}(t^{\prime})]=\delta_{ij}\delta(t-t^{\prime}). \tag{2}\]
Concentrating in the motion along the longitudinal \(z\)-direction, Eq. (1) reduces to a one dimensional Langevin equation
\[\ddot{z}(t)=-\Gamma_{m}\dot{z}(t)+F_{z}(z(t))/m+\sqrt{C}\eta(t). \tag{3}\]
For an approximately linear trapping force perturbed by nonlinear corrections, the steady state position auto-correlation \(A(t)\equiv\mathbb{E}[z(t)z(0)]\) can be perturbatively approximated. We next summarize the perturbation theory outlined in [27] and used throughout this work.
Consider the force acting on the particle,
\[F_{z}(z)=-m\omega_{0}^{2}z-G_{fb}z^{3}, \tag{4}\]
where the first term accounts for an optical trap with resonance frequency \(\omega_{0}\) and the second term is a small nonlinear correction, which in the experiment originates from a feedback force on the particle proportional to the _feedback gain_\(G_{fb}\) times a nonlinear function of the particle's position. We define the Green's function
\[G(t)=\frac{\sin(\Omega\,t)}{\Omega}\ \exp\!\left(-\frac{\Gamma_{m}t}{2}\right)H( t), \tag{5}\]
where \(\Omega=\sqrt{\omega_{0}^{2}-\Gamma_{m}^{2}/4}\) and \(H(t)\) is the Heaviside step function with \(H(t)=1\) for \(t>0\) and \(H(t)=0\) for \(t\leq 0\). We introduce the response paths \(\tilde{z}(s)\) and define the Wick sum bracket \(\langle(\cdots)\rangle_{0}\):
\[\langle z(t_{1})\cdots z(t_{n})\tilde{z}(s_{1})\cdots\tilde{z}(s_{m})\rangle_ {0}=\delta_{nm}\sum_{\sigma}\prod_{j=1}^{n}G(t_{j}-s_{\sigma(j)}) \tag{6}\]
where the sum goes over all permutations \(\sigma\) of indexes \(\{1,\ldots,n\}\). Note that the second order correlator is given by the Green function, \(\langle z(t)\tilde{z}(s)\rangle_{0}=G(t-s)\). The perturbation theory is summarized by the expression for the position auto-correlation function,
\[A(t)\equiv\mathbb{E}[z(t)z(0)]=\\ \langle z(t)z(0)e^{\frac{C}{2}\int\tilde{x}^{2}(s)ds}e^{\frac{G_ {fb}}{m}\int\tilde{z}(t^{\prime})z(t^{\prime})^{3}dt^{\prime}}\rangle_{0}, \tag{7}\]
where the right-hand side is defined by expanding both exponentials inside the brackets as a power series in \(C\) and in \(G_{fb}/m\) and interchanging summations and integrations by applying the Wick bracket \(\langle(\cdots)\rangle_{0}\).
The first non-vanishing term in the expansion of Eq. (7) is
\[\frac{C}{2}\int\langle z(t)z(0)\tilde{z}(s)^{2}\rangle_{0}\,ds=C\int G(t-s)G(- s)ds\, \tag{8}\]
which gives the auto-correlation for the case of a linear force \(F_{z}(z)=-m\omega_{0}^{2}z\),
\[A(t)_{(G_{fb}=0)}=\frac{Ce^{-\Gamma_{m}|t|/2}(2\Omega\cos\Omega|t|+\Gamma_{m} \sin\Omega|t|)}{\Gamma_{m}\Omega(\Gamma_{m}^{2}+4\Omega^{2})}. \tag{9}\]
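For reference, Eq. (9) can be evaluated numerically as in the sketch below. As a sanity check, at \(t=0\) the expression reduces to \(A(0)=C/(2\Gamma_{m}\omega_{0}^{2})=k_{B}T_{\rm eff}/(m\omega_{0}^{2})\), the equipartition value.

```python
import numpy as np

def autocorr_linear(t, C, Gamma_m, omega0):
    """Steady-state position auto-correlation of Eq. (9) for a purely
    harmonic trap (G_fb = 0), valid in the underdamped regime
    omega0 > Gamma_m / 2."""
    Omega = np.sqrt(omega0**2 - Gamma_m**2 / 4.0)
    at = np.abs(t)
    envelope = C * np.exp(-Gamma_m * at / 2.0)
    oscill = 2.0 * Omega * np.cos(Omega * at) + Gamma_m * np.sin(Omega * at)
    return envelope * oscill / (Gamma_m * Omega * (Gamma_m**2 + 4.0 * Omega**2))
```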
The leading order correction in the feedback gain reads,
\[\Delta A(t)\equiv\\ \frac{C^{2}G_{fb}}{8m}\int\langle\tilde{z}(s_{1})^{2}\tilde{z}(s _{2})^{2}\tilde{z}(t_{1})z(t_{1})^{3}z(t)z(0)\rangle_{0}\,ds_{1}ds_{2}dt_{1}. \tag{10}\]
Expanding the brackets using (6) would produce a sum with \(5!=120\) terms, but many of these vanish since \(\langle\tilde{z}(t_{1})z(t_{1})\rangle=G(0)=0\). Moreover, by symmetry of the integration variables \(s_{1}\) and \(s_{2}\), the contribution to the integral of the non-vanishing terms is equal to the contribution of \(G(t-t_{1})G(-s_{1})G(t_{1}-s_{1})G(t_{1}-s_{2})^{2}\) or \(G(-t_{1})G(t-s_{1})G(t_{1}-s_{1})G(t_{1}-s_{2})^{2}\), represented by the diagrams depicted in Figure 1. Therefore, the integral in (10) is computed by integrating these two terms over \(t_{1},s_{1},s_{2}\) and multiplying both integrals by a multiplicity factor \(2^{3}(3!)=48\).
From the auto-correlation function perturbation \(\Delta A\) we can obtain the corrected power spectral density (PSD) of the particle motion by taking the Fourier transform. We obtain the PSD correction [27],
\[\Delta S=\frac{3G_{fb}C^{2}}{\Gamma_{m}\omega_{0}^{2}}\frac{\omega^{2}-\omega_ {0}^{2}}{[\Gamma_{m}^{2}\omega^{2}+(\omega^{2}-\omega_{0}^{2})^{2}]^{2}} \tag{11}\]
Note the total PSD, \(S_{(G_{fb}=0)}+\Delta S\), can be approximated to linear order in \(G_{fb}\) as,
\[\frac{C}{\Gamma_{m}^{2}\omega^{2}+[\omega^{2}-(\omega_{0}+\Delta \Omega)^{2}]^{2}}\approx\frac{C}{\Gamma_{m}^{2}\omega^{2}+(\omega^{2}-\omega_ {0}^{2})^{2}}\\ +4C\omega_{0}\Delta\Omega\frac{\omega^{2}-\omega_{0}^{2}}{[\Gamma _{m}^{2}\omega^{2}+(\omega^{2}-\omega_{0}^{2})^{2}]^{2}}, \tag{12}\]
where the _frequency shift_\(\Delta\Omega\) is defined by
\[\frac{\Delta\Omega}{2\pi}=\frac{3k_{b}T_{\rm eff}}{4\pi m^{2}\omega_{0}^{3}}G_{ fb}\equiv\kappa G_{fb}. \tag{13}\]
We see that effectively, the nonlinear perturbation manifests as a shift in the PSD central frequency scaling linearly with the feedback gain \(G_{fb}\) and with a slope given by the constant \(\kappa\). This is valid for small \(G_{fb}\),
\[G_{fb}\ll\frac{m^{2}\omega_{0}^{4}}{2k_{b}T_{\rm eff}}. \tag{14}\]
The right-hand side of (14) can be used to delimit the validity region of perturbation theory. It is the shift \(\Delta\Omega\) in the PSD which we will use as an experimental signature of the effect of a nonlinear perturbation.
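As a rough numerical illustration of Eqs. (13) and (14), the snippet below evaluates \(\kappa\) and the perturbative bound on \(G_{fb}\). The silica density, particle radius, and trap frequency are nominal values we assume for illustration, not the calibrated experimental parameters, so the output should only be read as an order-of-magnitude estimate.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Nominal, assumed parameters (not the calibrated experimental values):
rho = 1850.0                          # silica density, kg/m^3
R = 143e-9 / 2.0                      # particle radius, m
m = rho * (4.0 / 3.0) * np.pi * R**3  # particle mass, kg
omega0 = 2.0 * np.pi * 77.8e3         # trap angular frequency, rad/s
T_eff = 293.0                         # effective temperature, K

kappa = 3.0 * k_B * T_eff / (4.0 * np.pi * m**2 * omega0**3)  # Eq. (13)
G_bound = m**2 * omega0**4 / (2.0 * k_B * T_eff)              # Eq. (14)

print(f"kappa ~ {kappa:.2e} Hz m^3 / N")
print(f"perturbative regime: G_fb << {G_bound:.2e} N/m^3")
```

With these assumed numbers, \(\kappa\) lands within a factor of two of the value quoted in Sec. III, and the bound evaluates to a few times \(10^{7}\,\mathrm{N/m^{3}}\), consistent with feedback gains of order \(10^{6}\,\mathrm{N/m^{3}}\) lying well inside the perturbative regime.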
Figure 1: Diagrams for leading order correction to the position auto-correlation function of a Brownian particle subject to a non-linear optical tweezer.
### Delayed nonlinearities
Besides nonlinear force perturbations, we will be interested in delayed forces. Artificially produced feedback forces will naturally be subject to electronic delay. Accounting for the effects of such delays in perturbation theory allows us to understand the limits of validity of Eq. (7) for modelling the artificial feedback forces. More broadly, understanding the role of delays might also enable the study of perturbative nonlinear non-Markovian stochastic dynamics [30].
We consider the generalized Langevin equation,
\[\ddot{z}(t)=-\Gamma_{m}\dot{z}(t)-\omega_{0}^{2}z(t)-\frac{G_{fb}}{m}\,z(t- \tau)^{3}+\sqrt{C}\,\eta(t), \tag{15}\]
where \(\tau>0\) is a fixed (constant) time delay. The perturbation expansion for \(\tau=0\) (Eq. (7)) can be generalized to
\[A(t,\tau)\equiv\] \[\mathbb{E}[z(t)z(0)]=\langle z(t)z(0)e^{\frac{C}{2}\int\dot{z}^{2 }(s)ds}e^{\frac{C_{fb}}{m}\int\ddot{z}(t^{\prime})z(t^{\prime}-\tau)^{3}dt^{ \prime}}\rangle_{0}. \tag{16}\]
Expanding the exponentials in power series and using the Wick sum as defined in (6), the leading correction to the auto-correlation function (9) is given by the following integrals,
\[\Delta A(t,\tau)\propto\] \[\int G(t\!-\!t_{1})G(-s_{1})G(t_{1}\!-\!s_{1}\!-\!\tau)G(t_{1}\!- \!s_{2}\!-\!\tau)^{2}dt_{1}ds_{1}ds_{2}\] \[+\!\int G(-t_{1})G(t\!-\!s_{1})G(t_{1}\!-\!s_{1}\!-\!\tau)G(t_{1} \!-\!s_{2}\!-\!\tau)^{2}dt_{1}ds_{1}ds_{2}\,. \tag{17}\]
We note both integrals are multiplied by the constant \(3G_{fb}C^{2}/m\), which we omit to avoid cluttering the notation. Evaluating the integrals leads to the corrected auto-correlation function to first order in the perturbation,
\[A(t,\tau) =\frac{Ce^{-\Gamma_{m}|t|/2}(2\Omega\cos\Omega|t|+\Gamma_{m}\sin \Omega|t|)}{\Gamma_{m}\Omega(\Gamma_{m}^{2}+4\Omega^{2})}+\frac{3C^{2}G_{fb}e ^{-\Gamma_{m}|t|/2}}{64m\Gamma_{m}^{3}\Omega^{4}\omega_{0}^{6}}\Bigg{\{}\] \[\quad\quad e^{\Gamma_{m}\tau/2}[8\Gamma_{m}\Omega^{4}-4\omega_{0} ^{2}\Gamma_{m}^{2}\Omega^{2}(|t|-\tau)]\cos(\Omega(|t|-\tau))\] \[\quad+e^{\Gamma_{m}\tau/2}[8\Gamma_{m}\Omega^{3}\omega_{0}^{2}(|t |-\tau)+8\Omega^{5}+4\Gamma_{m}^{2}\omega_{0}^{2}\Omega+6\Gamma_{m}^{2}\Omega^ {3}]\sin(\Omega(|t|-\tau))\] \[\quad+e^{-\Gamma_{m}\tau/2}[\Omega^{2}(2\Gamma_{m}^{2}\Omega-8 \Omega^{3})\sin(\Omega(|t|+\tau))+8\Gamma_{m}\Omega^{4}\cos(\Omega(|t|+\tau))] \Bigg{\}}+\mathcal{O}\big{(}G_{fb}^{2},C^{3}\big{)}, \tag{18}\]
The quantity \(A(0,\tau)\) can be experimentally obtained from the area under the PSD of the particle's motion, which in turn can be related to the mean occupation number of the mechanical modes. In what follows, we use these expressions to account for the effects of delay in the artificially generated nonlinear forces, and to show that perturbation theory in the absence of delay provides a good approximation to current experiments.
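One can reproduce such delay-dependent heating and cooling numerically by integrating Eq. (15) with a stochastic Euler scheme, as in the minimal sketch below; the time step, trace length, and zero pre-history for the delayed term are our own illustrative choices. The effective temperature then follows from the position variance via equipartition, \(T_{\rm eff}=m\omega_{0}^{2}\,\mathrm{Var}(z)/k_{B}\).

```python
import numpy as np

def simulate_delayed_cubic(omega0, Gamma_m, C, g, tau,
                           dt=1e-8, n_steps=2_000_000, seed=0):
    """Stochastic (Euler-Maruyama) integration of Eq. (15).

    g = G_fb / m is the cubic feedback strength per unit mass and tau
    the feedback delay; the pre-history z(t < 0) is taken as zero.
    Returns the position trace z of length n_steps.
    """
    rng = np.random.default_rng(seed)
    lag = max(int(round(tau / dt)), 0)
    z = np.zeros(n_steps)
    v = 0.0
    noise = np.sqrt(C * dt)
    for i in range(1, n_steps):
        z_delayed = z[i - 1 - lag] if i - 1 >= lag else 0.0
        accel = -Gamma_m * v - omega0**2 * z[i - 1] - g * z_delayed**3
        v += accel * dt + noise * rng.standard_normal()
        z[i] = z[i - 1] + v * dt
    return z
```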
## III Experiment
A simplified schematic of the experimental setup is shown in Figure 2. A CW laser at 780 nm (Toptica DL-Pro) is amplified using a tapered amplifier (Toptica BoosTa) producing up to \(1.5\,\mathrm{W}\) at the output of a single mode fiber, yielding a high quality Gaussian beam. The beam is expanded to overfill an aspheric lens of numerical aperture \(\mathrm{NA}=0.77\) (LightPath 355330) mounted inside a vacuum chamber, which provides a tightly focused Gaussian beam to form the optical trap. A solution of silica spheres of diameter \(2R=143\,\mathrm{nm}\) (MicroParticles GmbH) is mono-dispersed in ethanol and delivered into the optical trap using a nebulizer. Once a single particle is trapped, the pressure in the chamber is reduced to \(10\,\mathrm{mbar}\). The trapped particle's axial center-of-mass (COM) motion, \(z(t)\), is recorded by collecting forward scattered light with an aspheric lens of numerical aperture \(\mathrm{NA}=0.50\), and directing it to a photodiode (Thorlabs PDA100A2), generating an electric signal proportional to \(z(t)\).

Figure 2: Experimental setup. A silica nanoparticle is trapped by an optical tweezer in vacuum. The forward scattered light is collected and sent to a photodiode, producing a signal proportional to the particle's axial coordinate, \(z(t)\). An FPGA processes the signal to produce a voltage that induces a force on the trapped particle proportional to \(z^{3}(t-\tau)\). Amplification prior to and after the FPGA enhances the maximum resolution of its analog-to-digital converter, enabling the exploration of a broader range of values for the applied electrical force.
The signal from the detector is sent to a wide bandpass filter, amplified and then input into an FPGA. The FPGA introduces a tunable delay, raises the signal to the third power and multiplies it by a tunable gain. The output signal is then amplified once again and applied to the mount of the trapping lens, producing a voltage difference with respect to the mount of the collection lens, which is grounded. This generates an electric force at the particle position given by \(G_{fb}z(t-\tau)^{3}\), where \(\tau\) is the total delay introduced by the electronics and \(G_{fb}\) is the overall feedback gain. For more details on the generated electric field and electronics, see Appendices A and B.
The electronics naturally introduce a delay to the applied position-dependent electric forces, which could lead to deviations from the predictions of the perturbation theory discussed in Sec. II.1. To qualitatively understand the effects of a delayed feedback nonlinear force, we have exaggerated the electronic delay \(\tau\), applying a cubic force of the form \(G_{fb}z(t-\tau)^{3}\) for \(\tau=(2\pi/4\omega_{0})=T/4\) and \(\tau=(6\pi/4\omega_{0})=3T/4\), and subsequently measured the PSDs of the particle motion along the longitudinal direction. The results can be seen in Figure 3a), in comparison to the PSD of the trapped particle in the absence of nonlinear feedback. We see that depending on the delay, the particle undergoes cooling (\(\tau=T/4\)) or heating (\(\tau=3T/4\)). This can be understood as the nonlinear analogue of cold damping, where the delayed feedback signal acquires a force component proportional to the velocity [31; 32; 33].
We can quantify the effect of delay for the case of our experiment using the theory described in Sec. II.2. To do that, we have simulated the particle dynamics under the influence of a delayed feedback cubic force for two different values of the feedback gain \(G_{fb}\) within the regime of perturbation theory. For each simulation, we extract the particle motion traces and compute the position variance, from which the effective temperature \(T_{\mathrm{eff}}\) of the mechanical oscillator can be obtained. The results are plotted in Figure 3b) as a function of \(\tau\), in comparison to the theoretical prediction given by Eq. (18). The simulations confirm the qualitative cooling/heating results shown in Figure 3 and are in good agreement with the perturbation theory including delay. Notably, for the electronic delay in our experiment, characterized to be \(\tau=(0.518\pm 0.074)\times 10^{-6}\,\mathrm{s}\), we verify that the expected cooling/heating effects due to a delayed nonlinear feedback provide a correction to the auto-correlation at the level of 1.10% and are buried within experimental uncertainties. With this analysis we conclude that any effect associated to electronic delay in our experiment is negligible and the perturbation theory in the absence of delay can be used to model the effect of nonlinear perturbations.

Figure 3: Effect of a delayed nonlinearity. a) Longitudinal position PSDs for the reference measurement in comparison to cubic feedback forces at a gain of \(G_{fb}=5.31\times 10^{6}\,\mathrm{N}/\mathrm{m}^{3}\) and delays of \(\tau=T/4\) and \(\tau=3T/4\). Here, \(T\) represents the period of the particle motion along the longitudinal direction. These comparisons reveal how the introduction of a delayed cubic force can either cool or heat the particle motion. b) Numerically simulated effective temperature \(T_{\mathrm{eff}}\) of particle motion as a function of the delay in the cubic feedback force, displaying cooling and heating in accordance with the predictions of the nonlinear delayed perturbation theory described in Sec. II.2. With this analysis, we conclude that the electronic delay present in our experiment, measured to be \(\tau/T=0.042\pm 0.006\), can be safely neglected.
We next proceed to verify the perturbation theory as described in Sec. II.1. We apply an effective quartic potential (cubic perturbation force) on the trapped particle generated via the position measurement feedback as described previously. PSDs of particle motion under the influence of the cubic feedback force with positive and negative feedback gains can be seen in Figure 4a). These measurements qualitatively confirm the effect of the cubic force predicted by perturbation theory as a shift in the PSD central frequency. Note the shift depends on the sign of the feedback gain, in accordance with Eq. (13), indicating an effective hardening or softening of the optical trap due to the cubic actuation.
To quantitatively compare the frequency shifts with the prediction from perturbation theory, we acquired the longitudinal motion PSD for different values of feedback gain \(G_{fb}\). By fitting Lorentzian functions to the PSDs, we obtained the central frequency as a function of feedback gain. The result of these measurements is shown in Figure 4b), in comparison to the theoretical prediction given in Eq. (13) for our experimental parameters. Good agreement between the data and the theoretical prediction was observed within the perturbation regime, indicated by the non-shaded region of the plot. Note also that outside the regime of perturbation theory (grey shaded regions in Figure 4b)), the measured shifts fall systematically slightly below the predicted first order correction, consistent with the second-order correction scaling of \(\mathcal{O}(G_{fb}^{2})\)[27]. Finally, the experimentally obtained angular coefficient \(\kappa_{e}\) was measured to be
\[\kappa_{e}=(5.46\pm 0.10)\times 10^{-4}\,\mathrm{Hz}\,\mathrm{m}^{3}\,\mathrm{N}^ {-1} \tag{19}\]
which compares to the theoretical prediction given the parameters for our experiment,
\[\kappa_{t}=5.69\times 10^{-4}\,\mathrm{Hz}\,\mathrm{m}^{3}\,\mathrm{N}^{-1}. \tag{20}\]
## IV Conclusions
In conclusion, we have implemented a cubic nonlinear force based on position measurement feedback acting on an underdamped levitated nanoparticle. Effects of the cubic force on the particle's stochastic dynamics have been experimentally studied. In particular, shifts introduced in the particle motion power spectrum due to the presence of the cubic feedback force have been measured. We have verified that these shifts are in accordance with the predictions of the stochastic path integral perturbation theory for nonlinear optical tweezers introduced in [27]. To account for the experimental imperfections due to electronic delay in the feedback, we have also extended the perturbation theory and showed that for feedback schemes currently available in levitated optomechanics experiments the effects of electronic delay can be made negligible.

Figure 4: Verifying the predictions of perturbation theory: a) PSDs of the trapped particle's longitudinal motion under the cubic force, displaying central frequency shifts. The data was taken at 293 K and a pressure of 10 mbar. The reference PSD has a central frequency of 77.8 kHz, and a shift of \(\pm 1.4\) kHz was measured for \(G_{fb}=\pm 1.2\times 10^{6}\) N/m\({}^{3}\). b) Frequency shifts as a function of \(G_{fb}\), verifying the prediction of perturbation theory given by Eq. (13) (dashed line). The grey shaded regions mark where the validity condition for perturbation theory, Eq. (14), no longer holds. Each point corresponds to 250 seconds of data acquisition at 500 kHz divided into 1000 traces and organized into batches of 5 traces each. All data points were collected using the same nanoparticle.
We anticipate that nonlinear electric feedback potentials will find a number of applications in levitated optomechanics experiments, both in the classical stochastic and quantum regimes. For instance, 'artificial' (i.e., feedback) nonlinear forces could be employed in non-Gaussian state preparation protocols beyond the nonlinearities naturally present in optical potentials [16, 34]. Moreover, delayed nonlinear feedback could also be used to engineer non-conservative systems with nonlinear damping, for example of the Van der Pol type [35]. In this context, the delayed perturbation theory we have introduced could be used to provide analytical predictions for feedback cooling.
## Acknowledgements
We acknowledge Bruno Suassuna for helpful discussions. T.G. acknowledges the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Fundacao de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ Scholarship No. E-26/202.830/2019) and Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP processo 2021/06736-5). D.T. acknowledges CAPES - Finance Code 001, and CNPq - Scholarship No. 140197/2022-2. Code and data availability: GitHub. [https://github.com/QuantumAdventures/non-linearity-experiment](https://github.com/QuantumAdventures/non-linearity-experiment)
|
2309.15962 | Smoothing 3-manifolds in 5-manifolds | We show that every locally flat topological embedding of a 3-manifold in a
smooth 5-manifold is homotopic, by a small homotopy, to a smooth embedding. We
deduce that topologically locally flat concordance implies smooth concordance
for smooth surfaces in smooth 4-manifolds. | Michelle Daher, Mark Powell | 2023-09-27T19:19:09Z | http://arxiv.org/abs/2309.15962v1 | # Smoothing 3-manifolds in 5-manifolds
###### Abstract.
We show that every locally flat topological embedding of a 3-manifold in a smooth 5-manifold is homotopic, by a small homotopy, to a smooth embedding. We deduce that topologically locally flat concordance implies smooth concordance for smooth surfaces in smooth 4-manifolds.
Key words and phrases:Smoothing submanifolds, concordance of surfaces 2020 Mathematics Subject Classification: 57K10, 57N35, 57N70
## 1. Introduction
Let \(Y^{3}=Y_{1}\sqcup\cdots\sqcup Y_{m}\) be a compact 3-manifold with connected components \(Y_{i}\), and let \(N^{5}\) be a compact, connected, smooth 5-manifold. Note that \(Y\) and \(N\) are possibly nonorientable and can have nonempty boundary. Since \(Y\) is 3-dimensional it admits a unique smooth structure up to isotopy [10], [11, Theorem 6.3], [12, Corollary 1.18].
**Theorem A**.: _Let \(f\colon Y\to N\) be a locally flat proper topological embedding that is smooth near \(\partial Y\). Then \(f\) is homotopic rel. boundary, via an arbitrarily small homotopy, to a smooth embedding._
Here _proper_ means that \(f^{-1}(\partial N)=\partial Y\). It is not possible in general to isotope \(f\) to a smooth embedding, so the homotopy in the theorem is necessary. For instance, Lashof [13] constructed a locally flat knot \(L\cong S^{3}\subseteq S^{5}\) that is not isotopic, in fact not even concordant, to any smooth knot. We will make crucial use of Lashof's knot in our proof of Theorem A.
In the rest of the introduction, we explain an application to concordance of surfaces, then we compare with the situation for codimension two embeddings in other dimensions, before finally outlining our proof of Theorem A.
### Topological concordance implies smooth concordance for surfaces in 4-manifolds
Let \(\Sigma\) be a closed, smooth surface, possibly disconnected, and possibly nonorientable. We consider a smooth, closed, connected 4-manifold \(X\), again possibly nonorientable, and two smooth submanifolds \(\Sigma_{0}\) and \(\Sigma_{1}\) in \(X\) with \(\Sigma_{0}\cong\Sigma\cong\Sigma_{1}\).
**Definition 1.1**.: We say that \(\Sigma_{0}\) and \(\Sigma_{1}\) are _topologically concordant_ (respectively _smoothly concordant_) if there is a locally flat (respectively smooth) submanifold \(C\cong\Sigma\times I\), properly embedded in \(X\times I\), whose intersection with \(X\times\{0,1\}\) is precisely \(\Sigma_{0}\subseteq X\times\{0\}\) and \(\Sigma_{1}\subseteq X\times\{1\}\). We call \(C\) a _topological concordance_ (respectively _smooth concordance_).
**Corollary 1.2**.: _Suppose that \(C\) is a topological concordance between \(\Sigma_{0}\subseteq X\times\{0\}\) and \(\Sigma_{1}\subseteq X\times\{1\}\). Then the inclusion map \(C\to X\times I\) is homotopic rel. \(\Sigma_{0}\cup\Sigma_{1}\), via an arbitrarily small homotopy, to an embedding whose image is a smooth concordance between \(\Sigma_{0}\) and \(\Sigma_{1}\)._
This follows immediately from Theorem A by taking \(Y=\Sigma\times I\), \(N=X\times I\), and \(f\colon Y\to N\) to be an embedding with \(C=f(Y)\).
Special cases of Corollary 1.2 were known before. First, Kervaire [14] proved that every 2-knot is slice. This holds in both categories, from which it follows that smooth and topological concordance coincide for 2-knots. Sunukjian [15] proved more generally that homologous connected surfaces in a simply-connected 4-manifold \(X\) are both smoothly and topologically concordant. Again, it follows immediately that smooth and topological concordance coincide. Similarly Cha-Kim [13, Corollary J] proved that smooth and topological concordance coincide for smoothly embedded spheres with a common smoothly embedded geometrically dual framed sphere.
Work defining surface concordance obstructions includes [14, 15, 16, 17, 18]. Other than in [14], the authors restricted to the smooth category. Our result implies that one automatically obtains topological concordance obstructions.
### Comparison with other dimensions
We start with low dimensions. In dimension \(3\), every locally flat embedding \(Y^{1}\subseteq N^{3}\) is isotopic to a smooth embedding. On the other hand, in dimension \(4\), the existence of topologically slice knots that are not smoothly slice implies the existence of a locally flat embedding \(D^{2}\hookrightarrow D^{4}\) that is not even homotopic rel. boundary to a smooth embedding. There are also examples of closed locally flat surfaces in closed \(4\)-manifolds, in particular in \(S^{2}\times S^{2}\), \(\mathbb{CP}^{2}\), and \(S^{2}\widetilde{\times}S^{2}\), that cannot be smoothed up to homotopy [14, 15, 16]. We deal with dimension \(5\) in this article. The analogue of Theorem A for locally flat embeddings of smooth \(4\)-manifolds in smooth \(6\)-manifolds is open, and we intend to investigate it in future work.

Now we discuss high dimensions. For codimension \(2\) proper embeddings \(f\colon Y^{m}\to N^{m+2}\), when the dimension \(m\) of \(Y\) is greater than or equal to \(5\), Schultz [15] proved the following, cf. [15].
**Theorem 1.3** (Schultz).: _Let \(m\geq 5\) and \(n>m\). Let \(N^{n}\) be a smooth compact \(n\)-manifold, and let \(Y^{m}\) be a compact topological manifold equipped with a smooth structure near \(\partial Y\). Let \(f\colon Y\to N\) be a locally flat proper topological embedding that is smooth near \(\partial Y\). Then there is a smooth structure on \(Y\), extending the given smooth structure on \(\partial Y\), such that \(f\) is isotopic rel. boundary to a smooth embedding if and only if \(Y\) has a topological vector bundle neighbourhood._
Topological vector bundle neighbourhoods always exist for locally flat codimension \(1\) and \(2\) embeddings [14, 15], so Schultz [15] deduced the following result.
**Theorem 1.4** (Schultz).: _Let \(k=1\) or \(2\), let \(m\geq 5\), and let \(n=m+k\). Let \(N^{n}\) be a smooth compact \(n\)-manifold, and let \(Y^{m}\) be a compact topological manifold equipped with a smooth structure near \(\partial Y\). Let \(f\colon Y\to N\) be a locally flat proper topological embedding that is smooth near \(\partial Y\). Then there is a smooth structure on \(Y\), extending the given smooth structure on \(\partial Y\), such that \(f\) is isotopic rel. boundary to a smooth embedding._
Note that in the statements of Theorems 1.3 and 1.4, \(Y\) is not a priori smoothable. The existence of a topological vector bundle neighbourhood for an embedding \(f\colon Y\to N\) guarantees a smooth structure on \(Y\times\mathbb{R}^{p}\) for some \(p\in\mathbb{N}^{*}\). Hence, for \(m\geq 5\), the Product Structure Theorem [15, Essay I] implies that \(Y\) is smoothable. The proofs of Theorems 1.3 and 1.4 then proceed by using smoothing theory and the Concordance Implies Isotopy Theorem [15, Essay I] for smooth structures on \(Y\).
If one first fixes a smooth structure on \(Y\), then the induced structure on \(Y\) that emerges from Schultz's argument need not be isotopic to the fixed one. This is a feature of the problem, not a failure of the proof. In fact, if we fix a smooth structure on \(Y\), in general in high dimensions \(f\) is not even homotopic to a smooth embedding. For example, Hsiang-Levine-Szczarba [13] showed that the exotic \(16\)-sphere does not embed smoothly in \(S^{18}\). Certainly it does embed topologically.
The results from [15] do not apply in the same way for \(Y^{3}\subseteq N^{5}\). Our approach is rather different. We fix a smooth structure on a tubular neighbourhood of \(f(Y)\) and try to extend it to all of \(N\). As we will describe next, we face obstructions along the way that will require us in general to modify the embedding by a small homotopy to obtain a smooth embedding of \(Y\) in \(N\).
### Outline of the proof of Theorem A
For a submanifold \(K\) of a manifold \(X\) with an open tubular neighbourhood, i.e. the image of an embedding of a normal bundle, denote the tubular neighbourhood by \(\nu K\). Write \(\overline{\nu}K\) for the closure of the tubular neighbourhood of \(K\) in \(X\), which has the structure of a disc bundle over \(K\). Given a closed subset \(C\) of \(X\), _a smooth structure on \(C\)_ will always mean a smooth structure on an open neighbourhood \(U\) of \(C\) in \(X\).
The proof of Theorem A breaks naturally into two distinct steps, the outlines of which we shall explain next. For a smooth structure \(\sigma\) on a topological manifold \(X\), we write \(X_{\sigma}\) to specify that \(X\) is equipped with the smooth structure \(\sigma\). In what follows, we will write \(N_{\mathrm{std}}\) for \(N\) equipped with the given smooth structure.
**Step 1:**_We show that \(f\colon Y\to N\) is homotopic, by a small homotopy, to a smooth embedding \(g\colon Y\to N_{\sigma}\), for some \(\sigma\)._
We write \(M:=f(Y)\) for the image of \(f\). The idea of the proof is to consider a standard smooth structure on \(\overline{\nu}M\) and on \(\partial N\), and then to try to extend this to all of \(N\). We denote the exterior of \(M\) by \(W_{f}:=N\setminus\nu M\). Smoothing theory (recapped in Section 2) gives a Kirby-Siebenmann obstruction in \(H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2)\), that vanishes if and only if the smooth structure on \(\partial W_{f}\) extends to all of \(W_{f}\). It turns out that this obstruction does not always vanish, but that by taking ambient connected sums of \(M\) with copies of Lashof's nonsmoothable \(3\)-knot \(L\cong S^{3}\subseteq S^{5}\) from [10], which we discuss in Section 2.2, we can arrange that this obstruction vanishes. Whence \(f\) is homotopic, via a small homotopy, to \(g\colon Y\to N\) such that \(M^{\prime}:=g(Y)\) is smooth in some smooth structure \(\sigma\) on \(N\), that restricts to the given smooth structure on \(\partial N\).
**Step 2:**_We show that \(g\colon Y\to N_{\sigma}\) is homotopic, via a small homotopy, to a smooth embedding \(g^{\prime}\colon Y\to N_{\operatorname{std}}\)._
Smoothing theory implies that we can arrange for the smooth structure \(\sigma\) on \(N\) and the given smooth structure \(\operatorname{std}\) to agree away from a tubular neighbourhood \(\nu S\) of a surface \(S\subseteq N\). By transversality we can assume that \(g(Y)\) intersects \(S\) in finitely many points, in a neighbourhood of which \(M^{\prime}:=g(Y)\) is smooth in \(\sigma\) but not in \(\operatorname{std}\). This reduces the smoothing problem for \(M^{\prime}\) in \(\operatorname{std}\) to finitely many local problems, which can be resolved using a proof analogous to Kervaire's proof [12, Theoreme III.6] that every \(2\)-knot is smoothly slice. Kervaire's result was generalised by Sunukjian [21], who showed that homologous connected surfaces in \(1\)-connected \(4\)-manifolds are smoothly concordant, and it is Sunukjian's arguments that apply in our situation.
_Remark 1.5_.: The changes to \(f\) in Steps 1 and 2 can be characterised as topological isotopies, together with adding and removing local knots.
### Organisation
In Section 2 we recap smoothing theory, prove lemmas on properties of the Kirby-Siebenmann invariant, and recall Lashof's nonsmoothable \(3\)-knot. We prove Step 1 in Section 3, and we prove Step 2 in Section 4. Then in Section 5 we give conditions under which smoothing up to isotopy is possible.
### Acknowledgements
MP thanks the participants of a discussion group on surfaces in \(4\)-manifolds in Le Croisic, June 2022, which brought this problem to his attention, and by extension thanks the organisers of this enjoyable conference. We thank Sander Kupers for his interest and suggesting a citation, and we are grateful to Jae Choon Cha for suggesting that we write Section 5.
MD was supported by EPSRC New Horizons grant EP/V04821X/2. MP was partially supported by EPSRC New Investigator grant EP/T028335/2 and EPSRC New Horizons grant EP/V04821X/2.
## 2. Smoothing theory
In this section we give a brief recap of smoothing theory, and recall the results we will need. Smoothing theory was developed by Cairns [15], Munkres [14, 15, 16, 17, 18], Milnor [19, 20], Hirsch [21], Lashof-Rothenberg [13], and Cerf [11, 12, 13, 14, 15], among others. Their goal, which they achieved to a large extent, was to understand which PL manifolds admit smooth structures, and if so how many. The theory was extended around 1970 by Kirby and Siebenmann [14] to allow one to start with a topological manifold, provided that one is not trying to understand smooth structures on a \(4\)-manifold. For the purposes of this article, since we work in dimensions four and five, the smooth and PL categories are interchangeable. Since it is more common nowadays to work in the smooth category, we shall also do so.
### Recap of smoothing theory
Let \(X\) be a topological \(n\)-manifold possibly with boundary, let \(C\) be a closed subset of \(X\), and let \(\sigma\) be a smooth structure on an open neighbourhood \(U\) of \(C\). Let \(V\subseteq U\) be a smaller open neighbourhood of \(C\). Denote the set of isotopy classes of smooth structures on \(X\) that agree with \(\sigma\) near \(C\) by \(\mathcal{S}_{\operatorname{Diff}}(X,C,\sigma)\). We write \(\operatorname{BTOP}(k)\) for the classifying space for topological \(\mathbb{R}^{k}\) bundles, and \(\operatorname{BO}(k)\) for the classifying space for
rank \(k\) smooth vector bundles. Define \(\operatorname{BTOP}:=\operatorname{colim}_{k}\operatorname{BTOP}(k)\) and \(\operatorname{BO}:=\operatorname{colim}_{k}\operatorname{BO}(k)\), the corresponding stable bundle classifying spaces. Consider the following diagram, which is induced by the stable classifying maps of the tangent bundle of a neighbourhood \(U\) of \(C\) and the stable tangent microbundle of \(X\):
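Based on the surrounding description, the diagram presumably takes the following shape, with the vertical column the principal fibration \(\operatorname{TOP}\mathbin{/}\operatorname{O}\to\operatorname{BO}\to\operatorname{BTOP}\) continued by \(\operatorname{BTOP}\to\operatorname{B(TOP\mathbin{/}O)}\); this is our reconstruction sketch, not the original figure:

```latex
% Reconstruction sketch; requires \usepackage{tikz-cd}. The dashed arrow is
% the sought lift of the stable tangent microbundle classifying map.
\begin{tikzcd}
 & \mathrm{TOP/O} \arrow[d] \\
U \arrow[r, "\tau_{U}"] \arrow[d, hook] & \mathrm{BO} \arrow[d] \\
X \arrow[r, "\tau_{X}"'] \arrow[ru, dashed] & \mathrm{BTOP} \arrow[d] \\
 & \mathrm{B(TOP/O)}
\end{tikzcd}
```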
Smoothing theory implies that for \(n\geq 6\) or \((n=5\) and \(\partial X\subseteq C)\), isotopy classes of smooth structures on \(X\) correspond to lifts \(X\to\operatorname{BO}\) of the map \(X\to\operatorname{BTOP}\), relative to the fixed lift on the smaller neighbourhood \(V\subseteq U\). The vertical sequence is a principal fibration, which implies that such a lift exists if and only if the composite \(X\to\operatorname{BTOP}\to\operatorname{B(TOP}\mathbin{/}\operatorname{O})\) is null-homotopic, and that homotopy classes of such lifts correspond to \([(X,V),(\operatorname{TOP}\mathbin{/}\operatorname{O},*)]\), homotopy classes of maps \(X\to\operatorname{TOP}\mathbin{/}\operatorname{O}\) that send \(V\) to the base point.
The main result of smoothing theory, applied to \(5\)-manifolds, reads as follows [13, Theorem IV.10.1].
**Theorem 2.1**.: _Let \(X\) be a \(5\)-dimensional topological manifold, let \(C\) be a closed subset of \(X\) with \(\partial X\subseteq C\), and fix a smooth structure \(\sigma\) on an open neighbourhood \(U\) of \(C\)._
1. _There is an obstruction_ \(\operatorname{ks}(X,C):=\operatorname{ks}(X,C,U,\sigma)\in H^{4}(X,C;\mathbb{Z }/2)\) _that vanishes if and only if_ \(X\) _admits a smooth structure extending the given smooth structure on some neighbourhood_ \(V\subseteq U\) _of_ \(C\)_._
2. _Given two smooth structures_ \(\sigma\) _and_ \(\pi\) _on_ \(X\) _extending the given smooth structure on_ \(U\supseteq C\)_, there is an obstruction_ \(\operatorname{ks}(\sigma,\pi)\in H^{3}(X,C;\mathbb{Z}/2)\) _that vanishes if and only if there is a neighbourhood_ \(V\subseteq U\) _of_ \(C\) _such that_ \(\sigma\) _and_ \(\pi\) _are isotopic rel._ \(V\)_, i.e. if there is a homeomorphism_ \(f\colon X\to X\) _with_ \(f|_{V}=\operatorname{Id}\)_, such that_ \(f^{*}(\pi)=\sigma\) _and such that_ \(f\) _is topologically isotopic rel._ \(V\) _to_ \(\operatorname{Id}_{X}\)_._
3. _The Kirby-Siebenmann obstructions_ \(\operatorname{ks}(X,C)\) _and_ \(\operatorname{ks}(\sigma,\pi)\) _from_ 1 _and_ 2 _are natural for restriction to open submanifolds of_ \(X\)_. More precisely, let_ \(W\) _be an open submanifold of_ \(X\) _and let_ \(i\colon W\hookrightarrow X\) _be the inclusion map. Then_ \(i^{*}\colon H^{4}(X,C;\mathbb{Z}/2)\to H^{4}(W,W\cap C;\mathbb{Z}/2)\) _sends_ \(\operatorname{ks}(X,C)\) _to_ \(\operatorname{ks}(W,W\cap C)\) _and_ \(i^{*}\colon H^{3}(X,C;\mathbb{Z}/2)\to H^{3}(W,W\cap C;\mathbb{Z}/2)\) _sends_ \(\operatorname{ks}(\sigma,\pi)\) _to_ \(\operatorname{ks}(\sigma|_{W},\pi|_{W})\)_._
4. _Given a smooth structure on some neighbourhood_ \(V\) _of_ \(C\) _in_ \(X\)_, the Kirby-Siebenmann obstruction_ \(\operatorname{ks}(X,C)\) _from_ 1 _is natural with respect to restriction to a neighbourhood_ \(V^{\prime}\subseteq V\) _of a closed subset_ \(C^{\prime}\subseteq C\)_. That is, the inclusion-induced map_ \(H^{4}(X,C;\mathbb{Z}/2)\to H^{4}(X,C^{\prime};\mathbb{Z}/2)\) _sends_ \(\operatorname{ks}(X,C)\) _to_ \(\operatorname{ks}(X,C^{\prime})\)_._
Proof.: The first three items of the theorem for PL structures instead of smooth structures follow from [13, Theorem IV.10.1] and the fact that \(\operatorname{TOP}\mathbin{/}\operatorname{PL}\simeq K(\mathbb{Z}/2,3)\)[13, Section IV.10.12]. However PL \(5\)-manifolds with smooth boundary admit a unique smooth structure up to isotopy, by smoothing theory and since \(\operatorname{PL}\mathbin{/}\operatorname{O}\) is \(6\)-connected [11, 12, 13]. Hence it is legitimate to replace PL structures by smooth structures, as we have done.
The final item can be seen from the following diagram.
The obstructions \(\operatorname{ks}(X,C)\) and the obstructions \(\operatorname{ks}(X,C^{\prime})\) are both represented by the map \(X\xrightarrow{\tau_{X}}\operatorname{BTOP}\to\operatorname{B(TOP}\mathbin{/} \operatorname{O})\), and the inclusion induced map sends the former to the latter.
Next we apply Theorem 2.1 to deduce a naturality result for the Kirby-Siebenmann obstructions, that will be useful for submanifolds with corners.
Let \(K\) be a smooth \(5\)-manifold with corners. Suppose the corner set of \(K\), denoted by \(\angle K\), separates \(\partial K\) into \(\partial_{1}K\) and \(\partial_{2}K\). Note that \(\partial_{1}K\) and \(\partial_{2}K\) are smooth manifolds with boundary. Fix a smooth structure\({}^{1}\) \(\sigma\) on a neighbourhood \(U\) of \(\partial K\). By [22, Proposition 1.5.6], \(U\) contains a smooth embedding of \(\partial_{1}K\times[0,1)\) in \(K\). Let \(K^{\prime}\) denote the result of attaching \(\partial_{1}K\times[0,1)\) to \(K\) by the map \(h\colon\partial_{1}K\times\{0\}\to\partial_{1}K\) given by \(h(x,0)=x\), and extend \(\sigma|_{\partial_{1}K}\) to a product structure along \([0,1)\) in \(\partial_{1}K\times[0,1)\). Denote the resulting smooth structure on \(U^{\prime}:=U\cup\partial_{1}K\times[0,1)\) by \(\sigma^{\prime}\). Note that \(U^{\prime}\) is a smooth manifold with boundary, i.e. no corners. Apply the same procedure to \(K^{\prime}\) along \(\partial K^{\prime}\) to obtain an unbounded manifold \(K^{\prime\prime}\) and a smooth structure \(\sigma^{\prime\prime}\) on \(U^{\prime\prime}:=U^{\prime}\cup\partial K^{\prime}\times[0,1)\). Let \(j\colon K\hookrightarrow K^{\prime\prime}\) be the inclusion map.
Footnote 1: Note that this is a _smooth structure with corners_, which means that it is a maximal atlas in which two charts with corners \((U,\phi)\) and \((V,\theta)\) are smoothly compatible if \(\phi\circ\theta^{-1}\colon\theta(U\cap V)\to\phi(U\cap V)\) admits a smooth extension to an open neighbourhood of each point. See [16].
**Definition 2.2**.: Let \(K\), \(K^{\prime\prime}\), and \(j\) be as above. Let \(\sigma\) be a smooth structure on a neighbourhood \(U\) of \(\partial K\). Define the Kirby-Siebenmann obstruction \(\operatorname{ks}(K,\partial K)\) as \(j^{*}\operatorname{ks}(K^{\prime\prime},\partial K)\) where \(\operatorname{ks}(K^{\prime\prime},\partial K)=\operatorname{ks}(K^{\prime \prime},\partial K,U^{\prime\prime},\sigma^{\prime\prime})\) is the obstruction to extending the smooth structure \(\sigma^{\prime\prime}\) on \(U^{\prime\prime}\) to \(K^{\prime\prime}\).
Let \(X\) be a smooth \(5\)-manifold with boundary and let \(K\) be a smooth \(5\)-manifold with corners that is a submanifold of \(X\) such that the corner set of \(K\) separates \(\partial K\) into \(\partial_{1}K:=K\cap\partial X\) and \(\partial_{2}K\) with \(\operatorname{Int}\partial_{2}K\subseteq\operatorname{Int}X\). (By definition of a submanifold, \(\partial_{2}K\) intersects \(\partial X\) transversely.) Consider a smooth structure \(\sigma\) on a neighbourhood \(U\) of \(\partial X\cup\partial K\) such that \(\partial_{2}K\hookrightarrow U\) is smooth; this condition guarantees the existence of a smooth bicollar neighbourhood of \(\partial_{2}K\) in \(U\), which will be implicitly used in the proof of the next proposition.
**Proposition 2.3**.: _Let \(i\colon(K,\partial K)\hookrightarrow(X,\partial X\cup\partial K)\) be the inclusion. The induced map_
\[i^{*}\colon H^{4}(X,\partial X\cup\partial K;\mathbb{Z}/2)\to H^{4}(K, \partial K;\mathbb{Z}/2)\]
_sends \(\operatorname{ks}(X,\partial X\cup\partial K)\) to \(\operatorname{ks}(K,\partial K)\)._
Proof.: Let \(X^{\prime}\) be the open topological manifold obtained from \(X\) by attaching an exterior collar \(\partial X\times[0,1)\), where \(x\) in \(\partial X\) is identified with \((x,0)\) in \(\partial X\times[0,1)\). Extend \(\sigma\), by taking a product structure along \(\partial X\times[0,1)\), to \(U^{\prime}:=U\cup\partial X\times[0,1)\), which is a neighbourhood of \(\partial X\cup\partial K\) in \(X^{\prime}\). Let \(\sigma^{\prime}\) be the resulting smooth structure on \(U^{\prime}\). Then, by 'absorbing the boundary' [11, Proposition IV.2.1], this construction determines a natural bijection \(\theta\colon\mathcal{S}_{\operatorname{Diff}}(X,\partial X\cup\partial K,\sigma)\to\mathcal{S}_{\operatorname{Diff}}(X^{\prime},\partial X\cup\partial K,\sigma^{\prime})\) and it follows from [11, Theorem IV.10.1] and [11, Remark IV.10.2] that \(\operatorname{ks}(X,\partial X\cup\partial K)\mapsto\operatorname{ks}(X^{\prime},\partial X\cup\partial K)\) under the isomorphism on \(H^{4}(-,\partial X\cup\partial K;\mathbb{Z}/2)\) induced by the obvious homotopy equivalence \(X^{\prime}\simeq X\). Consider the following diagram:
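The essential content of the diagram, sketched here from the description that follows, is the chain of maps along which the Kirby-Siebenmann classes are carried (coefficients \(\mathbb{Z}/2\) throughout):

```latex
% Our summary sketch of the diagram chase below (coefficients Z/2 throughout):
% ks(X, dX u dK) is carried left to right, landing on ks(K, dK).
\[
H^{4}(X,\partial X\cup\partial K)\xrightarrow{\;\cong\;}
H^{4}(X',\partial X\cup\partial K)\xrightarrow{\;i_{1}^{*}\;}
H^{4}(X',\partial K)\xrightarrow{\;g^{*}\;}
H^{4}(K'',\partial K)\xrightarrow{\;j^{*}\;}
H^{4}(K,\partial K).
\]
```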
The map \(i_{1}^{*}\) is induced by the inclusion map \(i_{1}\colon(X^{\prime},\partial K)\to(X^{\prime},\partial X\cup\partial K)\), \(j^{*}\) is induced by the inclusion \(j\colon(K,\partial K)\to(K^{\prime\prime},\partial K)\) from Definition 2.2, and \(g^{*}\) is induced by the inclusion \((K^{\prime\prime},\partial K)\hookrightarrow(X^{\prime},\partial K)\). As per the above discussion, \(\operatorname{ks}(X,\partial X\cup\partial K)\) is sent to \(\operatorname{ks}(X^{\prime},\partial X\cup\partial K)\) by the top horizontal map. By Theorem 2.1 (iv), \(i_{1}^{*}\operatorname{ks}(X^{\prime},\partial X\cup\partial K)=\operatorname{ks} (X^{\prime},\partial K)\). It follows from Theorem 2.1 (iii) that \(g^{*}\operatorname{ks}(X^{\prime},\partial K)=\operatorname{ks}(K^{\prime\prime}, \partial K)\). Finally by Definition 2.2 we have that \(j^{*}\operatorname{ks}(K^{\prime\prime},\partial K)=\operatorname{ks}(K, \partial K)\). This concludes the proof that \(i^{*}\operatorname{ks}(X,\partial X\cup\partial K)=\operatorname{ks}(K, \partial K)\).
In practice, the manifold with corners \(K\) will be either a closed tubular neighbourhood \(\overline{\nu}f(Y)\) of a locally flat proper embedding \(f\colon Y\to N\), or the complement \(N\setminus\nu f(Y)\). In fact, let \(p\colon\overline{\nu}f(Y)\to f(Y)\) be a disc bundle and denote its boundary sphere bundle by \(\Sigma\). Then \(\overline{\nu}f(Y)\) is a smooth manifold with corners (note that it is not necessarily smooth in \(N\)) and \(\angle\overline{\nu}f(Y)=p^{-1}\partial f(Y)\cap\Sigma\) separates \(\partial\overline{\nu}f(Y)\) into two parts with closures \(p^{-1}f(\partial Y)\) and \(\Sigma\).
We also need the following more detailed characterisation of the obstruction \(\operatorname{ks}(\sigma,\pi)\). We write \(X_{\sigma}\) to denote a topological \(5\)-manifold \(X\) equipped with a smooth structure \(\sigma\), and let \(\pi\) be another smooth structure on \(X\) that agrees with \(\sigma\) near \(\partial X\).
**Proposition 2.4**.: _Suppose that \(S\subseteq X\) is a closed surface smoothly embedded in \(\operatorname{Int}X_{\sigma}\) whose \(\mathbb{Z}/2\)-fundamental class is Poincare dual to \(\operatorname{ks}(\sigma,\pi)\in H^{3}(X,\partial X;\mathbb{Z}/2)\). Then there is an arbitrarily small isotopy of \(\sigma\), supported away from \(S\) and \(\partial X\), to a smooth structure that agrees with \(\pi\) on \(X\setminus S\)._
Proof.: Using the inclusion \(X\setminus S\to X\) we have a map \(H^{3}(X,\partial X;\mathbb{Z}/2)\to H^{3}(X\setminus S,\partial X;\mathbb{Z}/2)\). By naturality of Kirby-Siebenmann invariants (Theorem 2.1 (iii)), this sends the Kirby-Siebenmann invariant \(\operatorname{ks}(\sigma,\pi)\) to the invariant of the restricted structures \(\operatorname{ks}(\sigma|_{X\setminus S},\pi|_{X\setminus S})\). We will denote restricted structures by \(\sigma|:=\sigma|_{X\setminus S}\), and similarly for \(\pi\), from now on. The long exact sequence of the triple \(\partial X\subseteq X\setminus S\subseteq X\) gives the top row of the following diagram.
The vertical isomorphisms are given by combining homotopy invariance of homology, excision, and Poincare-Lefschetz duality. It follows from the diagram that the Poincare dual to \([S]\in H_{2}(X;\mathbb{Z})\), which by hypothesis equals \(\operatorname{ks}(\sigma,\pi)\), lies in the kernel of the map \(H^{3}(X,\partial X;\mathbb{Z}/2)\to H^{3}(X\setminus S,\partial X;\mathbb{Z}/2)\). Thus \(\operatorname{ks}(\sigma|,\pi|)=0\in H^{3}(X\setminus S,\partial X;\mathbb{Z} /2)\).
By smoothing theory (Theorem 2.1) there is an isotopy of \(\sigma|\) to \(\pi|\) on \(X\setminus S\) rel. \(\partial X\). That is, we have an isotopy of homeomorphisms \(f_{t}\colon X\setminus S\to X\setminus S\), where \(f_{0}=\operatorname{Id}\), \(f_{t}|_{\partial X}=\operatorname{Id}_{\partial X}\), and \(f_{1}^{*}(\pi|)=\sigma|\). To prove the desired result we have to delve into the proof of Theorem 2.1 a little. Such an isotopy is constructed chart by chart, and within each chart via a decomposition into handles. Then the handles are smoothed iteratively using [10, Theorem I.3.1]. Let \(d\) be a metric on \(X\). We can and shall choose charts \(\{U_{i}\}\) covering \(X\setminus S\) to be such that if there exists \(x\in U_{i}\) with \(d(x,S)<\varepsilon\), then \(\operatorname{diam}(U_{i})<\varepsilon/10\). We can also make all charts have diameter smaller than an arbitrarily chosen global positive constant. The construction of \(f_{t}\) guarantees that for all \(i\), if \(x\in U_{i}\), then \(f_{t}(x)\in U_{i}\) for all \(t\in[0,1]\). It follows from this and the fact that we controlled the size of the charts as they approach \(S\) that \(f_{t}\) extends continuously to an isotopy \(F_{t}\colon X\to X\) that fixes \(S\) pointwise for all \(t\in[0,1]\). This gives the desired isotopy of \(\sigma\) to a smooth structure \(\sigma^{\prime}\) on \(X\) such that \(\sigma^{\prime}|_{X\setminus S}=\pi|_{X\setminus S}\), i.e. they agree away from \(S\). Since we controlled the global size of all charts, we can also arrange for the isotopy to be arbitrarily small.
### Lashof's nonsmoothable 3-knot
Lashof [16] constructed a locally flat 3-knot \(L\cong S^{3}\subseteq S^{5}\) that is not isotopic to any smooth knot. As observed by Kwasik and Vogel [10, 17], Lashof's knot bounds a Seifert 4-manifold \(V\) in \(S^{5}\) with \(\operatorname{sign}(V)/8\equiv 1\mod 2\). We can use this to explain why \(L\) is not smoothable. The proof is as follows. If \(L\) were smoothable, it would bound a smooth Seifert 4-manifold \(V^{\prime}\), which would be spin by naturality of \(w_{2}\), and therefore would satisfy \(\operatorname{sign}(V^{\prime})/8\equiv 0\mod 2\) by Rochlin's theorem. Since the signature of a Seifert 4-manifold is a knot invariant [10], we arrive at a contradiction and it follows that \(L\) cannot be smoothed. Since the signature is a concordance invariant, it follows also that \(L\) is not concordant to any smooth 3-knot. Let \(E_{L}:=S^{5}\setminus\nu L\) be the exterior of \(L\), and equip \(\partial E_{L}\cong S^{3}\times S^{1}\) with a standard smooth structure.
**Lemma 2.5**.: _The Kirby-Siebenmann invariant of \(E_{L}\) satisfies_
\[\operatorname{ks}(E_{L},\partial E_{L})=1\in H^{4}(E_{L},\partial E_{L}; \mathbb{Z}/2)\cong H_{1}(E_{L};\mathbb{Z}/2)\cong\mathbb{Z}/2.\]
Proof.: If \(\operatorname{ks}(E_{L},\partial E_{L})\) were trivial, then by smoothing theory there would be a smooth structure \(\tau\) on \(S^{5}\) extending the standard smooth structure on \(\overline{\nu}L\). Thus \(L\) would be smooth in \(\tau\). But in fact there is a unique smooth structure on \(S^{5}\) up to isotopy [10], and hence \(L\) would be isotopic to a smooth knot in the standard smooth structure on \(S^{5}\). Since Lashof proved this is not the case, we deduce that \(\operatorname{ks}(E_{L},\partial E_{L})\) is indeed nontrivial.
## 3. Smoothing the complement of an embedding
In this section we prove the following result, which proves Step 1 from Section 1.3.
**Proposition 3.1**.: _Let \(N\) be a compact, connected, smooth 5-dimensional manifold with \((\)possibly empty\()\) boundary, let \(Y\) be a compact 3-dimensional manifold with \((\)possibly empty\()\) boundary, and let \(f\colon Y\to N\) be a locally flat proper topological embedding such that \(f\) is smooth near \(\partial Y\). Then \(f\) is homotopic rel. boundary, via an arbitrarily small homotopy, to a smooth embedding in some smooth structure \(\sigma\) on \(N\) that agrees with the given smooth structure on \(N\) near \(\partial N\)._
Let \(Y=Y_{1}\sqcup\dots\sqcup Y_{m}\), where each \(Y_{i}\) is connected. We write \(M:=f(Y)\) and \(M_{i}:=f(Y_{i})\). By [10], \(M\) has a normal vector bundle in \(N\). Let \(\nu M\subseteq N\) denote the image of an embedding of the normal bundle. Let \(W_{f}:=N\setminus\nu M\), \(E_{i}:=\partial\overline{\nu}M_{i}\cap\partial W_{f}\), and define \(E:=\bigcup_{i=1}^{m}E_{i}\); see Figure 1.
We fix a smooth structure on a neighbourhood of \(\partial N\cup E=\partial\overline{\nu}M\cup\partial W_{f}\). To do this, we use that \(Y\), as a 3-manifold, admits an essentially unique smooth structure. Since \(\operatorname{TOP}(2)\simeq\operatorname{O}(2)\), we may assume the normal bundle of \(M\) has \(\operatorname{O}(2)\) structure group, and hence that the total space of the normal bundle is smooth. The closed tubular neighbourhood \(\overline{\nu}M\), which has the structure of a \(D^{2}\)-bundle \(\pi\colon\overline{\nu}M\to M\), therefore has the structure of a smooth manifold with corners, with the property that \(M\hookrightarrow\overline{\nu}M\) is a smooth map. The corner set gives rise to a decomposition
\[\partial\overline{\nu}M=E\cup\pi^{-1}(\partial M),\]
such that \(E\) and \(\pi^{-1}(\partial M)\) become smooth 4-manifolds with boundary. We have a smooth structure on a collar neighbourhood of \(\partial N\) in \(N\). Next, we choose a topological bicollar neighbourhood of \(E\) in \(N\) and endow it with a smooth structure that is compatible with the given smooth structure on a neighbourhood of \(\partial N\). Choose the collar neighbourhood of \(E\) in \(\overline{\nu}M\) to be smooth with respect to the smooth structure on \(\overline{\nu}M\). For the outside collar of \(E\) into \(W_{f}\), note first that any choice of collar inherits a product smooth structure induced from the smooth structure of \(E\). In a neighbourhood of \(\partial N\), choose the outside collar of \(E\) so that the resulting smooth structure is compatible with the smooth structure we already have near \(\partial N\). We can do this because \(f\) is smooth near \(\partial Y\). Then extend the collar to the rest of \(E\). Now extend the smooth structure on \(E\) to its bicollar as a product structure. Since we chose collars carefully to arrange for the two smooth structures on the bicollar of \(E\) and the collar of \(\partial N\) to be compatible, we obtain a smooth structure on a neighbourhood of \(\partial N\cup E\). We also obtain a smooth structure on a neighbourhood of \(\partial W_{f}\) in \(W_{f}\), namely the union of the smoothly compatible collar neighbourhood of \(\partial N\setminus\operatorname{Int}(\pi^{-1}(\partial M))\) and the part of the bicollar neighbourhood of \(E\) that lies in \(W_{f}\). As such, the neighbourhood of \(\partial W_{f}\) is a smooth manifold with corners, with corner set \(\partial\pi^{-1}(\partial M)\).
By Theorem 2.1 we therefore have an obstruction
\[\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f})\in H^{4}(N,\partial\overline{\nu}M\cup\partial W_{f};\mathbb{Z}/2)\]

to extending this smooth structure to all of \(N\).

Figure 1. A schematic diagram of \(N\) decomposed as \(N=W_{f}\cup_{E}\overline{\nu}M\), where \(M=f(Y)\), showing the case that \(Y=Y_{1}\sqcup Y_{2}\) has two connected components with nonempty boundary.

It will be shown in Proposition 3.3 below that \(\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f})\) is determined by
\[\operatorname{ks}(\overline{\nu}M,\partial\overline{\nu}M)\in H^{4}(\overline{\nu}M,\partial\overline{\nu}M;\mathbb{Z}/2)\text{ and }\operatorname{ks}(W_{f},\partial W_{f})\in H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2).\]
Since the smooth structure on \(E\) was obtained by restricting a structure on \(\overline{\nu}M\), it follows that \(\operatorname{ks}(\overline{\nu}M,\partial\overline{\nu}M)=0\). Thus, \(\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f})\) is determined by \(\operatorname{ks}(W_{f},\partial W_{f})\). If the latter vanishes, then so does \(\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f})\). We also note that \(\operatorname{ks}(N,\partial N)=0\), because \(N\) is a smooth manifold.
Our goal will therefore be to modify \(f\) by a small homotopy to arrange for \(\operatorname{ks}(W_{f},\partial W_{f})=0\). To begin, we record a homology computation in the next lemma.
**Lemma 3.2**.: _The homology of \(E\) satisfies \(H_{1}(E;\mathbb{Z}/2)\cong\oplus_{i=1}^{m}(H_{1}(M_{i};\mathbb{Z}/2)\oplus B_{ i})\), where \(B_{i}\) is a quotient of \(\mathbb{Z}/2\), and may depend on \(i\). If \(B_{i}\) is nontrivial then it is generated by a meridian of \(M_{i}\)._
Proof.: Since \(S^{1}\hookrightarrow E_{i}\to M_{i}\) is a fibration, \(M_{i}\) is path connected, and \(\pi_{1}(M_{i})\) acts trivially on \(H_{*}(S^{1};\mathbb{Z}/2)\), we will use the Leray-Serre spectral sequence to compute \(H_{1}(E_{i};\mathbb{Z}/2)\). We have
\[E_{p,q}^{2}\cong H_{p}(M_{i};H_{q}(S^{1};\mathbb{Z}/2))\cong\begin{cases}H_{p} (M_{i};\mathbb{Z}/2)&q=0,1\\ 0&\text{otherwise}.\end{cases}\]
and \(E_{p,q}^{3}=E_{p,q}^{\infty}\). Since the coefficient group is a field, the extension problem is trivial and
\[\begin{split} H_{1}(E_{i};\mathbb{Z}/2)&\cong E_{1,0}^{ \infty}\oplus E_{0,1}^{\infty}\cong H_{1}(M_{i};\mathbb{Z}/2)\oplus\mathbb{Z} /2/\operatorname{Im}(d^{2}\colon E_{2,0}^{2}\to E_{0,1}^{2})\\ &\cong H_{1}(M_{i};\mathbb{Z}/2)\oplus B_{i}.\end{split}\]
It follows that \(H_{1}(E;\mathbb{Z}/2)\cong\bigoplus_{i=1}^{m}H_{1}(M_{i};\mathbb{Z}/2)\oplus B _{i}\). The \(B_{i}\) are quotients of the terms on the \(E^{2}\)-page \(H_{0}(M_{i};H_{1}(S^{1};\mathbb{Z}/2))\cong H_{1}(S^{1};\mathbb{Z}/2)\), and so if \(B_{i}\) is nontrivial it is generated by a meridian to \(M_{i}\), as asserted.
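For orientation, the following sketch (ours, not from the original) displays the low-degree corner of the \(E^{2}\)-page; only the rows \(q=0,1\) are nonzero, and the displayed differential is the one whose cokernel is \(B_{i}\):

```latex
% Sketch of the low-degree corner of the E^2-page; rows q = 0,1 only.
\[
\begin{array}{c|ccc}
q=1 & H_{0}(M_{i};\mathbb{Z}/2) & H_{1}(M_{i};\mathbb{Z}/2) & H_{2}(M_{i};\mathbb{Z}/2)\\
q=0 & H_{0}(M_{i};\mathbb{Z}/2) & H_{1}(M_{i};\mathbb{Z}/2) & H_{2}(M_{i};\mathbb{Z}/2)\\
\hline
    & p=0 & p=1 & p=2
\end{array}
\qquad
d^{2}\colon E^{2}_{2,0}\longrightarrow E^{2}_{0,1}.
\]
```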
Whether or not \(B_{i}\) is trivial depends on the differential \(d^{2}\). It will not be important for our later proofs whether \(B_{i}\) is nontrivial, and so we do not include an investigation of this.
Let \(A\subseteq H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2)\) be the subgroup generated by \(\{PD^{-1}[\mu_{i}]\}_{i=1}^{m}\), where \([\mu_{i}]\in H_{1}(W_{f};\mathbb{Z}/2)\) is the class represented by a meridian to \(M_{i}\), and \(PD\) denotes the Poincare-Lefschetz duality isomorphism. That is, writing \(\iota\colon E\to W_{f}\) for the inclusion map, \(A\) is by definition the subgroup of \(H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2)\) Poincare dual to \(\oplus_{i=1}^{m}\iota(B_{i})\subseteq H_{1}(W_{f};\mathbb{Z}/2)\).
**Proposition 3.3**.: _The Kirby-Siebenmann obstruction \(\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f})\in H^{4}(N,\partial\overline{\nu}M\cup\partial W_{f};\mathbb{Z}/2)\) is determined by \(\operatorname{ks}(W_{f},\partial W_{f})\in H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2)\). Moreover, \(\operatorname{ks}(W_{f},\partial W_{f})\) lies in the subgroup \(A\)._
Proof.: All homology and cohomology in this proof will be with \(\mathbb{Z}/2\) coefficients, and so to save space we omit them from the notation. Decompose the pair \((N,\partial\overline{\nu}M\cup\partial W_{f})\) as
\[(\overline{\nu}M,\partial\overline{\nu}M)\cup(W_{f},\partial W_{f}).\]
The intersections are \(\overline{\nu}M\cap W_{f}=E=\partial\overline{\nu}M\cap\partial W_{f}\). Consider the relative cohomology Mayer-Vietoris sequence [1, p. 204]:
\[\cdots\to H^{n-1}(E,E)\to H^{n}(N,\partial\overline{\nu}M\cup\partial W_{f}) \to H^{n}(\overline{\nu}M,\partial\overline{\nu}M)\oplus H^{n}(W_{f},\partial W _{f})\to H^{n}(E,E)\to\cdots\]
Taking \(n=4\) and observing that \(H^{i}(E,E)=0\) for all \(i\), we deduce that
\[H^{4}(N,\partial\overline{\nu}M\cup\partial W_{f})\cong H^{4}(\overline{\nu}M, \partial\overline{\nu}M)\oplus H^{4}(W_{f},\partial W_{f})\]
where this isomorphism has coordinates the two restrictions to \((\overline{\nu}M,\partial\overline{\nu}M)\) and \((W_{f},\partial W_{f})\). Therefore, by Proposition 2.3 applied twice, once to the inclusion \(\overline{\nu}M\hookrightarrow N\) and once to the inclusion \(W_{f}\hookrightarrow N\), this isomorphism sends \(\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f})\in H^{4}(N, \partial\overline{\nu}M\cup\partial W_{f})\) to
\[(\operatorname{ks}(\overline{\nu}M,\partial\overline{\nu}M),\operatorname{ks}(W_ {f},\partial W_{f}))=(0,\operatorname{ks}(W_{f},\partial W_{f}))\in H^{4}( \overline{\nu}M,\partial\overline{\nu}M)\oplus H^{4}(W_{f},\partial W_{f}).\]
Here we use that \(\operatorname{ks}(\overline{\nu}M,\partial\overline{\nu}M)=0\), which as mentioned above holds because our chosen smooth structure on \(\partial\overline{\nu}M\) was obtained by restricting a structure on \(\overline{\nu}M\). This proves the first statement of the proposition.
To prove the second sentence, consider the following diagram:
where the upper row is an excerpt from the cohomology long exact sequence of the triple \(\partial N\subseteq\partial\overline{\nu}M\cup\partial W_{f}\subseteq N\), the top left vertical isomorphism is by excision and the bottom vertical isomorphisms use Poincare-Lefschetz duality. Let \((0,\gamma)\in H_{1}(\overline{\nu}M)\oplus H_{1}(W_{f})\) be the Poincare-Lefschetz dual of
\[(\operatorname{ks}(\overline{\nu}M,\partial\overline{\nu}M),\operatorname{ks }(W_{f},\partial W_{f}))=(0,\operatorname{ks}(W_{f},\partial W_{f})).\]
Since
\[j^{*}(\operatorname{ks}(N,\partial\overline{\nu}M\cup\partial W_{f}))= \operatorname{ks}(N,\partial N)=0,\]
by Theorem 2.1 (iv) and the fact that \(N\) is smooth, it follows from exactness of the top row and commutativity of the diagram that \((0,\gamma)\in\operatorname{Im}k\). The map \(k\colon H_{1}(E)\to H_{1}(\overline{\nu}M)\oplus H_{1}(W_{f})\) is induced by the inclusions \(\kappa_{1}\colon E\hookrightarrow\overline{\nu}M\) and \(\kappa_{2}\colon E\hookrightarrow W_{f}\). By Lemma 3.2,
\[H_{1}(E)\cong\oplus_{i=1}^{m}(H_{1}(M_{i})\oplus B_{i})\cong H_{1}(M)\oplus_{ i=1}^{m}B_{i}.\]
Let
\[(\alpha,\beta_{1},\ldots,\beta_{m})\in H_{1}(M)\oplus_{i=1}^{m}B_{i}\]
be such that \(k(\alpha,\beta_{1},\ldots,\beta_{m})=(0,\gamma)\in H_{1}(\overline{\nu}M) \oplus H_{1}(W_{f})\). Note that \(\kappa_{1}|_{B_{i}}=0\) and \(\kappa_{1}|_{H_{1}(M)}\) is an isomorphism. Since \(\kappa_{1}|_{B_{i}}=0\) it follows that \(\kappa_{1}(\alpha,\beta_{1},\ldots,\beta_{m})=\kappa_{1}|_{H_{1}(M)}(\alpha)\). Since \(\kappa_{1}(\alpha,\beta_{1},\ldots,\beta_{m})=0\), we have that \(\kappa_{1}|_{H_{1}(M)}(\alpha)=0\). Using that \(\kappa_{1}|_{H_{1}(M)}\) is an isomorphism, we deduce that \(\alpha=0\). Thus \(PD(\operatorname{ks}(W_{f},\partial W_{f}))=(0,\beta_{1},\ldots,\beta_{m})\in \operatorname{Im}(\oplus_{i=1}^{m}B_{i})\) is a sum of meridians of the connected components \(M_{i}\) of \(M\). It follows that \(\operatorname{ks}(W_{f},\partial W_{f})\) lies in \(A\), as desired.
Let \(L\cong S^{3}\subseteq S^{5}\) denote Lashof's non-smoothable 3-knot (see Section 2.2). Write
\[\operatorname{ks}(W_{f},\partial W_{f})=\sum_{i=1}^{m}a_{i}(PD^{-1}[\mu_{i}]),\]
for \(a_{i}\in\mathbb{Z}/2\) defined by this equality and the stipulation that we take \(a_{i}=0\) if \(PD^{-1}[\mu_{i}]=0\) in \(H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2)\). If \(a_{i}=1\) then we form a connected sum \(M_{i}\#L\) in an arbitrarily small 5-ball, while if \(a_{i}=0\) we leave \(M_{i}\) alone. Let \(g\colon Y\hookrightarrow N\) denote the resulting embedding. Define
\[\mathcal{I}:=\{i\mid a_{i}=1\}\subseteq\{1,\ldots,m\}.\]
Let \(W_{g}:=N\setminus\nu g(Y)\). Note that
\[W_{g}\cong W_{f}\cup_{\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i}}\bigsqcup_{\mathcal{I}}E_{L_{i}}.\]
That is, \(W_{g}\) is obtained from \(W_{f}\) and \(\bigsqcup_{\mathcal{I}}E_{L_{i}}\) by gluing along \(\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i}\), where \((S^{1}\times 0)_{i}\) is identified with a meridian to \(M_{i}\) and a meridian to \(L_{i}\), for each \(i\in\mathcal{I}\), and we extend to a tubular neighbourhood \((S^{1}\times D^{3})_{i}\) in \(\partial W_{f}\) and \(\partial E_{L_{i}}\) respectively. Also, note that
\[\partial W_{g}=\Big{(}\partial W_{f}\setminus\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i}\Big{)}\cup_{\sqcup_{\mathcal{I}}(S^{1}\times S^{2})_{i}}\Big{(}\bigsqcup_{\mathcal{I}}\partial E_{L_{i}}\setminus\sqcup_{\mathcal{I}}(S^{1}\times\mathring{D}^{3})_{i}\Big{)}.\]
Hence, \(\partial W_{g}\cup(\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i})\) decomposes as \(\partial W_{f}\cup\bigcup_{\mathcal{I}}\partial E_{L_{i}}\). Figure 2 shows an illustration of \(W_{g}\) when one Lashof knot is attached.
**Proposition 3.4**.: _We have that \(\operatorname{ks}(W_{g},\partial W_{g})=0\in H^{4}(W_{g},\partial W_{g};\mathbb{ Z}/2)\)._
Proof.: All homology and cohomology in this proof will be with \(\mathbb{Z}/2\) coefficients, and so to save space we omit them from the notation. Recall that \(\operatorname{ks}(W_{f},\partial W_{f})=\sum_{\mathcal{I}}PD^{-1}[\mu_{i}]\in H^{ 4}(W_{f},\partial W_{f})\). From the relative cohomology Mayer-Vietoris sequence of the pair
\[(W_{g},\partial W_{g}\cup(\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i}))=(W_{f},\partial W_{f})\cup(\sqcup_{\mathcal{I}}E_{L_{i}},\sqcup_{\mathcal{I}} \partial E_{L_{i}}),\]
using that \(W_{f}\cap(\sqcup_{\mathcal{I}}E_{L_{i}})=\partial W_{f}\cap(\sqcup_{\mathcal{I}}\partial E_{L_{i}})=\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i}\), we get, via an argument similar to that in the proof of Proposition 3.3, that
\[H^{4}(W_{g},\partial W_{g}\cup(\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i})) \cong H^{4}(W_{f},\partial W_{f})\oplus_{\mathcal{I}}H^{4}(E_{L_{i}},\partial E _{L_{i}}).\]
Hence the image of \(\operatorname{ks}(W_{g},\partial W_{g}\cup(\sqcup_{\mathcal{I}}(S^{1}\times D ^{3})_{i}))\) under this isomorphism is
\[(\operatorname{ks}(W_{f},\partial W_{f}),\operatorname{ks}(E_{L_{1}}, \partial E_{L_{1}}),\ldots,\operatorname{ks}(E_{L_{k}},\partial E_{L_{k}})) \in H^{4}(W_{f},\partial W_{f})\oplus_{\mathcal{I}}H^{4}(E_{L_{i}},\partial E _{L_{i}}).\]
Recall that by Lemma 2.5 we have that \(\operatorname{ks}(E_{L_{i}},\partial E_{L_{i}})=1\) for each \(i\in\mathcal{I}\), represented by \(PD^{-1}[\mu_{L_{i}}]\), the Poincare dual to a meridian of \(L_{i}\).
Consider the following diagram:
The upper row is an excerpt from the cohomology long exact sequence of the triple
\[\partial W_{g}\subseteq\partial W_{g}\cup(\sqcup_{\mathcal{I}}(S^{1}\times D^{3})_{i})\subseteq W_{g},\]
the top left vertical isomorphism is by excision and the bottom vertical isomorphisms use Poincare-Lefschetz duality. By naturality of the Kirby-Siebenmann obstruction (Proposition 2.3 applied twice), the upper middle vertical isomorphism sends
\[\kappa:=\operatorname{ks}(W_{g},\partial W_{g}\cup(\sqcup_{\mathcal{I}}(S^{1} \times D^{3})_{i}))\]
to
\[\big{(}\operatorname{ks}(W_{f},\partial W_{f}),\operatorname{ks}(E_{L_{1}}, \partial E_{L_{1}}),\ldots,\operatorname{ks}(E_{L_{k}},\partial E_{L_{k}}) \big{)}\in H^{4}(W_{f},\partial W_{f})\oplus_{\mathcal{I}}H^{4}(E_{L_{i}}, \partial E_{L_{i}}).\]
On the other hand by Theorem 2.1 (iv), \(j^{*}(\kappa)=\operatorname{ks}(W_{g},\partial W_{g})\in H^{4}(W_{g},\partial W _{g})\). By commutativity of the top right square it follows that
\[\big{(}\operatorname{ks}(W_{f},\partial W_{f}),\operatorname{ks}(E_{L_{1}}, \partial E_{L_{1}}),\ldots,\operatorname{ks}(E_{L_{k}},\partial E_{L_{k}}) \big{)}\]
maps under the right hand map of the middle row to
\[\operatorname{ks}(W_{g},\partial W_{g})\in H^{4}(W_{g},\partial W_{g}).\]
By commutativity of the bottom right square of the diagram, the Poincare-Lefschetz dual of the former, \((\sum_{\mathcal{I}}[\mu_{i}],\sum_{\mathcal{I}}[\mu_{L_{i}}])\in H_{1}(W_{f}) \oplus_{\mathcal{I}}H_{1}(E_{L_{i}})\), is sent to
\[\gamma:=PD\big{(}\operatorname{ks}(W_{g},\partial W_{g})\big{)}\in H_{1}(W_{g}),\]
the Poincare-Lefschetz dual of \(\operatorname{ks}(W_{g},\partial W_{g})\in H^{4}(W_{g},\partial W_{g})\). Note that the bottom row is the Mayer-Vietoris homology sequence of the decomposition \(W_{f}\cup_{\mathcal{I}}E_{L_{i}}=W_{g}\), and for each \(i\in\mathcal{I}\) we have \(\Phi([S^{1}\times 0]_{i})=([\mu_{i}],[\mu_{L_{i}}])\), where \([S^{1}\times 0]_{i}\) is the generator of \(H_{1}((S^{1}\times 0)_{i})\). Hence by linearity,

\[\Phi\big{(}\sum_{\mathcal{I}}[S^{1}\times 0]_{i}\big{)}=\big{(}\sum_{\mathcal{I}}[\mu_{i}],\sum_{\mathcal{I}}[\mu_{L_{i}}]\big{)}.\]

Thus \(\gamma=0\) by exactness, and since \(PD\) is an isomorphism it follows that \(\operatorname{ks}(W_{g},\partial W_{g})=0\).

Figure 2. A schematic diagram of \(W_{g}\) when one Lashof knot is attached.
The main result of this section follows.
Proof of Proposition 3.1.: Since \(\operatorname{ks}(W_{g},\partial W_{g})=0\), we can extend the standard smooth structure on \(\partial N\cup\overline{\nu}g(Y)\) to all of \(N\). Call the resulting smooth structure \(\sigma\). By construction, \(g(Y)\) is smooth in \(\sigma\), and \(\sigma\) agrees with the given smooth structure of \(N\) near \(\partial N\). Each connected sum of \(M_{i}\) with \(L_{i}\) can be done arbitrarily close to \(M_{i}=f(Y_{i})\), so we can assume that we altered \(f\) by an arbitrarily small homotopy.
## 4. Comparing with the standard smooth structure on \(N\)
Next, we need to compare the smooth structure \(\sigma\) we have just constructed with the given smooth structure \(\operatorname{std}\) on \(N\). The submanifold \(g(Y)\) is smooth in \(\sigma\), but is a priori not smooth in \(\operatorname{std}\). We aim to reduce to a finite collection of local problems, namely neighbourhoods \(V_{i}\subseteq\operatorname{Int}N\) where \(g(Y)\) need not be smooth in \(\operatorname{std}\). Then we will apply the argument that all 2-knots are smoothly slice [10, 11] to further modify \(g(Y)\) in each of these neighbourhoods \(V_{i}\), replacing \(g(Y)\cap V_{i}\) with a slice disc for \(g(Y)\cap\partial V_{i}\cong S^{2}\) that is smooth in the structure \(\operatorname{std}\). Our aim is the following proposition, which proves Step 2 from the introduction. The combination of Proposition 3.1 and Proposition 4.1 proves Theorem A.
**Proposition 4.1**.: _Let \(N\) be a compact, connected, smooth 5-dimensional manifold with \((\)possibly empty\()\) boundary, let \(Y\) be a compact 3-dimensional manifold with \((\)possibly empty\()\) boundary, and let \(g\colon Y\to N_{\sigma}\) be a smooth embedding for some \(\sigma\) such that \(\sigma\) and \(\operatorname{std}\) agree near \(\partial N\). Then \(g\) is homotopic rel. boundary, via an arbitrarily small homotopy, to a smooth embedding in \(N_{\operatorname{std}}\)._
To begin, recall that the structures \(\sigma\) and \(\operatorname{std}\) correspond via smoothing theory (Theorem 2.1) to two lifts \(\sigma,\operatorname{std}\colon N\to\operatorname{BO}\) of \(\tau_{N}\colon N\to\operatorname{BTOP}\). The difference between these lifts gives rise to a map \(N\to\operatorname{TOP}/\operatorname{O}\), and hence to an element \(\operatorname{ks}(\sigma,\operatorname{std})\in H^{3}(N,\partial N;\mathbb{Z}/2)\cong H_{2}(N;\mathbb{Z}/2)\). In this section we redefine \(M:=g(Y)\).
**Lemma 4.2**.: _The class \(PD(\operatorname{ks}(\sigma,\operatorname{std}))\in H_{2}(N;\mathbb{Z}/2)\) can be represented by a closed surface \(S\subseteq\operatorname{Int}N\), which is smoothly embedded in \(\sigma\) and is transverse to \(M:=g(Y)\)._
Proof.: We consider the group \(\mathcal{N}_{2}(N)\) of unoriented surfaces mapping to \(N\), up to bordism. The Atiyah-Hirzebruch spectral sequence for this has \(E^{2}\)-page
\[E^{2}_{p,q}\cong H_{p}(N;\mathcal{N}_{q}).\]
The unoriented bordism groups are given [13] in the range \(q\in\{0,1,2\}\) by \(\mathcal{N}_{0}\cong\mathbb{Z}/2\cong\mathcal{N}_{2}\), and \(\mathcal{N}_{1}=0\). Using that the \(q=1\) row on the \(E^{2}\)-page consists entirely of zeros, we have an exact sequence
\[H_{3}(N;\mathbb{Z}/2)\xrightarrow{d^{3}_{3,0}}H_{0}(N;\mathbb{Z}/2)\to\mathcal{N}_{2}(N)\to H_{2}(N;\mathbb{Z}/2)\to 0.\]
In particular every element of \(H_{2}(N;\mathbb{Z}/2)\) lifts to \(\mathcal{N}_{2}(N)\), and so can be represented by a map \(h\colon\Sigma\to N\) from some closed surface \(\Sigma\) into \(N\).
By [10, Theorem 2.2.6] we can approximate \(h\) by a smooth map in \([N]_{\sigma}\), and by [10, Theorems 2.2.12 and 2.2.14] we can approximate the result by an embedding, \(h^{\prime}\colon\Sigma\to N\). We write \(S:=h^{\prime}(\Sigma)\). Since both \(S\) and \(M\) are smooth in \(\sigma\), we apply transversality to complete the proof.
By Proposition 2.4, by an arbitrarily small isotopy of \(\sigma\) away from \(S\) and \(\partial N\), and hence of \(M\cap(N\setminus S)\), we can assume that the smooth structures \(\sigma\) and \(\operatorname{std}\) agree in the complement of the surface \(S\). Replace \(M\) and \(\sigma\) by the outcomes of this isotopy.
Let \(\nu S\) denote a smooth open tubular neighbourhood of \(S\) in the smooth structure \(\sigma\). We have that \(M\setminus\nu S\) is smooth in \([N\setminus\nu S]_{\operatorname{std}}\). By compactness and transversality, \(S\pitchfork M\) consists of finitely many points, \(p_{1},\dots,p_{n}\) say. Moreover, the intersection \(M\cap\partial\overline{\nu}S\) consists of a copy of
\(S^{2}\) for each point \(p_{i}\in S\pitchfork M\), which bounds a 3-ball \(D^{3}_{i}\subseteq M\cap\overline{\nu}S\) with the centre of \(D^{3}_{i}\) equal to \(p_{i}\). In fact the intersection \(M\cap\overline{\nu}S\) comprises exactly \(\bigcup_{i=1}^{n}D^{3}_{i}\); the \(D^{3}_{i}\) are pairwise disjoint.
Since \(D^{3}_{i}\) is locally flat and codimension 2, it has a normal bundle [10]. We take a normal bundle of each \(D^{3}_{i}\) in \(\overline{\nu}S\). We obtain an inclusion of pairs
\[(D^{3}_{i}\times\mathbb{R}^{2},S^{2}\times\mathbb{R}^{2})\subseteq(\overline{ \nu}S,\partial\overline{\nu}S).\]
Pull back the smooth structure std to this to obtain
\[V_{i}:=[D^{3}_{i}\times\mathbb{R}^{2}]_{\text{std}}.\]
This \(V_{i}\) is a smooth manifold that is homeomorphic to \(D^{3}\times\mathbb{R}^{2}\), with boundary \(\partial V_{i}\) identified with \(S^{2}\times\mathbb{R}^{2}\). In the boundary, \(M\cap\partial V_{i}\) is a 2-sphere \(T_{i}\) that is identified with \(S^{2}\times\{0\}\subseteq S^{2}\times\mathbb{R}^{2}\).
We remark that \((V_{i},\partial V_{i})\) may not be diffeomorphic rel. boundary to \((D^{3}\times\mathbb{R}^{2},S^{2}\times\mathbb{R}^{2})\). In addition, while \(M\cap V_{i}\) is smooth in \(\sigma\), this need not be the case in \(\operatorname{std}\).
**Lemma 4.3**.: _The 2-sphere \(T_{i}\subseteq\partial V_{i}\) bounds a compact, orientable 3-manifold \(Z_{i}\) smoothly embedded in \(V_{i}=[D^{3}_{i}\times\mathbb{R}^{2}]_{\text{std}}\)._
Proof.: Consider the sequence of maps
\[f\colon S^{2}\times D^{2}\hookrightarrow S^{2}\times\mathbb{R}^{2}\to \mathbb{R}^{2}\xrightarrow{\cong}S^{2}\setminus\{*\}\hookrightarrow S^{2} \hookrightarrow\mathbb{CP}^{2}\hookrightarrow\mathbb{CP}^{\infty}.\]
These are given respectively by the inclusion, the projection, the inverse of stereographic projection, the inclusion, identification with \(\mathbb{CP}^{1}\), and the standard inclusion again. Choose another embedding \(\mathbb{CP}^{1}\to\mathbb{CP}^{2}\) that intersects our original \(\mathbb{CP}^{1}\) transversely in exactly the image of \(\{0\}\in\mathbb{R}^{2}\) under \(\mathbb{R}^{2}\xrightarrow{\cong}S^{2}\setminus\{*\}\hookrightarrow S^{2}\xrightarrow{\cong}\mathbb{CP}^{1}\). Let \(\ell\colon\mathbb{CP}^{1}\to\mathbb{CP}^{\infty}\) denote the composition of this embedding with the inclusion \(\mathbb{CP}^{2}\hookrightarrow\mathbb{CP}^{\infty}\). We observe that \(f^{-1}(\ell(\mathbb{CP}^{1}))=T_{i}\).
Let \(V^{\prime}_{i}:=D^{3}\times D^{2}\subseteq D^{3}\times\mathbb{R}^{2}\). We seek an extension of the form:
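Presumably the intended square is the following, with \(F\) the dashed extension of \(f\); this is our sketch of the missing diagram (tikz-cd assumed):

```latex
% Reconstruction sketch of the extension problem; requires \usepackage{tikz-cd}.
\begin{tikzcd}
S^{2}\times D^{2} \arrow[r, "f"] \arrow[d, hook] & \mathbb{CP}^{\infty} \\
V^{\prime}_{i} \arrow[ru, dashed, "F"'] &
\end{tikzcd}
```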
Since \(\mathbb{CP}^{\infty}\simeq K(\mathbb{Z},2)\), there is a unique obstruction in
\[H^{3}(D^{3}\times D^{2},S^{2}\times\mathbb{R}^{2};\pi_{2}(\mathbb{CP}^{\infty }))\cong H^{3}(D^{3},S^{2};\pi_{2}(\mathbb{CP}^{\infty}))\cong H^{3}(D^{3},S^ {2};\mathbb{Z})\cong\mathbb{Z}\]
to extending \(f\) to \(F\). However the boundary of the \(D^{3}\) in question is \(T_{i}\subseteq S^{2}\times D^{2}\). The image \(f(S^{2}\times\{0\})\) is a point in \(\mathbb{CP}^{\infty}\), which represents the trivial element in \(\pi_{2}(\mathbb{CP}^{\infty})\cong\mathbb{Z}\). As a result the obstruction cocycle is trivial, and hence the obstruction cohomology class is too. Thus we obtain a map \(F\colon V^{\prime}_{i}\to\mathbb{CP}^{\infty}\) as desired.
Using that \(V^{\prime}_{i}\) is 5-dimensional, homotope \(F\) rel. \(F|_{S^{2}\times D^{2}}=f\) to a map \(F^{\prime}\) with image in \(\mathbb{CP}^{2}\). We can and shall assume, by perturbing \(F^{\prime}\) further if necessary, that the inverse image of \(\ell(\mathbb{CP}^{1})\subseteq\mathbb{CP}^{2}\) lies in the interior of \(V^{\prime}_{i}\). Next we perturb \(F^{\prime}\) to be smooth in the smooth structure on the interior of \(V^{\prime}_{i}\) induced by std, and so that \(F^{\prime}\) is transverse to \(\ell(\mathbb{CP}^{1})\). By making the perturbation sufficiently small, we can assume that the inverse image still lies in \(V^{\prime}_{i}\). The inverse image of \(\ell(\mathbb{CP}^{1})\) is thus a smooth 3-manifold \(Z_{i}\) in \(V_{i}\) with boundary \(S^{2}\times\{0\}\subseteq\partial V_{i}\). As it is the inverse image of a closed set, \(Z_{i}\) is closed, and since \(Z_{i}\subseteq V^{\prime}_{i}\) and \(V^{\prime}_{i}\) is compact, we see that \(Z_{i}\) is compact. Since \(V_{i}\) is orientable, \(w_{1}(Z_{i})=w_{1}(\nu Z_{i})\). However \(w_{1}(\nu Z_{i})\) is zero because \(\nu Z_{i}\) can be obtained as the pull back of the normal bundle of \(\ell(\mathbb{CP}^{1})\subseteq\mathbb{CP}^{2}\), and \(w_{1}(\nu\mathbb{CP}^{1})\) is necessarily trivial since \(H^{1}(\mathbb{CP}^{1};\mathbb{Z}/2)=0\). It follows that \(w_{1}(Z_{i})=0\) and so \(Z_{i}\) is orientable. Then recall that \(V^{\prime}_{i}\subseteq V_{i}\), to see that we have constructed the 3-manifold \(Z_{i}\subseteq V_{i}\) we desire.
Now we can prove Proposition 4.1, which is the goal of this section.
Proof of Proposition 4.1.: To prove the proposition, it remains to find, for each \(i=1,\dots,n\), a smooth slice disc \(D^{3}\subseteq V_{i}\) with \(\partial D^{3}=T_{i}=S^{2}\times\{0\}\subseteq S^{2}\times\mathbb{R}^{2}= \partial V_{i}\). By Lemma 4.3, for each \(i\) we have a smooth, compact, orientable 3-manifold \(Z_{i}\) with \(\partial Z_{i}=T_{i}\). Since \(Z_{i}\) is orientable and 3-dimensional, it is parallelisable, and thus is in particular spin. We now apply the argument of Sunukjian [14] from his Section 5 and the proof of his Theorem 6.1. As mentioned in the
introduction, this is similar to and was inspired by Kervaire's theorem [13] that every \(2\)-knot is slice. For the convenience of the reader we give an outline here.
First perform ambient \(1\)-surgeries on \(Z_{i}\) to arrange that \(\pi_{1}(V_{i}\setminus Z_{i})\) is cyclic. By [14, Proposition 5.1 and Lemma 5.2], there is a spin structure on \(Z_{i}\) such that every spin structure preserving surgery on \(Z_{i}\) can be performed ambiently. Here we use that \(\pi_{1}(V_{i}\setminus Z_{i})\) is cyclic, so that every circle in \(Z_{i}\) bounds an embedded \(2\)-disc whose interior lies in \(V_{i}\setminus Z_{i}\). Using this spin structure, the union \(Z_{i}\cup D^{3}\) is a closed, smooth, spin \(3\)-manifold. The group \(\Omega_{3}^{\text{Spin}}=0\), so \(Z_{i}\cup D^{3}\) is spin null-bordant. By [14, Lemma 5.4], there is a sequence of spin structure compatible surgeries on circles in \(Z_{i}\) that convert it to \(D^{3}\). Perform these surgeries ambiently, and obtain a smoothly embedded \(D^{3}\subseteq[V_{i}]_{\text{std}}\), as desired, in the restriction to \(V_{i}\) of the smooth structure std. Replacing \(M\cap V_{i}\) with this \(3\)-ball, for each \(i\), yields a smooth embedding \(g^{\prime}\colon Y\hookrightarrow N\) in the smooth structure std. By making the \(V_{i}\) as small as we please, and using that \(\pi_{3}(V_{i},\partial V_{i})=0\), we can arrange that we changed \(g\) by an arbitrarily small homotopy.
As mentioned above, Propositions 3.1 and 4.1 combine to complete the proof of Theorem A, noting that in both cases all the modifications we made to the embedding, from \(f\) to \(g\) to \(g^{\prime}\), consisted of local homotopies or isotopies, in all cases supported outside a neighbourhood of \(\partial N\).
## 5. Conditions for smoothing up to isotopy
As shown by Lashof's \(3\)-knot [15] (Section 2.2), it is not in general possible to isotope a locally flat embedding of a \(3\)-manifold to a smooth embedding. Our main result shows this is possible with an arbitrarily small homotopy. Here we discuss the extent to which smoothing up to isotopy is possible.
As above let \(Y=Y_{1}\sqcup\dots\sqcup Y_{m}\) be a compact \(3\)-manifold with connected components \(Y_{i}\), and let \(N\) be a compact, connected, smooth \(5\)-manifold. We will use the Kirby-Siebenmann invariant \(\operatorname{ks}(W_{f},\partial W_{f})\in H^{4}(W_{f},\partial W_{f};\mathbb{Z}/2)\) of the exterior \(W_{f}:=N\setminus\nu f(Y)\), and we will use the relative Kirby-Siebenmann invariant \(\operatorname{ks}(\sigma,\operatorname{std})\in H^{3}(N,\partial N;\mathbb{Z}/2)\) comparing the smooth structure \(\sigma\) on \(N\) arising from Step 1 (Proposition 3.1) with the given smooth structure std on \(N\). These invariants were recalled in detail in Section 2. In practice, these invariants are not always easy to evaluate. One way to do this for (i) could be to use the ideas of Kwasik and Vogel [11, 12] discussed in Section 2.2 to relate \(\operatorname{ks}(W_{f},\partial W_{f})\) to the signature of an appropriate \(4\)-manifold.
**Scholium 5.1**.: _Let \(f\colon Y\to N\) be a locally flat proper topological embedding that is smooth near \(\partial Y\)._
1. _If_ \(\operatorname{ks}(W_{f},\partial W_{f})=0\) _then there exists a smooth structure_ \(\sigma\) _on_ \(N\) _with respect to which_ \(f\) _is smooth._
2. _If in addition_ \(\langle\operatorname{ks}(\sigma,\operatorname{std}),[f(Y_{i})]\rangle=0\in \mathbb{Z}/2\) _for each connected component_ \(Y_{i}\) _of_ \(Y\)_, then_ \(f\) _is topologically isotopic rel. boundary, via an arbitrarily small isotopy, to a smooth embedding._
Proof.: If \(\operatorname{ks}(W_{f},\partial W_{f})=0\), then Step 1 (Proposition 3.1) can be completed without connect summing with any Lashof knots. We obtain a smooth structure \(\sigma\) on \(N\) in which \(f\) is smooth, that agrees with the standard smooth structure on \(N\) near \(\partial N\). Now suppose \(\langle\operatorname{ks}(\sigma,\operatorname{std}),[f(Y_{i})]\rangle=0\) for each \(i=1,\dots,m\). Let \(S\) be an embedded surface Poincaré dual to \(\operatorname{ks}(\sigma,\operatorname{std})\) that intersects \(f(Y)\) transversely (such an \(S\) was produced in Lemma 4.2). The condition implies, by intersection theory, that for each \(i\) the count of transverse intersection points between \(S\) and \(f(Y_{i})\) is even. For every \(i\), tube \(S\) to itself, along \(f(Y_{i})\), to obtain a new surface \(S^{\prime}\), in the same \(\mathbb{Z}/2\)-homology class, \([S]=[S^{\prime}]\in H_{2}(N;\mathbb{Z}/2)\), and such that \(S^{\prime}\cap f(Y)=\emptyset\). It then follows from Proposition 2.4 that \(f\) is isotopic to a smooth embedding in std.
|
2310.20662 | Harmonics of Lepton-Jet Correlations in inclusive and diffractive
scatterings | Based on our previous work, we study the harmonic coefficient of both
inclusive and diffractive azimuthal angle dependent lepton-jet correlations in
Hadron-Electron Ring Accelerator and the future electron-ion collider.
Numerical calculations for inclusive and diffractive harmonics and the ratio of
harmonics in $e+\text{Au}$~and $e+p$ indicate their strong discriminating power
for non-saturation model and saturation model. Additionally, we demonstrate
that the t-dependent diffractive harmonics can serve as novel observables for
nuclear density profile. | Xuan-Bo Tong, Bo-Wen Xiao, Yuan-Yuan Zhang | 2023-10-31T17:28:13Z | http://arxiv.org/abs/2310.20662v2 | # Harmonics of Lepton-Jet Correlations in inclusive and diffractive scatterings
###### Abstract
Based on our previous work, we study the harmonic coefficient of both inclusive and diffractive azimuthal angle dependent lepton-jet correlations in Hadron-Electron Ring Accelerator and the future electron-ion collider. Numerical calculations for inclusive and diffractive harmonics and the ratio of harmonics in \(e+\text{Au}\) and \(e+p\) indicate their strong discriminating power for non-saturation model and saturation model. Additionally, we demonstrate that the t-dependent diffractive harmonics can serve as novel observables for nuclear density profile.
## I Introduction
In a recent paper [1], we have demonstrated how the harmonics of lepton-jet correlation can serve as a new probe for saturation phenomenon in deeply inelastic scattering (DIS). In this paper, we present a more detailed elaboration on harmonics of lepton-jet correlation and extend the discussion to diffractive lepton-jet production.
Gluon saturation [2; 3; 4; 5; 6; 7] is a phenomenon occurring in protons and nuclei probed in high energy collisions. Large-\(x\) partons radiate small-\(x\) gluons, which increases the density of small-\(x\) gluons. These small-\(x\) gluons come into close proximity, interact, and recombine. The two effects compete until the small-\(x\) gluon density saturates. The typical transverse momentum associated with saturated gluons is referred to as the saturation scale \(Q_{s}\).
The Color Glass Condensate (CGC) effective theory is the theoretical framework used to describe saturated gluons. In the CGC effective theory, large-\(x\) partons are treated as static and localized color sources, while small-\(x\) partons are modeled as classical and dynamical fields. The relationship between the sources and fields is governed by the classical Yang-Mills equation. When considering the interaction of an energetic parton with the classical field, light-like Wilson lines emerge, which resum the multiple interactions between the high energy parton and the classical field. The two-point correlator of a quark Wilson line and an antiquark Wilson line yields the dipole scattering matrix. The Wilson lines and the dipole scattering matrix are the building blocks of small-\(x\) physics. More detailed descriptions of the CGC framework can be found in the reviews [8; 9; 10; 11; 12; 13].
Two-particle correlations, such as di-jet [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47], di-hadron [48; 49; 50; 51; 52; 53] and jet plus color-neutral particle [54; 17], have been extensively utilized to explore various aspects of saturation at the future electron-ion collider (EIC) [55; 56; 57; 58; 59]. These correlations allow for investigations into Weizsacker-Williams gluon distributions [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], including the linearly polarized one [22; 23; 24; 25; 26; 27; 28; 29; 31; 32]. The measurement of the unpolarized dipole gluon distribution [16; 17] and its linearly polarized counterpart [31; 32] is also possible. Furthermore, multi-gluon correlations within the nucleus target can be probed [37; 14; 15; 29; 31], and the Wigner function can be investigated within the small-\(x\) framework [48; 41; 43]. The separation of Sudakov resummation and small-\(x\) resummation has been elucidated [18; 60]. Besides the EIC, the two-particle correlations have also been extensively discussed at the LHC and RHIC (see e.g., [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]).
Typically, two-particle correlations exhibit a back-to-back configuration in the transverse plane perpendicular to the beam direction. This is referred to as the correlation limit, where the imbalance momentum \(|\vec{q}_{\perp}|=|\vec{k}_{1\perp}+\vec{k}_{2\perp}|\) is much softer than the relative momentum \(|\vec{P}_{\perp}|=|(\vec{k}_{1\perp}-\vec{k}_{2\perp})/2|\). In this limit, the soft imbalance momentum can reach the saturation region \(|\vec{q}_{\perp}|\lesssim Q_{s}\). Thus the two-particle correlation in the correlation limit serves as a robust probe of saturation.
A recent addition to the repertoire of two-particle correlation is the lepton-jet correlation in DIS [110; 111].
Figure 1: Lepton-jet in transverse plane perpendicular to beam direction. The final jet radiates soft gluons, while the final lepton emits soft photons.
Fig. 1 shows the lepton-jet production in the transverse plane, with the final jet radiating gluons and the final lepton emitting photons. The lepton-jet correlation offers a valuable avenue for studying the transverse momentum dependent (TMD) quark distribution, as well as the TMD quark Sivers function [110, 111, 112, 113, 114, 115, 116, 117], and the Collins fragmentation function [112, 113, 114]. Notably, in the small-\(x\) region, the TMD quark distribution contains critical information about gluon saturation. Consequently, the lepton-jet correlation emerges as an opportunity to probe this intriguing phenomenon [1].
Following the theoretical papers on the lepton-jet correlation, a growing number of experimental studies have emerged. The first measurement of the lepton-jet correlation at the Hadron-Electron Ring Accelerator (HERA) [118] with the H1 detector has been published [119]. Additionally, event generation and detector-response simulations have been conducted to investigate lepton-jet production at the EIC kinematics [112, 114, 120].
The azimuthal angle anisotropy, or harmonics, of the lepton-jet correlation serves as the observable in the search for gluon saturation. Previous studies have indicated that the azimuthal angle anisotropy is caused by soft gluon radiation [121, 122], as the soft gluon radiation from the final jet tends to align with the final jet. In the small-\(x\) formalism, the initial TMD quark distribution arises from dipole-nucleus multiple scattering and small-\(x\) gluon radiation, as shown schematically in the dipole picture. The transverse momentum of the initial quark, which is determined by gluon saturation, does not exhibit a preferred angle. Therefore, gluon saturation tends to suppress the anisotropy.
We calculate the harmonics of the lepton-jet correlation within both a saturation framework and a non-saturation framework. The saturation framework is based on small-\(x\) factorization and resummation [16, 123, 124, 125, 126], while the non-saturation framework refers to TMD factorization [121, 122, 126, 127] with collinear PDFs. As the saturation scale squared is proportional to the nuclear size, \(Q_{s}^{2}\varpropto A^{1/3}\) with \(A\) representing the nucleon number, the suppression of the harmonics is more pronounced in a large nucleus compared to the proton. We observe this effect when comparing the harmonics for proton and gold nucleus targets.
Motivated by recent papers [39, 128], we further study the harmonics of diffractive lepton-jet production in DIS. In the small-\(x\) region, diffractive parton distributions [129] are related to the color-dipole S-matrix [130; 131; 132; 133; 134; 135], allowing us to probe gluon saturation through DPDFs. For comprehensive overviews and recent progress on diffraction within the dipole picture, refer to the reviews [136; 137; 8; 138] and Refs. [138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165]. Similar to the discussion of semi-inclusive diffractive deep inelastic scattering (SIDDIS) at small \(x\) [128], we expect that QCD factorization also holds for diffractive lepton-jet production. Although the diffractive lepton-jet process is defined in the lepton-nucleon center-of-mass frame, the rapidity gap is nearly the same as the rapidity gap in the photon-nucleon center-of-mass frame, \(Y_{\rm IP}\sim\ln(1/x_{\rm IP})\). We calculate the harmonics for the diffractive process and observe a decrease of the harmonics when going from a proton target to a gold target. The \(t\)-dependent harmonics are found to be sensitive to different nuclear density profiles.
The rest of this paper is organized as follows. In Sec. II, we discuss the inclusive lepton-jet correlation. In Sec. II.1, we derive the azimuthal angle dependent lepton-jet correlation in the small-\(x\) framework. Then, in Sec. II.2, we obtain the harmonics and their analytical expression via the saddle point approximation. The QED correction to the harmonics is discussed in Sec. II.3. Comprehensive numerical calculations of the inclusive lepton-jet harmonics are presented in Sec. II.4. In Sec. III, we explore the diffractive lepton-jet correlation. In Sec. III.1, we demonstrate that the rapidity gap of diffractive lepton-jet production is the same as in the semi-inclusive diffractive DIS process. The numerical calculations of the diffractive harmonics are presented in Sec. III.2.
## II Lepton-jet correlation
In deeply inelastic scattering, an energetic lepton scatters off a proton or nucleus target
\[\ell(k)+\mathrm{A}(p)\to\ell^{\prime}\left(k_{\ell}\right)+\mathrm{Jet}\left(k_ {J}\right)+X. \tag{1}\]
In this process, we detect the scattered lepton and final jet, and measure the azimuthal angle between final lepton and jet. The momentum and rapidity of the outgoing lepton are denoted as \(k_{\ell}\) and \(y_{\ell}\), while the momentum and rapidity of the final jet are \(k_{J}\) and \(y_{J}\).
At leading order, the differential cross-section of lepton-jet correlation in the correlation limit can be expressed as
\[\frac{d^{5}\sigma^{(0)}}{dy_{l}d^{2}P_{\perp}d^{2}q_{\perp}}=\sigma_{0}\int d^{ 2}v_{\perp}\delta^{(2)}(q_{\perp}-v_{\perp})xf_{q}(x,v_{\perp}) \tag{2}\]
where \(\sigma_{0}=(\alpha_{e}^{2}/\hat{s}Q^{2})[2(\hat{s}^{2}+\hat{u}^{2})/Q^{4}]\) with \(\hat{s}\), \(\hat{u}\) as Mandelstam variables of the partonic subprocess and \(Q^{2}=-(k-k_{\ell})^{2}\) as the virtuality of the photon. The \(x\) is the longitudinal momentum fraction of the incoming quark with respect to the target proton or nucleus. At this order and considering the small initial quark transverse momentum \(v_{\perp}\), the rapidities of the two final particles are correlated due to the constraints \(1=\frac{k_{\perp}}{\sqrt{s_{eN}}}\left(e^{y_{\ell}}+e^{y_{J}}\right)\) and \(x=\frac{k_{\perp}}{\sqrt{s_{eN}}}\left(e^{-y_{\ell}}+e^{-y_{J}}\right)\), where \(\sqrt{s_{eN}}\) is the center-of-mass energy of the incoming lepton and nucleon. The \(f_{q}(x,v_{\perp})\) is the unintegrated quark distribution in the small-\(x\) framework; its expression in coordinate space [166; 167; 168; 169] after the Fourier transform reads
\[xf_{q}(x,b_{\perp})=\frac{N_{c}S_{\perp}}{8\pi^{4}}\int d\epsilon_{ f}^{2}d^{2}r_{\perp}\frac{(\vec{b}_{\perp}+\vec{r}_{\perp})\cdot\vec{r}_{\perp}}{| \vec{b}_{\perp}+\vec{r}_{\perp}||\vec{r}_{\perp}|}\] \[\qquad\times\epsilon_{f}^{2}K_{1}(\epsilon_{f}|\vec{b}_{\perp}+ \vec{r}_{\perp}|)K_{1}\left(\epsilon_{f}|\vec{r}_{\perp}|\right)\] \[\qquad\times\left[1+\mathcal{S}_{x}(b_{\perp})-\mathcal{S}_{x}(b _{\perp}+r_{\perp})-\mathcal{S}_{x}(r_{\perp})\right]\,. \tag{3}\]
In the above expression, \(S_{\perp}\) represents the averaged transverse area of the target hadron, while \(\mathcal{S}_{x}(r_{\perp})\) denotes the dipole scattering matrix with \(r_{\perp}\) as the transverse size of the dipole. The \(\epsilon_{f}^{2}=z(1-z)Q^{2}\) involves the momentum fraction \(z\) of the quark/antiquark in the dipole. In the CGC, the distribution of the initial quark transverse momentum \(\vec{v}_{\perp}\) is isotropic, which results in the leading-order lepton-jet correlation, Eq. (2), being independent of the azimuthal angle.
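As a quick numerical illustration of these leading-order constraints, the following minimal sketch (not part of the original analysis; it assumes \(k_{\perp}\simeq P_{\perp}\) in the correlation limit) recovers the kinematic values used later in the paper:

```python
# Minimal sketch of the leading-order kinematic constraints quoted above,
# assuming k_perp ~ P_perp in the correlation limit.
import numpy as np

def lo_x(sqrt_s_eN, k_perp, y_l, y_J):
    """Leading-order momentum fraction of the incoming quark."""
    return k_perp / sqrt_s_eN * (np.exp(-y_l) + np.exp(-y_J))

sqrt_s, P_perp, y_l = 89.0, 4.0, 2.41       # EIC-like kinematics used later
# Solve 1 = (k_perp/sqrt(s)) (e^{y_l} + e^{y_J}) for y_J at fixed y_l:
y_J = np.log(sqrt_s / P_perp - np.exp(y_l))
print(f"y_J = {y_J:.3f}, x = {lo_x(sqrt_s, P_perp, y_l, y_J):.4f}")
# Minimal reachable x (both rapidities equal): x_min ~ 4 P_perp^2 / s_eN
print(f"x_min ~ {4 * P_perp**2 / sqrt_s**2:.4f}")
```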
### Azimuthal angle dependent lepton-jet correlation
#### ii.1.1 One soft gluon radiation
Soft gluon radiations from the final jet introduce the azimuthal angle dependence. The azimuthal angle \(\phi\) is defined as the angle between the imbalance momentum \(\vec{q}_{\perp}\) and the relative momentum \(\vec{P}_{\perp}\). We start with a single soft gluon radiation.
At one-loop order, one additional soft gluon radiation introduces the azimuthal angle dependence
\[\frac{d^{5}\sigma^{(1)}}{dy_{l}d^{2}P_{\perp}d^{2}q_{\perp}}= \sigma_{0}\int d^{2}v_{\perp}xf_{q}(x,v_{\perp}) \tag{4}\] \[\qquad\times\int d^{2}k_{g\perp}S(k_{g\perp})\delta^{(2)}(q_{ \perp}+k_{g\perp}-v_{\perp})\,\]
where \(S(k_{g\perp})\) is the eikonal formula, which represents the probability of one gluon radiation from the initial quark and the final jet,
\[S(k_{g\perp})= g^{2}C_{F}\int\frac{dy_{g}}{2(2\pi)^{3}}\frac{2k_{J}\cdot k_{q}}{k _{J}\cdot k_{g}\ k_{q}\cdot k_{g}}. \tag{5}\]
Here, \(k_{q}\) and \(k_{J}\) refer to the momenta of the incoming quark and final jet, respectively. In the calculation of the eikonal formula, it is necessary to subtract the soft gluon inside the jet cone by imposing the constraint
\[\Delta_{k_{g}k_{J}}=(y_{g}-y_{J})^{2}+(\phi_{g}-\phi_{J})^{2}>R^{2}. \tag{6}\]
The \(y_{g},y_{J}\) and \(\phi_{g},\phi_{J}\) represent the rapidities and azimuthal angles of the gluon and jet, respectively. The relative angle between one soft gluon and the jet \(\phi_{g}-\phi_{J}\) is the azimuthal angle \(\phi\) under the correlation limit. Fig. 2 demonstrates the subtraction of soft gluons inside the jet cone. After the subtraction, the eikonal formula \(S(k_{g\perp})\) is as follows:
\[S(k_{g\perp})=\frac{g^{2}C_{F}}{(2\pi)^{3}}\frac{1}{k_{g\perp}^{ 2}}\Big{\{}\ln\frac{Q^{4}}{k_{g\perp}^{2}k_{J\perp}^{2}}+\frac{2\cos\phi}{ \sin\phi}(\pi-\phi)-2y_{+}\] \[-\frac{2\cos\phi}{\sin\phi}[\tan^{-1}(\frac{e^{y_{+}}-\cos\phi}{ \sin\phi})-\tan^{-1}(\frac{e^{y_{-}}-\cos\phi}{\sin\phi})]\Big{\}}. \tag{7}\]
where \(y_{\pm}=\pm\sqrt{R^{2}-\phi^{2}}\). By performing the harmonic analysis with respect to the azimuthal angle, the eikonal formula can be expressed as \(S(k_{g\perp})=S_{\text{iso}}(k_{g\perp})+S_{\text{aniso}}(k_{g\perp})\). The isotropic and anisotropic components are given by
\[S_{\text{iso}}(k_{g\perp})= \frac{\alpha_{s}C_{F}}{2\pi^{2}k_{g\perp}^{2}}\Big{[}\ln\frac{Q^{ 2}}{k_{g\perp}^{2}}+\ln\frac{Q^{2}}{k_{J\perp}^{2}}+c_{0}(R)\Big{]}\, \tag{8}\] \[S_{\text{aniso}}(k_{g\perp})= \frac{\alpha_{s}C_{F}}{2\pi^{2}k_{g\perp}^{2}}2\sum_{n=1}^{\infty }c_{n}(R)\cos n\phi\.\]
The harmonic coefficients of the final jet cone \(R\) can be evaluated using the following formula
\[c_{n}(R)= \frac{2}{\pi}\int_{0}^{R}d\phi\left\{\frac{\cos\phi}{\sin\phi} \left[(\pi-\phi)-\tan^{-1}\left(\frac{e^{y_{+}}-\cos\phi}{\sin\phi}\right) \right.\right.\] \[\left.\left.+\tan^{-1}\left(\frac{e^{y_{-}}-\cos\phi}{\sin\phi} \right)\right]-y_{+}\right\}\cos n\phi\] \[+\frac{2}{\pi}\int_{R}^{\pi}d\phi\frac{\cos\phi}{\sin\phi}(\pi- \phi)\cos n\phi\, \tag{9}\]
The two integration regions come from the constraint \(|\phi|\leq R\) for the terms of Eq. (7) containing \(y_{\pm}\). The above equations are general expressions valid for both large and small \(R\). For a small jet cone with \(R\ll 1\), the simplified expressions of \(S_{\text{iso}}(k_{g\perp})\) and \(c_{n}(R)\) can be found in the paper by Hatta et al. [121].
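The coefficients \(c_{n}(R)\) can be evaluated by direct quadrature of Eq. (9); the following is an illustrative sketch (our own transcription, not the authors' code):

```python
# A rough numerical sketch (direct quadrature of Eq. (9)) for the Fourier
# coefficients c_n(R) of the final jet cone.
import numpy as np
from scipy.integrate import quad

def c_n(n, R):
    def inner(phi):  # region 0 < phi < R, where y_pm = ±sqrt(R^2 - phi^2)
        yp = np.sqrt(R**2 - phi**2)
        cot = np.cos(phi) / np.sin(phi)
        bracket = ((np.pi - phi)
                   - np.arctan((np.exp(yp) - np.cos(phi)) / np.sin(phi))
                   + np.arctan((np.exp(-yp) - np.cos(phi)) / np.sin(phi)))
        return (cot * bracket - yp) * np.cos(n * phi)
    def outer(phi):  # region R < phi < pi
        return np.cos(phi) / np.sin(phi) * (np.pi - phi) * np.cos(n * phi)
    i1, _ = quad(inner, 1e-8, R)
    i2, _ = quad(outer, R, np.pi - 1e-8)
    return 2.0 / np.pi * (i1 + i2)

for n in range(4):
    print(f"c_{n}(R=0.4) = {c_n(n, 0.4):+.3f}")
```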
In this one-loop order calculation, we assume the validity of small-\(x\) factorization for the back-to-back lepton-jet production. A more rigorous demonstration of the small-\(x\) factorization of the lepton-jet correlation would involve subtracting the rapidity divergences and ensuring the cancellation of the infrared divergences, as demonstrated in previous studies [25; 26; 170; 171; 29]. The subtracted rapidity divergence from real and virtual diagrams is then
Figure 2: (a) Soft gluon radiation of the final jet before subtraction. (b) Soft gluon radiation of the final jet after the subtraction of contributions inside the jet cone.
renormalized into the small-\(x\) parton distribution [169; 170; 171], following the procedures in Refs. [18; 60]. The infrared divergences cancel between real and virtual diagrams, leaving finite term that include Sudakov type logarithms.
The delta function in Eq. (4) facilitates the Fourier transform of the cross-section to the \(b_{\perp}\)-space
\[\begin{split}\frac{d^{5}\sigma^{(1)}}{dy_{l}d^{2}P_{\perp}d^{2}q _{\perp}}=&\sigma_{0}\int\frac{d^{2}b_{\perp}}{(2\pi)^{2}}e^{i \vec{q}_{\perp}\cdot\vec{b}_{\perp}}xf_{q}(x,b_{\perp})\\ &\times[S_{\rm iso}(b_{\perp})+S_{\rm aniso}(b_{\perp})]\end{split} \tag{10}\]
When Fourier transforming to \(b_{\perp}\)-space, the isotropic part combines with the corresponding virtual diagram contribution, which cancels the infrared divergences. Afterwards, we still encounter single and double logarithms of \(Q^{2}/\mu_{b}^{2}\) as follows
\[S_{\rm iso}(b_{\perp})=-\int_{\mu_{b}}^{Q}\frac{d\mu}{\mu}\frac{\alpha_{s}( \mu)C_{F}}{\pi}\Big{[}\ln\frac{Q^{2}}{\mu^{2}}+\ln\frac{Q^{2}}{P_{\perp}^{2}} +c_{0}(R)\Big{]}\, \tag{11}\]
where \(\mu_{b}=b_{0}/b_{\perp}\) with \(b_{0}\equiv 2e^{-\gamma_{E}}\) and \(\gamma_{E}\) the Euler constant.
The anisotropic part is convergent, which can be seen in \(b_{\perp}\)-space via the Fourier transform. By utilizing the Jacobi-Anger expansion formula
\[e^{iz\cos(\phi)}=J_{0}(z)+2\sum_{n=1}^{\infty}i^{n}J_{n}(z)\cos(n\phi) \tag{12}\]
and integration formula for Bessel function
\[\int_{0}^{\infty}\frac{dz}{z}J_{n}\left(z\left|b_{\perp}\right|\right)=\frac{1 }{n}\, \tag{13}\]
we obtain the expression for the anisotropic part in \(b_{\perp}\)-space
\[S_{\rm aniso}(b_{\perp})=\frac{\alpha_{s}C_{F}}{\pi}\sum_{n}i^{n}c_{n}\frac{2 \cos n\phi_{b}}{n}. \tag{14}\]
Here, \(\phi_{b}\) represents the angle between \(b_{\perp}\) and \(k_{J\perp}\).
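The Bessel integral in Eq. (13) can be verified numerically; a small sketch (with an assumed finite cutoff for the oscillatory tail):

```python
# Quick numerical check of the Bessel integral used above:
# integral_0^inf J_n(z)/z dz = 1/n for n >= 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

for n in (1, 2, 3):
    # Truncate at z = 2000; the oscillatory tail only contributes ~ z^{-3/2}.
    val, _ = quad(lambda z: jv(n, z) / z, 1e-12, 2000.0, limit=2000)
    print(f"n={n}: integral = {val:.4f} vs 1/n = {1.0/n:.4f}")
```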
#### ii.1.2 Multiple soft gluon resummation
When considering contributions from soft gluon emissions to all orders, the isotropic part has been resummed into the exponential factor
\[\frac{d^{5}\sigma}{dy_{l}d^{2}P_{\perp}d^{2}q_{\perp}}\approx \sigma_{0}\int\frac{d^{2}b_{\perp}}{(2\pi)^{2}}e^{i\vec{q}_{\perp }\cdot\vec{b}_{\perp}}xf_{q}(x,b_{\perp})\] \[\times e^{S_{\rm iso}(b_{\perp})}\Big{[}1+S_{\rm aniso}(b_{\perp}) \Big{]}. \tag{15}\]
The isotropic part corresponds to the Sudakov factor \(S_{\rm iso}(b_{\perp})=-{\rm Sud}(b_{\perp})\). The techniques of Sudakov resummation are developed in Refs. [18; 60] and [121; 122; 126; 127]. Compared to the Sudakov factor in the collinear factorization framework [121], there is a difference of a single logarithmic term with the coefficient \(-3/2\). In the TMD framework, this term arises from the collinear divergence. However, in small-\(x\) framework being considered here, this term is absent.
By using Eq. (12) and integrating over \(\phi_{b}\), we get the azimuthal angle dependent lepton-jet correlation
\[\frac{d^{5}\sigma(\ell P\to\ell^{\prime}J)}{dy_{\ell}d^{2}P_{ \perp}d^{2}q_{\perp}}=\sigma_{0}\int\frac{b_{\perp}db_{\perp}}{2\pi}xf_{q}(x,b _{\perp})e^{-{\rm Sud}(b_{\perp})}\] \[\Big{[}J_{0}(q_{\perp}b_{\perp})+\sum_{n=1}^{\infty}2\cos(n\phi )\frac{\alpha_{s}(\mu_{b})C_{F}c_{n}(R)}{n\pi}J_{n}(q_{\perp}b_{\perp})\Big{]}. \tag{16}\]
In the calculation, it is important to note that the angle between \(b_{\perp}\) and \(P_{\perp}\) is set to be \((\pi-\phi_{b})\) and thus the phase factor \(e^{i\vec{q}_{\perp}\cdot\vec{b}_{\perp}}\) can be written as \(e^{iq_{\perp}b_{\perp}\cos[\phi-(\pi-\phi_{b})]}\).
### Harmonics and its analytical expression
To quantify the azimuthal anisotropy of the lepton-jet correlation, we define the harmonics or Fourier coefficient of the azimuthal angle dependent lepton-jet correlation as
\[\langle\cos n\phi\rangle=\frac{\sigma_{0}\int b_{\perp}db_{\perp}J_{n}\left( q_{\perp}b_{\perp}\right)W(x,b_{\perp})\frac{\alpha_{s}(\mu_{b})C_{F}c_{n}(R)}{n \pi}}{\sigma_{0}\int b_{\perp}db_{\perp}J_{0}\left(q_{\perp}b_{\perp}\right)W (x,b_{\perp})}. \tag{17}\]
The \(W\) function is defined as
\[W(x,b_{\perp})=xf_{q}(x,b_{\perp})e^{-{\rm Sud}(b_{\perp})}. \tag{18}\]
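Schematically, Eq. (17) amounts to ratios of Hankel-type transforms of \(W\). A minimal sketch, assuming a toy Gaussian \(W(x,b_{\perp})\), a fixed coupling, and placeholder \(c_{n}(R)\) values (all assumptions for illustration only; see the \(c_{n}(R)\) quadrature above for realistic coefficients):

```python
# Schematic evaluation of Eq. (17) with toy inputs.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

C_F, alpha_s = 4.0 / 3.0, 0.25          # assumed fixed coupling for the toy
c_n_vals = {1: 0.9, 2: 0.4, 3: 0.2}     # placeholder c_n(R) values

def W_toy(b):                            # toy stand-in for x f_q e^{-Sud}
    return np.exp(-0.5 * b**2)

def harmonic(n, q_perp):
    num, _ = quad(lambda b: b * jv(n, q_perp * b) * W_toy(b), 0, 50, limit=200)
    den, _ = quad(lambda b: b * jv(0, q_perp * b) * W_toy(b), 0, 50, limit=200)
    return alpha_s * C_F * c_n_vals[n] / (n * np.pi) * num / den

for q in (0.5, 1.0, 2.0):
    print(q, [round(harmonic(n, q), 4) for n in (1, 2, 3)])
```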
In the small \(q_{\perp}\) limit, we can expand the Bessel function by
\[J_{n}\left(q_{\perp}b_{\perp}\right)\sim(q_{\perp}b_{\perp}/2)^{n}/\Gamma(n+1). \tag{19}\]
The \(n\)-th harmonic is proportional to \(q_{\perp}^{n}\), \(\langle\cos n\phi\rangle\sim\mathcal{C}_{n}q_{\perp}^{n}\). We now elaborate on how to get an analytical expression for the power-law coefficient \(\mathcal{C}_{n}\).
The Sudakov factor contains the large logarithm \(\ln Q^{2}\) under the correlation limit \(Q\geq P_{\perp}\gg q_{\perp}\), which serves as a large parameter for the saddle point approximation [172; 173; 174; 175]. We evaluate the two integrals in the numerator and denominator using this formula
\[\begin{split}&\int_{-\infty}^{+\infty}dzF(z)e^{-E(z)}\\ \approx&\left[\frac{2\pi}{E^{\prime\prime}\left(z^{ \rm sp}\right)}\right]^{1/2}F\left(z^{\rm sp}\right)e^{-E(z^{\rm sp})},\end{split} \tag{20}\]
where \(z=\ln(\Lambda_{\rm QCD}b_{\perp})\). The saddle point can be determined by
\[\frac{dE\left(z\right)}{dz}|_{z=z_{\rm sp}}=0\quad\text{with}\quad E^{\prime \prime}(z)>0. \tag{21}\]
The harmonics are
\[\langle\cos n\phi\rangle \approx\left(\frac{q_{\perp}b_{0}}{2\Lambda_{\rm QCD}}\right)^{n}\frac{\alpha_{s}(\mu_{n}^{\rm sp})C_{F}c_{n}(R)}{\pi n\Gamma(n+1)}\frac{f_{q}(x,b_{\perp n}^{\rm sp})}{f_{q}(x,b_{\perp 0}^{\rm sp})}\] \[\times\left[\frac{2\beta_{1}+C_{F}}{(n+2)\beta_{1}+C_{F}}\right]^{1+\frac{C_{F}}{2\beta_{1}}\ln\frac{e^{c_{0}(R)}Q^{4}}{\Lambda_{\rm QCD}^{2}P_{\perp}^{2}}}\, \tag{22}\]

where \(\beta_{1}=(33-2n_{f})/12\), \(\mu_{n}^{\rm sp}=b_{0}/b_{\perp n}^{\rm sp}\), and \(b_{0}\equiv 2e^{-\gamma_{E}}\). The saddle points \(b_{\perp n}^{\rm sp}\) are
\[b_{\perp n}^{\rm sp}=\frac{b_{0}}{\Lambda_{\rm QCD}}\left[\frac{e^{\rm co(R)} Q^{4}}{\Lambda_{\rm QCD}^{2}P_{\perp}^{2}}\right]^{-\frac{C_{F}}{2(2+n)\beta_{1}+2C_ {F}}}\, \tag{23}\]
The saddle point approximation is a widely used technique in high energy physics. In particular, the saddle point with \(n=0\) in this context is similar to the saddle point discussed in Ref. [174]. The typical values of the saddle points for the lepton-jet correlation are estimated to be around 1.5 GeV\({}^{-1}\) for \(b_{\perp 0}^{\rm sp}\) and roughly 2.5 GeV\({}^{-1}\) for the cases where \(n\) equals 1, 2, or 3 for EIC kinematics.
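A sketch of Eq. (23), assuming \(\Lambda_{\rm QCD}=0.24\) GeV, \(n_{f}=4\), and an illustrative value \(c_{0}(R=0.4)\approx 1.1\) (these inputs are our assumptions, not values quoted in the text), reproduces saddle points of the quoted magnitude:

```python
# Sketch: saddle points from Eq. (23) with assumed Lambda_QCD, n_f, c_0(R).
import numpy as np

b0 = 2.0 * np.exp(-0.5772156649)        # b0 = 2 e^{-gamma_E}
C_F, n_f = 4.0 / 3.0, 4
beta1 = (33.0 - 2.0 * n_f) / 12.0
Lam, Q, P_perp, c0 = 0.24, 5.6, 4.0, 1.1   # c0(R=0.4) is an assumed value

def b_sp(n):
    expo = -C_F / (2.0 * (2.0 + n) * beta1 + 2.0 * C_F)
    return b0 / Lam * (np.exp(c0) * Q**4 / (Lam**2 * P_perp**2))**expo

for n in range(4):
    print(f"b_sp(n={n}) = {b_sp(n):.2f} GeV^-1")   # ~1.8, 2.3, 2.6, 2.9
```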
From the analytical expression of the harmonics given in Eq. (22), we know the information about the parton saturation is encoded in the ratio of unintegrated quark distribution \(f_{q}(x,b_{\perp}^{\rm sp})\). It is observed that as \(Q\geq P_{\perp}\to\infty\), the saddle points \(b_{\perp n}^{\rm sp}\) approach zero. Hence, we can employ small-\(b_{\perp}\) approximation of \(f_{q}(x,b_{\perp}^{\rm sp})\) as follows:
\[f_{q}(x,b_{\perp}^{\rm sp})\propto Q_{s}^{2}\ln\frac{1}{Q_{s}b_{\perp}^{\rm sp}} \tag{24}\]
as explained in [168; 169]. Thus, the harmonics have the following asymptotic form
\[\langle\cos n\phi\rangle\propto\frac{f_{q}(x,b_{\perp n}^{\rm sp})}{f_{q}(x,b _{\perp 0}^{\rm sp})}\approx\frac{\ln(Q_{s}b_{\perp n}^{\rm sp})}{\ln(Q_{s}b_{ \perp 0}^{\rm sp})}. \tag{25}\]
The derivative of \(\langle\cos n\phi\rangle\) with respect to \(Q_{s}\) reads
\[\frac{\partial\langle\cos n\phi\rangle}{\partial Q_{s}}=\frac{\ln\left(b_{\perp 0}^{\rm sp}/b_{\perp n}^{\rm sp}\right)}{Q_{s}\ln^{2}(Q_{s}b_{\perp 0}^{\rm sp})}. \tag{26}\]
Since \(b_{\perp n}^{\rm sp}>b_{\perp 0}^{\rm sp}\) according to Eq. (23), the derivative is negative. As the saturation momentum \(Q_{s}\) increases, the harmonics decrease. We can observe this feature in the numerical calculations.
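This sign can be cross-checked symbolically; a small sketch differentiating the asymptotic ratio of Eq. (25):

```python
# Symbolic cross-check (illustrative) of Eq. (26): the derivative of the
# asymptotic ratio ln(Q_s b_n)/ln(Q_s b_0) with respect to Q_s.
import sympy as sp

Qs, bn, b0 = sp.symbols("Q_s b_n b_0", positive=True)
ratio = sp.log(Qs * bn) / sp.log(Qs * b0)
deriv = sp.simplify(sp.diff(ratio, Qs))
print(deriv)  # (log(Q_s*b_0) - log(Q_s*b_n)) / (Q_s*log(Q_s*b_0)**2)
# Negative whenever b_n > b_0, i.e. the harmonics decrease with Q_s.
```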
### QED radiation contribution to the harmonics
Soft gluon radiations can occur in the QCD sector of lepton-jet scattering, while soft photon radiations can occur in the QED sector [176; 121; 177]. Moreover, soft photon emissions from the final state lepton also contribute to the azimuthal anisotropy [1; 121], as depicted in Fig. 1.
Soft photons tend to align with the final lepton, which lies on the away side of the final jet direction. That may reduce the odd harmonics and increase the even harmonics, since \(\cos n\phi\) with even \(n\) exhibits a symmetric shape, while \(\cos n\phi\) with odd \(n\) shows an asymmetric shape between 0 and \(\pi\) in the azimuthal angle.
By calculating the analogous eikonal formula

\[S_{\gamma}(k_{\gamma\perp})=e^{2}\int\frac{dy_{\gamma}}{2(2\pi)^{3}}\frac{2k_{J}\cdot k_{q}}{k_{J}\cdot k_{\gamma}\ k_{q}\cdot k_{\gamma}}, \tag{27}\]
we obtain the isotropic and anisotropic part of eikonal formula for the one photon radiation
\[S_{\rm iso}^{\gamma}(b_{\perp})= -\int_{\mu_{b}}^{Q}\frac{d\mu}{\mu}\frac{\alpha_{e}}{\pi}\Big{[}\ln\frac{Q^{2}}{\mu^{2}}+\ln\frac{Q^{2}}{P_{\perp}^{2}}-\frac{3}{2}+c_{0}^{\gamma}\Big{]}\,\] \[S_{\rm aniso}^{\gamma}(b_{\perp})= \frac{\alpha_{e}}{\pi}\sum_{n}i^{n}{c_{n}^{\gamma}}\frac{2\cos n\phi_{b}}{n}, \tag{28}\]
with
\[c_{n}^{\gamma}=(-1)^{n}\Big{[}\ln\frac{P_{\perp}^{2}}{m_{e}^{2}}+\frac{2}{\pi} \int_{0}^{\pi}d\phi(\pi-\phi)\frac{\cos\phi}{\sin\phi}(\cos n\phi-1)\Big{]}\, \tag{29}\]
where \(m_{e}\) is the electron mass and \(\alpha_{e}\) is the QED coupling. When considering multiple soft photon radiations, the isotropic part can be resummed into the QED Sudakov factor \({\rm Sud}^{\gamma}(b_{\perp})=-S_{\rm iso}^{\gamma}(b_{\perp})\). Although large logarithms of \(P_{\perp}^{2}/m_{e}^{2}\) are present in the anisotropic part, we only retain the leading-order contribution from the anisotropic part, since the small QED coupling constant \(\alpha_{e}\approx 1/137\) compensates for the large logarithms. The harmonics with the QED correction read
\[\langle\cos n\phi\rangle_{\rm QED}\] \[=\frac{\sigma_{0}\int b_{\perp}db_{\perp}J_{n}\left(q_{\perp}b_{ \perp}\right)W_{\rm QED}(x,b_{\perp})\frac{[\alpha_{s}(\mu_{b})C_{F}c_{n}(R)+ \alpha_{e}c_{n}^{\gamma}]}{n\pi}}{\sigma_{0}\int b_{\perp}db_{\perp}J_{0} \left(q_{\perp}b_{\perp}\right)W_{\rm QED}(x,b_{\perp})}\, \tag{30}\]
with
\[W_{\rm QED}(x,b_{\perp})=xf_{q}(x,b_{\perp})e^{-{\rm Sud}(b_{\perp})-{\rm Sud} ^{\gamma}(b_{\perp})}. \tag{31}\]
The QED Sudakov factor is negligible compared to the QCD Sudakov factor, due to the smallness of the QED coupling constant \(\alpha_{e}\). However, the QED correction to the coefficient \(\alpha_{s}C_{F}c_{n}+\alpha_{e}c_{n}^{\gamma}\) is sizable. The numerical calculations in the next section will show these two features.
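The QED coefficients \(c_{n}^{\gamma}\) of Eq. (29), and their effect on the combined coefficient \(\alpha_{s}C_{F}c_{n}+\alpha_{e}c_{n}^{\gamma}\), can be estimated by quadrature. A sketch with placeholder QCD values for \(c_{n}(R)\) and a fixed \(\alpha_{s}\) (assumptions for illustration only):

```python
# Illustrative quadrature for the QED coefficients c_n^gamma of Eq. (29).
import numpy as np
from scipy.integrate import quad

m_e, P_perp, alpha_e = 0.000511, 4.0, 1.0 / 137.0
alpha_s, C_F = 0.25, 4.0 / 3.0            # assumed fixed coupling for the toy
c_n_qcd = {1: 0.9, 2: 0.4, 3: 0.2}        # placeholder c_n(R) values

def c_n_gamma(n):
    integrand = lambda phi: ((np.pi - phi) * np.cos(phi) / np.sin(phi)
                             * (np.cos(n * phi) - 1.0))
    val, _ = quad(integrand, 1e-9, np.pi - 1e-9)
    return (-1)**n * (np.log(P_perp**2 / m_e**2) + 2.0 / np.pi * val)

for n in (1, 2, 3):
    total = alpha_s * C_F * c_n_qcd[n] + alpha_e * c_n_gamma(n)
    print(f"n={n}: c_n_gamma = {c_n_gamma(n):+.2f}, combined = {total:+.3f}")
```

The alternating sign \((-1)^{n}\) is what reduces the odd harmonics and enhances the even ones, as discussed above.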
### Numerical calculation of the harmonics
The first calculation involves computing the harmonics for both the non-saturation and saturation models, considering both proton and nucleus targets.
The harmonics of the non-saturation model can be calculated using the same formula as Eq. (17), but with a different \(\widetilde{W}\) function
\[\widetilde{W}=\sum_{q}e_{q}^{2}xf_{q}(x,\mu_{b})e^{-\widetilde{\rm Sud}(b_{ \perp})}. \tag{32}\]
The \(f_{q}(x,\mu_{b})\) represents the collinear quark distribution, encompassing both valence and sea quarks. For the proton, we utilize the NLO PDF sets of CT18A [178], while for the gold nucleus, we adopt the EPPS21 [179] PDF sets. Compared with the Sudakov factor of the saturation model in Eq. (11), the Sudakov factor \(\widetilde{\text{Sud}}(b_{\perp})\)
\[\widetilde{\text{Sud}}(b_{\perp})=\int_{\mu_{b}}^{Q}\frac{d\mu}{\mu}\frac{ \alpha_{s}(\mu)C_{F}}{\pi}\Big{[}\ln\frac{Q^{2}}{\mu^{2}}+\ln\frac{Q^{2}}{P_{ \perp}^{2}}-\frac{3}{2}+c_{0}(R)\Big{]} \tag{33}\]
has an extra \(-3/2\) term, which corresponds to the collinear divergence [121]. In the numerical calculation, we introduce the non-perturbative Sudakov factor [180, 181]
\[\widetilde{\text{Sud}}(b_{\perp})\to\widetilde{\text{Sud}}(b_{*})+\widetilde {\text{Sud}}_{\text{NP}}^{q}(b_{\perp}) \tag{34}\]
with \(b_{*}\)-prescription \(b_{\perp}^{*}=b_{\perp}/\sqrt{1+b_{\perp}^{2}/b_{\text{max}}^{2}}\), and \(b_{\text{max}}=1.5\) GeV\({}^{-1}\). Here we only include the non-perturbative Sudakov factor associated with the initial quark \(\widetilde{\text{Sud}}_{\text{NP}}^{q}(b_{\perp})\), ignoring that of the final jet \(\widetilde{\text{Sud}}_{\text{NP}}^{jet}(b_{\perp})\)[121]. This choice allows for a direct comparison with the saturation model.
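A minimal sketch of the \(b_{*}\)-prescription Sudakov factor, Eqs. (33)-(34), assuming a one-loop running coupling with \(n_{f}=4\), \(\Lambda_{\rm QCD}=0.24\) GeV, and an illustrative \(c_{0}(R)\) value (the non-perturbative piece is omitted here):

```python
# Sketch of the perturbative Sudakov factor with the b*-prescription.
import numpy as np
from scipy.integrate import quad

C_F, n_f, Lam, b_max = 4.0 / 3.0, 4, 0.24, 1.5
b0 = 2.0 * np.exp(-0.5772156649)

def alpha_s(mu):                          # assumed one-loop running coupling
    return 12.0 * np.pi / ((33.0 - 2.0 * n_f) * np.log(mu**2 / Lam**2))

def sudakov(b, Q, P_perp, c0=1.1):        # c0(R=0.4) is an assumed value
    b_star = b / np.sqrt(1.0 + b**2 / b_max**2)
    mu_b = b0 / b_star
    integrand = lambda mu: (alpha_s(mu) * C_F / np.pi / mu
                            * (np.log(Q**2 / mu**2)
                               + np.log(Q**2 / P_perp**2) - 1.5 + c0))
    val, _ = quad(integrand, mu_b, Q)
    return val

for b in (0.5, 1.0, 2.0, 4.0):
    print(f"b = {b} GeV^-1: Sud = {sudakov(b, Q=5.6, P_perp=4.0):.3f}")
```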
For the saturation model, we consider two parameterizations for the dipole scattering matrix \(\mathcal{S}_{x}(r_{\perp})\) in the unintegrated quark distribution \(f_{q}(x,v_{\perp})\) as given in Eq. (3). The first one is the GBW model [182],
\[\mathcal{S}_{x}(r_{\perp})=e^{-\frac{r_{\perp}^{2}\,Q_{s}^{2}(x)}{4}}\, \tag{35}\]
where the saturation momentum squared for the proton is \(Q_{s,p}^{2}(x)=(x_{0}/x)^{0.28}\) GeV\({}^{2}\) with \(x_{0}=3\times 10^{-4}\). The saturation momentum squared of the gold nucleus is approximately \(Q_{s,A}^{2}\approx 5Q_{s,p}^{2}\). The other parameterization for the dipole scattering matrix is the solution of the running-coupling Balitsky-Kovchegov (rcBK) equation [183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193], with the modified McLerran-Venugopalan (MV) [187, 194] model as the initial condition
\[\mathcal{S}_{x_{0}}(r_{\perp})=e^{-\frac{(r_{\perp}^{2}Q_{s0}^{2})^{\gamma}}{4}\ln\left(\frac{1}{r_{\perp}\Lambda}+e\right)} \tag{36}\]
with \(\gamma=1.118\), \(\Lambda=0.241\) GeV, and \(Q_{s0,p}^{2}=0.16\) GeV\({}^{2}\) at \(x_{0}=0.01\). For the \(e+\text{Au}\) rcBK calculation, we solve the rcBK equation with \(Q_{s,A}^{2}\approx 5Q_{s,p}^{2}\). A more realistic initial condition for the dipole-nucleus amplitude [195] can also be chosen.
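For orientation, the two dipole parameterizations, Eqs. (35)-(36), are simple to code up. A sketch (the rcBK evolution itself is not implemented here; the factor of 5 mimicking a gold target is the rescaling quoted above):

```python
# Sketch of the two dipole parameterizations used as model inputs.
import numpy as np

x0_gbw, lam = 3e-4, 0.28

def Qs2_proton(x):                        # GBW saturation scale squared, GeV^2
    return (x0_gbw / x)**lam

def S_gbw(r_perp, x, A_factor=1.0):       # A_factor ~ 5 mimics a gold target
    return np.exp(-r_perp**2 * A_factor * Qs2_proton(x) / 4.0)

def S_mv(r_perp, Qs0_sq=0.16, gamma=1.118, Lam=0.241):
    # Modified MV initial condition at x0 = 0.01 (rcBK evolution not shown).
    return np.exp(-(r_perp**2 * Qs0_sq)**gamma / 4.0
                  * np.log(1.0 / (r_perp * Lam) + np.e))

for r in (0.5, 1.0, 2.0, 4.0):
    print(f"r = {r}: S_GBW(x=0.008) = {S_gbw(r, 0.008):.3f}, "
          f"S_MV = {S_mv(r):.3f}")
```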
Since the non-perturbative region is usually dominated by the small-\(x\) dipole distribution, we do not need to introduce a non-perturbative Sudakov factor for the saturation model. Thus, we simply write
\[\text{Sud}(b_{\perp})\to\text{Sud}(b_{*})\, \tag{37}\]
which limits the perturbative Sudakov factor in the small \(b_{\perp}\) region.
The kinematic bins for the future EIC that we use to calculate the \(q_{\perp}\)-distribution of \(\langle\cos n\phi\rangle\) are \(\sqrt{s_{eN}}=89\) GeV, \(y_{\ell}=2.41\), \(0.008\leq x\leq 0.01\), \(4\) GeV\(\leq P_{\perp}\leq 4.4\) GeV, \(5.6\) GeV\(\leq Q\leq 5.9\) GeV. The choice of kinematic region aligns with the simulation study [120] of the EIC. The lower cut for \(x\) is determined by the given \(s_{eN}\) and \(P_{\perp}\), specifically \(x_{\text{min}}\approx 4P_{\perp}^{2}/s_{\text{eN}}\). The limited collision energy \(s_{eN}\) makes it difficult to probe lower \(x\) values (\(x\leq 1\times 10^{-3}\)) in the lepton-jet correlation.
Fig. 3 presents the \(q_{\perp}\)-distribution of \(\langle\cos n\phi\rangle\) for different models, using both proton and nucleus targets with a jet cone size \(R=0.4\). The results exhibit a common trend: all lines rise sharply from zero in the small-\(q_{\perp}\) region and gradually approach a plateau in the large-\(q_{\perp}\) region. In the large-\(q_{\perp}\) region, our results appear flatter than the lines shown in Fig. 4 of [121]. The discrepancy arises because our calculation is performed within the \(4\) GeV\(\leq P_{\perp}\leq 4.4\) GeV bin, while their calculation is specific to a single \(P_{\perp}\) value. Besides the common trend, we also observe the hierarchy of the harmonics with the harmonic number \(n\). Moreover, the harmonics of the saturation model show a sizable decrease from the proton to the gold nucleus target. First, Eq. (26) shows \(\partial\langle\cos n\phi\rangle/\partial Q_{s}<0\), which indicates that the harmonics decrease with an increase in \(Q_{s}\). Furthermore, the saturation scale squared \(Q_{s,A}^{2}\propto A^{1/3}Q_{s,p}^{2}\) is larger in the gold nucleus than in the proton. Therefore, we can explain the observed decrease in harmonics from the proton to Au in Fig. 3. The results in Fig. 3 include the QED correction, unlike the similar plots in our previous paper [1].
In Fig. 4, we plot the analytical expression of harmonics for small-\(q_{\perp}\) Eq. (22) from section II.2 and compare it with harmonics obtained for the rcBK model with an Au target. The harmonics are calculated for the specific value of \(x=0.008\). The comparison validates the analytical expression of harmonics at small-\(q_{\perp}\).
To further quantify the suppression of the anisotropy in \(e+\text{Au}\) collisions compared to \(e+p\) collisions, we define the nuclear modification factor as follows:
\[R_{eA}^{(n)}=\frac{\langle\cos n\phi\rangle_{eA}}{\langle\cos n\phi\rangle_{ep }}. \tag{38}\]
In Fig. 5, we plot the nuclear modification factor for the non-saturation model and the saturation model. The non-saturation model utilizes the EPPS21 gold nucleus PDFs and the CT18A proton PDFs, with error bands at the 90% confidence level. We neglect the uncertainties from the baseline proton PDFs, as they are small. On the other hand, the saturation model employs the rcBK solution, where the gold saturation scale squared \(Q_{s,A}^{2}\) varies from \(3Q_{s,p}^{2}\) (upper bound in each band) to \(5Q_{s,p}^{2}\) (lower bound). The \(R_{eA}^{(n)}\) predicted from the EPPS21 PDFs (non-saturation model) and the rcBK solution (saturation model) show distinct behaviors in the small-\(q_{\perp}\) region and converge to unity in the large-\(q_{\perp}\) region. This difference justifies the nuclear modification factor as a tool to distinguish the saturation and non-saturation frameworks.
In Fig. 5, we also observe the hierarchy of the nuclear modification factor of the saturation models. This can be explained by the asymptotic expression of the harmonics,
Eq. (25). By substituting the \(Q_{s,A}^{2}\approx 5Q_{s,p}^{2}\) in Eq. (25), we find
\[R_{eA}^{(n)}\propto\frac{\ln(5Q_{s,p}^{2}b_{\perp n}^{\rm sp})}{\ln(Q_{s,p}^{2} b_{\perp n}^{\rm sp})}\,\frac{\ln(Q_{s,p}^{2}b_{\perp 0}^{\rm sp})}{\ln(5Q_{s,p}^{2}b_{ \perp 0}^{\rm sp})} \tag{39}\]
and knowing that \(b_{\perp n}^{\rm sp}\) increases with \(n\), we understand why the nuclear modification factor \(R_{eA}^{(n)}\) decreases with increasing \(n\), as shown in the numerical results.
Fig. 6 shows the harmonics and nuclear modification factor with and without the QED correction, using the rcBK solution as the input. The QED corrections to the harmonics are quite evident, reducing the odd harmonics and increasing the even harmonics. This evident correction to the harmonics can be explained by the sizable correction to the coefficient \(\alpha_{s}C_{F}c_{n}+\alpha_{e}c_{n}^{\gamma}\) in Eq. (30). However, the QED correction to the nuclear modification factor is found to be negligible, as the coefficient \(\alpha_{s}C_{F}c_{n}+\alpha_{e}c_{n}^{\gamma}\) cancels between \(\langle\cos n\phi\rangle_{eA}\) and \(\langle\cos n\phi\rangle_{ep}\). In order to compare with experimental data, all subsequent calculations incorporate the QED correction.
Our calculation can be compared with a recent experimental study [196] at HERA, where electrons and protons collide at energies of 27.6 GeV and 920 GeV, respectively. The kinematic cuts are \(0.2<y<0.7\), \(-1<\eta_{\rm lab}<2.5\), \(k_{J\perp}>10\) GeV, \(Q^{2}>150\) GeV\({}^{2}\). Here, \(y=P\cdot q/P\cdot k\) represents the energy fraction taken by the photon from the lepton in the lab frame, and \(\eta_{\rm lab}\) is the rapidity range that the detector can cover. The jet cone size is \(R=1.0\). We compute the harmonics in this HERA kinematics and present the results in Fig. 7. In the calculation, the kinematic restrictions constrain the rapidity \(y_{J}\) (or \(y_{l}\)), \(k_{J\perp}\), the initial quark momentum fraction \(x\), and their combination, since \(Q^{2},y,\eta_{\rm lab}\) are expressed in terms of \(y_{J},k_{J\perp},x\). We compute both the saturation framework with the GBW model and the non-saturation framework with the CT18A proton PDFs. For the saturation framework, we apply the extra cut \(x<0.01\), while for the non-saturation framework, we use two different cuts, \(x<0.01\) and \(x<1\). We also include the QED correction.
In Fig. 7, we observe that the harmonic \(\langle\cos\phi\rangle\) is sizable, while the harmonics \(\langle\cos 2\phi\rangle\) and \(\langle\cos 3\phi\rangle\) are almost zero. This behavior can be attributed to the fact that the Fourier coefficient \(c_{n}(R)\) decreases with \(R\), as evident from Fig. 3 in Ref. [121]. The negative values of \(\langle\cos 2\phi\rangle\) and \(\langle\cos 3\phi\rangle\) also come from the Fourier coefficient \(c_{n}(R)\) with \(R=1.0\). The observed trend of these harmonics is consistent with the previous results obtained for the EIC kinematics, as shown in Fig. 3.
Figure 4: Comparison of the exact results of harmonics and their small-\(q_{\perp}\) asymptotic behaviors. Solid lines stand for the exact results, while dash-dotted lines depict the small-\(q_{\perp}\) asymptotic expansions given by Eq. (22). The EIC kinematics for calculation is \(\sqrt{s_{eN}}=89\) GeV, \(x=0.008\), \(y_{\ell}=2.41\) with jet cone size \(R=0.4\). The QED corrections are not included.
Figure 3: (a) First three harmonics of inclusive lepton-jet production in \(e+p\) collisions using inputs from the rcBK solution, GBW model, and CT18A PDFs. (b) First three harmonics of inclusive lepton-jet production predicted for \(e+\rm{Au}\) collisions using inputs: the rcBK solution, GBW model, and EPPS21 PDFs. The calculation is for EIC kinematics: \(\sqrt{s_{eN}}=89\) GeV, \(0.008<x<0.01\), \(y_{\ell}=2.41\) with jet cone size \(R=0.4\). The QED corrections are included.
## III Diffractive lepton-jet correlation
In high energy \(ep\) and \(eA\) collisions, the diffractive lepton-jet process occurs when we observe a large rapidity gap \(Y_{\rm IP}\) between the hard interaction part and the remnant proton/nucleus, in addition to measuring the scattered lepton and one jet, as shown in Fig. 8.
The diffractive process can be understood as the proton/nucleus exchanging multiple gluons in a color-singlet state with the hard interaction part. The momentum transfer in the diffraction is denoted as \(t=(p^{\prime}-p)^{2}=\Delta^{2}\approx-\vec{\Delta}_{\perp}^{2}\), while the momentum fraction carried by the pomeron from the incoming nucleon is \(x_{\rm IP}=n\cdot(p-p^{\prime})/n\cdot p\), where \(n=(0,1,0_{\perp})\). In Fig. 8, the hard interaction production is denoted as \(X\), and its mass is denoted as \(M\). From the definition of the mass, \((x_{\rm IP}p+q)^{2}=M^{2}\), we obtain
\[x_{\rm IP}=\frac{M^{2}+Q^{2}}{W^{2}+Q^{2}}\, \tag{40}\]
where \(W^{2}=(p+q)^{2}\) represents the center-of-mass energy squared of the photon-nucleon system.
The semi-inclusive diffractive DIS process [128] has been shown to factorize in terms of the TMD diffractive parton distribution function (DPDF). We assume that diffractive lepton-jet production can also be factorized in terms of the quark TMD DPDF, where the longitudinal momentum fraction carried by the quark from the pomeron is \(\beta=x/x_{\rm IP}\). In the small-\(x\) framework, the quark TMD DPDF [129; 197] is related to the dipole S-matrix and encodes information about gluon saturation [128].
The expression of the quark TMD DPDF in \(k_{\perp}\) space
Figure 5: Nuclear modification factors of the inclusive lepton-jet harmonics for the cases where \(n=1\), \(2\), and \(3\). The upper bands represent \(R_{eA}^{(n)}\) based on the inputs of the EPPS21 gold nuclear PDFs with uncertainties. The lower bands yield \(R_{eA}^{(n)}\) calculated with the rcBK solution, where the gold saturation scale \(Q_{s,A}^{2}\) varies from \(3Q_{s,p}^{2}\) (upper bound in each band) to \(5Q_{s,p}^{2}\) (lower bound). The EIC kinematics for calculation is \(\sqrt{s_{eN}}=89\) GeV, \(0.008<x<0.01\), \(y_{t}=2.41\) with jet cone size \(R=0.4\).
Figure 6: QED modification to the first three harmonics of the inclusive lepton-jet correlation and nuclear modification factors with the input from the rcBK solution for \(e+\rm{Au}\) collisions. Solid and dashed lines represent the harmonics or nuclear modification factor without and with QED modifications, respectively. The calculation is performed in the following EIC kinematics: \(\sqrt{s_{eN}}=89\) GeV, \(0.008<x<0.01\),\(y_{t}=2.41\) with jet cone size \(R=0.4\).
is taken from Ref. [128]
\[\frac{df_{q}^{D}(\beta,k_{\perp},t;x_{\rm IP})}{dY_{\rm IP}dt}= \frac{N_{c}\beta}{2\pi}\int d^{2}k_{1\perp}d^{2}k_{2\perp}\mathcal{ F}_{x_{\rm IP}}\left(k_{1\perp},\Delta_{\perp}\right) \tag{41}\] \[\times\mathcal{F}_{x_{\rm IP}}\left(k_{2\perp},\Delta_{\perp} \right)\mathcal{T}_{q}\left(k_{\perp},k_{1\perp},k_{2\perp}\right)\]
with \(\mathcal{T}_{q}\) defined as sum of four terms \(\mathcal{T}_{q}\equiv T_{q}(k_{\perp},k_{1\perp},k_{2\perp})-T_{q}(k_{\perp},0,k_{2\perp})-T_{q}(k_{\perp},k_{1\perp},0)+T_{q}(k_{\perp},0,0)\), where
\[T_{q}(k_{\perp},k_{1\perp},k_{2\perp}) \tag{42}\] \[=\frac{(k_{\perp}-k_{1\perp})\cdot(k_{\perp}-k_{2\perp})k_{\perp }^{2}}{[\beta k_{\perp}^{2}+(1-\beta)(k_{\perp}-k_{1\perp})^{2}]\,[\beta k_{ \perp}^{2}+(1-\beta)(k_{\perp}-k_{2\perp})^{2}]},\]
and \(\mathcal{F}_{x_{\rm IP}}\left(k_{1\perp},\Delta_{\perp}\right)\) represents the Fourier transform of the dipole S-matrix in the fundamental representation
\[\mathcal{F}_{x_{\rm IP}}\left(k_{1\perp},\Delta_{\perp}\right)=\int\frac{d^{2}b_{\perp}d^{2}r_{\perp}}{(2\pi)^{4}}e^{i\vec{k}_{1\perp}\cdot\vec{r}_{\perp}+i\vec{\Delta}_{\perp}\cdot\vec{b}_{\perp}}\mathcal{S}_{x}(r_{\perp},b_{\perp})\, \tag{43}\]
where \(r_{\perp}\) is the dipole separation, and \(b_{\perp}\) is the impact parameter.
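The hard kernel \(\mathcal{T}_{q}\) is a purely algebraic object; the following is a direct transcription of Eq. (42) and the subtracted combination (treating \(k_{\perp},k_{1\perp},k_{2\perp}\) as two-dimensional vectors, which is our reading of the notation):

```python
# Sketch of the hard kernel entering the quark TMD DPDF, Eqs. (41)-(42).
import numpy as np

def T_q(k, k1, k2, beta):
    """Eq. (42); k, k1, k2 are 2D numpy arrays (transverse vectors)."""
    k_sq = k @ k
    num = (k - k1) @ (k - k2) * k_sq
    d1 = beta * k_sq + (1.0 - beta) * (k - k1) @ (k - k1)
    d2 = beta * k_sq + (1.0 - beta) * (k - k2) @ (k - k2)
    return num / (d1 * d2)

def script_T_q(k, k1, k2, beta):
    """The subtracted combination defined below Eq. (41)."""
    zero = np.zeros(2)
    return (T_q(k, k1, k2, beta) - T_q(k, zero, k2, beta)
            - T_q(k, k1, zero, beta) + T_q(k, zero, zero, beta))

k, k1, k2 = np.array([1.0, 0.3]), np.array([0.4, -0.2]), np.array([-0.1, 0.5])
print(script_T_q(k, k1, k2, beta=0.94))
```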
The azimuthal angle dependent cross-section and harmonics of the diffractive process are defined analogously to Eqs. (16) and (17). By replacing the small-\(x\) unintegrated quark distribution \(f_{q}(x,b_{\perp})\) with the small-\(x\) quark TMD DPDF, we get the angle-dependent diffractive lepton-jet cross section
\[\frac{d^{5}\sigma(\ell P\to\ell^{\prime}J)}{dy_{\ell}d^{2}P_{ \perp}\,d^{2}q_{\perp}dY_{\rm IP}dt}=\sigma_{0}\int\frac{b_{\perp}db_{\perp}}{ 2\pi}W_{\rm diff} \tag{44}\] \[\left[J_{0}(q_{\perp}b_{\perp})+\sum_{n=1}^{\infty}2\cos(n\phi) \frac{\alpha_{s}(\mu_{b})C_{F}c_{n}(R)}{n\pi}J_{n}(q_{\perp}b_{\perp})\right]\,.\]
and diffractive harmonics
\[\langle\cos n\phi\rangle_{\rm diff} \tag{45}\] \[=\frac{\sigma_{0}\int b_{\perp}db_{\perp}J_{n}\left(q_{\perp}b_{ \perp}\right)W_{\rm diff}\frac{\alpha_{s}(\mu_{b})C_{F}c_{n}(R)}{n\pi}}{\sigma_ {0}\int b_{\perp}db_{\perp}J_{0}\left(q_{\perp}b_{\perp}\right)W_{\rm diff}}\.\]
where the \(W_{\rm diff}\) function is defined as
\[W_{\rm diff}(x,\beta,b_{\perp};x_{\rm IP}) \tag{46}\] \[=e^{-{\rm Sud}(b_{\perp})}\int d^{2}k_{\perp}e^{i\vec{k}_{\perp} \cdot\vec{b}_{\perp}}x\frac{df_{q}^{D}(\beta,k_{\perp},t;x_{\rm IP})}{dY_{\rm IP }dt}.\]
### The rapidity gap of diffractive lepton-jet production
The rapidity gap for semi-inclusive diffractive DIS (SIDDIS) follows the traditional rapidity gap for the
Figure 8: Diffractive lepton-jet production process in DIS. The final-state lepton and one jet are measured, while the incoming nucleon exchanges multiple gluons in a color-singlet state with the virtual photon, with the exchanged longitudinal momentum fraction \(x_{\rm IP}\). A large rapidity gap exists between the nucleon remnant \(p^{\prime}\) and the hard interaction production \(X\).
Figure 7: First three harmonics in \(e+p\) collisions with inputs from the GBW model and CT18A PDFs in the HERA kinematics, with additional cut on the initial quark momentum fraction \(x\). The HERA kinematics are \(0.2<y<0.7\), \(-1<\eta_{\rm lab}<2.5\), \(k_{J\perp}>10\) GeV, \(Q^{2}>150\) GeV\({}^{2}\) with jet cone \(R=1.0\).
diffractive process, \(Y_{\rm IP}\sim\ln 1/x_{\rm IP}\) [198]. SIDDIS is defined in the Breit frame, which is the photon-nucleon center-of-mass frame (frame \(C\)), while the diffractive lepton-jet process is measured in the lepton-nucleon center-of-mass frame (frame \(A\)). The rapidity gap of the diffractive lepton-jet process could in principle differ from that of SIDDIS due to the Lorentz transformation between the two frames, which involves a Lorentz rotation.
The frame transformation from the lepton-nucleon center-of-mass frame (frame \(A\)) to the photon-nucleon center-of-mass frame (frame \(C\)) can be understood in three steps: (1) The Lorentz boost from frame A to the nucleon rest frame with the lepton moving in the \(-z\) direction (frame \(B\)); (2) The rotation from frame \(B\) to the nucleon rest frame with photon moving in the \(-z^{\prime}\) direction (frame \(B^{\prime}\)); (3) The Lorentz boost from frame \(B^{\prime}\) to frame \(C\).
The demonstration of the Lorentz rotation [199] can be seen in Fig. 9, with rotation angle denoted as \(\theta\). The rotation angle can be determined from the four-momentum of the virtual photon in these two frame,
\[\tan\theta=-\frac{q_{B}^{1}}{q_{B}^{3}}\, \tag{47}\]
where \(q_{B}=(q_{B}^{0},q_{B}^{1},0,q_{B}^{3})\) is the photon four-momentum in frame \(B\). By choosing the kinematics \(\sqrt{s_{eN}}=89\) GeV, \(y_{\ell}=y_{J}=2.41\), \(x=0.008\), \(P_{\perp}=4\) GeV, \(Q=5.6\) GeV and \(\beta=0.94\), we find that \(\theta=0.00187\) for the Lorentz rotation. The Lorentz rotation matrix is nearly an identity matrix, indicating that the rapidity is barely changed by the Lorentz rotation. Since the rapidity gap is invariant under Lorentz boosts, the rapidity gap in the photon-nucleon center-of-mass frame (frame \(C\)) is almost the same as the rapidity gap in the lepton-nucleon center-of-mass frame (frame \(A\)). Therefore, we can use \(Y_{\rm IP}\sim\ln 1/x_{\rm IP}\) to represent the rapidity gap for diffractive lepton-jet production.
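This angle can be reproduced with a short computation (a sketch assuming a massless lepton, scattered-lepton transverse momentum \(\simeq P_{\perp}\) in the correlation limit, and proton mass \(0.938\) GeV):

```python
# Sketch: Lorentz-rotation angle between frames B and B'.
import numpy as np

sqrt_s, y_l, P_perp, m_p = 89.0, 2.41, 4.0, 0.938
E = sqrt_s / 2.0

# Frame A (lepton-nucleon c.m.), lepton along +z, nucleon along -z:
k   = np.array([E, 0.0, 0.0, E])                       # incoming lepton
k_l = np.array([P_perp * np.cosh(y_l), P_perp, 0.0,
                P_perp * np.sinh(y_l)])                # outgoing lepton
q = k - k_l                                            # virtual photon
Q = np.sqrt(-(q[0]**2 - q[1]**2 - q[2]**2 - q[3]**2))

# Boost along z to the nucleon rest frame (frame B):
beta  = -np.sqrt(E**2 - m_p**2) / E
gamma = E / m_p
q_B0 = gamma * (q[0] - beta * q[3])
q_B3 = gamma * (q[3] - beta * q[0])

theta = abs(np.arctan2(q[1], q_B3))                    # Eq. (47), up to sign
print(f"Q = {Q:.2f} GeV, x = {Q**2 / (2 * m_p * q_B0):.4f}")
print(f"rotation angle theta = {theta:.5f} rad")       # ~ 0.0019
```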
### Numerical calculation of diffractive harmonics
In the numerical calculation, we first neglect the impact parameter \(b_{\perp}\) dependence of the dipole S-matrix and utilize two models for \(\mathcal{S}_{x}(r_{\perp})\): the GBW model, Eq. (35), and the solution of the rcBK equation with the modified MV model as the initial condition, shown in Eq. (36).
We calculate the \(q_{\perp}\)-distribution of the harmonics \(\langle\cos n\phi\rangle_{\rm diff}\). The kinematic bin of diffractive lepton-jet production at the future EIC is defined as follows: \(\sqrt{s_{eN}}=89\) GeV, \(y_{\ell}=2.41\), \(0.008\leq x\leq 0.0094\), \(\beta=0.94\), \(x_{\rm IP}=x/\beta\), \(4\) GeV\(\leq P_{\perp}\leq 4.32\) GeV, \(5.6\) GeV\(\leq Q\leq 5.89\) GeV. The value of \(\beta\) can vary, but it should be chosen such that \(x_{\rm IP}\) falls within the range \([0.008,0.01]\).
In Fig. 10, we plot the harmonics of diffractive lepton-jet production for the saturation models, considering both proton and gold nucleus targets, with jet cone size \(R=0.4\). The decrease of the harmonics from the proton to the gold nucleus target is also observed in Fig. 10. Notably, the harmonics of the diffractive process are nearly two times the value of the harmonics of the inclusive lepton-jet process. This behavior can be explained by the asymptotic form of the harmonics, as given in Eq. (25). For the same choice of \(Q,P_{\perp}\) for the inclusive and diffractive lepton-jet processes, the saddle point \(b_{\perp n}^{\rm sp}\) values are the same. For example, if \(x=0.008\), \(P_{\perp}=4\) GeV, \(Q=5.6\) GeV, the saddle points are \(b_{\perp 0}^{\rm sp}=1.68\) GeV\({}^{-1}\), \(b_{\perp 1}^{\rm sp}=2.22\) GeV\({}^{-1}\), \(b_{\perp 2}^{\rm sp}=2.59\) GeV\({}^{-1}\), and \(b_{\perp 3}^{\rm sp}=2.87\) GeV\({}^{-1}\). We plot the quark TMD DPDF and PDF for \(b_{\perp}\in[0,3]\) GeV\({}^{-1}\) in Fig. 11. It is evident that in the small-\(b_{\perp}\) region the flat DPDF gives
\[\frac{df_{q}^{D}(\beta,b_{\perp n}^{\rm sp};x_{\rm IP})}{dY_{\rm IP}dt}/\frac {df_{q}^{D}(\beta,b_{\perp 0}^{\rm sp};x_{\rm IP})}{dY_{\rm IP}dt}\approx 1\, \tag{48}\]
while the steeply declining PDF results in
\[\frac{f(x,b_{\perp n}^{\rm sp})}{f(x,b_{\perp 0}^{\rm sp})}\ll 1. \tag{49}\]
This difference makes \(\langle\cos n\phi\rangle_{\rm diff}\) a few times larger than \(\langle\cos n\phi\rangle\).
We also plot the nuclear modification factor for the diffractive harmonics in Fig. 12, using both the GBW model and the rcBK solution as inputs. Surprisingly, the nuclear modification factor of the diffractive lepton-jet harmonics is nearly identical to that of the inclusive lepton-jet harmonics in Fig. 5. The larger harmonics and nearly identical nuclear modification factor, compared to inclusive lepton-jet production, make them even better observables for studying the saturation phenomenon.
We investigate the \(t\) dependence of the harmonics by restoring the impact parameter \(b_{\perp}\) dependence of the quark diffractive PDF. In essence, this calculation enables us to explore the sensitivity of the diffractive harmonics to the nuclear density profile. To demonstrate this feature, we choose two different density profiles for the proton (nucleus): one being a uniform cylinder, the other a uniform sphere.
Figure 9: The rotation from the nucleon rest frame with the lepton in the \(-z\) direction (frame \(B\)) to the nucleon rest frame with the photon in the \(-z^{\prime}\) direction (frame \(B^{\prime}\)), is depicted in the lepton plane defined by the incoming lepton with momentum \(k\) and the outgoing lepton with momentum \(k_{l}\).
By considering the proton(nucleus) as a uniform cylinder with radius \(r_{p}(r_{A})\), we employ the GBW model for the dipole S-matrix. The Fourier transform of the dipole S-matrix reads
\[\mathcal{F}_{x_{\rm IP}}\left(k_{1\perp},\Delta_{\perp}\right)= \int\frac{d^{2}b_{\perp}d^{2}r_{\perp}}{(2\pi)^{4}}e^{i\vec{k}_{1\perp}\cdot\vec{r}_{\perp}+i\vec{\Delta}_{\perp}\cdot\vec{b}_{\perp}}e^{-\frac{r_{\perp}^{2}Q_{s,p}^{2}(x)}{4}}\] \[= \frac{r_{p}J_{1}(r_{p}\Delta_{\perp})}{2\pi\Delta_{\perp}}\int\frac{d^{2}r_{\perp}}{(2\pi)^{2}}e^{i\vec{k}_{1\perp}\cdot\vec{r}_{\perp}}e^{-\frac{r_{\perp}^{2}Q_{s,p}^{2}(x)}{4}} \tag{50}\]
In this case, the \(t(\Delta_{\perp})\) dependence factorizes. Therefore, it cancels out between the numerator and denominator of the diffractive harmonics in Eq. (45). Consequently, the harmonics of a "cylinder-like" proton do not exhibit any \(t\) dependence.
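As a cross-check of the prefactor in Eq. (50) (a short sketch of this standard step, using the Bessel identity \(\int_{0}^{X}t\,J_{0}(t)\,dt=X\,J_{1}(X)\)), integrating the phase over the uniform disk of radius \(r_{p}\) gives

\[\int_{|b_{\perp}|<r_{p}}\frac{d^{2}b_{\perp}}{(2\pi)^{2}}\,e^{i\vec{\Delta}_{\perp}\cdot\vec{b}_{\perp}}=\frac{1}{2\pi}\int_{0}^{r_{p}}db_{\perp}\,b_{\perp}J_{0}(\Delta_{\perp}b_{\perp})=\frac{r_{p}J_{1}(r_{p}\Delta_{\perp})}{2\pi\Delta_{\perp}}\,,\]

which is exactly the factorized \(\Delta_{\perp}\) dependence quoted above.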
For a uniform sphere proton(nucleus), we employ the modified GBW model
\[\mathcal{S}_{x}(r_{\perp},b_{\perp})=e^{-\frac{r_{\perp}^{2}Q_{s,p}^{2}(x,b_{ \perp})}{4}}\, \tag{51}\]
where
\[Q_{s,p}^{2}(x,b_{\perp})=c_{s}\sqrt{1-\frac{b_{\perp}^{2}}{r_{p}^{2}}}. \tag{52}\]
The radius of the proton is \(r_{p}=4.2\) GeV\({}^{-1}\) (for the gold nucleus \(r_{A}=32.5\) GeV\({}^{-1}\)). To compare with the above cylinder profile, we require that the impact parameter dependent saturation scale squared \(Q_{s,p}^{2}(x,b_{\perp})\) satisfies the normalization condition
\[\int d^{2}b_{\perp}Q_{s,p}^{2}(x,b_{\perp})=\pi r_{p}^{2}Q_{s,p}^{2}(x). \tag{53}\]
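Evaluating the \(b_{\perp}\) integral fixes the constant \(c_{s}\) (a short check of this normalization):

\[\int d^{2}b_{\perp}\,c_{s}\sqrt{1-\frac{b_{\perp}^{2}}{r_{p}^{2}}}=2\pi c_{s}\int_{0}^{r_{p}}db_{\perp}\,b_{\perp}\sqrt{1-\frac{b_{\perp}^{2}}{r_{p}^{2}}}=\frac{2\pi c_{s}r_{p}^{2}}{3}\,,\]

so Eq. (53) requires \(c_{s}=\frac{3}{2}Q_{s,p}^{2}(x)\).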
On the right-hand side, the saturation scale squared of the traditional GBW model is given by \(Q_{s,p}^{2}(x)=(x_{0}/x)^{0.28}\) GeV\({}^{2}\) with \(x_{0}=3\times 10^{-4}\). For the gold nucleus, we choose \(Q_{s,A}^{2}(x)=5Q_{s,p}^{2}(x)\). As the conjugate variable of \(b_{\perp}\) in the Fourier transform, the \(t(\Delta_{\perp})\) dependence of
Figure 12: Nuclear modification factor of the first three harmonics for diffractive lepton-jet production, with the rcBK solution and GBW model as inputs. The EIC kinematics for the calculation are \(\sqrt{s_{eN}}=89\) GeV, \(\beta=0.94\), \(0.008<x<0.0094\), \(y_{l}=2.41\), with jet cone size \(R=0.4\).
Figure 10: First three harmonics of diffractive lepton-jet production in (a) \(e+p\) collisions and (b) \(e+\mathrm{Au}\) collisions with inputs from the rcBK solution and the GBW model. The EIC kinematics for the calculation are \(\sqrt{s_{eN}}=89\) GeV, \(\beta=0.94\), \(0.008<x<0.0094\), \(y_{l}=2.41\), with jet cone size \(R=0.4\).
Figure 14: Comparison of the diffractive harmonics of lepton-jet production in \(e+p\) collisions in the EIC kinematics, for cylinder and sphere proton shapes. Two values of \(t\) are shown: (a) \(-t=0.5\) GeV\({}^{2}\), (b) \(-t=1.5\) GeV\({}^{2}\). The \(t\)-dependent model assumes a spherical proton. The EIC kinematics are \(\sqrt{s}_{eN}=89\) GeV, \(x=0.008\), \(y_{l}=2.41\) with \(\beta=0.94,x_{\rm IP}<0.01,R=0.4\).
Figure 15: Comparison of the diffractive harmonics of lepton-jet production in \(e+\mathrm{Au}\) collisions in the EIC kinematics, for cylinder and sphere gold nucleus shapes. Three values of \(t\) are shown: (a) \(-t=0.5\) GeV\({}^{2}\), (b) \(-t=1.5\) GeV\({}^{2}\), (c) \(-t=5\) GeV\({}^{2}\). The \(t\)-dependent model assumes a spherical gold nucleus. The EIC kinematics are \(\sqrt{s}_{eN}=89\) GeV, \(x=0.008\), \(y_{l}=2.41\) with \(\beta=0.94,x_{\rm IP}<0.01,R=0.4\).
diffractive harmonics opens a new dimension to distinguish different nuclear density profiles.
We plot the diffractive harmonics for the cylinder and sphere proton (nucleus) in both the HERA and the EIC kinematics. Figure 13 displays the \(e+p\) results in the HERA kinematics. For the EIC kinematics, we predict both \(e+p\) and \(e+\) Au results, in Fig. 14 and Fig. 15, respectively. For the proton, we compute with \(-t=0.5\) GeV\({}^{2}\) and \(-t=1.5\) GeV\({}^{2}\). For the gold nucleus, we select \(-t=0.5\) GeV\({}^{2}\), \(-t=1.5\) GeV\({}^{2}\), and \(-t=5\) GeV\({}^{2}\). The sizable difference between the cylinder and sphere proton (nucleus) suggests the harmonics as new probes of the density profile of the target. Various density profiles can be tested with the diffractive harmonics in future studies, such as a Gaussian [43] or a more flexible parameterization [156]. The notable sharp peaks of the diffractive harmonics for the spherical Au at \(-t=0.5\) GeV\({}^{2}\) in Fig. 15 originate from the diffractive nature of this process. We explain this behavior in the following discussion.
The \(t\)-distributions of diffractive scattering cross-sections in nuclear and hadronic physics always show a pulse shape [11, 198, 200], resembling the diffraction pattern in optics. We present the \(t\)-distribution of the diffractive harmonics of \(e+p\) collisions with a spherical proton in Fig. 16, for \(q_{\perp}=0.5\) GeV and \(q_{\perp}=1.5\) GeV. Since the sphere-shaped proton is circular in the transverse plane, its Fourier transform involves the Bessel function of the first kind, \(J_{1}(r_{p}\Delta_{\perp})\). The positions of the minima are therefore determined by the zeros of the Bessel function \(J_{1}(r_{p}\Delta_{\perp})\) at \(r_{p}\Delta_{\perp}=3.8,7.0,10.2,...\). The first minimum in Fig. 16 is then located at \(-t=[3.8/r_{p}]^{2}\approx 0.8\) GeV\({}^{2}\). Furthermore, we calculate the \(t\)-distribution of the diffractive harmonics for \(e+\) Au collisions with a spherical gold nucleus in Fig. 17, for \(q_{\perp}=0.5\) GeV and \(q_{\perp}=1.5\) GeV. The positions of the minima are the same as those of the \(t\)-distribution for \(J/\psi\) photoproduction [200], with the first minimum at \(-t=[3.8/r_{A}]^{2}\approx 0.014\) GeV\({}^{2}\). In Fig. 15, the sharp peaks of the \(q_{\perp}\) distribution of the \(e+\)Au diffractive harmonics at \(-t=0.5\) GeV\({}^{2}\) arise because this value of \(-t\) coincides with one of the minima, where the harmonics behave divergently. In contrast, if the density profile were cylinder-like, the harmonics in Fig. 16 and Fig. 17 would be constant as \(t\) varies.
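These minima positions are easy to verify numerically; the following sketch (assuming SciPy is available, and using the radii quoted above) prints the first three diffractive minima for the proton and the gold nucleus:

```python
# Diffractive minima sit at -t = (j_{1,k} / r)^2, where j_{1,k} are the zeros
# of the Bessel function J_1 arising from the Fourier transform of a uniform disk.
from scipy.special import jn_zeros

r_p, r_A = 4.2, 32.5      # proton and gold radii in GeV^-1 (values from the text)
zeros = jn_zeros(1, 3)    # first three zeros of J_1: ~3.83, 7.02, 10.17

for label, r in (("proton", r_p), ("gold", r_A)):
    minima = [(z / r) ** 2 for z in zeros]
    print(label, " ".join(f"{m:.4f}" for m in minima), "GeV^2")
# proton: 0.8323 2.7899 5.8665 GeV^2;  gold: 0.0139 0.0466 0.0980 GeV^2
```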
The above calculation provides quantitative predictions for future experimental studies of diffractive lepton-jet production at HERA and the EIC.
## IV Conclusion
In this paper, we propose novel observables for studying gluon saturation: the harmonics of the inclusive and diffractive lepton-jet correlations and their nuclear modification factors.
Using small-\(x\) factorization, we present a detailed derivation of the azimuthal-angle-dependent lepton-jet correlation in the small-\(x\) framework. We obtain analytical expressions for the harmonics, which predict the suppression of the harmonics with increasing saturation scale \(Q_{s}\). This behavior is confirmed by numerical calculation. Furthermore, we find that the impact of QED radiation corrections on the harmonics is sizable, while it is negligible for the nuclear modification factor. Both observations follow from the analytical expressions as well as from the numerical calculations. The striking difference in the nuclear modification factor between the non-saturation and saturation frameworks makes it a robust observable for distinguishing the two.
In addition, a parallel study of diffractive lepton-jet production is carried out. Numerical calculations demonstrate that the diffractive harmonics are twice the
Figure 16: The \(t\) distribution of the harmonics for diffractive lepton-jet production in \(e+p\) collisions in the EIC kinematics, with the spherical density profile. Two different imbalance momenta, \(q_{\perp}=0.5\) GeV and \(q_{\perp}=1.5\) GeV, have been chosen. The EIC kinematics are \(\sqrt{s}_{{}_{eN}}=89\) GeV, \(x=0.008\), \(y_{l}=2.41\) with \(\beta=0.94,x_{\rm IP}<0.01,R=0.4\).
Figure 17: The \(t\) distribution of the harmonics for diffractive lepton-jet production in \(e+\) Au collisions in the EIC kinematics, with the spherical density profile. Two different imbalance momenta, \(q_{\perp}=0.5\) GeV and \(q_{\perp}=1.5\) GeV, have been chosen. The EIC kinematics are \(\sqrt{s}_{{}_{eN}}=89\) GeV, \(x=0.008\), \(y_{l}=2.41\) with \(\beta=0.94,x_{\rm IP}<0.01,R=0.4\).
value of the inclusive harmonics, while the nuclear modification factors are almost the same. These findings suggest that the diffractive harmonics and their nuclear modification factors serve as sensitive observables for probing the gluon saturation phenomenon. In particular, the \(t\)-dependent diffractive harmonics can distinguish different nuclear density profiles.
###### Acknowledgements.
We thank Feng Yuan and Heikki Mantysaari for discussions. This work is supported by the CUHK-Shenzhen university development fund under the grant No. UDF01001859. Xuan-Bo Tong is also supported by the Research Council of Finland, the Centre of Excellence in Quark Matter and under the European Union's Horizon 2020 research and innovation programme by the European Research Council (ERC, grant agreement No. ERC-2018-ADG-835105 YoctoLHC) and by the STRONG-2020 project (grant agreement No. 824093). The content of this article does not reflect the official opinion of the European Union and responsibility for the information and views expressed therein lies entirely with the authors.
|
2309.09393 | Reactive Base Control for On-The-Move Mobile Manipulation in Dynamic
Environments | We present a reactive base control method that enables high performance
mobile manipulation on-the-move in environments with static and dynamic
obstacles. Performing manipulation tasks while the mobile base remains in
motion can significantly decrease the time required to perform multi-step
tasks, as well as improve the gracefulness of the robot's motion. Existing
approaches to manipulation on-the-move either ignore the obstacle avoidance
problem or rely on the execution of planned trajectories, which is not suitable
in environments with dynamic objects and obstacles. The presented controller
addresses both of these deficiencies and demonstrates robust performance of
pick-and-place tasks in dynamic environments. The performance is evaluated on
several simulated and real-world tasks. On a real-world task with static
obstacles, we outperform an existing method by 48\% in terms of total task
time. Further, we present real-world examples of our robot performing
manipulation tasks on-the-move while avoiding a second autonomous robot in the
workspace. See https://benburgesslimerick.github.io/MotM-BaseControl for
supplementary materials. | Ben Burgess-Limerick, Jesse Haviland, Chris Lehnert, Peter Corke | 2023-09-17T23:04:34Z | http://arxiv.org/abs/2309.09393v1 | # Reactive Base Control for On-The-Move Mobile Manipulation in Dynamic Environments
###### Abstract
We present a reactive base control method that enables high performance mobile manipulation on-the-move in environments with static and dynamic obstacles. Performing manipulation tasks while the mobile base remains in motion can significantly decrease the time required to perform multi-step tasks, as well as improve the gracefulness of the robot's motion. Existing approaches to manipulation on-the-move either ignore the obstacle avoidance problem or rely on the execution of planned trajectories, which is not suitable in environments with dynamic objects and obstacles. The presented controller addresses both of these deficiencies and demonstrates robust performance of pick-and-place tasks in dynamic environments. The performance is evaluated on several simulated and real-world tasks. On a real-world task with static obstacles, we outperform an existing method by 48% in terms of total task time. Further, we present real-world examples of our robot performing manipulation tasks on-the-move while avoiding a second autonomous robot in the workspace. See benburgesslimerick.github.io/MotM-BaseControl for supplementary materials.
## I Introduction
Performing mobile manipulation tasks while a robot's base remains in motion can significantly reduce execution time compared with approaches in which the robot pauses to perform manipulations. This is particularly valuable in multi-step tasks where, for example, the robot can be driving towards a second location while picking up an object. Recent works have explored control methods for achieving such Manipulation on-the-Move (MotM) [1, 2].
Systems that rely on planning are able to claim optimality of generated trajectories and avoid a priori known obstacles in the scene. However, planning approaches suffer in real-world environments where they cannot react to perception errors, localisation errors, or inaccurate control, and are unable to perform in environments with unpredictably moving objects and obstacles.
In our previous work [1], we introduced an architecture for achieving reactive MotM and demonstrated grasping of unpredictable, dynamic objects while on the move. However, the implementation presented previously uses a mobile base controller that does not consider obstacle avoidance. In this work, we develop a mobile base controller for integration into the reactive MotM architecture that minimises task time while avoiding static and dynamic obstacles. In addition, we augment the quadratic program solved in the redundancy resolution module with a constraint that provides obstacle avoidance for the arm.
Our reactive approach enables robust performance in complex environments with dynamic obstacles. Fig. 1(a) shows a frame from a real-world trial where our system is performing a grasp while on-the-move and avoiding a second robot moving in the scene. The digital twin illustrated in Fig. 1(b) shows the system's understanding of the space, including the real-time detections of the obstacles in the environment.
The principal contributions of this work are:
1. a reactive base control system that minimises total task time for multi-step tasks by performing manipulation tasks on-the-move
2. a redundancy resolution module that enables obstacle avoidance for the arm and base while performing manipulation on-the-move tasks
3. the first demonstration of reactive manipulation on-the-move in real-world environments with static and dynamic obstacles
These capabilities are demonstrated in numerous simulated and real-world experiments.
Fig. 1: Our method can perform robust, reactive manipulation on-the-move while avoiding static and dynamic obstacles.
## II Related Works
Base control methods for mobile manipulation guide the mobile base to a pose from which a manipulation target can be reached without violating kinematic constraints or colliding with obstacles. This is often achieved through explicit base control where an optimal base pose is determined and the robot is navigated to that pose with a conventional mobile robot controller [3]. Alternatively, implicit or holistic base control methods typically start with a desired end effector motion and use the combined kinematics of the base and manipulator to achieve the motion.
We review common methods for base control in mobile manipulation and the additional challenges for base control while performing manipulation on-the-move. Recent surveys of control strategies for mobile manipulation include [4].
### _Explicit Base Control_
#### II-A1 Optimal Base Placement
Numerous planners have been proposed to compute the base pose from which to best perform a manipulation task. In general, the goal is to find a pose for which the target is reachable with a collision-free configuration of the robot [5]. Approaches typically aim to generate solutions that are optimal against some other metric such as manipulability [6, 7], or stiffness of the robot [8]. Other approaches aim to minimise task time by calculating base poses from which multiple targets can be reached without repositioning [9, 10]. Time efficiency can be optimised on multi-step tasks by choosing a pose based on where the robot must go after the immediate target [3, 11]. Reactivity can be achieved by frequently recomputing the optimal base pose [3].
#### II-A2 Mobile Base Control
Once an optimal base pose has been computed, the robot is driven to the pose with a mobile base controller. Typically, a hierarchical planner is used that combines a global and local planner to enable reactive obstacle avoidance while driving to the goal pose [12].
A reactive, Short Term Aborting A* (STAA*) method is presented in [13] that demonstrates improved performance in environments with static and dynamic obstacles compared to commonly used local planners. STAA* plans a collision-free global path by searching through a visibility graph, and then computes the intersection of the global path and the border of a local planning region to develop an intermediate goal. A discretised acceleration space is used to sample new states for exploration in a time-bounded A* search. The search uses an obstacle-aware heuristic that ensures exploration along a collision-free path and avoids local minima.
In most cases, mobile manipulators using explicit base controllers consider the base and manipulator motion entirely separately and the arm does not start moving until the base has achieved the desired pose. More recent methods have improved task time by coordinating the arm motion with the base such that the hand arrives at the target at the same time as the base arrives at the desired pose [3].
### _Holistic Control_
Rather than considering the base and arm motion separately, some approaches combine the subsystems with a holistic controller [14]. These methods use the combined degrees of freedom from the mobile base and manipulator to achieve a desired end-effector motion. Holistic control of the robot enables reduced task time by moving both components together, as well as an improved ability to optimise secondary objectives such as manipulability [14], obstacle avoidance [15], or visibility [16] through exploitation of additional degrees of freedom.
The input to the holistic control system can be from a motion planner and executed under open-loop control [15], or reactive, where the controller uses visual feedback for closed-loop control [14, 17]. Model Predictive Control formulations of the problem have been used to enable collision avoidance for both arm and base [18, 19]. A learned base controller is presented in [20] which translates a desired end-effector velocity and local occupancy map to base motions that ensure the omnidirectional robot avoids obstacles.
These works demonstrate holistic control of a mobile manipulator and can avoid obstacles. However, they focus only on execution of the immediate goal and do not consider the time efficiency for multi-step tasks achieved by performing a manipulation task while on the move toward the next target.
### _Manipulation On-The-Move_
The first MotM approaches restricted base motion to constant speed, straight-line motion [21]. Recent works plan collision-free trajectories in cluttered environments that can complete mobile manipulation tasks in minimum time [2, 22, 23, 24, 25]. However, these approaches execute the planned trajectory open-loop and cannot react to dynamic changes in the scene, or compensate for perception and control errors. Consequently, these systems are often failure prone in real-world environments.
In this work, we present a reactive base control method that avoids obstacles while performing manipulation tasks on-the-move. Further, we enable collision avoidance for the manipulator by implementing the approach described in [26].
## III Base Controller
We introduce several modifications to the global and local planners described in STAA* [13] that improve performance for manipulation on-the-move scenarios.
### _Goal Orientation_
The most important addition to STAA* is the inclusion of an orientation in the goal state. Where STAA* considers driving to a point only, we include an orientation, which enables poses to be achieved that smoothly connect the immediate target with the next goal.
### _Rotation in Global Planner Cost_
The addition of orientation to the goal state requires modification to the node cost computation used in the global A* search. STAA* uses only the cumulative distance between
nodes along a path to compute the cost. Instead, we consider the PathRTR metric that estimates the time required to both translate and rotate between nodes along a path. Further explanation of the PathRTR metric is presented in [13] where it is used for the local planner. Fig. 2 illustrates the value of including rotation costs in the global path planner. For the scenario shown, the path in red generated by our modified version encourages the robot to drive a smooth curve around the obstacle connecting the start and end pose. Without rotation costs, the shortest path passes the obstacle on the opposite side and requires significantly more turning.
### _Search Termination Conditions_
The implementation of STAA* presented in [13] terminates its search when an explored node is sufficiently close to the goal. To perform manipulation on-the-move, we want to encourage the robot to drive through the goal at high speed. Therefore, we also terminate when the path passes sufficiently close to the target.
### _Reduction of Proximity Grid Penalty_
STAA* includes a cost on visiting nodes based on their proximity to obstacles using an inflated occupancy grid. However, to complete mobile manipulation tasks such as picking and placing objects from a table, the robot must necessarily travel close to the table while interacting with objects on it. For example, in Fig. 2, the occupancy grid is represented by the colour of the ground around the robot, with green representing free space, and red representing occupied space. When the target pose for the base is near an obstacle, as is the case when grasping from a table, the penalty applied to nodes near obstacles inhibits the exploration of states near the goal. To limit this effect, we reduce the weight of the occupancy grid cost based on proximity to the object pick or place location. We scale the grid penalty by \(k=\max{(0.1,\min{(t_{h}/3,1)})}\) where \(t_{h}\) is the estimated time until the goal is achieved.
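Concretely, the scaling described above can be written as a small helper (a sketch; the function name is ours):

```python
def grid_penalty_scale(t_h: float) -> float:
    """Scale factor for the occupancy-grid proximity cost.

    t_h is the estimated time (s) until the pick/place goal is achieved.
    Far from the goal (t_h >= 3 s) the full penalty applies; approaching the
    goal it ramps down linearly, but never below 10% of the nominal weight.
    """
    return max(0.1, min(t_h / 3.0, 1.0))
```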
### _Bézier Heuristic_
The PathRTR heuristic used for the A* graph search in the local planner assumes that translation and rotation of the robot will be performed at maximum speed, but sequentially. This has a tendency to overestimate costs for states where the optimal path to the goal is a smooth arc of simultaneous rotation and translation. We introduce an additional heuristic based on a Bézier curve that is used in place of PathRTR only when the Bézier path results in a lower cost. This encourages the exploration of states that can be connected to the goal through smooth curves. The Bézier curve is constructed by adding two control points alongside the start and end points given by the current pose and target. The first control point is positioned in front of the robot's current forward direction, and the second is positioned an equal distance behind the desired end pose. The offset distance is chosen to be 25% of the distance between the current and target pose. The optimal offset distance is a complicated function of the relative target pose and the ratio of the maximum linear and angular velocity capabilities of the robot. However, we find that a value of 25% results in curves that work well for our robot in practice. Fig. 3 illustrates an example Bézier path.
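A minimal sketch of this heuristic is given below (function and parameter names are ours; the maximum linear speed of 0.5 m/s follows the example in Fig. 3, and the curve length is estimated by dense sampling):

```python
import numpy as np

def bezier_time(p0, h0, p1, h1, v_max=0.5, n=50):
    """Estimated travel time along a cubic Bezier from pose (p0, h0) to (p1, h1).

    p0, p1: (x, y) positions; h0, h1: headings in radians.
    Control points sit 25% of the endpoint distance ahead of the start pose
    and the same distance behind the goal pose, yielding a smooth curve that
    combines rotation and translation.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = np.linalg.norm(p1 - p0)
    c0 = p0 + 0.25 * d * np.array([np.cos(h0), np.sin(h0)])
    c1 = p1 - 0.25 * d * np.array([np.cos(h1), np.sin(h1)])
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = (1 - t)**3 * p0 + 3 * (1 - t)**2 * t * c0 + 3 * (1 - t) * t**2 * c1 + t**3 * p1
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return length / v_max  # used only when cheaper than the PathRTR estimate
```

For the scenario of Fig. 3, the 4.35 m curve gives 4.35/0.5 = 8.7 s, below the 9.2 s sequential PathRTR estimate, so the Bézier value is used.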
## IV Base Placement
The optimal base placement is selected from a discretised set by evaluating the path cost from the current robot pose to candidate base placements as well as from the candidates to the next target (Fig. 4). Candidates are evenly spaced around the target object in 10\({}^{\circ}\) increments for a total of 36 possible base positions. Each position is assigned two possible orientations: the robot's forward vector can be tangential to the circle facing either clockwise or counter-clockwise which gives a total of 72 candidates.
The radius of the circle on which the candidates are placed is dynamically adjusted between 0.6 \(\mathrm{m}\) and 0.8 \(\mathrm{m}\). When no collision-free candidates are available close to the object the radius is increased until a solution is found. The radius limits are defined by the geometry of our robot, 0.6 \(\mathrm{m}\) is calculated from the radius of the robot base plus a safety margin, and 0.8 \(\mathrm{m}\) is the largest distance at which the robot can still perform manipulation tasks. When no viable candidates are found within these limits, the robot will drive toward the collision-free position closest to the target with the assumption that the system may identify a valid candidate as the map updates with more recent lidar data.
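A sketch of this candidate generation is shown below (names are ours; `collision_free` stands in for a lookup against the occupancy map, and the radius growth step is an illustrative choice):

```python
import numpy as np

def base_candidates(target_xy, collision_free, r_min=0.6, r_max=0.8, r_step=0.05):
    """Generate (position, heading) base-pose candidates on a ring around the target.

    36 positions in 10-degree increments, each with two tangential headings
    (clockwise / counter-clockwise), i.e. up to 72 candidates. The ring radius
    grows from r_min until collision-free candidates are found or r_max is hit.
    """
    r = r_min
    while r <= r_max + 1e-9:
        cands = []
        for deg in range(0, 360, 10):
            a = np.radians(deg)
            pos = (target_xy[0] + r * np.cos(a), target_xy[1] + r * np.sin(a))
            if collision_free(pos):
                cands.append((pos, a + np.pi / 2))  # counter-clockwise tangent
                cands.append((pos, a - np.pi / 2))  # clockwise tangent
        if cands:
            return cands
        r += r_step
    return []  # caller falls back to the closest collision-free position
```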
Fig. 3: Illustration of the Bézier path evaluation. The PathRTR evaluation between the two presented poses consists of Rotate(30\({}^{\circ}\)), Translate(4\(\mathrm{m}\)), Rotate(90\({}^{\circ}\)). By comparison, the Bézier curve shown has a total length of 4.35\(\mathrm{m}\). Assuming the robot has maximum linear and angular velocities of 0.5\(\mathrm{ms}^{-1}\) and 100\({}^{\circ}\)s\({}^{-1}\), the estimated sequential PathRTR cost is 9.2s, while travelling the Bézier path and combining rotation and translation requires only 8.7s.
Fig. 2: A comparison of global paths generated by our modified STAA* (path in red) and the original (shown in yellow) for an example goal pose. The red \(x\) axis of the target frame represents the desired forward direction.
The path cost for each candidate is evaluated using the PathRTR metric described in [13]. The total path cost for the \(i\)-th candidate is given by the weighted sum of two components
\[C_{i}=C_{i,\text{C}}+1.05\cdot C_{i,\text{N}}\]
where \(C_{i,\text{C}}\) is the estimated cost from robot to candidate and \(C_{i,\text{N}}\) is the estimated cost from the candidate to the next target. The weighting that biases towards minimising \(C_{i,\text{N}}\) ensures that the robot continues to drive toward the next target even when close to the object. Decreasing this weight below 1 will tend towards a greedy solution that optimises for the immediate task only without considering the next task. Increasing the weight further will encourage the robot to sacrifice time efficiency on the immediate task in favour of minimising the expected travel time for the next task.
The candidate with lowest \(C_{i}\) is used as the goal for navigation. The method selects base placements that are within manipulation range of the target and efficiently connect the current robot pose with the next target. For example, the spheres in Fig. 4 are coloured based on their path cost, with brighter green representing lower cost. The paths for the optimal candidate are also shown.
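Given the candidates and a PathRTR cost estimator, the selection step reduces to a weighted argmin (a sketch; `path_rtr` is an assumed callable returning the estimated travel time between two poses):

```python
def best_candidate(robot_pose, candidates, next_target, path_rtr, w_next=1.05):
    """Pick the candidate minimising C_i = C_C + w_next * C_N (Eq. above)."""
    return min(
        candidates,
        key=lambda c: path_rtr(robot_pose, c) + w_next * path_rtr(c, next_target),
    )
```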
The path costs for each candidate are reevaluated at each controller step (20 \(\mathrm{Hz}\)) to enable reactive control and response to changes in the environment.
## V Arm Obstacle Avoidance
Obstacle avoidance for the manipulator is enabled by modifying the redundancy resolution controller described in [1]. This controller solves a Quadratic Program (QP) to calculate joint velocities for a given desired end-effector and base velocity. Using a similar controller, reactive obstacle avoidance for a mobile manipulator is demonstrated in [26]. The controller allows for slack in the achieved end-effector velocity which can be exploited along with redundant degrees of freedom to avoid obstacles.
Obstacle avoidance is implemented in the QP through the addition of an inequality constraint which limits the velocity of points on the arm when they are close to an obstacle. Further details on the implementation are available in [26].
Accurately modelling the environment in sufficient detail for robust collision avoidance in 3D is a difficult problem and outside the scope of this work. In [26], real-world trials are performed with simulated objects whose pose can be directly observed. In our system, we use the 2D lidar in the robot's base to construct an obstacle map for the arm. Any obstacles detected by the lidar are assumed to be tall enough that the arm should avoid them. When the obstacles are short this is a conservative assumption and the arm will unnecessarily avoid the space above the obstacle. However, when an obstacle is wider above the plane of the lidar (for example, a table supported by a central post), the system cannot observe the geometry at the height of the arm and there is a risk of collision. We mitigate this by providing the system with a pre-generated map of most of the collision geometry in the environment. Additional obstacles detected by the lidar are added to the map. In future work, more detailed collision geometry could be modelled online with a 3D lidar or depth camera, enabling improved obstacle avoidance.
We query the constructed manipulator obstacle map with a single point in the centre of the robot's gripper rather than computing the closest point on link collision meshes as in [26]. This simplifies the implementation, and we find that it achieves acceptable performance in our real-world testing. The end-effector speed toward the obstacle is limited to
\[\dot{d}_{ro}\leq\xi\frac{d_{ro}-d_{s}}{d_{i}-d_{s}}\]
where \(d_{ro}\) is the distance between the end-effector and the closest obstacle and \(\dot{d}_{ro}\) is the end-effector speed toward it, \(\xi=0.6\) is a gain controlling the aggressiveness of the obstacle avoidance, \(d_{s}=0.25~{}\mathrm{m}\) is the minimum distance allowed between end-effector and obstacle, and \(d_{i}=0.6~{}\mathrm{m}\) is the threshold at which the limit is enabled. For \(d_{ro}>d_{i}\) the constraint is removed from the QP.
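The constraint enters the QP as one extra inequality row on the joint velocities; a minimal sketch under stated assumptions (names are ours; `J_pos` is assumed to be the 3×N translational Jacobian of the gripper point, and the obstacle is treated as static):

```python
import numpy as np

def damper_row(p_ee, p_obs, J_pos, xi=0.6, d_s=0.25, d_i=0.6):
    """One QP inequality A @ qd <= b capping the approach speed to an obstacle.

    p_ee, p_obs: 3D positions of the gripper point and closest obstacle point.
    Returns None when the obstacle is outside the influence distance d_i,
    in which case the constraint is dropped from the QP.
    """
    delta = p_obs - p_ee
    d = np.linalg.norm(delta)
    if d > d_i:
        return None
    n_hat = delta / d                    # unit vector toward the obstacle
    A = n_hat @ J_pos                    # approach speed = n_hat . (J_pos @ qd)
    b = xi * (d - d_s) / (d_i - d_s)     # allowed speed toward the obstacle
    return A, b
```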
## VI Experiments
Real-world experiments are performed with our Frankie3 mobile manipulator that consists of an Omron LD-60 differential-drive mobile base and a Franka-Emika Panda 7-DOF manipulator. Robot control is implemented using the Robotics Toolbox [27]. The simulation and digital twin environments are implemented in Unity.
Footnote 3: [https://github.com/qcr/frankie_docs](https://github.com/qcr/frankie_docs)
### _Baseline Comparisons_
The method presented in [3] optimises task time by selecting base poses that are conditioned on the next target location in a higher-level task. This is similar to the method described in Section IV, but does not consider the challenges of performing the tasks on-the-move. To meaningfully compare with the work presented in [3], we recreate their experiments as closely as possible in simulation and the real world. [3] provides performance data for two baselines, as well as two versions of their proposed approach. These methods are described briefly here:
1. **Fixed Set Baseline:** The robot chooses a base placement from a set of 7 fixed candidates spaced around and facing the object table. Selection amongst the candidates is based on maximising manipulability.
Fig. 4: Examples of optimal base pose selection for two different drop locations. The red and cyan lines illustrate the path to the candidate and path from candidate to next target respectively for the optimal base placement.
2. **Inverse Reachability Maps (IRM) Baseline:** The optimal base placement is selected based on evaluation of a manipulability metric only and does not consider navigation costs.
3. **Greedy [3]:** The base placement includes both manipulability and navigation costs associated with a candidate but does not include navigation to the next location.
4. **Sequential [3]:** Optimal base placement is determined based on a weighted cost combining manipulability, navigation costs to a candidate, and navigation costs from the candidate to the next location.
Further details on these methods are available in [3]. All data relating to these approaches has been reproduced from [3].
### _Experiment 1: Static Obstacles_
A collection of 6 objects was placed on a 2.4 \(\times\) 0.8 \(\mathrm{m}\) table and must be transported to two drop locations. Fig. 5 shows the experiment layout; a dimensioned diagram is available on our project website. The table and drop point locations are consistent with those used in [3], allowing for meaningful comparison.
The 6 objects were placed randomly on the 12 possible object locations shown in Fig. 5 and a random order was assigned. The objects must be picked up in order and transported to alternating drop locations. This ensures a fair comparison with the experiments in [3]; however, it should be noted that performance improvements are possible by optimally selecting the order and drop location.
Our experiment differs from [3] in two respects. First, we use red 40 \(\mathrm{mm}\) cubic objects to simplify perception and grasp synthesis which is not the focus of this work. Second, we limit object positions to a set of 12 candidates around the perimeter of the table. Our robot has an arm reach of 0.855 \(\mathrm{m}\), significantly less than the 1.55 \(\mathrm{m}\) of the ARMAR-6 robot used in [3]. Positioning the objects closer to the table's edge allows more room for our robot to perform reactive grasping without colliding with the table. The reach of ARMAR-6 is sufficient to grasp objects on the far side of the table, whereas our robot must drive around the table, increasing the distance to be travelled. We limit the positions to a set of 12 candidates to improve reproducibility of the experiment.
We conducted 50 trials with random object arrangements in simulation and performed 10 trials in the real world. The first real-world object arrangement is hand-crafted for best comparison with the real-world trial presented in [3], and the remaining 9 were randomly chosen. Details of the 10 real-world scenarios are available on our project website.
### _Experiment 1a: Additional Static Obstacles_
The 50 simulated experiments conducted in Experiment 1 were repeated with the addition of 2 cuboid obstacles (shown in yellow on Fig. 5). These objects were positioned to be consistent with those included in scenario 2 presented in [3].
### _Experiment 2: Dynamic Obstacles_
This experiment uses the same basic arrangement as experiment 1 but introduces additional autonomous robots to the scene which function as dynamic obstacles. In simulation, four Temi robots were tasked with autonomously driving around the table (Fig. 6). The large number of robots relative to the size of the environment ensures frequent interactions and therefore avoidance manoeuvres. We also perform a real-world trial with a single Temi robot. The complex and unpredictable interactions between the robots make the task time metric difficult to interpret for these experiments. Instead, we demonstrate that our system can complete the task with dynamic obstacles in the real world and examine the data from an example simulated trial to investigate the performance of the obstacle avoidance functionality.
## VII Results
### _Experiment 1: Table Clearing_
#### VII-A1 Simulated Results
The results presented in Fig. 7 compare the time to complete the 6 object pick-and-place task with the baselines and approach presented in [3]. Our MotM method reduces the total task time by 43% compared to the sequential task optimised method presented in [3].
#### VII-A2 Real-world Results
Table I compares the real-world performance of our method with the IRM baseline and Sequential optimised method from [3]. On a single trial (6 object pickups) with approximately the same object arrangement and order, we successfully cleared the table
Fig. 5: Layout of objects and drop locations for experiment 1. For each trial, 6 of the 12 object positions were randomly selected and an order was generated for them to be alternately delivered to the two drop locations. The yellow squares indicate additional static obstacles for experiment 1a.
Fig. 6: Simulated experiment 2 with four autonomous Temi robots providing dynamic obstacles.
in 91 s, whereas the method presented in [3] takes 176 s, a task time reduction of 48%. Across 10 trials (60 object pickups) with random object arrangements, the task was completed in an average of 100.4 s with a success rate of 58 out of 60. The two failure cases resulted from grasp failures where imperfect control of the robot while the fingers were closing caused a collision between the object and fingers that destabilised the grasp. These failures could be recovered from by detecting the failure and allowing the robot to reopen the gripper and reattempt the grasp.
The consistency between the real-world and simulated timing data validates the simulation as a meaningful representation of real-world performance. In general, tasks are completed slightly faster in simulation, which we attribute to idealisations of the perception and low-level controllers.
### _Experiment 1a: Table Clearing with Static Obstacles_
Fig. 8 presents the simulated task times for the task described in Fig. 5 with additional static obstacles and compares our method to the baselines and approaches presented in [3]. We demonstrate a reduction in total task time of 46% which is consistent with the performance from experiment 1. These results demonstrate that our approach is robust to the inclusion of additional obstacles in the environment, even when they are close to the grasp locations and interfere with the ideal trajectory for grasping an object on the move.
### _Experiment 2: Table Clearing with Dynamic Obstacles_
We demonstrate real-world manipulation on-the-move in an environment with unpredictably moving dynamic obstacles. The time required to complete the task is dependent on the random interactions between our robot and the Temi acting as a dynamic obstacle. The complexity of the interactions makes the experiments impossible to properly replicate, and therefore it is meaningless to compare the task-time metric over relatively few trials. However, in an example real-world scenario, our system completed the task in 94 s in the presence of a dynamic obstacle. In this trial, there were 4 interactions where an evasive manoeuvre was required to avoid the other robot. For the same object arrangement without the dynamic obstacle, the task was completed in 91 s. This demonstrates that our system can robustly perform tasks in environments with dynamic obstacles while incurring minimal increase in overall task execution time.
Fig. 9 shows the minimum distance between the end-effector and the closest obstacle over an example simulated trial with 4 dynamic obstacles. These results demonstrate the effectiveness of the arm obstacle avoidance constraints described in Section V. When the distance dips into the region of influence (shown in orange), the constraint limits the velocity toward the obstacle. The minimum distance allowed between the end-effector and obstacles is 0.25 m.
Fig. 10 illustrates what the behaviour of the robot looks like in practice. After reaching out to grasp the object, the system retracts the arm to avoid the Temi while driving on towards the drop location to complete the task.
## VIII Discussion and Future Work
There are numerous avenues that can be explored in future work to improve performance.
#### VIII-A1 Obstacle Motion Prediction
Our current implementation has no predictive capability for the motion of dynamic
\begin{table}
\begin{tabular}{l c c c} \hline \hline
\multirow{2}{*}{**Method**} & **Simulation** & \multicolumn{2}{c}{**Real-World**} \\
 & Task Time & Task Time & Success Rate \\ \hline
IRM Baseline [3] & 192.9 s & 202 s & 6/6 \\
Sequential [3] & 172.3 s & 176 s & 6/6 \\
Ours (Single Trial) & 85.6 s & 91 s & 6/6 \\
Ours (Ten Trials) & 95.9 s & 100.4 s & 58/60 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Comparison of task times for experiment 1 in simulation and the real world. Our real-world results are averaged across 10 different scenarios, whereas the results for the IRM baseline and Sequential methods are from single trials conducted in [3].
Fig. 8: Time required to complete the experiment 1a task with additional obstacles in simulation. Note that as in Fig. 7 results taken from [3] include grasping times.
Fig. 7: Time required to complete the simulated experiment 1 task. The box plots for the Fixed Set, IRM, Greedy, and Sequential methods have been recreated from the results published in [3] which do not include the time required to complete the grasps, which they estimate at about 40s. Consequently, the timings from [3] have been increased by 40s to compare with our results. For all experiments the objects are rectangular prisms, with comparable grasp complexity.
obstacles. Instead, dynamic obstacles are detected at each time step and treated the same as static obstacles for path planning. We have noticed that this results in a tendency for our system to cut in front of other robots and people in the environment. For example, consider the case illustrated in Fig. 11. If we assume that the Temi is stationary then the optimal path to the object (red cube) passes in front of the obstacle. This is the path that is currently taken by our system. However, if the Temi was modelled as a dynamic obstacle, its time history (shown as transparent) could be used to predict that the obstacle will keep driving forward. In that case, a better path is to drive directly toward the object allowing the Temi to move out of the way before our robot arrives.
The implementation of STAA* presented in [13] includes support for dynamic obstacle motion prediction and demonstrates the desired behaviour. However, obstacle motion prediction is not included in this work due to the difficulty of accurately segmenting obstacles from real-world lidar data. A system that explicitly modelled dynamic objects would allow for improved performance in environments with other moving agents. We view this as a perception challenge that could be addressed in future work.
#### VIII-A2 3D Collision Geometry for Arm
As discussed in Section V, collision avoidance for the arm is enabled by assuming that all obstacles detected by the base lidar are tall enough that they should be avoided by the arm. This is both overly conservative for short obstacles and does not prevent collision with obstacles above the plane of the 2D lidar. Including additional sensors and a perception pipeline that accurately models 3D geometry would enable improved manipulator obstacle avoidance and manipulation in cluttered environments such as reaching into a bin or a cupboard.
#### VIII-A3 Failure Recovery
The base control system presented in this work reactively adjusts the base motion to keep the object in reach until the grasp is completed. This is more thoroughly investigated in our related work presented in [28]. However, we have not implemented a robust method for detecting grasp failures in the real world. The robot's gripper provides infrequent feedback on the finger position and applied forces, which makes it difficult to detect failures through proprioceptive methods. Instead, additional tactile or vision sensors could be used for feedback. If grasp failures could be reliably detected, then the presented system would control the robot such that grasps can be reattempted until success is achieved. This would enable recovery from manipulation failures where a second attempt can be easily executed. However, failures such as the object falling to the floor will likely require more sophisticated recovery behaviours including searching for the object.
#### VIII-A4 Gripper Speed
The relatively slow speed of the Franka-Emika Panda gripper requires the system to stabilise the gripper over the object for a significant period of time (approximately 0.8 s) while the fingers close. This limits the speed at which the robot can be driving past while the grasp is attempted. A gripper that closed more quickly could be used to enable faster grasping on-the-move.
#### VIII-A5 Phantom Collisions
The Franka-Emika Panda manipulator is a cobot with in-built collision detection implemented through joint torque measurement. This is a valuable safety feature, however, accelerations of the mobile base caused by changes in commanded velocity as well as bumps in the terrain can impart torques on the robot joints that are registered as collisions and result in a pause or shutdown of the arm's controller. Decreasing the frequency of these
Fig. 11: Demonstration of our system’s tendency to cut in front of dynamic obstacles. Transparent robots illustrate Temi’s history and Franka’s planned path.
Fig. 10: A series of snapshots from our system’s arm and base successfully avoiding another robot while performing a pick-and-place task on-the-move.
Fig. 9: Minimum distance between end-effector and closest detected obstacle over a simulated trial with dynamic obstacles. Discontinuities are caused by obstacles coming into view as the robot moves through the environment. The red and orange regions indicate the minimum end-effector to obstacle distance and region of influence of the constraint described in Section V.
detections by increasing arm compliance warrants further investigation for higher speed performance.
#### VIII-A6 Grasp Synthesis for Complex Objects
The simple perception and grasp synthesis method used in this work limits the system to grasping objects of simple geometry and uniform colour. The addition of a more sophisticated grasp synthesis method would enable the grasping of more complex objects. The key challenge in this respect is developing a grasp synthesis system that provides closed-loop feedback throughout the grasping action, all the way until the fingers close. Although not essential, closed-loop feedback throughout the grasping action improves performance in the highly dynamic environment of manipulation on-the-move, and provides reliable performance that is robust to imprecise perception and localisation, and inaccurate robot control [29].
## IX Conclusion
We presented a base control method that integrates with the architecture presented in [1] to enable reactive manipulation on-the-move in complex environments with dynamic obstacles. Further, the QP in the redundancy resolution module is augmented to include an inequality constraint that provides collision avoidance for the manipulator. We have explored the system's performance in a number of simulated and real-world experiments. We demonstrated a reduction in task execution time of 48% compared to a state-of-the-art method on an example real-world pick-and-place task. In addition, we show that the mobile manipulator can perform pick-and-place tasks while its arm and base both avoid dynamic obstacles. Several limitations of the system are explored which highlight interesting opportunities for further research.
|
2308.16824 | Can Programming Languages Boost Each Other via Instruction Tuning? | When human programmers have mastered a programming language, it would be
easier when they learn a new programming language. In this report, we focus on
exploring whether programming languages can boost each other during the
instruction fine-tuning phase of code large language models. We conduct
extensive experiments of 8 popular programming languages (Python, JavaScript,
TypeScript, C, C++, Java, Go, HTML) on StarCoder. Results demonstrate that
programming languages can significantly improve each other. For example,
CodeM-Python 15B trained on Python is able to increase Java by an absolute
17.95% pass@1 on HumanEval-X. More surprisingly, we found that CodeM-HTML 7B
trained on the HTML corpus can improve Java by an absolute 15.24% pass@1. Our
training data is released at https://github.com/NL2Code/CodeM. | Daoguang Zan, Ailun Yu, Bo Shen, Jiaxin Zhang, Taihong Chen, Bing Geng, Bei Chen, Jichuan Ji, Yafen Yao, Yongji Wang, Qianxiang Wang | 2023-08-31T15:53:51Z | http://arxiv.org/abs/2308.16824v2 | # Can Programming Languages Boost Each Other
###### Abstract
When human programmers have mastered a programming language, it would be easier when they learn a new programming language. In this report, we focus on exploring whether programming languages can boost each other during the instruction fine-tuning phase of code large language models. We conduct extensive experiments of \(8\) popular programming languages (Python, JavaScript, TypeScript, C, C++, Java, Go, HTML) on StarCoder. Results demonstrate that programming languages can significantly improve each other. For example, CodeM-Python \(15\)B trained on Python is able to increase Java by an absolute \(17.95\)% pass\(@1\) on HumanEval-X. More surprisingly, we found that CodeM-HTML 7B trained on the HTML corpus can improve Java by an absolute \(15.24\)% pass\(@1\). Our training data is released at [https://github.com/NL2Code/CodeM](https://github.com/NL2Code/CodeM).
Large Language Model · Code Generation · Programming Language · Instruction Tuning
## 1 Introduction
Code large language models (code LLMs) have been blooming recently [22]. Many code LLMs have been released in succession, e.g., Codex [Chen et al., 2021], AlphaCode [Li et al., 2022], PaLM-Coder [Chowdhery et al., 2022], CodeGen [Nijkamp et al., 2023], CodeGeeX [Zheng et al., 2023], StarCoder [Li et al., 2023], and Code Llama [Roziere et al., 2023]. Owing to their impressive code generation performance, code LLMs have attracted considerable attention from both academic and industrial circles. Recent work [Ouyang et al., 2022] has introduced the instruction tuning technique, which teaches LLMs how to follow instructions. In the realm of code generation, WizardCoder [Luo et al., 2023] and PanGu-Coder [Shen et al., 2023] also adopt this technique to elicit their code generation capabilities. Although some code LLMs, such as CodeGen-Multi Nijkamp et al. [2023] and StarCoder-base Li et al. [2023], are trained on corpora spanning multiple programming languages, the interplay among these languages remains unexplored. In programming practice, once a human programmer has mastered a programming language, it is easier to learn a new one due to the homogeneity between programming languages. Motivated by this, we would like to explore whether different programming languages can boost each other during instruction fine-tuning of code LLMs.
To explore this idea, we craft a training corpus for each of \(8\) popular programming languages (Python, JavaScript, TypeScript, C, C++, Java, Go, HTML), where each language includes about \(9\)K programming exercises. We train StarCoder \(7\)B using the instruction tuning technique on each programming language corpus separately, and test the performance of each fine-tuned model across every programming language. Our findings reveal that programming languages can significantly boost each other. Moreover, we find that the margin by which one programming language improves another is related to the similarity between the two languages. For example, CodeM-JavaScript 7B trained on JavaScript data can yield an absolute \(11.80\)% pass\(@1\) improvement in TypeScript. More interestingly,
CodeM-HTML 7B trained on the markup language HTML can also achieve an absolute \(15.24\)% pass\(@1\) improvement in Java.
In a nutshell, our contributions can be listed as follows: (1) Our findings suggest that programming languages can significantly boost each other during code LLMs' instruction fine-tuning phase. (2) We glean valuable insights on the correlation between multiple programming languages, paving the way for future research on code generation. (3) We will make our training data publicly available.
## 2 Methodology
### Crafting Training Corpus of Eight Programming Languages
We select \(8\) popular programming languages and construct their training data separately. Our selected languages include Python, JavaScript, TypeScript, C, C++, Java, Go, and HTML, covering diverse types such as procedure-oriented, object-oriented, script, and even markup languages. For each programming language, we construct its training data containing about 9K data pairs. Each pair includes both an instruction describing the programming problem and its corresponding response. One practical example of HTML is shown in Figure 1.
Based on these selected languages, we construct a series of monolingual datasets. We start from the CodeAlpaca 20K dataset2, and extract the Python-related data to form our seed instruction set. Then, for each selected programming language, we evolve the existing instructions in the seed instruction set to get corresponding new ones by prompting OpenAI's GPT-3.53. For all the selected languages except HTML, we adopt in-depth evolution [22], by asking GPT-3.5 to rewrite the seed instruction (Python) into a more complicated version relevant to the target language (Python, JavaScript, TypeScript, C, C++, Java, or Go). However, for HTML, we adopt in-breadth evolution to produce a brand-new HTML-related instruction, since HTML (a markup language) is too different from the other (non-markup) languages.
Footnote 2: [https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
Footnote 3: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
### Instruction Tuning
Code pre-trained models such as Codex [3] and StarCoder [11] store a wealth of code knowledge. However, these models only support left-to-right code generation based on context, as they are trained solely on plain code snippets. Recently, instruction tuning techniques [23, 19] have been proposed, which enhance the model's ability to follow instructions and thereby enable chat features. During instruction tuning, we train StarCoder using the prompt in Figure 2 to obtain our CodeM. We use DeepSpeed to accelerate the training of CodeM with fp16 enabled. Additionally, we set the batch size to \(2\) per GPU, the learning rate to \(2\)e-\(5\) with a cosine annealing schedule, the gradient accumulation steps to \(4\), and the warmup steps to \(30\). After instruction tuning, we use the prompt in Figure 3 to do inference on downstream tasks across various programming languages. For inference, we adopt the greedy decoding strategy for sampling. Given that CodeM is a
Figure 1: A HTML training example of our crafted instruction-answer pairs.
chat-style model, the responses it generates often contain elements beyond just code, which typically makes them non-executable. We therefore extract the code snippets from the generated response to evaluate the performance of code generation.
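A minimal sketch of this extraction step (our own helper, assuming responses wrap code in Markdown-style fences):

```python
import re

# Matches the body of the first fenced code block in a chat response, if any.
FENCE = re.compile(r"`{3}[\w+-]*\n(.*?)`{3}", re.DOTALL)

def extract_code(response: str) -> str:
    """Return the first fenced code block, falling back to the raw response."""
    m = FENCE.search(response)
    return m.group(1).strip() if m else response.strip()
```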
## 3 Experiments
### Evaluation Setup
#### 3.1.1 Benchmarks and Baselines
We use HumanEval-X [22] to evaluate the multilingual abilities of models in Python, JavaScript, C++, Java, and Go. HumanEval-X is crafted by adapting HumanEval [Chen et al., 2021] (Python) to other programming languages. Following the same approach as HumanEval-X, we also create two new versions of HumanEval: HumanEval-C and HumanEval-TypeScript. Note that HumanEval cannot be directly adapted to markup languages such as HTML, so our downstream evaluation languages do not include HTML.
The primary baseline for all language versions of CodeM is their base model StarCoder. We analyze whether CodeM trained on language A can improve language B, in which case the baselines are CodeM directly trained on language B.
#### 3.1.2 Metrics
We adopt pass\(@1\) as our metric to evaluate all the models. Each model generates one answer using the greedy decoding strategy for each programming task, and the answer is executed against the given test cases. A programming task is considered solved by the generated code only when all of its test cases pass. In this setting, pass\(@1\) can be formulated as \(\frac{|P_{c}|}{|P|}\), where \(|P|\) denotes the total number of programming tasks in HumanEval and \(|P_{c}|\) represents the number of solved tasks. In essence, the pass\(@1\) metric we use can be considered as accuracy.
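Under greedy decoding this metric is just a per-task solved ratio; a sketch (with `generate` and `run_tests` as assumed callables for sampling and test execution):

```python
def pass_at_1(tasks, generate, run_tests):
    """pass@1 with one greedy sample per task: |P_c| / |P|.

    generate(task) -> code string (greedy decoding, one sample);
    run_tests(code, task) -> True iff every test case passes.
    """
    solved = sum(bool(run_tests(generate(task), task)) for task in tasks)
    return solved / len(tasks)
```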
### Results
#### 3.2.1 Main Results
Table 1 shows the performance of CodeM, which are a series of models trained on monolingual datasets of eight languages respectively, across different language versions of HumanEval. As we can see, all CodeM models outperform
Figure 3: Prompt format of inference. {language}, {problem}, and {signature} represent the downstream programming language, the given programming problem, and the function header, respectively.
Figure 2: Prompt format of instruction tuning. {problem} and {response} refer to the instruction and answer obtained in Section 2.1.
their base model StarCoder \(7\)B across all programming languages by a large margin. Also, we found that programming languages can boost each other significantly. For example, CodeM-Python trained solely on the Python corpus is able to improve HumanEval-Java by an absolute \(14.03\)% pass\(@1\). This finding reveals the inherent commonalities among different programming languages. More surprisingly, CodeM-HTML boosts HumanEval-Java by an absolute \(15.24\)% pass\(@1\), even exceeding CodeM-Java. Similarly, CodeM-C++ beats CodeM-C on HumanEval-C, and CodeM-JavaScript beats CodeM-TypeScript on HumanEval-TypeScript. Drawing upon these observations, we conjecture that the improvement in multilingual code generation performance is predominantly due to instruction tuning unlocking the model's inherent potential, such as natural or programming language understanding and instruction-following capabilities, rather than merely incorporating new knowledge. In addition to training CodeM on a monolingual training corpus, we further construct a 9K multilingual training set covering \(8\) programming languages. Although each language comprises only a small amount (\(\sim\)1.2K) of training instances, experimental findings suggest that CodeM-Mixed excels in all languages, even surpassing CodeM-Python on HumanEval-Python and CodeM-Java on HumanEval-Java. This suggests that it is possible to yield superior code generation performance by leveraging multilingual data in instruction tuning, without harming the generalization of the model.
We also conduct experiments on StarCoder 15B to verify the effectiveness of CodeM. Specifically, we obtain 108K Python training examples following WizardCoder [11] and finetune StarCoder 15B to get CodeM-Python. The results are shown in Table 2. CodeM-Python achieves state-of-the-art performance on HumanEval-Python with \(64.63\)% pass\(@1\) compared with other models of the same scale. CodeM-Python also brings substantial improvements in other programming languages; for instance, it improves Java and JavaScript by an absolute \(17.95\)% and \(16.77\)% pass\(@1\), respectively.
#### 3.2.2 Closer Analysis
We analyze the correlation between different programming languages. As illustrated in Figure 4 (a), the gain in code generation performance depends on which programming language the training corpus is drawn from. Moreover, C and C++ boost each other especially strongly, as do JavaScript and TypeScript. This is plausible because these language pairs are related in their design, sharing syntax and grammar. Figure 4 (b) shows that training on any one programming language boosts code generation performance in all the others. The correlation values in Figure 4 (b) are almost all positive, implying that the per-language improvement profiles induced by the different monolingual training corpora are relatively similar.
\begin{table}
\begin{tabular}{l|l l l l l l l} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{7}{c}{**HumanEval-Multilingual**} \\ & Python & JavaScript & TypeScript & C & C++ & Java & Go \\ \hline \hline StarCoder 7B & 26.83 & 24.39 & 28.57 & 24.69 & 25.61 & 23.17 & 24.39 \\ CodeM-Python & 38.41 & 34.76 & 33.54 & 29.01 & 34.15 & 37.20 & 27.44 \\ CodeM-JavaScript & 37.20 & **40.24** & **40.37** & 27.78 & 32.93 & 34.76 & 26.22 \\ CodeM-TypeScript & 33.54 & 37.80 & 37.28 & 30.25 & 30.49 & 28.05 & 25.61 \\ CodeM-C & 39.63 & 37.20 & 32.30 & 32.10 & 35.37 & 38.41 & 28.66 \\ CodeM-C++ & 34.57 & 35.37 & 32.30 & **34.57** & **39.02** & 37.20 & 28.05 \\ CodeM-Java & 35.37 & 33.54 & 32.30 & 29.63 & 31.10 & 37.80 & 27.44 \\ CodeM-Go & 35.98 & 33.54 & 31.68 & 30.25 & 34.15 & 35.98 & **32.32** \\ CodeM-HTML & 31.71 & 33.54 & 32.30 & 25.93 & 28.66 & 38.41 & — \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pass@1 (%) of StarCoder 7B and the CodeM variants on the multilingual HumanEval benchmarks; bold marks the best result per language among the listed models.
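To make this kind of analysis concrete, the sketch below computes the correlation between two per-language improvement profiles taken from Table 1; using the Pearson coefficient here is our own illustrative choice and not necessarily the exact statistic behind Figure 4.

```python
import numpy as np

# pass@1 (%) from Table 1 over (Python, JS, TS, C, C++, Java, Go).
starcoder = np.array([26.83, 24.39, 28.57, 24.69, 25.61, 23.17, 24.39])
codem_c   = np.array([39.63, 37.20, 32.30, 32.10, 35.37, 38.41, 28.66])
codem_cpp = np.array([34.57, 35.37, 32.30, 34.57, 39.02, 37.20, 28.05])

# Absolute per-language improvements over the shared base model.
gain_c, gain_cpp = codem_c - starcoder, codem_cpp - starcoder

# Pearson correlation of the two improvement profiles.
r = np.corrcoef(gain_c, gain_cpp)[0, 1]
print(f"C vs. C++ improvement-profile correlation: {r:.2f}")
```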
## 4 Related Work
Codex [Chen et al., 2021], with 12 billion parameters, is able to solve Python programming problems automatically. This remarkable success triggered a significant buzz in both the academic and industrial realms. Following Codex, a number of code LLMs have been proposed, including AlphaCode [Li et al., 2022], PaLM-Coder [Chowdhery et al., 2022], CodeGen [Nijkamp et al., 2023], InCoder [Fried et al., 2023], CodeGeeX [Zheng et al., 2023], replit4, CodeT5 [Wang et al., 2021, 2023], PyCodeGPT [Zan et al., 2022], SantaCoder [Allal et al., 2023], StarCoder [Li et al., 2023], Code Llama [Roziere et al., 2023], and phi-1 [Gunasekar et al., 2023]. These models are trained on large-scale code corpora and achieve impressive code generation performance. During pre-training, some models are trained on datasets spanning multiple programming languages and are then fine-tuned on a monolingual dataset to produce a more powerful specialist version. As for the instruction fine-tuning phase, WizardCoder [Luo et al., 2023], PanGu-Coder2 [Shen et al., 2023], and Phind-CodeLlama5 have been proposed to bolster the capability of following instructions and further boost code generation. Yet none of these models explore the intricate interplay between different programming languages. In this report, we therefore investigate whether training code LLMs on monolingual data can bolster performance in other programming languages.
Footnote 4: [https://huggingface.co/replit/replit-code-v1-3b](https://huggingface.co/replit/replit-code-v1-3b)
Footnote 5: [https://huggingface.co/Phind/Phind-CodeLlama-34B-v1](https://huggingface.co/Phind/Phind-CodeLlama-34B-v1)
## 5 Conclusion
Our findings reveal that a monolingual training corpus can enhance the multilingual code generation capabilities of code LLMs via instruction tuning. This highlights the intrinsic commonality and interconnectedness among programming languages. In future work, we plan to investigate why multiple languages can enhance each other. We will also explore how to leverage our findings to improve code generation for obscure or less-used programming languages by training on data from popular ones.
## Acknowledgements
We would like to thank our colleagues for their valuable feedback and insights. Special thanks to An Fu (Huawei), Jingyang Zhao (Huawei), and Yuenan Guo (Huawei) for their constructive help throughout this research.
|
2309.03866 | On the singular limit problem in nonlocal balance laws: Applications to nonlocal lane-changing traffic flow models | We present a convergence result from nonlocal to local behavior for a system of nonlocal balance laws. The velocity field of the underlying conservation laws is diagonal. In contrast, the coupling to the remaining balance laws involves a nonlinear right-hand side that depends on the solution, nonlocal term, and other factors. The nonlocal operator integrates the density around a specific spatial point, which introduces nonlocality into the problem. Inspired by multi-lane traffic flow modeling and lane-changing, the nonlocal kernel is discontinuous and only looks downstream. In this paper, we prove the convergence of the system to the local entropy solutions when the nonlocal operator (chosen to be of an exponential type for simplicity) converges to a Dirac distribution. Numerical illustrations that support the main results are also presented. | Felisia Angela Chiarello, Alexander Keimer | 2023-09-07T17:27:28Z | http://arxiv.org/abs/2309.03866v1 |
On the singular limit problem in nonlocal balance laws: Applications to nonlocal lane-changing traffic flow models
###### Abstract
We present a convergence result from nonlocal to local behavior for a system of nonlocal balance laws. The velocity field of the underlying conservation laws is diagonal. In contrast, the coupling to the remaining balance laws involves a nonlinear right-hand side that depends on the solution, nonlocal term, and other factors. The nonlocal operator integrates the density around a specific spatial point, which introduces nonlocality into the problem. Inspired by multi-lane traffic flow modeling and lane-changing, the nonlocal kernel is discontinuous and only looks downstream. In this paper, we prove the convergence of the system to the local entropy solutions when the nonlocal operator (chosen to be of an exponential type for simplicity) converges to a Dirac distribution. Numerical illustrations that support the main results are also presented.
keywords: Nonlocal balance law, singular limit problem, convergence to the entropy solution, lane-changing, traffic flow modeling

MSC (2010): 35L65; 90B20
## 1 Introduction and problem setup
Conservation laws with nonlocal fluxes are frequently used in vehicular traffic modeling. These models aim to describe drivers who adjust their velocity based on conditions ahead of them, see [13; 14; 15; 12; 24; 29; 34]. There are general existence and uniqueness results for nonlocal conservation laws, as discussed in [5; 29] for scalar equations in one space dimension, [20; 33] for multi-dimensional scalar equations, and [1] for multi-dimensional systems. Two primary approaches are commonly employed to establish solutions for these models: one provides suitable compactness estimates for a sequence of approximate solutions constructed through finite volume schemes, as in [9; 24; 14]; the other relies on characteristics and fixed-point theorems, as proposed in [29; 33]. Nonlocal conservation laws on a bounded domain have been studied in [22; 25; 34], and in [21] for multi-dimensional nonlocal systems using similar methods as described above. This study focuses on the singular limit problem of nonlocal conservation laws within the context of systems consisting of two (or more) equations. Specifically, we aim to establish the convergence of nonlocal solutions to the entropy-admissible solution of the local conservation law. This convergence occurs when we replace the convolution kernel with a Dirac delta function. This problem was initially posed in [4], where the authors conducted a numerical investigation. Subsequently, several authors studied the nonlocal-to-local convergence for the general scalar one-dimensional case without specific assumptions regarding the kernel function and the initial density. In particular, some counter-examples rule out convergence in the general case, see [18]. On the contrary, within the specific framework of traffic models, which includes anisotropic convolution kernels and nonnegative density, the singular limit has been established in the scalar case for nonlocal conservation laws. This has been achieved in the case of the exponential kernel [17] or by imposing monotonicity requirements on the initial datum [30]. Recently, a more general result was obtained in [19], which considers a convexity assumption on the convolution kernels. In [10], the authors demonstrated nonlocal-to-local convergence by considering an initial datum with bounded total variation bounded away
from zero and an exponential weight. Moreover, the authors established that the solution approaches an entropic state in the limit, assuming \(V\) is an affine function; this extends the result in [11] to more general fluxes. In [32], the authors studied the same singular limit problem for kernels with fixed support and obtained convergence to the local entropy solution in these cases.
In addition, there is a recent study on the nonlocal \(p-\)norm [3], where, under rather general assumptions and for sufficiently large \(p\), an Oleinik-type [2; 37] inequality is derived. This inequality ensures immediate convergence to the local entropy solution. Such an Oleinik estimate had also been obtained for the earlier mentioned "classical" singular limit problem in [16] using additional constraints on the involved velocities and/or the initial datum.
However, none of the previously mentioned studies have addressed systems of nonlocal balance laws and their singular limit, which is one of the reasons why we explore them in this paper. We obtain a convergence result, with potential applications in traffic models, for a _system_ of nonlocal balance laws (two equations) with lane-changing functions on the right-hand side and exponential kernels in the flux functions. This can be formulated as follows:
\[\partial_{t}\boldsymbol{\rho}+\partial_{x}\big{(}\boldsymbol{V}(\gamma* \boldsymbol{\rho})\boldsymbol{\rho}\big{)}=\boldsymbol{S}(\boldsymbol{\rho}, \gamma*\boldsymbol{\rho})\quad\stackrel{{\gamma\to\delta}}{{ \longrightarrow}}\quad\partial_{t}\boldsymbol{\rho}+\partial_{x}\big{(} \boldsymbol{V}(\boldsymbol{\rho})\boldsymbol{\rho}\big{)}=\boldsymbol{S}( \boldsymbol{\rho},\boldsymbol{\rho}) \tag{1}\]
with the density \(\boldsymbol{\rho}:\Omega_{T}\to\mathbb{R}^{2}\), \(\gamma\) an exponential one-sided kernel, \(\boldsymbol{V}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) a "diagonal" velocity function, and \(\boldsymbol{S}:\mathbb{R}^{4}\to\mathbb{R}^{2}\) a "semi-linear" right-hand side (for the precise statement see Asm. 1 and Eq. (3), Eq. (4)). To our knowledge, this represents the first instance of a nonlocal-to-local convergence result for such systems. Coupling between the equations of the system appears _only_ on the right-hand side, which means that some of the well-known methods for passing to the local limit remain applicable. As an application, we consider a traffic flow model with two lanes and lane-changing functions. However, our analysis is not limited to a system of two equations; we maintain the two-equation system solely for simplicity. The approach taken in this paper is as follows: we obtain a total variation (\(TV\)) bound on the nonlocal terms, uniform in \(\eta\), as well as a maximum principle. These findings enable us to pass to the limit in the weak formulation. Furthermore, we can demonstrate entropy admissibility, akin to the scalar case presented in [11].
The paper is organized as follows: Section 2 presents the model in the nonlocal and local settings. In Section 3, we revisit some well-posedness results, while in Section 4, we demonstrate how to transition to the limit for \(\eta\to 0\). This is accomplished by recovering uniform bounds on the total variation of the nonlocal operators and introducing a compactness argument. Section 5 is dedicated to numerical simulations that support the analytical results. Lastly, Section 6 concludes the paper by outlining some remaining problems.
## 2 Modeling and fundamental assumptions
As mentioned above, our analysis will be limited to two nonlocal scalar balance laws coupled via the right-hand side. This results in a system of nonlocal balance laws that can model lane-changing with macroscopic traffic flow equations.
In this context, it is helpful to state some classical assumptions on the involved velocity functions, initial data, etc. We refer the reader to Eq. (3) and Eq. (4), where the introduced functions are used.
**Assumption 1** (General assumptions regarding the utilized data).: _The following is assumed:_
**Lane-wise velocities:**: \(V_{1},V_{2}\in W^{2,\infty}(\mathbb{R}):\ V_{1}^{\prime}\leqq 0\geqq V_{2}^{\prime}\)__
**Maximum lane densities:**: \(\exists\boldsymbol{\rho}_{\max}\in\mathbb{R}_{>0}^{2}\)__
**Initial datum:**: \(\boldsymbol{\rho}_{0}\in L^{\infty}\Big{(}\mathbb{R};\big{[}0,\boldsymbol{ \rho}_{\max}^{1}\big{]}\times\big{[}0,\boldsymbol{\rho}_{\max}^{2}\big{]} \Big{)}\cap TV\big{(}\mathbb{R};\mathbb{R}^{2}\big{)}\)__
**Nonlocal impact:**: \(\eta\in\mathbb{R}_{>0}\)__
**RHS, lane changing:**
\[S\big{(}\mathbf{\rho},\mathcal{W}_{\eta}[\mathbf{\rho}],x\big{)}=\Big{(}\tfrac{\mathbf{\rho} ^{2}}{\mathbf{\rho}_{\max}^{2}}-\tfrac{\mathbf{\rho}^{1}}{\mathbf{\rho}_{\max}^{1}}\Big{)}H (\mathcal{W}_{\eta}[\mathbf{\rho}],x),\qquad x\in\mathbb{R}\]
_with \(H\in W^{1,\infty}_{\mathrm{loc}}(\mathbb{R}^{3};\mathbb{R}_{\geq 0})\) such that \(\exists(\mathcal{H},\mathcal{H}_{1},\mathcal{H}_{2},\mathcal{H}_{BV})\in\mathbb{ R}_{\geq 0}^{4}:\)_
\[\|H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty})\times(0,|\mathbf{ \rho}_{\max}|_{\infty})\times\mathbb{R})}\leq\mathcal{H}\ \wedge\ \|\partial_{1}H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty})\times(0,|\mathbf{ \rho}_{\max}|_{\infty})\times\mathbb{R})}\leq\mathcal{H}_{1}\] \[\wedge\ \|\partial_{2}H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{ \infty})\times(0,|\mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\leq\mathcal{H }_{2}\ \wedge\ \|H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty})\times(0,|\mathbf{ \rho}_{\max}|_{\infty});BV(\mathbb{R}))}\leq\mathcal{H}_{BV(\mathbb{R})}.\]
_Here, we define \(TV(\mathbb{R})\coloneqq\{f\in L^{1}_{loc}(\mathbb{R}):|f|_{TV(\mathbb{R})}<\infty\}\) and \(BV(\mathbb{R})\coloneqq\{f\in L^{1}(\mathbb{R}):|f|_{TV(\mathbb{R})}<\infty\}\) as well as the considered space-time horizon \(\Omega_{T}\coloneqq(0,T)\times\mathbb{R}\) for \(T\in\mathbb{R}_{>0}\)._
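Since the \(TV\) semi-norm defined above drives the estimates in Section 4, the following minimal numerical sketch (grid and test profile are arbitrary choices) shows its discrete analogue on sampled data.

```python
import numpy as np

def total_variation(samples: np.ndarray) -> float:
    """Discrete analogue of |f|_{TV(R)}: the sum of absolute increments.

    Exact for piecewise-constant data; approximates int |f'(x)| dx
    for smooth f as the grid is refined.
    """
    return float(np.abs(np.diff(samples)).sum())

x = np.linspace(-5.0, 5.0, 2001)
rho0 = 0.5 * (np.tanh(2.0 * x) + 1.0)   # monotone transition from 0 to 1
print(total_variation(rho0))            # ~1.0, the height of the transition
```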
**Remark 1** (Reasonableness of Asm. 1).: _The assumption that the velocities are monotonically decreasing is reasonable in traffic flow modeling and is one of the main reasons why a maximum principle can hold (see Theorem 3.2). The assumption that the initial datum is essentially bounded and nonnegative is canonical. However, one might question the necessity of additionally assuming \(TV\) regularity. As we later aim for uniform \(TV\) bounds on the nonlocal term, this assumption is required because the considered nonlocal equations do not possess the well-known \(BV\) regularization (available for strictly convex/concave flux). The nonlocal impact \(\eta\) represents how far downstream traffic affects the velocity. Because we use an exponential kernel (see Section 2), the look-ahead is always infinite, but for small \(\eta\) the effective range is small and the impact becomes more localized. Finally, the right-hand side models the potential lane change from one lane to another. It already encodes the requirement that if one lane is empty, density can only come from the other lane, and it allows the lane change to depend on the location. The term \(H\) represents how the density exchange between lanes scales with the density ahead; this can also be interpreted as a velocity scaling. However, this condition can be considered restrictive, as it disallows lane-changing on all of \(\mathbb{R}\) and only permits it in a way that_
\[\|H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty})\times(0,|\mathbf{ \rho}_{\max}|_{\infty});BV(\mathbb{R}))}\leq\mathcal{H}_{BV(\mathbb{R})} \tag{2}\]
_holds. This condition could be removed if we either assumed that the nonlocal kernel \(\gamma\) in Eq. (1) is compactly supported (and not of exponential type as in this contribution, compare Eq. (3)) or that the initial datum is in \(L^{1}(\mathbb{R})\) and not -- as currently assumed -- "only" in \(L^{\infty}(\mathbb{R})\cap TV(\mathbb{R})\). In both cases, the total variation estimates in Theorem 4.2 and the compactness in Theorem 4.4 could still be established, and Eq. (2) would not be required._
_In conclusion, one can state that none of the assumptions are restrictive for applications in traffic flow modeling._
The system of nonlocal balance laws considered in this manuscript can be expressed as follows:
**Nonlocal problem** (The nonlocal system of balance laws).: _Let Asm. 1 hold, and consider the "weakly" coupled (via the right-hand side) system_
\[\partial_{t}\mathbf{\rho}^{1}(t,x)+\partial_{x}\Big{(}V_{1}(\mathcal{W }_{\eta}[\mathbf{\rho}^{1}](t,x))\mathbf{\rho}^{1}(t,x)\Big{)} =S\big{(}\mathbf{\rho}(t,x),\mathcal{W}_{\eta}[\mathbf{\rho}](t,x),x\big{)},\quad(t,x)\in(0,T)\times\mathbb{R} \tag{3}\] \[\partial_{t}\mathbf{\rho}^{2}(t,x)+\partial_{x}\Big{(}V_{2}(\mathcal{ W}_{\eta}[\mathbf{\rho}^{2}](t,x))\mathbf{\rho}^{2}(t,x)\Big{)} =-S\big{(}\mathbf{\rho}(t,x),\mathcal{W}_{\eta}[\mathbf{\rho}](t,x),x\big{)},\quad(t,x)\in(0,T)\times\mathbb{R}\] \[\mathbf{\rho}(0,x) =\mathbf{\rho}_{0}(x),\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad x\in\mathbb{R}\] \[\mathcal{W}_{\eta}[\rho](t,x) =\tfrac{1}{\eta}\int_{x}^{\infty}\exp\Big{(}\tfrac{x-y}{\eta} \Big{)}\,\rho(t,y)\,\mathrm{d}y,\quad(t,x)\in(0,T)\times\mathbb{R}.\]
_Then, we call \(\mathcal{W}_{\eta}\) the nonlocal operator, defined for \(\rho\in C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R})\big{)}\cap L^{\infty}((0,T);L^{\infty}(\mathbb{R}))\), and \(\mathcal{W}_{\eta}[\boldsymbol{\rho}](t,x)=\big{(}\mathcal{W}_{\eta}[\boldsymbol{\rho}^{1}],\mathcal{W}_{\eta}[\boldsymbol{\rho}^{2}]\big{)}(t,x),\ (t,x)\in(0,T)\times\mathbb{R}\), the vector of nonlocal impact. \(\boldsymbol{\rho}=(\boldsymbol{\rho}^{1},\boldsymbol{\rho}^{2})\) is called the vector of solutions of the **system of nonlocal balance laws** modeling lane-changing with two lanes._
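For illustration, \(\mathcal{W}_{\eta}\) can be evaluated numerically by truncating the exponential average at the right end of a grid; the sketch below uses a simple trapezoidal quadrature and is only a toy stand-in, not the scheme behind the simulations in Section 5.

```python
import numpy as np

def nonlocal_term(rho: np.ndarray, x: np.ndarray, eta: float) -> np.ndarray:
    """W_eta[rho](x_i) = (1/eta) * int_{x_i}^inf e^{(x_i - y)/eta} rho(y) dy,
    approximated by the trapezoidal rule and truncated at the last grid
    point (harmless when rho is numerically constant near the right end)."""
    dx = x[1] - x[0]
    out = np.empty_like(rho)
    for i in range(len(x)):
        v = np.exp((x[i] - x[i:]) / eta) / eta * rho[i:]
        out[i] = dx * (v.sum() - 0.5 * (v[0] + v[-1]))
    return out

x = np.linspace(-5.0, 15.0, 4001)
rho = np.where(x < 0.0, 0.8, 0.2)     # Riemann-type datum: congested, then free
w = nonlocal_term(rho, x, eta=0.5)
print(w[0], w[1000])  # ~0.8 deep in the congested region; ~0.2 at the jump
```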
Because we are investigating the singular limit problem for Section 2, it becomes necessary to define the corresponding local system. We detail this in the following:
**Local problem** (The corresponding local system of balance laws).: _Let Asm. 1 hold, and we call the "weakly" coupled (via the right-hand side) system_
\[\partial_{t}\boldsymbol{\rho}^{1}(t,x)+\partial_{x}\Big{(}V_{1} (\boldsymbol{\rho}^{1}(t,x))\boldsymbol{\rho}^{1}(t,x)\Big{)} =S\big{(}\boldsymbol{\rho}(t,x),\boldsymbol{\rho}(t,x),x\big{)}, (t,x)\in(0,T)\times\mathbb{R} \tag{4}\] \[\partial_{t}\boldsymbol{\rho}^{2}(t,x)+\partial_{x}\Big{(}V_{2} (\boldsymbol{\rho}^{2}(t,x))\boldsymbol{\rho}^{2}(t,x)\Big{)} =-S\big{(}\boldsymbol{\rho}(t,x),\boldsymbol{\rho}(t,x),x\big{)}, (t,x)\in(0,T)\times\mathbb{R}\] \[\boldsymbol{\rho}(0,x) =\boldsymbol{\rho}_{0}(x), x\in\mathbb{R}\]
_the system of local balance laws, which models lane-changing for two lanes._
Having laid out the underlying assumptions and the dynamics under consideration, we now turn our attention to the well-posedness, i.e., the existence and uniqueness of solutions.
## 3 Well-posedness of the system of (non)local conservation laws
To ensure the well-posedness of the local equations, i.e., the existence and uniqueness of solutions, we first need to define an entropy condition. This condition identifies the physically meaningful solution among the potentially infinitely many weak solutions. Because the system is only weakly coupled via the right-hand side, we can employ scalar entropy conditions similar to those used in [28].
**Definition 1** (Entropy conditions for local conservation laws).: _For the local system in Section 2, define for_
\[\alpha\in C^{2}(\mathbb{R})\text{ convex, }\beta_{i}^{\prime}\equiv\alpha^{ \prime}\cdot f_{i}^{\prime}\text{ where }f_{i}\equiv(\cdot)V_{i}(\cdot),\text{ on }\mathbb{R},\ i\in\{1,2\},\text{ for }\varphi\in C^{1}_{c}((-42,T)\times\mathbb{R};\mathbb{R}_{\geq 0})\]
_and for \(\boldsymbol{\rho}^{1},\boldsymbol{\rho}^{2}\in C\big{(}[0,T];L^{1}_{\text{ loc}}(\mathbb{R})\big{)}\)_
\[\mathcal{E}\mathcal{F}_{1}[\varphi,\alpha,\boldsymbol{\rho}^{1}] \coloneqq\iint_{\Omega_{T}}\alpha\big{(}\boldsymbol{\rho}^{1}(t,x) \big{)}\varphi_{t}(t,x)+\beta_{1}\big{(}\boldsymbol{\rho}^{1}(t,x)\big{)} \varphi_{x}(t,x)\,\mathrm{d}x\,\mathrm{d}t+\int_{\mathbb{R}}\alpha\big{(} \boldsymbol{\rho}^{1}_{0}(x)\big{)}\varphi(0,x)\,\mathrm{d}x\] \[\mathcal{E}\mathcal{F}_{2}[\varphi,\alpha,\boldsymbol{\rho}^{2}] \coloneqq\iint_{\Omega_{T}}\alpha\big{(}\boldsymbol{\rho}^{2}(t,x) \big{)}\varphi_{t}(t,x)+\beta_{2}\big{(}\boldsymbol{\rho}^{2}(t,x)\big{)} \varphi_{x}(t,x)\,\mathrm{d}x\,\mathrm{d}t+\int_{\mathbb{R}}\alpha\big{(} \boldsymbol{\rho}^{2}_{0}(x)\big{)}\varphi(0,x)\,\mathrm{d}x\] \[\quad+\int_{\Omega_{T}}\alpha^{\prime}\big{(}\boldsymbol{\rho}^{2 }(t,x)\big{)}S\big{(}\boldsymbol{\rho}^{1}(t,x),\boldsymbol{\rho}^{2}(t,x), \boldsymbol{\rho}^{1}(t,x),\boldsymbol{\rho}^{2}(t,x),x\big{)}\varphi(t,x)\, \mathrm{d}x\,\mathrm{d}t.\]
_Then, \(\boldsymbol{\rho}_{*}\in C\big{(}[0,T];L^{1}_{\text{loc}}(\mathbb{R};\mathbb{ R}^{2})\big{)}\) is called an entropy solution if it satisfies for \(i\in\{1,2\}\)_
\[\mathcal{E}\mathcal{F}_{i}[\varphi,\alpha,\boldsymbol{\rho}_{*}^{i}]\geq 0 \forall\varphi\in C^{1}_{c}\big{(}(-42,T)\times\mathbb{R};\mathbb{R}_{\geq 0 }\big{)}\ \forall\alpha\in C^{2}(\mathbb{R})\text{ convex, with }\beta_{i}^{\prime}\equiv\alpha^{\prime}\cdot f_{i}^{\prime}.\]
After identifying the appropriate entropy condition, we can establish the existence and uniqueness of the local system as explained in the following:
**Theorem 3.1** (Existence, Uniqueness & Maximum principle of the local system).: _Let Asm. 1 hold. Then, there exists a unique weak entropy solution \(\boldsymbol{\rho}_{*}\in C\big{(}[0,T];L^{1}_{\text{loc}}(\mathbb{R};\mathbb{R}^{2})\big{)}\cap L^{\infty}\big{(}(0,T);L^{\infty}(\mathbb{R};\mathbb{R}^{2})\big{)}\) to Section 2 in the sense of Def. 1, such that_
\[0\leq\boldsymbol{\rho}^{i}(t,x)\leq\boldsymbol{\rho}^{i}_{\max},\ i\in\{1,2\},\ (t,x)\in\Omega_{T}\text{ a.e.}\]
_with \(\boldsymbol{\rho}^{i}_{\max}\) as in Asm. 1._
Proof.: The existence and uniqueness of solutions to the local system in Eq. (4) can be established using the results presented in [38], which examine a class of weakly coupled hyperbolic multi-dimensional systems with source terms depending on the unknowns as well as on the spatial and temporal variables. Note that [38, Assumption 1.1] is quite stringent, but, according to the same author, it can be relaxed. For further reference, see the proofs presented in [26, 27], where the source term does not depend on the spatial variable. The maximum principle satisfied in this context is derived from the parabolic approximation of the hyperbolic system as presented in [38].
Next, we define "weak solutions" for the considered class of nonlocal conservation laws in Section 2. Because this class of nonlocal conservation laws yields unique weak solutions, there is no need to define an entropy condition (as is typically done for local conservation laws, and in particular in Def. 1).
**Definition 2** (Weak solution for the system of nonlocal conservation laws).: _For a system of nonlocal conservation laws as in Eq. (3) we call \((\mathbf{\rho}^{1},\mathbf{\rho}^{2})\in C\big{(}[0,T];L^{1}_{loc}(\mathbb{R};\mathbb{R }^{2})\big{)}\cap L^{\infty}\big{(}(0,T);L^{\infty}(\mathbb{R};\mathbb{R}^{2}) \big{)}\) a weak solution to Section 2, if for all \(\varphi\in C^{1}_{c}\big{(}(-42,T)\times\mathbb{R}\big{)}\) and for \(i\in\{1,2\}\) the following holds:_
\[\iint_{\Omega_{T}}\mathbf{\rho}^{i}(t,x)\big{(}\varphi_{t}(t,x)+V_{i} (\mathcal{W}_{\eta}[\mathbf{\rho}^{i}](t,x))\varphi_{x}(t,x)\big{)}\,\mathrm{d}x\, \mathrm{d}t+\int_{\mathbb{R}}\varphi(0,x)\mathbf{\rho}^{i}_{0}(x)\,\mathrm{d}x\] \[=(-1)^{i}\iint_{\Omega_{T}}\varphi(t,x)S\big{(}\mathbf{\rho}(t,x), \mathcal{W}_{\eta}[\mathbf{\rho}](t,x),x\big{)}\,\mathrm{d}x\,\mathrm{d}t\]
_and it is complemented by the nonlocal operator:_
\[\mathcal{W}_{\eta}[\mathbf{\rho}^{i}](t,x)\coloneqq\tfrac{1}{\eta}\int_{x}^{ \infty}\exp\big{(}\tfrac{x-y}{\eta}\big{)}\mathbf{\rho}^{i}(t,y)\,\mathrm{d}y,\ (t,x)\in\Omega_{T},\ i\in\{1,2\}.\]
In the next theorem, we will establish the existence and uniqueness of solutions for the nonlocal balance law, as in Section 2:
**Theorem 3.2** (Existence & Uniqueness & Maximum principle).: _Let Asm. 1 be true. Then, there exists a unique weak solution \(\boldsymbol{\rho}\in C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^{2})\big{)}\cap L^{\infty}\big{(}(0,T);L^{\infty}(\mathbb{R};\mathbb{R}^{2})\cap TV(\mathbb{R};\mathbb{R}^{2})\big{)}\) of Eq. (3), and the solution satisfies_
\[0\leq\mathbf{\rho}^{i}(t,x)\leq\mathbf{\rho}^{i}_{\max}\ (t,x)\in\Omega_{T}\ \text{a.e.},\ i\in\{1,2\}.\]
Proof.: This is a consequence of [8, Theorem 2.15] for a small time horizon, and, thanks to the maximum principle in [8, Theorem 3.3 & Lemma 3.4], it can be extended to any finite time horizon.
Another important result in this work is the \(L^{1}\)-stability of solutions and the fact that solutions can be approximated by sufficiently smooth ones.
**Lemma 3.3** (Continuous dependence of nonlocal solutions on the initial datum and smooth solutions).: _Let the assumptions of Theorem 3.2 hold, and for \(\varepsilon\in\mathbb{R}_{>0}\) let \(\varphi_{\varepsilon}^{1}\in C^{\infty}_{c}(\mathbb{R};\mathbb{R}_{\geq 0})\) and \(\varphi_{\varepsilon}^{2}\in C^{\infty}_{c}(\mathbb{R}^{2};\mathbb{R}_{\geq 0})\) denote the standard mollifiers in the sense of [36, Remark C.18]. We define_
\[\mathbf{\rho}_{0,\varepsilon}\equiv\varphi_{\varepsilon}^{1}\ast\mathbf{\rho}_{0},\ H_{ \varepsilon}=\varphi_{\varepsilon}^{2}\ast H\]
_and call \(\boldsymbol{\rho}_{\varepsilon}\in C([0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^{2}))\cap L^{\infty}((0,T);TV(\mathbb{R};\mathbb{R}^{2}))\) the solution to the corresponding nonlocal conservation law with initial datum \(\boldsymbol{\rho}_{0,\varepsilon}\) and lane-changing function \(H_{\varepsilon}\). Then, \(\boldsymbol{\rho}_{\varepsilon}\in W^{1,\infty}_{loc}(\Omega_{T})\) and we obtain_
\[\lim_{\varepsilon\to 0}\|\mathbf{\rho}_{\varepsilon}-\mathbf{\rho}\|_{C([0,T];L^{1}( \mathbb{R};\mathbb{R}^{2}))}=0.\]
_In particular, \(\mathbf{\rho}_{\varepsilon}\) is a strong solution of Section 2 and the nonlocal operator admits additional regularity, i.e._
\[\mathcal{W}_{\eta}[\boldsymbol{\rho}_{\varepsilon}]\in W^{2,\infty}_{loc}(\Omega_{T};\mathbb{R}^{2}).\]
Proof.: The proof mainly consists of showing that the nonlocal operator renders the velocity field of the conservation laws Lipschitz continuous. Subsequently, one can apply classical approximation results for linear conservation laws with respect to the velocity field, as well as some Gronwall estimates. We refer the reader to [8] and [31].
We also require a technical lemma, which we detail in the following:
**Lemma 3.4** (\(\partial_{2}\mathcal{W}_{\eta}[\boldsymbol{\rho}^{i}]\) vanishing at \(\infty\)).: _For \(i\in\{1,2\}\), the spatial derivative of the nonlocal term in Eq. (3) vanishes at \(\infty\), i.e., \(\forall\eta\in\mathbb{R}_{>0},\ i\in\{1,2\}\)_
\[\lim_{x\to\infty}\partial_{x}\mathcal{W}_{\eta}[\boldsymbol{\rho}^{i}](t,x)=0 \qquad\forall t\in[0,T].\]
Proof.: Thanks to Lemma 3.3, we can assume that the initial datum is smooth, with smoothing parameter \(\varepsilon\in\mathbb{R}_{>0}\), so that the corresponding solution \(\boldsymbol{\rho}^{i}_{\varepsilon}\in W^{1,\infty}(\Omega_{T})\), \(i\in\{1,2\}\), is a strong solution. Next, we can compute the derivative of the nonlocal operator and have for \((t,x)\in\Omega_{T}\)
\[\big{|}\partial_{x}\mathcal{W}_{\eta}[\boldsymbol{\rho}^{i}_{ \varepsilon}](t,x)\big{|} =\tfrac{1}{\eta}\big{|}\mathcal{W}_{\eta}[\boldsymbol{\rho}^{i}_ {\varepsilon}](t,x)-\boldsymbol{\rho}^{i}_{\varepsilon}(t,x)\big{|}=\tfrac{1}{ \eta}\bigg{|}\int_{x}^{\infty}\mathrm{e}^{\frac{x-y}{\eta}}\partial_{y} \boldsymbol{\rho}^{i}_{\varepsilon}(t,y)\,\mathrm{d}y\bigg{|}\] \[\leq\tfrac{1}{\eta}\int_{x}^{\infty}\mathrm{e}^{\frac{x-y}{\eta} }\big{|}\partial_{y}\boldsymbol{\rho}^{i}_{\varepsilon}(t,y)\big{|}\,\mathrm{ d}y\leq\tfrac{1}{\eta}\int_{x}^{\infty}|\partial_{y}\boldsymbol{\rho}^{i}_{ \varepsilon}(t,y)|\,\mathrm{d}y=\tfrac{1}{\eta}|\boldsymbol{\rho}^{i}_{ \varepsilon}(t,\cdot)|_{TV(x,\infty)}.\]
For \(x\to\infty\), the right-hand side vanishes, and thus we obtain the claim for every \(\varepsilon\in\mathbb{R}_{>0}\), and hence also for the non-smoothed solution.
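The derived bound \(|\partial_{x}\mathcal{W}_{\eta}[\boldsymbol{\rho}^{i}](t,x)|\leq\tfrac{1}{\eta}|\boldsymbol{\rho}^{i}(t,\cdot)|_{TV(x,\infty)}\) can also be checked numerically through the identity \(\partial_{x}\mathcal{W}_{\eta}[\rho]=\tfrac{1}{\eta}(\mathcal{W}_{\eta}[\rho]-\rho)\); the sketch below does so for an arbitrary test profile on a truncated grid, restricting the check to interior points away from the truncation boundary.

```python
import numpy as np

def w_eta(rho: np.ndarray, x: np.ndarray, eta: float) -> np.ndarray:
    """Trapezoidal evaluation of (1/eta) * int_x^inf e^{(x-y)/eta} rho(y) dy."""
    dx = x[1] - x[0]
    out = np.empty_like(rho)
    for i in range(len(x)):
        v = np.exp((x[i] - x[i:]) / eta) / eta * rho[i:]
        out[i] = dx * (v.sum() - 0.5 * (v[0] + v[-1]))
    return out

x = np.linspace(-5.0, 25.0, 3001)
rho = 0.4 + 0.3 * np.sin(x) * np.exp(-0.1 * np.maximum(x, 0.0))
eta = 0.5
dxw = (w_eta(rho, x, eta) - rho) / eta        # identity: d/dx W = (W - rho)/eta
tail_tv = np.abs(np.diff(rho))[::-1].cumsum()[::-1]  # |rho|_{TV(x_i, inf)}
ok = np.all(np.abs(dxw[:2000]) <= tail_tv[:2000] / eta + 1e-3)
print(ok)  # True: the bound from the proof holds on the interior of the grid
```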
Equipped with the well-posedness and approximation results, we can now tackle the singular limit problem.
## 4 The singular limit problem or nonlocal approximation of local lane-change traffic models
In this section, we first establish an equation satisfied by the nonlocal operator alone (similar to the approach in [17]), see Lemma 4.1. This allows us to prove a total variation bound uniform in \(\eta\) in Theorem 4.2. We then demonstrate that whenever a sequence of nonlocal solutions converges strongly in \(C(L^{1})\), it converges to the entropy solution (Theorem 4.3). Theorem 4.4, together with the uniform \(TV\) estimate, provides "spatial compactness", which in turn yields time compactness and leads to strong convergence in \(C(L^{1})\). Eventually, in Theorem 4.5, we collect the previously established results and obtain the singular limit convergence to the (local) entropy solution.
### Total variation bounds on the nonlocal terms, uniform in \(\eta\)
We start by formulating a Cauchy problem entirely in the nonlocal terms. This approach has the advantage that the properties of the solutions \(\boldsymbol{\rho}^{i}\) no longer need to be studied directly, only those of \(\mathcal{W}[\boldsymbol{\rho}]\), which turn out to behave better (uniform \(TV\) estimates are obtained later in Theorem 4.2).
**Lemma 4.1** (System of transport equations with nonlocal sources satisfied by the nonlocal operator).: _The nonlocal terms \(\mathcal{W}[\boldsymbol{\rho}^{i}],\ i\in\{1,2\}\) of the system dynamics in (3) satisfy, upon introducing the following abbreviations for \((t,x)\in\Omega_{T}\) and \(i\in\{1,2\}\)_
\[\boldsymbol{W}^{i}_{\eta}(t,x) \coloneqq\mathcal{W}[\boldsymbol{\rho}^{i}](t,x), \tag{5}\] \[\boldsymbol{W}_{\eta}(t,x) \coloneqq\big{(}\boldsymbol{W}^{1}_{\eta},\boldsymbol{W}^{2}_{ \eta}\big{)}(t,x),\] (6) \[\mathscr{S}\big{(}\boldsymbol{W}_{\eta},\eta\partial_{2} \boldsymbol{W}_{\eta},\cdot\big{)} \coloneqq S\big{(}\boldsymbol{W}^{1}_{\eta}-\eta\partial_{2}\boldsymbol{W}^{1 }_{\eta},\boldsymbol{W}^{2}_{\eta}-\eta\partial_{2}\boldsymbol{W}^{2}_{\eta}, \boldsymbol{W}^{1}_{\eta},\boldsymbol{W}^{2}_{\eta},\cdot\big{)}, \tag{7}\]
the coupled Cauchy problem:_
\[\begin{split}\partial_{t}\mathbf{W}^{1}_{\eta}(t,x)&=-V_{1}( \mathbf{W}^{1}_{\eta}(t,x))\partial_{x}\mathbf{W}^{1}_{\eta}(t,x)-\tfrac{1}{\eta}\int_{ x}^{\infty}\!\!\!\exp(\tfrac{x-y}{\eta})V_{1}^{\prime}(\mathbf{W}^{1}_{\eta}(t,y))\mathbf{W}^{1} _{\eta}(t,y)\partial_{y}\mathbf{W}^{1}_{\eta}(t,y)\,\mathrm{d}y\\ &\quad+\tfrac{1}{\eta}\int_{x}^{\infty}\!\!\!\exp(\tfrac{x-y}{ \eta})\mathscr{S}\big{(}\mathbf{W}_{\eta}(t,y),\eta\partial_{y}\mathbf{W}_{\eta}(t,y),y \big{)}\,\mathrm{d}y,\\ \partial_{t}\mathbf{W}^{2}_{\eta}(t,x)&=-V_{2}(\mathbf{W}^{ 2}_{\eta}(t,x))\partial_{x}\mathbf{W}^{2}_{\eta}(t,x)-\tfrac{1}{\eta}\int_{x}^{ \infty}\!\!\!\exp(\tfrac{x-y}{\eta})V_{2}^{\prime}(\mathbf{W}^{2}_{\eta}(t,y))\mathbf{ W}^{2}_{\eta}(t,y)\partial_{y}\mathbf{W}^{2}_{\eta}(t,y)\,\mathrm{d}y\\ &\quad-\tfrac{1}{\eta}\int_{x}^{\infty}\!\!\!\exp(\tfrac{x-y}{ \eta})\mathscr{S}\big{(}\mathbf{W}_{\eta}(t,y),\eta\partial_{y}\mathbf{W}_{\eta}(t,y),y \big{)}\,\mathrm{d}y,\end{split} \tag{8}\]
_which is supplemented by the following initial conditions:_
\[\big{(}\mathbf{W}^{1}_{\eta}(0,x),\mathbf{W}^{2}_{\eta}(0,x)\big{)}=\tfrac{1}{\eta} \left(\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathbf{\rho}^{1}_{0}(y)\,\mathrm{d} y,\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathbf{\rho}^{2}_{0}(y)\,\mathrm{d}y \right),\qquad x\in\mathbb{R}. \tag{9}\]
Proof.: We take advantage of Lemma 3.3 and assume first that the initial datum is smooth enough to obtain strong solutions (we suppress the additional dependency on the regularization parameter). Then, we can compute the partial derivative with respect to \(x\) of \(\mathcal{W}\), and we obtain for \((t,x)\in\Omega_{T}\) and \(i\in\{1,2\}\)
\[\partial_{x}\mathbf{W}^{i}_{\eta}(t,x)=\tfrac{1}{\eta}\big{(}\mathbf{W}^{i}_{\eta}(t,x )-\mathbf{\rho}^{i}(t,x)\big{)}\implies\mathbf{\rho}^{i}(t,x)=\mathbf{W}^{i}_{\eta}(t,x)- \eta\partial_{x}\mathbf{W}^{i}_{\eta}(t,x). \tag{10}\]
Then, we can compute the time derivative of \(\mathbf{W}^{1}_{\eta}\) (and analogously, also \(\mathbf{W}^{2}_{\eta}\)) and obtain
\[\begin{split}\partial_{t}\mathbf{W}^{1}_{\eta}(t,x)& \stackrel{{\eqref{eq:2}}}{{=}}-\tfrac{1}{\eta}\int_{x}^{ \infty}\exp(\tfrac{x-y}{\eta})\partial_{y}\Big{(}V_{1}(\mathbf{W}^{1}_{\eta}(t,y)) \mathbf{\rho}^{1}(t,y)\Big{)}\,\mathrm{d}y\\ &\quad+\tfrac{1}{\eta}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})S \big{(}\mathbf{\rho}^{1}(t,y),\mathbf{\rho}^{2}(t,y),\mathbf{W}^{1}_{\eta}(t,y),\mathbf{W}^{2 }_{\eta}(t,y),y\big{)}\,\mathrm{d}y\end{split}\]
and using partial integration
\[\begin{split}&=-\tfrac{1}{\eta^{2}}\int_{x}^{\infty}\exp(\tfrac{x- y}{\eta})V_{1}(\mathbf{W}^{1}_{\eta}(t,y))\mathbf{\rho}^{1}(t,y)\,\mathrm{d}y+\tfrac{1}{ \eta}V_{1}(\mathbf{W}^{1}_{\eta}(t,x))\mathbf{\rho}^{1}(t,x)\\ &\quad+\tfrac{1}{\eta}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})S \big{(}\mathbf{\rho}^{1}(t,y),\mathbf{\rho}^{2}(t,y),\mathbf{W}^{1}_{\eta}(t,y),\mathbf{W}^{2 }_{\eta}(t,y),y\big{)}\,\mathrm{d}y\end{split}\]
after inserting Eq. (10) for \(\boldsymbol{\rho}^{1}\) and \(\boldsymbol{\rho}^{2}\) and using the notation in Eq. (7), we obtain
\[\begin{split}&=-\tfrac{1}{\eta^{2}}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})V_{1}(\mathbf{W}^{1}_{\eta}(t,y))\mathbf{W}^{1}_{\eta}(t,y)\,\mathrm{d}y\\ &\quad+\tfrac{1}{\eta}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})V_{1}(\mathbf{W}^{1}_{\eta}(t,y))\partial_{y}\mathbf{W}^{1}_{\eta}(t,y)\,\mathrm{d}y\\ &\quad+\tfrac{1}{\eta}V_{1}(\mathbf{W}^{1}_{\eta}(t,x))\mathbf{W}^{1}_{\eta}(t,x)-V_{1}(\mathbf{W}^{1}_{\eta}(t,x))\partial_{x}\mathbf{W}^{1}_{\eta}(t,x)\\ &\quad+\tfrac{1}{\eta}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(}\mathbf{W}_{\eta}(t,y),\eta\partial_{y}\mathbf{W}_{\eta}(t,y),y\big{)}\,\mathrm{d}y\end{split}\]
another integration by parts in the second term yields
\[\begin{split}&=-\tfrac{1}{\eta}\int_{x}^{\infty}\exp(\tfrac{x-y}{ \eta})V_{1}^{\prime}(\mathbf{W}^{1}_{\eta}(t,y))\mathbf{W}^{1}_{\eta}(t,y)\partial_{y} \mathbf{W}^{1}_{\eta}(t,y)\,\mathrm{d}y\\ &\quad-V_{1}(\mathbf{W}^{1}_{\eta}(t,x))\partial_{x}\mathbf{W}^{1}_{\eta} (t,x)+\tfrac{1}{\eta}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(} \mathbf{W}_{\eta}(t,y),\eta\partial_{y}\mathbf{W}_{\eta}(t,y),y\big{)}\,\mathrm{d}y. \end{split}\]
Repeating the same argument for \(\mathbf{W}^{2}_{\eta}\) yields the claim for strong solutions, i.e., in particular, for the smooth initial datum. Thanks to Lemma 3.3, this also holds for the general datum, which concludes the proof.
**Remark 2** (Reasonableness of the nonlocal dynamics).: _The system in Eq. (8) is for \(i\in\{1,2\}\) and \((t,x)\in\Omega_{T}\) indeed a nonlocal approximation of_
\[\begin{split}\partial_{t}\mathbf{\rho}^{i}(t,x)&=-V_{i}(\mathbf{\rho}^{i}(t,x))\partial_{x}\mathbf{\rho}^{i}(t,x)-V_{i}^{\prime}(\mathbf{\rho}^{i}(t,x))\mathbf{\rho}^{i}(t,x)\partial_{x}\mathbf{\rho}^{i}(t,x)\\ &\quad+(-1)^{i+1}S\big{(}\mathbf{\rho}^{1}(t,x),\mathbf{\rho}^{2}(t,x),\mathbf{\rho}^{1}(t,x),\mathbf{\rho}^{2}(t,x),x\big{)}\\ &=-\partial_{x}\big{(}V_{i}(\mathbf{\rho}^{i}(t,x))\mathbf{\rho}^{i}(t,x)\big{)}+(-1)^{i+1}S\big{(}\mathbf{\rho}^{1}(t,x),\mathbf{\rho}^{2}(t,x),\mathbf{\rho}^{1}(t,x),\mathbf{\rho}^{2}(t,x),x\big{)}\end{split}\]
_which can be easily observed for \(\eta\to 0\)._
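The formal limit in Remark 2 can also be observed numerically: as \(\eta\to 0\), the exponential average \(\mathcal{W}_{\eta}[\rho]\) approaches \(\rho\). In the sketch below (grid and test profile are arbitrary choices), the sup-norm gap shrinks with \(\eta\) until, for \(\eta\) comparable to the grid size, quadrature error starts to dominate.

```python
import numpy as np

def w_eta(rho: np.ndarray, x: np.ndarray, eta: float) -> np.ndarray:
    # trapezoidal evaluation of (1/eta) * int_x^inf e^{(x-y)/eta} rho(y) dy
    dx = x[1] - x[0]
    out = np.empty_like(rho)
    for i in range(len(x)):
        v = np.exp((x[i] - x[i:]) / eta) / eta * rho[i:]
        out[i] = dx * (v.sum() - 0.5 * (v[0] + v[-1]))
    return out

x = np.linspace(-8.0, 8.0, 3201)
rho = np.exp(-x**2)                     # smooth, rapidly decaying test profile
for eta in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(w_eta(rho, x, eta) - rho))
    print(f"eta = {eta:5.2f}   sup-norm gap = {gap:.3e}")  # decreases with eta
```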
Following the same method of proof as in [17], the formulation of the nonlocal terms in Lemma 4.1 makes it possible to derive total variation estimates directly, which are uniform in the nonlocal parameter \(\eta\).
**Theorem 4.2** (Total variation bound uniform in \(\eta\)).: _Given Asm. 1, the solution \(\mathbf{W}_{\,\eta}\coloneqq\big{(}\mathbf{W}_{\,\eta}^{1},\,\mathbf{W}_{\,\eta}^{2}\big{)}\) to the system in Eq. (8) with the initial datum, as in Eq. (9), satisfies the following total variation bound \(\forall t\in[0,T]\)_
\[\begin{split}\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}&\leq\bigg{(}|\mathbf{\rho}_{0}|_{TV(\mathbb{R};\mathbb{R}^{2})}+4\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{1}}+1\Big{)}\mathcal{H}_{BV}\bigg{)}\\ &\quad\cdot\exp\bigg{(}2t\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}\,\mathcal{H}_{1}}{\mathbf{\rho}_{\max}^{2}}+\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{1}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}\,\mathcal{H}_{1}}{\mathbf{\rho}_{\max}^{1}}+2\mathcal{H}_{1}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}\,\mathcal{H}_{1}}{\mathbf{\rho}_{\max}^{1}}+\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}\,\mathcal{H}_{1}}{\mathbf{\rho}_{\max}^{2}}\Big{)}\bigg{)}\end{split} \tag{11}\]
_with the constants involved in the estimate as shown in Asm. 1._
Proof.: Let us first assume that our initial datum is smooth, which, thanks to Lemma 3.3, is not a restriction. Recalling the identities in \(\mathbf{W}\) in Lemma 4.1 as well as the notation in Eq. (7), we first compute the spatial derivatives of \(\partial_{t}\mathbf{W}_{\eta}^{1}(t,x)\) and \(\partial_{t}\mathbf{W}_{\eta}^{2}(t,x)\) for \((t,x)\in\Omega_{T}\) and arrive at
\[\begin{split}\partial_{t}\partial_{x}\mathbf{W}_{\eta}^{1}(t,x)&=-\tfrac{1}{\eta^{2}}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})V_{1}^{\prime}(\mathbf{W}_{\eta}^{1}(t,y))\mathbf{W}_{\eta}^{1}(t,y)\partial_{y}\mathbf{W}_{\eta}^{1}(t,y)\,\mathrm{d}y+\tfrac{1}{\eta}V_{1}^{\prime}(\mathbf{W}_{\eta}^{1}(t,x))\mathbf{W}_{\eta}^{1}(t,x)\partial_{x}\mathbf{W}_{\eta}^{1}(t,x)\\ &\quad-\tfrac{1}{\eta}V_{1}^{\prime}(\mathbf{W}_{\eta}^{1}(t,x))\big{(}\partial_{x}\mathbf{W}_{\eta}^{1}(t,x)\big{)}^{2}-\tfrac{1}{\eta}V_{1}(\mathbf{W}_{\eta}^{1}(t,x))\partial_{x}^{2}\mathbf{W}_{\eta}^{1}(t,x)\\ &\quad+\tfrac{1}{\eta^{2}}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(}\mathbf{W}(t,y),\eta\partial_{y}\mathbf{W}(t,y),y\big{)}\,\mathrm{d}y-\tfrac{1}{\eta}\mathscr{S}\big{(}\mathbf{W}(t,x),\eta\partial_{x}\mathbf{W}(t,x),x\big{)}\\ \partial_{t}\partial_{x}\mathbf{W}_{\eta}^{2}(t,x)&=-\tfrac{1}{\eta^{2}}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})V_{2}^{\prime}(\mathbf{W}_{\eta}^{2}(t,y))\mathbf{W}_{\eta}^{2}(t,y)\partial_{y}\mathbf{W}_{\eta}^{2}(t,y)\,\mathrm{d}y+\tfrac{1}{\eta}V_{2}^{\prime}(\mathbf{W}_{\eta}^{2}(t,x))\mathbf{W}_{\eta}^{2}(t,x)\partial_{x}\mathbf{W}_{\eta}^{2}(t,x)\\ &\quad-\tfrac{1}{\eta}V_{2}^{\prime}(\mathbf{W}_{\eta}^{2}(t,x))\big{(}\partial_{x}\mathbf{W}_{\eta}^{2}(t,x)\big{)}^{2}-\tfrac{1}{\eta}V_{2}(\mathbf{W}_{\eta}^{2}(t,x))\partial_{x}^{2}\mathbf{W}_{\eta}^{2}(t,x)\\ &\quad-\tfrac{1}{\eta^{2}}\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(}\mathbf{W}(t,y),\eta\partial_{y}\mathbf{W}(t,y),y\big{)}\,\mathrm{d}y+\tfrac{1}{\eta}\mathscr{S}\big{(}\mathbf{W}(t,x),\eta\partial_{x}\mathbf{W}(t,x),x\big{)}\end{split} \tag{12}\]
Next, we compute the total variation of \(\mathcal{W}[\mathbf{\rho}]\), i.e., \(|\mathbf{W}_{\eta}^{1}(t,\cdot)|_{TV(\mathbb{R})}+|\mathbf{W}_{\eta}^{2}(t,\cdot)|_{TV(\mathbb{R})}\), starting with \(|\mathbf{W}_{\eta}^{1}(t,\cdot)|_{TV(\mathbb{R})}\):
\[\begin{split}&\tfrac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}}| \partial_{x}\mathbf{W}_{\,\eta}^{1}(t,x)|\,\mathrm{d}x=\int_{\mathbb{R}}\mathrm{ sgn}(\partial_{x}\mathbf{W}_{\,\eta}^{1}(t,x))\partial_{t}\partial_{x}\mathbf{W}_{\,\eta}^{1}(t,x) \,\mathrm{d}x\\ &\overset{\eqref{eq:2}}{=}-\tfrac{1}{\eta^{2}}\int_{\mathbb{R}} \mathrm{sgn}(\partial_{x}\mathbf{W}_{\,\eta}^{1}(t,x))\int_{x}^{\infty}\exp(\frac{x-y}{ \eta})V_{1}^{\prime}(\mathbf{W}_{\,\eta}^{1}(t,y))\mathbf{W}_{\,\eta}^{1}(t,y) \partial_{y}\mathbf{W}_{\,\eta}^{1}(t,y)\,\mathrm{d}y\,\mathrm{d}x\\ &\quad+\tfrac{1}{\eta}\int_{\mathbb{R}}\mathrm{sgn}(\partial_{x}\mathbf{W} _{\,\eta}^{1}(t,x))V_{1}^{\prime}(\mathbf{W}_{\,\eta}^{1}(t,x))\mathbf{W}_{\,\eta}^{1}(t,x) \partial_{x}\mathbf{W}_{\,\eta}^{1}(t,x)\,\mathrm{d}x\\ &\quad-\tfrac{1}{\eta}\int_{\mathbb{R}}\mathrm{sgn}(\partial_{x}\mathbf{W} _{\,\eta}^{1}(t,x))V_{1}^{\prime}(\mathbf{W}_{\,\eta}^{1}(t,x))\big{(}\partial_{x} \mathbf{W}_{\,\eta}^{1}(t,x)\big{)}^{2}\,\mathrm{d}x\\ &\quad-\tfrac{1}{\eta}\int_{\mathbb{R}}\mathrm{sgn}(\partial_{x} \mathbf{W}_{\,\eta}^{1}(t,x))V_{1}(\mathbf{W}_{\,\eta}^{1}(t,x))\partial_{x}^{2}\mathbf{W}_{ \,\eta}^{1}(t,x)\,\mathrm{d}x\\ &\quad+\tfrac{1}{\eta^{2}}\int_{\mathbb{R}}\mathrm{sgn}(\partial_{x} \mathbf{W}_{\,\eta}^{1}(t,x))\int_{x}^{\infty}\exp(\frac{x-y}{\eta})\mathscr{S} \big{(}\mathbf{W}(t,y),\eta\partial_{y}\mathbf{W}(t,y),y\big{)}\,\mathrm{d}y\,\mathrm{d}x \end{split}\]
\[-\tfrac{1}{\eta}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x} \boldsymbol{W}^{1}_{\eta}(t,x))\mathscr{S}\big{(}\boldsymbol{W}(t,x),\eta \partial_{x}\boldsymbol{W}(t,x),x\big{)}\,\mathrm{d}x.\]
Performing an integration by parts in the fourth term, using \(\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\partial_{x}^{2}\boldsymbol{W}^{1}_{\eta}(t,x)=\tfrac{\mathrm{d}}{\mathrm{d}x}|\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x)|\), we obtain
\[=-\tfrac{1}{\eta^{2}}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})V^{\prime}_{1}(\boldsymbol{W}^{1}_{\eta}(t,y))\boldsymbol{W}^{1}_{\eta}(t,y)\partial_{y}\boldsymbol{W}^{1}_{\eta}(t,y)\,\mathrm{d}y\,\mathrm{d}x \tag{14}\] \[\quad+\tfrac{1}{\eta}\int_{\mathbb{R}}|\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x)|V^{\prime}_{1}(\boldsymbol{W}^{1}_{\eta}(t,x))\boldsymbol{W}^{1}_{\eta}(t,x)\,\mathrm{d}x\] \[\quad+\tfrac{1}{\eta^{2}}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(}\boldsymbol{W}(t,y),\eta\partial_{y}\boldsymbol{W}(t,y),y\big{)}\,\mathrm{d}y\,\mathrm{d}x\] \[\quad-\tfrac{1}{\eta}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\mathscr{S}\big{(}\boldsymbol{W}(t,x),\eta\partial_{x}\boldsymbol{W}(t,x),x\big{)}\,\mathrm{d}x\]
and exchanging the order of integration
\[\leq-\tfrac{1}{\eta^{2}}\int_{\mathbb{R}}V^{\prime}_{1}(\boldsymbol{W}^{1}_{\eta}(t,y))\boldsymbol{W}^{1}_{\eta}(t,y)|\partial_{y}\boldsymbol{W}^{1}_{\eta}(t,y)|\int_{-\infty}^{y}\exp(\tfrac{x-y}{\eta})\,\mathrm{d}x\,\mathrm{d}y \tag{15}\] \[\quad+\tfrac{1}{\eta}\int_{\mathbb{R}}|\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x)|V^{\prime}_{1}(\boldsymbol{W}^{1}_{\eta}(t,x))\boldsymbol{W}^{1}_{\eta}(t,x)\,\mathrm{d}x\] \[\quad+\tfrac{1}{\eta^{2}}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(}\boldsymbol{W}(t,y),\eta\partial_{y}\boldsymbol{W}(t,y),y\big{)}\,\mathrm{d}y\,\mathrm{d}x\] \[\quad-\tfrac{1}{\eta}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\mathscr{S}\big{(}\boldsymbol{W}(t,x),\eta\partial_{x}\boldsymbol{W}(t,x),x\big{)}\,\mathrm{d}x\] \[=\tfrac{1}{\eta^{2}}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\mathscr{S}\big{(}\boldsymbol{W}(t,y),\eta\partial_{y}\boldsymbol{W}(t,y),y\big{)}\,\mathrm{d}y\,\mathrm{d}x\] \[\quad-\tfrac{1}{\eta}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x}\boldsymbol{W}^{1}_{\eta}(t,x))\mathscr{S}\big{(}\boldsymbol{W}(t,x),\eta\partial_{x}\boldsymbol{W}(t,x),x\big{)}\,\mathrm{d}x\]
also, an integration by parts in the first term with regard to the exponential function yields
\[=\tfrac{1}{\eta}\int_{\mathbb{R}}\operatorname{sgn}(\partial_{x }\boldsymbol{W}^{1}_{\eta}(t,x))\int_{x}^{\infty}\exp(\tfrac{x-y}{\eta})\tfrac{ \mathrm{d}}{\mathrm{d}y}\mathscr{S}\big{(}\boldsymbol{W}(t,y),\eta\partial_{y} \boldsymbol{W}(t,y),y\big{)}\,\mathrm{d}y\,\mathrm{d}x. \tag{16}\]
We still need to investigate the spatial derivative of the source term \(\mathscr{S}\) in greater detail. Recalling its definition in Eq. (7) and Asm. 1, we can compute for \((t,y)\in\Omega_{T}\) as follows:
\[\begin{split}&\tfrac{\mathrm{d}}{\mathrm{d}y}\mathscr{S}\big{(}\boldsymbol{W}(t,y),\eta\partial_{y}\boldsymbol{W}(t,y),y\big{)}\\ &=\tfrac{\mathrm{d}}{\mathrm{d}y}S\big{(}\boldsymbol{W}^{1}_{\eta}(t,y)-\eta\partial_{2}\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y)-\eta\partial_{2}\boldsymbol{W}^{2}_{\eta}(t,y),\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y),y\big{)}\\ &=\tfrac{\mathrm{d}}{\mathrm{d}y}\bigg{(}\Big{(}\tfrac{\boldsymbol{W}^{2}_{\eta}(t,y)-\eta\partial_{2}\boldsymbol{W}^{2}_{\eta}(t,y)}{\boldsymbol{\rho}^{2}_{\max}}-\tfrac{\boldsymbol{W}^{1}_{\eta}(t,y)-\eta\partial_{2}\boldsymbol{W}^{1}_{\eta}(t,y)}{\boldsymbol{\rho}^{1}_{\max}}\Big{)}H\big{(}\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y),y\big{)}\bigg{)}\\ &=\Big{(}\tfrac{\partial_{2}\boldsymbol{W}^{2}_{\eta}(t,y)-\eta\partial_{2}^{2}\boldsymbol{W}^{2}_{\eta}(t,y)}{\boldsymbol{\rho}^{2}_{\max}}-\tfrac{\partial_{2}\boldsymbol{W}^{1}_{\eta}(t,y)-\eta\partial_{2}^{2}\boldsymbol{W}^{1}_{\eta}(t,y)}{\boldsymbol{\rho}^{1}_{\max}}\Big{)}H\big{(}\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y),y\big{)}\\ &\quad+\Big{(}\tfrac{\boldsymbol{W}^{2}_{\eta}(t,y)-\eta\partial_{2}\boldsymbol{W}^{2}_{\eta}(t,y)}{\boldsymbol{\rho}^{2}_{\max}}-\tfrac{\boldsymbol{W}^{1}_{\eta}(t,y)-\eta\partial_{2}\boldsymbol{W}^{1}_{\eta}(t,y)}{\boldsymbol{\rho}^{1}_{\max}}\Big{)}\cdot\Big{(}\partial_{1}H\big{(}\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y),y\big{)}\partial_{2}\boldsymbol{W}^{1}_{\eta}(t,y)\\ &\qquad\qquad+\partial_{2}H\big{(}\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y),y\big{)}\partial_{2}\boldsymbol{W}^{2}_{\eta}(t,y)+\partial_{3}H\big{(}\boldsymbol{W}^{1}_{\eta}(t,y),\boldsymbol{W}^{2}_{\eta}(t,y),y\big{)}\Big{)}.\end{split}\]
Because \(\tfrac{\mathrm{d}}{\mathrm{d}y}\mathscr{S}\) involves higher order derivatives of \(\boldsymbol{W}\), integration by parts is necessary, and we continue our estimate in Eq. (16) by changing the order of integration to arrive at:
\[-\frac{1}{\mathbf{\rho}_{\max}^{2}}\int_{\mathbb{R}}\partial_{2}^{2}\mathbf{W} _{\eta}^{2}(t,y)H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)} \int_{-\infty}^{y}\operatorname{sgn}(\partial_{x}\mathbf{W}_{\eta}^{1}(t,x)) \exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[-\frac{1}{\eta\mathbf{\rho}_{\max}^{1}}\int_{\mathbb{R}}\partial_{2} \mathbf{W}_{\eta}^{1}(t,y)H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y \big{)}\int_{-\infty}^{y}\operatorname{sgn}(\partial_{x}\mathbf{W}_{\eta}^{1}(t,x) )\exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[+\frac{2}{\eta}\|\partial_{1}H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max} |_{\infty})\times(0,|\mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\int_{ \mathbb{R}}\big{|}\partial_{2}\mathbf{W}_{\eta}^{1}(t,y)\big{|}\int_{-\infty}^{y} \exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[+\frac{2}{\eta}\|\partial_{2}H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max} |_{\infty})\times(0,|\mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\int_{ \mathbb{R}}\big{|}\partial_{2}\mathbf{W}_{\eta}^{2}(t,y)\big{|}\int_{-\infty}^{y} \exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[+\frac{1}{\eta}\|H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty}) \times(0,|\mathbf{\rho}_{\max}|_{\infty});TV(\mathbb{R}))}\sup_{y\in\mathbb{R}} \int_{-\infty}^{y}\exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y.\]
An integration by parts in the terms involving \(\partial_{2}^{2}\mathbf{W}_{\eta}^{i},\ i\in\{1,2\}\) and subsequent straightforward computations yield
\[\leq\|H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty})\times(0,| \mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\frac{1}{\eta\mathbf{\rho}_{\max}^{2 }}\int_{\mathbb{R}}|\partial_{2}\mathbf{W}_{\eta}^{2}(t,y)|\int_{-\infty}^{y}\exp \big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[-\frac{1}{\mathbf{\rho}_{\max}^{2}}\lim_{y\to\infty}\partial_{2}\mathbf{W} _{\eta}^{2}(t,y)H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)} \int_{-\infty}^{y}\operatorname{sgn}(\partial_{x}\mathbf{W}_{\eta}^{1}(t,x))\exp \big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[+\frac{1}{\mathbf{\rho}_{\max}^{2}}\int_{\mathbb{R}}\partial_{2}\mathbf{W} _{\eta}^{2}(t,y)H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)} \operatorname{sgn}(\partial_{y}\mathbf{W}_{\eta}^{1}(t,y))\,\mathrm{d}y\] \[+\frac{1}{\mathbf{\rho}_{\max}^{2}}\int_{\mathbb{R}}\partial_{2}\mathbf{W} _{\eta}^{2}(t,y)\frac{\,\mathrm{d}}{\mathrm{d}y}H\big{(}\mathbf{W}_{\eta}^{1}(t,y), \mathbf{W}_{\eta}^{2}(t,y),y\big{)}\int_{-\infty}^{y}\operatorname{sgn}(\partial_{ x}\mathbf{W}_{\eta}^{1}(t,x))\exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\, \mathrm{d}y\] \[+\frac{1}{\mathbf{\rho}_{\max}^{2}}\|H\|_{L^{\infty}((0,|\mathbf{\rho}_{ \max}|_{\infty})\times(0,|\mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\int_{ \mathbb{R}}|\partial_{2}\mathbf{W}_{\eta}^{1}(t,y)|\int_{-\infty}^{y}\exp\big{(} \frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[+\frac{1}{\mathbf{\rho}_{\max}^{1}}\lim_{y\to\infty}\partial_{2}\mathbf{W} _{\eta}^{1}(t,y)H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)} \int_{-\infty}^{y}\operatorname{sgn}(\partial_{x}\mathbf{W}_{\eta}^{1}(t,x))\exp \big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\,\mathrm{d}y\] \[-\frac{1}{\mathbf{\rho}_{\max}^{1}}\int_{\mathbb{R}}\partial_{2}\mathbf{W} _{\eta}^{1}(t,y)H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)} \operatorname{sgn}(\partial_{y}\mathbf{W}_{\eta}^{1}(t,y))\,\mathrm{d}y\] \[-\frac{1}{\mathbf{\rho}_{\max}^{1}}\int_{\mathbb{R}}\partial_{2}\mathbf{W} _{\eta}^{1}(t,y)\frac{\,\mathrm{d}}{\mathrm{d}y}H\big{(}\mathbf{W}_{\eta}^{1}(t,y), \mathbf{W}_{\eta}^{2}(t,y),y\big{)}\int_{-\infty}^{y}\operatorname{sgn}(\partial_{ x}\mathbf{W}_{\eta}^{1}(t,x))\exp\big{(}\frac{x-y}{\eta}\big{)}\,\mathrm{d}x\, \mathrm{d}y\] \[+2\|\partial_{1}H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty}) \times(0,|\mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\big{|}\mathbf{W}_{\eta}^{1}(t, \cdot)\big{|}_{TV(\mathbb{R})}\] \[+2\|\partial_{2}H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty}) \times(0,|\mathbf{\rho}_{\max}|_{\infty})\times\mathbb{R})}\big{|}\mathbf{W}_{\eta}^{2}(t, \cdot)\big{|}_{TV(\mathbb{R})}\] \[+\|H\|_{L^{\infty}((0,|\mathbf{\rho}_{\max}|_{\infty})\times(0,|\mathbf{ \rho}_{\max}|_{\infty});TV(\mathbb{R}))}\]
applying Lemma 3.4, i.e., \(\lim_{y\to\infty}\partial_{2}\boldsymbol{W}_{\eta}^{i}(t,y)=0,\ \forall t\in[0,T],\ i\in\{1,2\}\), and recalling the postulated bounds on \(H\) in Asm. 1
\[\leq 2\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}|\mathbf{W}_{\eta}^{2}(t, \cdot)|_{TV(\mathbb{R})}+\frac{\eta}{\mathbf{\rho}_{\max}^{2}}\int_{\mathbb{R}} \big{|}\partial_{2}\mathbf{W}_{\eta}^{2}(t,y)\big{|}\big{|}\frac{\,\mathrm{d}}{ \mathrm{d}y}H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)} \big{|}\,\mathrm{d}y\] \[+2\mathcal{H}_{1}\big{|}\mathbf{W}_{\eta}^{1}(t,\cdot)\big{|}_{TV( \mathbb{R})}+2\mathcal{H}_{2}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV( \mathbb{R})}+\mathcal{H}_{BV}\]
and taking advantage of Eq. (10), in particular \(\eta\partial_{2}\mathbf{W}^{i}_{\eta}(t,x)=\mathbf{W}^{i}_{\eta}(t,x)-\mathbf{\rho}^{i}_{\eta}(t,x)\ \implies\eta\|\partial_{2}\mathbf{W}^{i}_{\eta}(t,\cdot)\|_{L^{\infty}(\mathbb{R})}\leq 2\|\mathbf{\rho}_{\max}\|_{\infty}\ \forall t\in[0,T]\)
\[\leq 2\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}+2\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}\int_{\mathbb{R}}\big{|}\frac{\mathrm{d}}{\mathrm{d}y}H\big{(}\mathbf{W}_{\eta}^{1}(t,y),\mathbf{W}_{\eta}^{2}(t,y),y\big{)}\big{|}\,\mathrm{d}y\] \[\quad+2\mathcal{H}_{1}\big{|}\mathbf{W}_{\eta}^{1}(t,\cdot)\big{|}_{TV(\mathbb{R})}+2\mathcal{H}_{2}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}+\mathcal{H}_{BV}\] \[\leq 2\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}+2\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}\big{(}\mathcal{H}_{1}\big{|}\mathbf{W}_{\eta}^{1}(t,\cdot)\big{|}_{TV(\mathbb{R})}+\mathcal{H}_{2}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}+\mathcal{H}_{BV}\big{)}\] \[\quad+2\mathcal{H}_{1}\big{|}\mathbf{W}_{\eta}^{1}(t,\cdot)\big{|}_{TV(\mathbb{R})}+2\mathcal{H}_{2}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}+\mathcal{H}_{BV}\] \[=2\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}\mathcal{H}_{1}+\mathcal{H}_{1}\Big{)}\big{|}\mathbf{W}_{\eta}^{1}(t,\cdot)\big{|}_{TV(\mathbb{R})}+2\Big{(}\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}\mathcal{H}_{2}+\mathcal{H}_{2}\Big{)}\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}\] \[\quad+\Big{(}2\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}+1\Big{)}\mathcal{H}_{BV}.\]
In a similar manner, we can derive an (almost) identical estimate for the change in time of the total variation of \(\mathbf{W}_{\eta}^{2}\), leading us to the estimate
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\big{|}\mathbf{W}_{\eta}^{1}(t,\cdot)\big{|}_{TV(\mathbb{R})}+\big{|}\mathbf{W}_{\eta}^{2}(t,\cdot)\big{|}_{TV(\mathbb{R})}\Big{)}=\frac{\mathrm{d}}{\mathrm{d}t}\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\] \[\leq 2\Big{(}\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{1}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}+2\Big{)}\big{(}\mathcal{H}_{1}+\mathcal{H}_{2}\big{)}+\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{1}}+\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}\Big{)}\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\] \[\quad+2\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{1}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}+1\Big{)}\mathcal{H}_{BV}.\]
Using Gronwall's inequality [23] yields:
\[\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\leq\Big{(}\big{|}\mathbf{W}_{\eta}(0,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}+2t\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{1}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}+1\Big{)}\mathcal{H}_{BV}\Big{)}\] \[\quad\cdot\exp\Big{(}2t\Big{(}\Big{(}\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{1}}+\frac{\|\mathbf{\rho}_{\max}\|_{\infty}}{\mathbf{\rho}_{\max}^{2}}+2\Big{)}\big{(}\mathcal{H}_{1}+\mathcal{H}_{2}\big{)}+\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{1}}+\frac{\mathcal{H}}{\mathbf{\rho}_{\max}^{2}}\Big{)}\Big{)}.\]
As this estimate is uniform in the approximation parameter \(\eta\in\mathbb{R}_{>0}\), and since it holds that
\[\big{|}\mathbf{W}_{\eta}(0,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\leq|\mathbf{q}_ {0}|_{TV(\mathbb{R};\mathbb{R}^{2})}, \tag{17}\]
we obtain the uniform \(TV\) bound for any initial datum of given \(TV\) regularity.
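For the reader's convenience, we record the elementary form of Grönwall's inequality used in the last step: if \(y\) is absolutely continuous with \(y^{\prime}(t)\leq Ay(t)+B\) for constants \(A,B\geq 0\), then

\[y(t)\leq\big{(}y(0)+Bt\big{)}\exp(At),\qquad t\in[0,T],\]

which is applied above with \(y(t)=\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\).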
**Remark 3** (Consistency with the \(TV\) estimate for nonlocal conservation laws).: _Assuming there is no lane change, i.e., \(S\equiv 0\), the total variation estimate derived in Theorem 4.2 reduces to:_
\[\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\leq|\mathbf{q}_ {0}|_{TV(\mathbb{R};\mathbb{R}^{2})}\ \forall t\in[0,T]. \tag{18}\]
_Thus, the nonlocal term exhibits total variation diminishing behavior. This observation is not surprising because there is no coupling between the two nonlocal equations in this case. Consequently, we are dealing with the singular limit problem for scalar nonlocal conservation laws, for which an estimate/bound similar to Eq. (18) was obtained in [17, Theorem 3.2]._
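To spell out the reduction in Remark 3: if \(S\equiv 0\), the lane-changing terms vanish, so the constants \(\mathcal{H},\mathcal{H}_{1},\mathcal{H}_{2},\mathcal{H}_{BV}\) from Asm. 1 can all be taken equal to zero, and the estimate of Theorem 4.2 collapses to

\[\frac{\mathrm{d}}{\mathrm{d}t}\big{|}\mathbf{W}_{\eta}(t,\cdot)\big{|}_{TV(\mathbb{R};\mathbb{R}^{2})}\leq 0;\]

Eq. (18) then follows by integrating in time and invoking Eq. (17).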
### Entropy admissibility
In this section, we demonstrate that, given strong convergence, the solutions to the nonlocal system are entropy-admissible in the limit. The approach parallels the strategies outlined in [11; 19]:
**Theorem 4.3** (Entropy admissibility).: _Let \(\mathbf{\rho}_{\eta}\in C\big{(}[0,T];L^{1}_{\text{loc}}(\mathbb{R};\mathbb{R}^{2}) \big{)}\cap L^{\infty}\big{(}(0,T);L^{\infty}(\mathbb{R};\mathbb{R}^{2})\big{)}\) be the unique solution of Eq. (3). Assume that there exists \(\mathbf{\rho}^{*}\in C\big{(}[0,T];L^{1}_{\text{loc}}(\mathbb{R};\mathbb{R}^{2}) \big{)}\cap L^{\infty}\big{(}(0,T);L^{\infty}(\mathbb{R};\mathbb{R}^{2})\big{)}\) such that_
\[\lim_{\eta\to 0}\|\mathbf{\rho}_{\eta}-\mathbf{\rho}^{*}\|_{C([0,T];L^{1}_{\text{loc}}( \mathbb{R};\mathbb{R}^{2}))}=0,\qquad\exists C\in\mathbb{R}_{>0}:\;\sup_{\eta \in\mathbb{R}_{>0}}|\mathcal{W}_{\eta}[\mathbf{\rho}_{\eta}]|_{L^{\infty}((0,T); TV(\mathbb{R};\mathbb{R}^{2}))}\leq C.\]
_Then, \(\mathbf{\rho}^{*}\) satisfies the entropy admissibility condition in Def. 1 for every convex entropy pair \((\alpha,\beta)\) with \(\alpha^{\prime\prime}(x)\geq 0\) and \(\beta^{\prime}(x)=\alpha^{\prime}(x)[V(x)+xV^{\prime}(x)]\)._
Proof.: Let us define \((\alpha,\beta)\), \(\alpha,\beta\in C^{2}(\mathbb{R};\mathbb{R})\) such that \(\alpha^{\prime\prime}(x)\geq 0\), \(\beta^{\prime}(x)=\alpha^{\prime}(x)[V(x)+xV^{\prime}(x)]\). We also fix \(0\leq\varphi\in C^{\infty}_{c}(\Omega_{T})\). Our goal is to prove that
\[\mathcal{EF}_{1}[\varphi,\alpha,\mathbf{\rho}^{1}_{*}] \coloneqq\iint_{\Omega_{T}}\alpha(\mathbf{\rho}^{1}_{*}(t,x))\varphi_ {t}(t,x)+\beta_{1}(\mathbf{\rho}^{1}_{*}(t,x))\varphi_{x}(t,x)\,\mathrm{d}x\, \mathrm{d}t+\int_{\mathbb{R}}\alpha(\mathbf{\rho}^{1}_{0}(x))\varphi(0,x)\, \mathrm{d}x\] \[\quad-\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{\rho}^{1}_{*}(t,x))S \big{(}\mathbf{\rho}^{1}_{*}(t,x),\mathbf{\rho}^{2}_{*}(t,x),\mathbf{\rho}^{1}_{*}(t,x), \mathbf{\rho}^{2}_{*}(t,x),x\big{)}\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t\geq 0,\] \[\mathcal{EF}_{2}[\varphi,\alpha,\mathbf{\rho}^{2}_{*}] \coloneqq\iint_{\Omega_{T}}\alpha(\mathbf{\rho}^{2}_{*}(t,x))\varphi_ {t}(t,x)+\beta_{2}(\mathbf{\rho}^{2}_{*}(t,x))\varphi_{x}(t,x)\,\mathrm{d}x\, \mathrm{d}t+\int_{\mathbb{R}}\alpha(\mathbf{\rho}^{2}_{0}(x))\varphi(0,x)\, \mathrm{d}x\] \[\quad+\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{\rho}^{2}_{*}(t,x))S \big{(}\mathbf{\rho}^{1}_{*}(t,x),\mathbf{\rho}^{2}_{*}(t,x),\mathbf{\rho}^{1}_{*}(t,x), \mathbf{\rho}^{2}_{*}(t,x),x\big{)}\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t\geq 0.\]
We choose a sequence \(\eta_{k}\), which we still denote by \(\eta\), and define \(\mathbf{W}^{i}_{\eta}\coloneqq\frac{1}{\eta}\int_{x}^{\infty}\exp\big{(}\frac{x-y}{\eta}\big{)}\mathbf{\rho}^{i}_{\eta}(t,y)\,\mathrm{d}y\). Then, we set
\[\mathcal{EF}_{1}[\varphi,\alpha,\mathbf{W}^{1}_{\eta}] \coloneqq\iint_{\Omega_{T}}\alpha(\mathbf{W}^{1}_{\eta})\varphi_{t}(t,x)+\beta_{1}(\mathbf{W}^{1}_{\eta})\varphi_{x}(t,x)\,\mathrm{d}x\,\mathrm{d}t+\int_{\mathbb{R}}\alpha(\mathbf{W}^{1}_{\eta}(0,x))\varphi(0,x)\,\mathrm{d}x \tag{18}\] \[\quad-\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{1}_{\eta})S\big{(}\mathbf{W}_{\eta},\mathbf{W}_{\eta},x\big{)}\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t,\] \[\mathcal{EF}_{2}[\varphi,\alpha,\mathbf{W}^{2}_{\eta}] \coloneqq\iint_{\Omega_{T}}\alpha(\mathbf{W}^{2}_{\eta})\varphi_{t}(t,x)+\beta_{2}(\mathbf{W}^{2}_{\eta})\varphi_{x}(t,x)\,\mathrm{d}x\,\mathrm{d}t+\int_{\mathbb{R}}\alpha(\mathbf{W}^{2}_{\eta}(0,x))\varphi(0,x)\,\mathrm{d}x\] \[\quad+\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{2}_{\eta})S\big{(}\mathbf{W}_{\eta},\mathbf{W}_{\eta},x\big{)}\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t.\]
We recall that by assumption \(\mathbf{W}^{i}_{\eta}\to\mathbf{\rho}^{i}_{*}\) in \(L^{1}_{\text{loc}}(\Omega_{T})\) and that
\[\lim_{k\to+\infty}\mathcal{EF}_{i}[\varphi,\alpha,\mathbf{W}^{i}_{\eta_{k}}]= \mathcal{EF}_{i}[\varphi,\alpha,\mathbf{\rho}^{i}_{*}].\]
Hence, we need to show:
\[\lim_{k\to\infty}\mathcal{EF}_{i}[\varphi,\alpha,\mathbf{W}^{i}_{\eta_{k}}]\geq 0 \quad\forall\varphi\in C^{\infty}_{c}(\Omega_{T};\mathbb{R}_{\geq 0}),\quad\forall \alpha\in C^{2}(\mathbb{R})\text{ convex},\ \forall i\in\{1,2\}. \tag{19}\]
For simplicity, we use the notation \(\mathbf{\rho}\ast\exp_{\eta}\coloneqq\frac{1}{\eta}\int_{x}^{\infty}\exp\big{(}\frac{x-y}{\eta}\big{)}\mathbf{\rho}(t,y)\,\mathrm{d}y\). First, we rewrite \(\mathcal{EF}_{i},\ i\in\{1,2\}\), and obtain, suppressing the subsequence index for \(\eta\in\mathbb{R}_{>0}\),
\[\iint_{\Omega_{T}}\alpha(\mathbf{W}^{i}_{\eta})\partial_{t}\varphi+ \big{[}\big{(}V(\mathbf{W}^{i}_{\eta})\mathbf{\rho}^{i}_{\eta}\big{)}\ast\exp_{\eta} \big{]}\,\partial_{x}\left[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\varphi\right]\, \mathrm{d}x\,\mathrm{d}t+\int_{\mathbb{R}}\alpha\big{(}\mathbf{W}^{i}_{\eta}(0,x) \big{)}\varphi(0,x)\,\mathrm{d}x\] \[\qquad\qquad=(-1)^{i+1}\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_ {\eta})\mathscr{S}\big{(}\mathbf{W}_{\eta},\eta\partial_{x}\mathbf{W}_{\eta}(t,x),x \big{)}\ast\exp_{\eta}\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t\]
for \(\mathscr{S}\), as reported in Eq. (7). Thanks to the equality \(\beta^{\prime}_{i}(x)=\alpha^{\prime}(x)\left[V(x)+xV^{\prime}(x)\right],\ x\in \mathbb{R}\), we obtain
\[\iint_{\Omega_{T}}\beta_{i}(\mathbf{W}^{i}_{\eta})\partial_{x}\varphi \,\mathrm{d}x\,\mathrm{d}t =-\iint_{\Omega_{T}}\beta^{\prime}_{i}(\mathbf{W}^{i}_{\eta})\partial _{x}\mathbf{W}^{i}_{\eta}\varphi\,\mathrm{d}x\,\mathrm{d}t\] \[=-\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_{\eta})V(\mathbf{W}^{ i}_{\eta})\partial_{x}\mathbf{W}^{i}_{\eta}\varphi\,\mathrm{d}x\,\mathrm{d}t-\iint_{ \Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_{\eta})V^{\prime}(\mathbf{W}^{i}_{\eta})\mathbf{ W}^{i}_{\eta}\partial_{x}\mathbf{W}^{i}_{\eta}\varphi\,\mathrm{d}x\,\mathrm{d}t\]
and integration by parts in the last term leads to (interpreting \(\frac{\mathrm{d}}{\mathrm{d}x}V(\mathbf{W}^{i}_{\eta})=V^{\prime}(\mathbf{W}^{i}_{\eta })\partial_{x}\mathbf{W}^{i}_{\eta}\))
\[=\iint_{\Omega_{T}}V(\mathbf{W}^{i}_{\eta})\mathbf{W}^{i}_{\eta}\partial_{x}[\alpha^{ \prime}(\mathbf{W}^{i}_{\eta})\varphi]\,\mathrm{d}x\,\mathrm{d}t.\]
Then, by referencing Eq. (18) for \(i\in\{1,2\}\), we obtain the following:
\[\mathcal{EF}_{i}[\varphi,\alpha,\mathbf{W}^{i}_{\eta}] \tag{20}\] \[=\iint_{\Omega_{T}}\left[V(\mathbf{W}^{i}_{\eta})\mathbf{W}^{i}_{\eta}-\left(V(\mathbf{W}^{i}_{\eta})\mathbf{\rho}^{i}_{\eta}\right)\ast\exp_{\eta}\right]\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\varphi]\,\mathrm{d}x\,\mathrm{d}t\] (21) \[\quad+(-1)^{i+1}\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\left[S\big{(}\mathbf{\rho}_{\eta},\mathbf{W}_{\eta},x\big{)}\ast\exp_{\eta}\right]\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t\] (22) \[\quad+(-1)^{i}\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_{\eta})S\big{(}\mathbf{W}_{\eta},\mathbf{W}_{\eta},x\big{)}\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t\] (23) \[=\iint_{\Omega_{T}}\left[V(\mathbf{W}^{i}_{\eta})\mathbf{W}^{i}_{\eta}-\left(V(\mathbf{W}^{i}_{\eta})\mathbf{\rho}^{i}_{\eta}\right)\ast\exp_{\eta}\right]\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\varphi]\,\mathrm{d}x\,\mathrm{d}t\] (24) \[\quad+(-1)^{i}\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\left(\int_{x}^{+\infty}\exp\left(\frac{x-y}{\eta}\right)\left[-S\big{(}\mathbf{\rho}_{\eta}(t,y),\mathbf{W}_{\eta}(t,y),y\big{)}+S\big{(}\mathbf{W}_{\eta}(t,x),\mathbf{W}_{\eta}(t,x),x\big{)}\right]\,\mathrm{d}y\right)\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t\] (25) \[=\iint_{\Omega_{T}}\left[V(\mathbf{W}^{i}_{\eta})\mathbf{W}^{i}_{\eta}-\left(V(\mathbf{W}^{i}_{\eta})\mathbf{\rho}^{i}_{\eta}\right)\ast\exp_{\eta}\right]\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})]\varphi\,\mathrm{d}x\,\mathrm{d}t\] (26) \[\quad+\iint_{\Omega_{T}}\left[V(\mathbf{W}^{i}_{\eta})\mathbf{W}^{i}_{\eta}-\left(V(\mathbf{W}^{i}_{\eta})\mathbf{\rho}^{i}_{\eta}\right)\ast\exp_{\eta}\right]\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\partial_{x}\varphi\,\mathrm{d}x\,\mathrm{d}t\] (27) \[\quad+(-1)^{i}\iint_{\Omega_{T}}\alpha^{\prime}(\mathbf{W}^{i}_{\eta})\left(\int_{x}^{+\infty}\exp\left(\frac{x-y}{\eta}\right)\left[-S\big{(}\mathbf{\rho}_{\eta}(t,y),\mathbf{W}_{\eta}(t,y),y\big{)}+S\big{(}\mathbf{W}_{\eta}(t,x),\mathbf{W}_{\eta}(t,x),x\big{)}\right]\,\mathrm{d}y\right)\varphi(t,x)\,\mathrm{d}x\,\mathrm{d}t. \tag{28}\]
Note that the second term in the previous equality converges to zero for \(\eta\to 0\):
\[|(27)|\leq\eta\|\alpha^{\prime}\|_{L^{\infty}((0,\|\boldsymbol{\rho}_{\max}\|_{\infty}))}\|\partial_{2}\varphi\|_{L^{\infty}(\Omega_{T})}\int_{0}^{T}\int_{\mathbb{R}}\Big{|}V^{\prime}(\boldsymbol{W}^{i}_{\eta}(t,y))\boldsymbol{W}^{i}_{\eta}(t,y)\partial_{y}\boldsymbol{W}^{i}_{\eta}(t,y)\Big{|}\,\mathrm{d}y\,\mathrm{d}t\] \[\quad\leq\eta\|\alpha^{\prime}\|_{L^{\infty}((0,\|\boldsymbol{\rho}_{\max}\|_{\infty}))}\|\partial_{2}\varphi\|_{L^{\infty}(\Omega_{T})}T\|V^{\prime}\|_{L^{\infty}((0,\|\boldsymbol{\rho}_{\max}\|_{\infty}))}\boldsymbol{\rho}^{i}_{\max}|\boldsymbol{W}^{i}_{\eta}|_{L^{\infty}((0,T);TV(\mathbb{R}))}.\]
The \(TV\) seminorm in the last factor is bounded by assumption, so the right-hand side converges to zero for \(\eta\to 0\), as claimed.
The third term, (28), also vanishes in the limit because \(S\) and \(\alpha^{\prime}\) are bounded and \(\varphi\) has compact support. Consequently, the integration of the exponential kernel yields the following (recalling the assumptions on the lane-changing in Asm. 1):
\[\begin{split}|(28)|&\leq\|\alpha^{\prime}\|_{L^{\infty}((0,\|\boldsymbol{\rho}_{\max}\|_{\infty}))}2\|\boldsymbol{\rho}_{\max}\|_{\infty}\iint_{\Omega_{T}}|\varphi(t,x)|\int_{x}^{\infty}\exp\big{(}\tfrac{x-y}{\eta}\big{)}H(\boldsymbol{W}_{\eta},y)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t\\ &\leq\mathcal{H}\|\alpha^{\prime}\|_{L^{\infty}((0,\|\boldsymbol{\rho}_{\max}\|_{\infty}))}2\|\boldsymbol{\rho}_{\max}\|_{\infty}\iint_{\Omega_{T}}|\varphi(t,x)|\int_{x}^{\infty}\exp\big{(}\tfrac{x-y}{\eta}\big{)}\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t\\ &\leq\eta\mathcal{H}\|\alpha^{\prime}\|_{L^{\infty}((0,\|\boldsymbol{\rho}_{\max}\|_{\infty}))}2\|\boldsymbol{\rho}_{\max}\|_{\infty}\|\varphi\|_{L^{\infty}(\Omega_{T})}|\operatorname{supp}(\varphi)|,\end{split}\]
which converges to zero for \(\eta\to 0\). Hence, the only term left to treat is (26). To this end, we define
\[T_{1}^{\eta}\coloneqq\iint_{\Omega_{T}}\big{[}V(\boldsymbol{W}^{i}_{\eta}) \boldsymbol{W}^{i}_{\eta}-\big{(}V(\boldsymbol{W}^{i}_{\eta})\boldsymbol{ \rho}^{i}_{\eta}\big{)}\ast\exp_{\eta}\big{]}\,\partial_{x}[\alpha^{\prime}( \boldsymbol{W}^{i}_{\eta})]\varphi\,\mathrm{d}x\,\mathrm{d}t,\]
so we can write:
\[\begin{split} T_{1}^{\eta}&=\iint_{\Omega_{T}}\int_{x}^{+\infty}\big{[}V(\mathbf{W}^{i}_{\eta}(t,x))-V(\mathbf{W}^{i}_{\eta}(t,y))\big{]}\,\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})](t,x)\varphi(t,x)\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{\eta}\Big{)}\,\mathbf{\rho}^{i}_{\eta}(t,y)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t\\ &=\iint_{\Omega_{T}}\mathbf{\rho}^{i}_{\eta}(t,y)\omega^{i}_{\eta}(t,y)\,\mathrm{d}y\,\mathrm{d}t,\end{split}\]
where
\[\begin{split}\omega^{i}_{\eta}(t,y)&\coloneqq\int_{-\infty}^{y}\big{[}V(\mathbf{W}^{i}_{\eta}(t,x))-V(\mathbf{W}^{i}_{\eta}(t,y))\big{]}\,\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})](t,x)\varphi(t,x)\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{\eta}\Big{)}\,\mathrm{d}x\\ &\qquad=\int_{-\infty}^{y}\underbrace{V(\mathbf{W}^{i}_{\eta}(t,x))\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})](t,x)}_{=:\partial_{x}I(\mathbf{W}^{i}_{\eta})}\varphi(t,x)\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{\eta}\Big{)}\,\mathrm{d}x\\ &\qquad-V(\mathbf{W}^{i}_{\eta}(t,y))\int_{-\infty}^{y}\partial_{x}[\alpha^{\prime}(\mathbf{W}^{i}_{\eta})](t,x)\varphi(t,x)\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{\eta}\Big{)}\,\mathrm{d}x.\end{split} \tag{29}\]
Using integration by parts, we obtain
\[\begin{split}\omega^{i}_{\eta}(t,y)&=\frac{1}{\eta}I( \boldsymbol{W}^{i}_{\eta}(t,y))\varphi(t,y)-\int_{-\infty}^{y}I(\boldsymbol{W}^ {i}_{\eta}(t,x))\partial_{x}\left[\varphi(t,x)\tfrac{1}{\eta}\exp\Big{(}\tfrac{ x-y}{\eta}\Big{)}\right]\,\mathrm{d}x\\ &\quad-V(\boldsymbol{W}^{i}_{\eta}(t,y))\left[\alpha^{\prime}( \boldsymbol{W}^{i}_{\eta}(t,y))\varphi(t,y)\tfrac{1}{\eta}-\int_{-\infty}^{y} \alpha^{\prime}(\boldsymbol{W}^{i}_{\eta}(t,x))\partial_{x}\left[\varphi(t,x )\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{\eta}\Big{)}\right]\,\mathrm{d}x \right]\\ &\quad=\int_{-\infty}^{y}[I(\boldsymbol{W}^{i}_{\eta}(t,y))-I( \boldsymbol{W}^{i}_{\eta}(t,x))]\partial_{x}\left[\varphi(t,x)\tfrac{1}{\eta} \exp\Big{(}\tfrac{x-y}{\eta}\Big{)}\right]\,\mathrm{d}x\\ &\quad-V(\boldsymbol{W}^{i}_{\eta}(t,y))\int_{-\infty}^{y}[\alpha^{ \prime}(\boldsymbol{W}^{i}_{\eta}(t,y))-\alpha^{\prime}(\boldsymbol{W}^{i}_{\eta} (t,x))]\partial_{x}\left[\varphi(t,x)\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{ \eta}\Big{)}\right]\,\mathrm{d}x\\ &\quad=G_{\eta}(t,y)+L_{\eta}(t,y)+P_{\eta}(t,y),\end{split} \tag{30}\]
with
\[G_{\eta}(t,y)\coloneqq\int_{-\infty}^{y}[I(\boldsymbol{W}^{i}_{\eta}(t,y))-I( \boldsymbol{W}^{i}_{\eta}(t,x))]\left[\tfrac{1}{\eta}\exp\Big{(}\tfrac{x-y}{ \eta}\Big{)}\right]\partial_{x}\varphi(t,x)\,\mathrm{d}x, \tag{31}\]
\[L_{\eta}(t,y)\coloneqq-V(\boldsymbol{W}^{i}_{\ \eta}(t,y))\int_{-\infty}^{y}[ \alpha^{\prime}(\boldsymbol{W}^{i}_{\ \eta}(t,y))-\alpha^{\prime}(\boldsymbol{W}^{i}_{\ \eta}(t,x))]\left[\tfrac{1}{\eta}\exp\left(\tfrac{x-y}{\eta}\right)\right] \partial_{x}\varphi(t,x)\,\mathrm{d}x, \tag{32}\]
and
\[P_{\eta}(t,y) \coloneqq\int_{-\infty}^{y}H(\boldsymbol{W}^{i}_{\ \eta}(t,x),\boldsymbol{W}^{i}_{\ \eta}(t,y))\varphi(t,x)\partial_{x}\left[\tfrac{1}{\eta}\exp\left(\tfrac{x-y}{ \eta}\right)\right]\,\mathrm{d}x \tag{33}\] \[=\tfrac{1}{\eta^{2}}\int_{-\infty}^{y}H(\boldsymbol{W}^{i}_{\ \eta}(t,x),\boldsymbol{W}^{i}_{\ \eta}(t,y))\varphi(t,x)\exp\left(\tfrac{x-y}{\eta}\right)\, \mathrm{d}x, \tag{34}\]
where
\[H(a,b)\coloneqq I(b)-I(a)-V(b)(\alpha^{\prime}(b)-\alpha^{\prime}(a)).\]
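As a concrete illustration (our own example; it assumes that \(V\) is non-increasing, as is natural for traffic velocities): for the quadratic entropy \(\alpha(x)=x^{2}/2\), one has \(I^{\prime}(u)=V(u)\alpha^{\prime\prime}(u)=V(u)\), so that

\[H(a,b)=\int_{a}^{b}V(s)\,\mathrm{d}s-V(b)(b-a)=\int_{a}^{b}\big{[}V(s)-V(b)\big{]}\,\mathrm{d}s\geq 0,\]

regardless of the order of \(a\) and \(b\), since the sign of the integrand and the orientation of the integral change together.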
Next, by plugging Eq. (31), Eq. (32), and Eq. (34) into Eq. (30), and recalling that the contributions (27) and (28) vanish for \(\eta\to 0\), we can formulate, up to terms vanishing in the limit:
\[\mathcal{EF}_{i}[\varphi,\alpha,\boldsymbol{W}^{i}_{\ \eta}]\geq\iint_{\Omega_{T}} \boldsymbol{\rho}^{i}_{\eta}(t,y)\left[G_{\eta}(t,y)+L_{\eta}(t,y)+P_{\eta}(t,y)\right]\,\mathrm{d}y\,\mathrm{d}t.\]
From this, we can deduce that
\[\mathcal{EF}_{i}[\varphi,\alpha,\boldsymbol{W}^{i}_{\ \eta}]\geq\iint_{\Omega_{T}} \boldsymbol{\rho}^{i}_{\eta}(t,y)\left[G_{\eta}(t,y)+L_{\eta}(t,y)\right]\, \mathrm{d}y\,\mathrm{d}t. \tag{35}\]
It is sufficient to prove that \(P_{\eta}\geq 0\). To accomplish this, we compute
\[\frac{\partial H}{\partial a}(u,b)=-I^{\prime}(u)+V(b)\alpha^{\prime\prime}(u )=\alpha^{\prime\prime}(u)[V(b)-V(u)]\]
and apply the same argument as in [19, Proof of Theorem 1.2]: for fixed \(b\), the map \(a\mapsto H(a,b)\) vanishes at \(a=b\) and, since \(V\) is non-increasing, is non-increasing for \(a\leq b\) and non-decreasing for \(a\geq b\); hence \(H\geq 0\), and it can be concluded that \(P_{\eta}\geq 0\). To establish Eq. (19), it suffices to show that the right-hand side of Eq. (35) vanishes for \(\eta\to 0\). We first show that
\[\lim_{\eta\to 0}\iint_{\Omega_{T}}\boldsymbol{\rho}^{i}_{\eta}(t,y)G_{\eta}(t,y) \,\mathrm{d}y\,\mathrm{d}t=0.\]
To this end, we estimate as follows:
\[\iint_{\Omega_{T}}\boldsymbol{\rho}^{i}_{\eta}(t,y)G_{\eta}(t,y) \,\mathrm{d}y\,\mathrm{d}t\] \[\leq\iint_{\Omega_{T}}\boldsymbol{\rho}^{i}_{\eta}(t,y)\int_{- \infty}^{y}|I(\boldsymbol{W}^{i}_{\ \eta}(t,y)))-I(\boldsymbol{W}^{i}_{\ \eta}(t,x)))|\tfrac{1}{\eta}\exp\left(\tfrac{x-y}{\eta}\right)|\partial_{x} \varphi(t,x)|\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t\] \[\overset{\mathrm{Fubini}}{=}\iint_{\Omega_{T}}|\partial_{x} \varphi(t,x)|\int_{x}^{+\infty}\boldsymbol{\rho}^{i}_{\eta}(t,y)|I(\boldsymbol {W}^{i}_{\ \eta}(t,y)))-I(\boldsymbol{W}^{i}_{\ \eta}(t,x)))|\tfrac{1}{\eta}\exp\left(\tfrac{x-y}{\eta}\right)\, \mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t.\]
Because \(\varphi\) is compactly supported, applying [19, Lemma 4.1] allows us to conclude that this term vanishes for \(\eta\to 0\).
Analogously, one can show that
\[\lim_{\eta\to 0}\iint_{\Omega_{T}}\boldsymbol{\rho}^{i}_{\eta}(t,y)L_{\eta}(t,y) \,\mathrm{d}y\,\mathrm{d}t=0,\]
which concludes the proof.
### Main theorem and some corollaries
So far, we have proven entropy admissibility in Theorem 4.3 and, for the nonlocal term, a \(TV\) bound uniform in \(\eta\in\mathbb{R}_{>0}\) in Theorem 4.2. However, this \(TV\) bound is only in space; to obtain compactness in \(C\big{(}[0,T];L^{1}_{\rm loc}\big{)}\), a "time-compactness" is required as well. This is established in the next theorem:
**Theorem 4.4** (Compactness of \(\mathbf{W}_{\eta}\)).: _The set of nonlocal terms \((\mathbf{W}_{\eta})_{\eta\in\mathbb{R}_{>0}}\subseteq C\big{(}[0,T];L^{1}_{\rm loc}(\mathbb{R};\mathbb{R}^{2})\big{)}\) of solutions to Eq. (8) is relatively compact in \(C\big{(}[0,T];L^{1}_{\rm loc}(\mathbb{R};\mathbb{R}^{2})\big{)}\), i.e.,_
\[\{\mathbf{W}_{\eta},\ \eta\in\mathbb{R}_{>0}\}\overset{c}{\hookrightarrow}C\big{(}[0,T]; L^{1}_{\rm loc}(\mathbb{R};\mathbb{R}^{2})\big{)}\]
Proof.: We now apply [39, Lemma 1]. In particular, according to the notation in [39, Lemma 1], we set the Banach space \(B=L^{1}_{\rm loc}(\Omega)\) with \(\Omega\subset\mathbb{R}\) open and bounded and define, for \(t\in[0,T]\),
\[F(t)\coloneqq\big{\{}\mathbf{W}_{\eta}(t,\cdot)\in L^{1}_{\rm loc}(\mathbb{R}), \quad\eta\in\mathbb{R}_{>0}\big{\}}.\]
According to [36, Theorem 13.35], the set \(F(t)\) is relatively compact in \(L^{1}_{\rm loc}(\mathbb{R})\) because of the uniform total variation bound in the spatial component of \(\mathbf{W}_{\eta}\) proved in Theorem 4.2. Moreover, the set \((\mathbf{W}_{\eta})_{\eta\in\mathbb{R}_{>0}}\) is uniformly equi-continuous in time. To see this, we estimate, for \(t_{1},t_{2}\in[0,T]\) (assuming sufficiently regular solutions, which we may thanks to Lemma 3.3),
\[\|\mathbf{W}_{\eta}^{1}(t_{1},\cdot)-\mathbf{W}_{\eta}^{1}(t_{2},\cdot)\|_{L^{1}(\Omega)}=\Big{\|}\int_{t_{1}}^{t_{2}}\partial_{t}\mathbf{W}_{\eta}^{1}(s,\cdot)\,\mathrm{d}s\Big{\|}_{L^{1}(\Omega)}\] \[\leq\bigg{\|}\int_{t_{1}}^{t_{2}}V_{1}(\mathbf{W}_{\eta}^{1}(s,\cdot))\partial_{x}\mathbf{W}_{\eta}^{1}(s,\cdot)\,\mathrm{d}s\bigg{\|}_{L^{1}(\Omega)}+\bigg{\|}\int_{t_{1}}^{t_{2}}\big{(}S(\mathbf{\rho}_{\eta},\mathbf{W}_{\eta},\cdot)\ast\exp_{\eta}\big{)}(s,\cdot)\,\mathrm{d}s\bigg{\|}_{L^{1}(\Omega)}\] \[\leq|t_{2}-t_{1}|\Big{(}\|V_{1}\|_{L^{\infty}((0,\|\mathbf{\rho}_{\max}\|_{\infty}))}\sup_{s\in[0,T]}\big{|}\mathbf{W}_{\eta}^{1}(s,\cdot)\big{|}_{TV(\mathbb{R})}+|\Omega|\,\|S\|_{L^{\infty}}\Big{)},\]

where the first inequality uses Eq. (8). By Theorem 4.2 and Asm. 1, the right-hand side is bounded by \(C|t_{1}-t_{2}|\) with \(C\in\mathbb{R}_{>0}\) independent of \(\eta\in\mathbb{R}_{>0}\); the same estimate holds for \(\mathbf{W}_{\eta}^{2}\). Hence, the set \((\mathbf{W}_{\eta})_{\eta\in\mathbb{R}_{>0}}\) is uniformly equi-continuous in time, and [39, Lemma 1] yields the claimed compactness.

**Corollary 4.4.1** (Convergence to a weak solution).: _There exist a sequence \((\eta_{k})_{k\in\mathbb{N}}\subset\mathbb{R}_{>0}\) with \(\lim_{k\to\infty}\eta_{k}=0\) and a function \(\mathbf{\rho}_{*}\in C\big{(}[0,T];L^{1}_{\rm loc}(\mathbb{R};\mathbb{R}^{2})\big{)}\) such that \(\mathbf{W}_{\eta_{k}}\) and \(\mathbf{\rho}_{\eta_{k}}\) converge to \(\mathbf{\rho}_{*}\) in \(C\big{(}[0,T];L^{1}_{\rm loc}(\mathbb{R};\mathbb{R}^{2})\big{)}\), and \(\mathbf{\rho}_{*}\) is a weak solution of the local system in Section 2._
Proof.: Applying Theorem 4.4, the set of nonlocal terms \((\mathbf{W}_{\eta_{k}})_{k\in\mathbb{N}}\) is relatively compact in \(C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^{2})\big{)}\). Hence, possibly after passing to a subsequence (not relabeled), there exists a limit function \(\mathbf{\rho}_{*}\in C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^{2})\big{)}\) such that
\[\lim_{k\to\infty}\|\mathbf{W}_{\eta_{k}}-\mathbf{\rho}_{*}\|_{C([0,T];L^{1}_{\mathrm{ loc}}(\mathbb{R};\mathbb{R}^{2}))}=0.\]
Thanks to Eq. (10) and the uniform \(TV\) bound of Theorem 4.2, we can write, for \(t\in[0,T]\),

\[\|\mathbf{W}_{\eta_{k}}(t,\cdot)-\mathbf{\rho}_{\eta_{k}}(t,\cdot)\|_{L^{1}(\mathbb{R};\mathbb{R}^{2})}=\eta_{k}|\mathbf{W}_{\eta_{k}}(t,\cdot)|_{TV(\mathbb{R};\mathbb{R}^{2})}\leq\eta_{k}C\]

with \(C\in\mathbb{R}_{>0}\) independent of \(k\in\mathbb{N}\)
and, thus, we also (as \(\lim_{k\to\infty}\eta_{k}=0\)) obtain
\[\lim_{k\to\infty}\|\mathbf{\rho}_{\eta_{k}}-\mathbf{\rho}_{*}\|_{C([0,T];L^{1}_{\mathrm{ loc}}(\mathbb{R}^{2}))}=0.\]
\(\mathbf{\rho}_{*}\) is a weak solution of the local system in Section 2 thanks to the convergence in \(C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^{2})\big{)}\) and the uniform bounds on \(\|\mathbf{\rho}_{\eta_{k}}\|_{L^{\infty}((0,T);L^{\infty}(\mathbb{R};\mathbb{R}^{2}))}\).
This brings us to our final and most significant result: combining the findings of the previous theorems, we obtain the strong convergence of both the nonlocal term and the nonlocal solution to the entropy solution of the local conservation law for \(\eta\to 0\).
**Theorem 4.5** (Convergence to the Entropy solution).: _Given Asm. 1, the nonlocal term \(\mathcal{W}_{\eta}[\mathbf{\rho}_{\eta}]\) and the corresponding nonlocal solution \(\mathbf{\rho}_{\eta}\in C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^ {2})\big{)}\) of the nonlocal system in Section 2 converge in \(C\big{(}[0,T];L^{1}_{\mathrm{loc}}(\mathbb{R};\mathbb{R}^{2})\big{)}\) to the entropy solution of the corresponding local system of balance laws in Section 2._
Proof.: This is a direct consequence of Corollary 4.4.1 and Theorem 4.3.
**Remark 4** (Generalization to larger systems and more general kernels).:
**Larger systems:**: _By slightly adjusting the right-hand side of the system of nonlocal balance laws and imposing the corresponding assumptions on the source term, as reported in Asm. 1, the same type of convergence can be proven for a system of any dimension (and not only, as we did here, for \(N=2\)). The key ingredient in all our arguments is that the nonlocal fluxes are decoupled. Coupling the different equations within the fluxes might undermine the required uniform maximum principle and would certainly complicate the representation of the nonlocal terms in Lemma 4.1._
**More general kernels:**: _It is very likely that the obtained convergence can be extended to more general kernels, such as convex kernels, as described in [19]. The result should also hold for kernels with fixed support of the type reported in [32]._
## 5 Numerical simulations
In this section, we present several numerical simulations conducted using an upwind-type numerical scheme, as detailed in [15; 24]. In particular, we consider the source term

\[S(\mathbf{\rho}_{1},\mathbf{\rho}_{2},x)\coloneqq\big{(}\mathbf{\rho}_{2}-\mathbf{\rho}_{1}\big{)}\chi_{[-2,2]}(x),\ x\in\mathbb{R}.\]
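To make the discretization concrete, the following is a minimal sketch of such an upwind-type scheme (our own illustration, not the exact scheme of [15; 24]); the velocity \(V(\rho)=1-\rho\), the grid, the CFL choice, the time horizon, and the constant-extension boundary closure are assumptions made for the example only. The backward recursion exploits the exponential kernel, so each evaluation of the nonlocal term costs \(O(N)\) rather than \(O(N^{2})\).

```python
import numpy as np

# Minimal sketch of an upwind-type scheme for the nonlocal two-lane system
# (illustrative only; not the exact scheme of [15; 24]). Assumes the
# Greenshields-type velocity V(rho) = 1 - rho, i.e., rho_max^i = 1, and the
# source S = (rho2 - rho1) * chi_[-2,2] evaluated on the local densities.

def nonlocal_term(rho, dx, eta):
    """W(x_j) ~ (1/eta) * int_x^inf exp((x - y)/eta) rho(y) dy, computed by a
    backward recursion that is exact for piecewise-constant densities."""
    q = np.exp(-dx / eta)
    W = np.empty_like(rho)
    W[-1] = rho[-1]                      # constant extension beyond the grid
    for j in range(len(rho) - 2, -1, -1):
        W[j] = q * W[j + 1] + (1.0 - q) * rho[j]
    return W

def step(rho1, rho2, x, dx, dt, eta):
    """One explicit upwind step for both lanes (V >= 0, so the flux at cell j
    uses the cell's own density, differenced against the left neighbour)."""
    V = lambda r: 1.0 - r
    S = (rho2 - rho1) * ((x >= -2.0) & (x <= 2.0))   # source term
    updated = []
    for rho, sign in ((rho1, +1.0), (rho2, -1.0)):
        W = nonlocal_term(rho, dx, eta)
        F = V(W) * rho                               # nonlocal upwind flux
        new = rho.copy()
        new[1:] -= dt / dx * (F[1:] - F[:-1])        # cell 0 acts as inflow
        updated.append(new + sign * dt * S)          # lane coupling via S
    return updated

# Usage: Riemann-type data on lane 1, uniform lane 2; CFL dt <= dx / max V.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
dt = 0.4 * dx
rho1 = np.where(x < 0.0, 0.8, 0.2)
rho2 = np.full_like(x, 0.5)
for _ in range(int(1.0 / dt)):                       # evolve up to t = 1
    rho1, rho2 = step(rho1, rho2, x, dx, dt, eta=0.1)
```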
Figure 1 shows the convergence of the approximate nonlocal solution to the local one for decreasing values of \(\eta\). The corresponding \((t,x)-\)plots are shown in Fig. 2. As can be observed, over time, the densities of both lanes converge due to the lane-changing behavior.
Clearly, the claimed convergence can be observed for smaller \(\eta\in\mathbb{R}_{>0}\). Moreover, Figure 3 depicts the total variation for different values of \(\eta\). It can be seen that, for the chosen source term \(S(\mathbf{\rho}_{1},\mathbf{\rho}_{2},x)\coloneqq(\mathbf{\rho}_{2}-\mathbf{\rho}_{1})\chi_{[-2,2]}(x)\), the total variation actually decreases in time (and does not merely remain bounded, as proven in Theorem 4.2). Furthermore, as expected for a nonlocal approximation, the total variation decreases as \(\eta\) increases.
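For completeness, the total variation curves reported in Figure 3 can be monitored with the discrete total variation of the cell averages (a minimal helper in the same spirit as the sketch above; the function name is ours):

```python
import numpy as np

def discrete_tv(rho):
    """Discrete total variation sum_j |rho_{j+1} - rho_j| of the cell averages."""
    return float(np.sum(np.abs(np.diff(rho))))

# e.g., record discrete_tv(rho1) + discrete_tv(rho2) after every time step
```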
Figure 2: \((t,x)-\)plots of the local and nonlocal solutions with \(\eta\in\{0,0.1,0.005\}\), from left to right. In the first row, \(\mathbf{\rho}_{1}\) is shown, and in the second row, \(\mathbf{\rho}_{2}\).
## 6 Conclusions and open problems
In this paper, an analytical proof of nonlocal-to-local convergence for a system of balance laws modeling lane-changing traffic flow was presented. The coupling occurred only via the right-hand side. One crucial ingredient was the ability to express the nonlocal system in terms of a system in the nonlocal terms, facilitated by the choice of the exponential kernel (though generalizations along the lines of Remark 4 should be readily achievable).
The presented work, however, only scratches the surface of the singular limit problem for systems due to its "weak" coupling via the right-hand sides only. In a future study, it would be desirable to take into account coupling in the velocity functions of the dynamics.
Another interesting related problem involves investigating the singular limit problem for scalar nonlocal conservation laws in the context of bounded domains. Existence, uniqueness, and stability results have already been established in this regard (for example, see [35; 7; 21]). However, in the system case, addressing the singular limit problem remains an open challenge. We currently lack the capability to obtain uniform \(TV\) estimates, and the manner in which we would converge to the boundary conditions, as defined by Bardos-Leroux-Nedelec [6] in the local case, remains unclear.
|